1- The Raspberry Pi 4 is the minicomputer on which the program will run. This is where the result will be generated after the user's voice input.
2- The LCD screen will be connected to the Raspberry Pi; it will allow navigation through the tool and will display the result as text.
3- The first LED indicates the on/off status of the tool. It is controlled by a button that toggles the tool between its on and off states (see the GPIO sketch after this list).
4- The second LED indicates the status of the microphone module. Once the microphone button is pressed, the microphone is ready for use; the user then speaks into it and the product generates a response.
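A minimal sketch of this button-and-LED behaviour using the gpiozero library on the Raspberry Pi is shown below. The GPIO pin numbers (17 and 27 for the LEDs, 2 and 3 for the buttons) are illustrative assumptions, not the final wiring.

```python
from signal import pause

from gpiozero import LED, Button

# Assumed pin assignments -- the final wiring may differ.
power_led = LED(17)       # LED 1: on/off status of the tool
mic_led = LED(27)         # LED 2: microphone ready status
power_button = Button(2)  # toggles the tool's on/off state
mic_button = Button(3)    # arms the microphone for recording

# Pressing the power button toggles the status LED.
power_button.when_pressed = power_led.toggle

def arm_microphone():
    """Light the microphone LED to show the mic is ready for input."""
    mic_led.on()

mic_button.when_pressed = arm_microphone

pause()  # keep the script running so button callbacks keep firing
```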
We have identified three APIs for the software backend of MindMentor. These APIs were mentioned earlier in this plan; here we describe them in detail:
Whisper- Whisper is OpenAI's automatic speech recognition system, trained on 680,000 hours of multilingual and multitask data. While researching Whisper, we discovered that it also supports translation, which is very useful when a student speaks little or no English. Whisper's function within MindMentor is speech recognition.
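As a rough sketch of how MindMentor might call Whisper through the OpenAI Python SDK (the v1-style client setup and the file name "question.wav" are illustrative assumptions):

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Transcribe the recorded question (file name is a placeholder).
with open("question.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )
print(transcript.text)

# Whisper can also translate non-English speech into English text,
# which helps students who speak little or no English.
with open("question.wav", "rb") as audio_file:
    translation = client.audio.translations.create(
        model="whisper-1",
        file=audio_file,
    )
print(translation.text)
```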
GPT-3.5 Turbo- This OpenAI API will be used within MindMentor to generate a response to the transcribed question.
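A minimal sketch of generating that response with GPT-3.5 Turbo; the system prompt and the hard-coded question (standing in for Whisper's transcript) are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Placeholder for the text Whisper transcribed from the microphone.
user_question = "How should I revise for a maths exam?"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # Illustrative system prompt -- not the final wording.
        {"role": "system", "content": "You are MindMentor, a helpful study assistant."},
        {"role": "user", "content": user_question},
    ],
)
print(response.choices[0].message.content)
```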
TTS- The TTS API will be used to read GPT-3.5 Turbo's generated response aloud through a built-in speaker. The TTS API offers six voices, all of which are very intelligible. An effective feature of the TTS API is that it can output a result in different languages.
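A sketch of converting the generated reply to speech with the TTS API. The model name "tts-1", the "alloy" voice, the placeholder reply text, and the output path are illustrative assumptions; playing the file through the speaker would use a separate audio player on the Pi.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Placeholder for the text GPT-3.5 Turbo generated.
reply_text = "You could start by making a revision timetable."

# "tts-1" with one of the six available voices (here "alloy", as an example).
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=reply_text,
)
speech.write_to_file("reply.mp3")  # then play through the built-in speaker
```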