ScholarSync aims to design and develop an AI-powered education device that helps the students of tomorrow optimise their learning experience. Using OpenAI's API alongside our innovative product design, we aim to provide a portable educational experience for those who struggle to engage with their learning or lack access to directed learning.


Empowered and Driven:
Delivering Solutions for Global Issues

Scholar Sync aims to make education more accessible and personalized, particularly for learners with disabilities or learning difficulties. By providing tailored learning experiences, it empowers individuals to take control of their education and learn at their own pace. This aligns with the objective of Goal 4 to ensure inclusive and equitable quality education for all, promoting lifelong learning opportunities that foster personal growth and development.

The device addresses the educational attainment gap between privileged and underprivileged communities by democratising access to high-quality educational resources. Regardless of socio-economic status or geographic location, learners can benefit from Scholar Sync’s non-judgmental and supportive environment, building confidence and self-esteem as they progress in their learning journey. This contributes to reducing inequalities in education and promoting social inclusion, in line with the aspirations of Goal 10.

Scholar Sync recognizes the importance of collaborative partnerships in achieving sustainable development objectives. By fostering future partnerships among governments, educational institutions, and NGOs (non-governmental organizations), it can leverage collective expertise and resources to enhance the effectiveness of educational interventions. Through global cooperation and knowledge-sharing, Scholar Sync aims to create a more inclusive and sustainable education ecosystem, advancing the principles of Goal 17 while opening up opportunities to expand our market reach in the future.


Despite our strengths, communication can be a weakness, as we occasionally rely on individual exchanges rather than comprehensive group discussions. This could lead to miscommunication on tasks and objectives. Additionally, our group tends to deliberate extensively before making decisions, potentially causing delays. Lastly, our reluctance to appoint a designated leader may hinder clear guidance and direction for the team and delay progress on the work itself.
James excels in programming, a crucial skill for coding the essential elements of our product. Eva's expertise lies in design, ensuring visually appealing, high-quality content and a polished final product. Keshav's organizational prowess facilitates effective task delegation and deadline management between members, streamlining our project's workflow and progress. Lastly, Daniyal's well-rounded skills make him a valuable asset, capable of tackling any task with precision and quality.
Use Case
Scholar Sync's mission is to create a device that empowers individuals facing learning challenges or less fortunate circumstances in education. By aligning with UN Sustainable Development Goals 4, 10, and 17, it seeks to democratize access to education, reduce inequalities, and foster collaborative partnerships for sustainable development.


List of Essential Tools and Technologies



1- The Raspberry Pi 4 is the minicomputer on which the program will run. This is where the response is generated after the user provides voice input.

2- The LCD screen will be connected to the Raspberry Pi; it will allow navigation through the tool and display the result as text.

3- The first LED represents the on/off status of the tool. It is controlled by a button, which toggles the tool between its on and off states.

4- The second LED represents the microphone module. Once the microphone button is pressed, the microphone is ready for use; the user then speaks into the microphone and the product generates a response.
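The button-and-LED behaviour described above can be modelled in plain Python before any wiring is done. The sketch below captures only the state logic (on the device itself, a library such as RPi.GPIO would drive the actual pins); the class and method names are our own illustrative choices, not part of any library.

```python
class DeviceState:
    """Models the two buttons and two status LEDs described above."""

    def __init__(self):
        self.power_led = False  # LED 1: on/off status of the tool
        self.mic_led = False    # LED 2: microphone-ready status

    def press_power(self):
        # The power button toggles the tool (and its status LED) on or off.
        self.power_led = not self.power_led
        if not self.power_led:
            self.mic_led = False  # powering off also disarms the microphone

    def press_mic(self):
        # The microphone button arms or disarms the microphone,
        # but only while the tool is powered on.
        if self.power_led:
            self.mic_led = not self.mic_led
```

Keeping this logic separate from the GPIO code makes it easy to test on any machine before deploying to the Raspberry Pi.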

We have identified 3 APIs for the software backend of MindMentor. These APIs were mentioned earlier in this plan and are described here in detail:

Whisper - Whisper is OpenAI's automatic speech recognition system, trained on 680,000 hours of multilingual and multitask data. Our research found that Whisper also supports translation, which is very useful when a student speaks little or no English. Whisper's function within MindMentor is speech recognition.

GPT 3.5 Turbo - This OpenAI API will be used within MindMentor to generate a response to the user's request.

TTS - The TTS API will output GPT 3.5 Turbo's generated response as audio through a built-in speaker. The TTS API offers 6 voices, all of which are very intelligible. A useful feature of the TTS API is that it can output a result in different languages.
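The three APIs above form a pipeline: speech in, text out, speech back. A minimal sketch is shown below, with each stage passed in as a callable so the flow can be exercised without hardware or network access. The OpenAI-bound helpers assume the official openai Python SDK (pip install openai) and an OPENAI_API_KEY environment variable; the function names are our own.

```python
def answer_spoken_question(audio_path, transcribe, respond, speak):
    """Compose the three stages: Whisper -> GPT-3.5 Turbo -> TTS.

    Each stage is injected as a callable so the pipeline can be
    tested with stubs on a machine without a microphone or API key.
    """
    question = transcribe(audio_path)  # Whisper: audio file -> text
    answer = respond(question)         # GPT-3.5 Turbo: text -> text
    speak(answer)                      # TTS: text -> spoken audio
    return question, answer


def openai_transcribe(audio_path):
    # Sketch of the Whisper stage using the openai SDK (assumption).
    from openai import OpenAI
    client = OpenAI()
    with open(audio_path, "rb") as f:
        return client.audio.transcriptions.create(model="whisper-1", file=f).text


def openai_respond(text):
    # Sketch of the GPT-3.5 Turbo stage.
    from openai import OpenAI
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": text}],
    )
    return reply.choices[0].message.content


def openai_speak(text, out_path="reply.mp3"):
    # Sketch of the TTS stage; "alloy" is one of the 6 available voices.
    from openai import OpenAI
    client = OpenAI()
    audio = client.audio.speech.create(model="tts-1", voice="alloy", input=text)
    audio.write_to_file(out_path)
```

On the device, `answer_spoken_question(path, openai_transcribe, openai_respond, openai_speak)` would run the full loop each time the microphone button is pressed.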


Stage 1


Stage 2


Stage 3

Online Marketing
Creative agency
Web development


Impact on the market

Remote and Rural Communities: In remote or rural areas with limited access to traditional educational resources, the AI-based learning device can serve as a valuable tool for bridging the gap in educational opportunities. Low-bandwidth operation and localized content would further enhance its usefulness in underserved communities.

Special Needs Students: Students with special needs, including those with learning disabilities or neurodiverse conditions, may require personalized educational resources and support. Designing inclusive features and accessibility options can make the device more accessible and beneficial for special needs students.

Lifelong Learners: Individuals who have a passion for learning and self-improvement may benefit from the AI-based learning device. By offering engaging and interactive learning experiences, the device can support lifelong learners in pursuing their interests and expanding their knowledge.



Our end user begins by powering on the device using the power button on its side. Once powered, the user is presented with the start menu of our interface on the 3-inch LCD display, offering a choice between the tutor persona and the support persona, depending on their use case. To interact with the device, the user presses the microphone button on the front of the device and speaks their request. The recorded input is forwarded to the Whisper API to be converted to text, and is then passed to GPT 3.5 Turbo for processing. The GPT API processes the user's request through the chosen AI persona to produce a personalised response. Finally, the output is passed to OpenAI Text to Speech, which speaks the response aloud while the text is also shown on the integrated display.
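The choice between the tutor and support personas described above would, in practice, be implemented as different system prompts sent to GPT 3.5 Turbo. The sketch below shows one way to do this; the prompt wording and names are illustrative assumptions, not final copy.

```python
# Hypothetical system prompts for the two personas on the start menu;
# the real wording would be refined during development and testing.
PERSONAS = {
    "tutor": (
        "You are a patient tutor. Explain concepts step by step, "
        "check understanding, and never judge the learner."
    ),
    "support": (
        "You are a supportive study companion. Encourage the learner "
        "and help them stay motivated and organised."
    ),
}


def build_messages(persona, user_text):
    """Build the message list sent to GPT-3.5 Turbo for the chosen persona."""
    if persona not in PERSONAS:
        raise ValueError(f"unknown persona: {persona}")
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": user_text},
    ]
```

The resulting list is what would be passed as the `messages` argument of the chat completion request, so the same user question produces a different tone of answer depending on the persona selected on the start menu.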


Results 1


Results 2


Results 3