Student Self-Labelling Question

Designing a question to improve the AI tutor's feedback system and accuracy
Project Overview
During my co-op at Korbit Tech Inc. as a UX Designer Intern, I worked on developing one of the main types of questions, called the student self-labelling question, to improve the feedback accuracy of the AI tutor.

I looked into user data and ratings to confirm the problem space, which helped me gain insights and ideas. After developing a solution, I conducted usability and A/B testing to make sure that the solution contributed to improving the students' learning experience.

The student self-labelling question was successfully implemented; it reduced user frustration (from 14% to 2%) and increased the accuracy with which the AI tutor classifies the free-form text answers provided by students.
About Korbit Tech Inc.
Founded in 2017, Korbit Technologies is an ed-tech startup located in Montreal, QC. Backed by world-renowned scientists and scholars, including AI pioneer Dr. Yoshua Bengio, Korbit is developing a real-time, interactive, and personalized intelligent tutor for learning data science and AI.

Korbit has been recognized as one of the world’s top 100 AI startups (CB Insights) and has been featured in the New York Times, Forbes, Bloomberg, Financial Times, and National Post.
My Contributions
User interviews, usability testing, prototyping

Timeline

May to July 2021 and September to November 2021 (15 hours per week)
Instead of watching lectures and going through multiple-choice questions, students at Korbit learn with a conversational AI tutor that gives real-time feedback.

Due to this conversational aspect of the AI tutor, free-form text answers are important for student learning outcomes. They allow the AI tutor to estimate the student's knowledge level, which in turn helps the tutor adapt its feedback and curriculum dynamically to provide a more personalized learning experience.

However, the data extracted from the learning platform showed that the AI tutor often struggles to classify free-form text answers accurately, with accuracy estimated at around 86%. This caused considerable frustration for students when the tutor could not provide proper feedback.
Problem Space 🔍
who am I working for?
what is the goal?
What sets Korbit apart from other online learning platforms is that students receive AI-driven feedback on their answers to exercise problems in real time from our 24/7 available AI tutor, which keeps them progressing. That is why it is critical that Korbi, our AI tutor, provides accurate feedback.

However, our machine learning and data science team could easily identify mislabelled cases (i.e. low-confidence cases) at least 50% of the time. Additionally, collecting more data about mislabelled or misunderstood answers (i.e. free-form text answers) would also be advantageous for advancing the AI tutor's machine learning models, which aligned with one of the core objectives of the learning platform.

What if we came up with a solution that leverages the fact that mislabelled cases can be collected and used to improve the quality of Korbi's feedback?

When Korbi is unsure about a student's free-form text answer, meaning that it does not have enough data to understand that answer, why not ask the student to label their own answer?

This way, the AI tutor's feedback system avoids giving the impression that Korbi doesn't understand. Instead, Korbi presents 3 to 5 additional example answers that it already knows to be correct or incorrect, and the student is asked to label all of these answers, including their own.
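The sketch below illustrates how this triggering and mixing logic could work. The interfaces, the confidence threshold, and the function names are assumptions for illustration only, not Korbit's actual implementation.

```typescript
// Illustrative sketch only: the interfaces, the 0.5 threshold, and
// buildSelfLabellingExercise are assumptions, not Korbit's actual code.
interface ClassifiedAnswer {
  text: string;
  predictedCorrect: boolean;
  confidence: number; // classifier confidence in [0, 1]
}

interface LabelledExample {
  text: string;
  isCorrect: boolean; // ground-truth label already known to the tutor
}

interface SelfLabellingExercise {
  // Answers the student is asked to label, with their own answer mixed in.
  items: { text: string; knownLabel?: boolean }[];
}

const LOW_CONFIDENCE_THRESHOLD = 0.5; // assumed value

function buildSelfLabellingExercise(
  studentAnswer: ClassifiedAnswer,
  knownExamples: LabelledExample[],
): SelfLabellingExercise | null {
  // Only trigger the exercise when the tutor is unsure about the answer.
  if (studentAnswer.confidence >= LOW_CONFIDENCE_THRESHOLD) return null;

  // Present 3 to 5 example answers with known labels alongside the
  // student's own (unlabelled) answer, then shuffle so the student's
  // answer is not always in the same position.
  const examples = knownExamples.slice(0, 3 + Math.floor(Math.random() * 3));
  const items = [
    { text: studentAnswer.text }, // the student's own answer, label unknown
    ...examples.map((e) => ({ text: e.text, knownLabel: e.isCorrect })),
  ];
  return { items: items.sort(() => Math.random() - 0.5) };
}
```

Under this sketch, the labels the student assigns to the known examples can be checked against the ground truth, while the label they assign to their own answer can be stored as a new data point for the classifier.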
Ideation💡
what do we need?
what is our solution?
For an intuitive and straightforward user interface (UI) design, a drag-and-drop interaction was chosen to implement the student self-labelling question. It is also one of the best-known forms of direct manipulation (an interaction style in which the objects of interest in the UI are visible and can be acted upon through physical actions with immediate feedback), and it is particularly useful for grouping and moving items.

The decision to use drag-and-drop was further strengthened by the fact that it can also work as a gamification technique, creating a more engaging and rewarding learning experience.

However, the downside of drag-and-drop is that it often results in errors: the user drops an item in the wrong spot and has to start all over again. To address this, visual signifiers for grabbability were added to the prototype: (1) a subtle drop shadow and (2) grab-handle icons.
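Below is a minimal sketch of how such a drag-and-drop interaction can be wired up in the browser; the element classes and structure are assumptions for illustration, not the platform's actual code.

```typescript
// Illustrative sketch only: element IDs, classes, and markup structure
// are assumed, not taken from Korbit's implementation.
document.querySelectorAll<HTMLElement>(".answer-card").forEach((card) => {
  card.draggable = true;
  card.addEventListener("dragstart", (e) => {
    // A subtle drop shadow (via a CSS class) signals grabbability while dragging.
    card.classList.add("dragging");
    e.dataTransfer?.setData("text/plain", card.id);
  });
  card.addEventListener("dragend", () => card.classList.remove("dragging"));
});

document.querySelectorAll<HTMLElement>(".label-bin").forEach((bin) => {
  // Allow dropping by preventing the default dragover behaviour.
  bin.addEventListener("dragover", (e) => e.preventDefault());
  bin.addEventListener("drop", (e) => {
    e.preventDefault();
    const id = e.dataTransfer?.getData("text/plain");
    const card = id ? document.getElementById(id) : null;
    // Move the card into the bin; the student can re-drag it if they
    // dropped it in the wrong spot, instead of starting all over again.
    if (card) bin.appendChild(card);
  });
});
```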
Prototype ✏️
Creating wireframes
Designing UI
Low-fidelity sketches & drag-and-drop UI
High-fidelity Prototype & Sequential Storyboard
To improve the learning experience of the users and fix the problem with the AI tutor's ability to provide feedback correctly, we designed a new type of exercise that is triggered when the AI tutor is unsure whether the free-form text answer provided by the student is correct or incorrect. The student is then asked to label a set of answers, including their own, which is why the feature is called the student self-labelling question.

With the student self-labelling question, Korbi can:
  • almost always be right when it gives feedback
  • handle low confidence cases gracefully
  • avoid criticism from the students about “Korbit not understanding their answer”
  • make the student continue to think about the problem and concept being asked
  • automatically collect labels to improve the system
A clickable high-fidelity prototype can be found here.
Solution Space ✨
how does the solution work?
what value does it bring?
We gathered formative feedback from users to validate the design assumptions made in the prototype. Five user interview sessions and A/B testing were conducted with representative users who were actively learning on Korbit's learning platform.
Evaluation ✔️
is the solution working?
how can I improve it more?
Two months after the launch, it was identified that, due to the layout of the bins, the UI of the student self-labelling question was partially invisible on smaller screen sizes (i.e. 1280 x 960 and 1024 x 768), showing only the "correct" bin.

This confused users: some assumed that the correct bin was the only place where items could be dragged and dropped, while others only found out there were another two bins after they had completed the question.

With the bins stacked vertically, users on smaller devices and screen sizes were unlikely to see the other two bins unless they scrolled down.

Therefore, as a UI improvement, a new layout that presents all three bins in the same view helps users understand that there are three different bins available.
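As an illustrative sketch of that improvement (the class names, breakpoint, and styling values are assumptions), the bins can be laid out side by side so all three stay visible in a single view instead of stacking below the fold:

```typescript
// Illustrative sketch only: class names and breakpoints are assumptions,
// not Korbit's actual implementation.
const binContainer = document.querySelector<HTMLElement>(".label-bins");

function applyBinLayout(): void {
  if (!binContainer) return;
  // Keep all three bins in one view: lay them out side by side and let
  // them shrink on small screens instead of stacking below the fold.
  const compact = window.matchMedia("(max-width: 1280px)").matches;
  binContainer.style.display = "flex";
  binContainer.style.flexDirection = "row";
  binContainer.style.gap = compact ? "8px" : "16px";
  for (const bin of Array.from(binContainer.children) as HTMLElement[]) {
    bin.style.flex = "1 1 0"; // equal widths, all visible at once
    bin.style.minWidth = "0";
  }
}

window.addEventListener("resize", applyBinLayout);
applyBinLayout();
```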
Iterations 🛠
is the solution working?
how can I improve it more?
2nd iteration to address a UI issue
As the UI and design system of the learning platform were revamped, the student self-labelling question also needed a UI update to match the new chat interface and design system.

At the same time, because drag-and-drop is inherently a tricky physical interaction, it was pointed out that an alternative interaction could be beneficial for users. The three bins also seemed to occupy an unnecessary amount of space in the layout. Therefore, a different type of interaction component and design was needed in order to make better use of the limited space available on the screen.

After a few brainstorming sessions, checkboxes were chosen for the third iteration to replace the drag-and-drop interaction and the bins, as checkboxes are one of the most common UI controls for making selections. They are well understood and quickly adopted by users while taking up much less space in the layout. However, this is still an assumption, which will be tested and confirmed in the upcoming user interviews and usability testing.
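Below is a minimal sketch of what a checkbox-based version could look like; the data shape, rendering details, and the convention that a checked box means "correct" are assumptions rather than the shipped design.

```typescript
// Illustrative sketch only: how a checkbox-based self-labelling question
// might collect the same labels as the drag-and-drop bins.
interface AnswerItem {
  id: string;
  text: string;
}

// Render each candidate answer with a checkbox; checking it marks the
// answer as "correct", leaving it unchecked marks it as "incorrect".
function renderSelfLabellingQuestion(
  container: HTMLElement,
  answers: AnswerItem[],
): void {
  for (const answer of answers) {
    const label = document.createElement("label");
    const checkbox = document.createElement("input");
    checkbox.type = "checkbox";
    checkbox.value = answer.id;
    label.append(checkbox, document.createTextNode(answer.text));
    container.appendChild(label);
  }
}

// Collect the student's labels when they submit the question.
function collectLabels(container: HTMLElement): Record<string, boolean> {
  const labels: Record<string, boolean> = {};
  container
    .querySelectorAll<HTMLInputElement>("input[type=checkbox]")
    .forEach((box) => {
      labels[box.value] = box.checked;
    });
  return labels;
}
```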
Throughout planning and developing the student self-labelling question, I was very fortunate to work on a feature that is part of a platform used by more than 20,000 users. It was one of the first times that I could see whether users liked the product I developed, both by talking to them and by looking into the data.

Although I already had some knowledge and experience in UX design, such as conducting user research and creating prototypes, I had yet to understand the full process of how prototypes are developed into final products until I worked on this project. I also learned how to run an A/B test and how this type of usability testing can validate designs for the best results. Finally, by creating design mockups and prototypes across multiple screen sizes, I was able to cultivate my skills in designing user interfaces.

As Korbit is rapidly growing, its structures and procedures are constantly evolving, which allows the company to adapt to changes and new directions quickly and efficiently. However, in my first weeks, I found it confusing when the plan and timeline of a project suddenly changed. Sometimes it was necessary to start over from the beginning, changing most of the designs I had already finished.

At first, it was rather frustrating to see that my work would not be developed further despite all the confirmations and positive results, but I now believe it was a valuable lesson: it is important to be ready to change the plans and direction of a project as needed, even after investing a lot of time and resources in it.

Last but not least, I got to work with many team members across the company, such as front-end developers, the machine learning team, and learning content creators, which taught me that UX is a highly collaborative process where communication and collaboration are the key ingredients of a successful product.
Takeaways 🧩
what did I learn and realize?
what can I improve next time?