
Nov 04 2011 - 11:58 AM
Assessing Student Mastery with Machine Learning
In their campaign to become the dominant force in online math learning environments, Khan Academy has recently announced some sweeping changes to the way they evaluate student mastery. Fundamental as it may seem, the simple act of identifying when a student has gained a complete understanding of a given concept has proven to be something of a subtle art. I would personally go so far as to argue that no learning app I've used (in my admittedly meager experience) has ever exhibited any real capacity to judge student mastery. However, the ability to make a binary decision about whether a student has mastered a lesson will almost certainly become a cardinal feature of any modern learning app.

Previously, Khan Academy worked with a streak-based mastery model. Users would answer questions about a concept until they got ten in a row right. When their streak hit ten, they were considered to have mastered the concept. This model was terrible. A simple typo could take you from a streak of nine right answers all the way down to zero, complete ignorance of the concept in the eyes of the application. It felt mechanical, rote, and downright annoying, completely betraying the artificiality of the user's judge. Worse still, it FELT dumb. When users entrust a piece of software with their education, they'd like to be able to believe that the software is smarter than they are, at least in its own domain of operation. With a transparent mastery scheme like "get ten right to move on," the user quickly realizes that they're a better judge of their mastery than the software, defeating the point of having the mastery metric in the first place.

Khan Academy has wisely recognized this weakness as one of the most immediate flaws in their model. This article explains Khan Academy's new evaluation system in gory detail.
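To make the brittleness concrete, the old streak rule can be sketched in a few lines of Python. This is an illustrative reconstruction of the behavior described above, not Khan Academy's actual code, and the function name and threshold are my own:

```python
# Sketch of the old streak-based mastery rule: ten in a row means mastery,
# and any wrong answer (even a typo) resets the streak to zero.

def update_streak(streak, answered_correctly, target=10):
    """Return (new_streak, mastered) after one answer."""
    streak = streak + 1 if answered_correctly else 0
    return streak, streak >= target

# One slip at a streak of nine wipes out all recorded progress:
streak, mastered = update_streak(9, answered_correctly=False)
# streak is now 0 and mastered is False, as if the student knew nothing
```

The reset-to-zero step is exactly what made the scheme feel so punishing: the model retains no memory of the nine correct answers that preceded the mistake.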
When selecting this system, developers formed a pool of multiple different decision algorithms and parameters, then ran statistical tests on the resulting models to decide which maximized the value of their central mastery metric: the probability that the user gets the next question right, given that they have mastered the concept. One might even say that they used machine learning to learn their learning algorithm. For those who are statistically inclined, the champion model is based on logistic regression. Their testing so far indicates that the new model is far and away superior to the old. Keep an eye on Khan Academy in the upcoming weeks: if they weren't ready for primetime before, they're certainly one step closer now.
Posted in: Technology | By: Stephen Pratt | 1753 Reads