My analysis of Khan Academy’s eighth-grade curriculum was viewed ~20,000 times over the last ten days. Several math practice websites have asked me to perform a similar analysis on their own products. All of this gives me hope that my doctoral work may be interesting to people outside my small crowd at Stanford.
Two follow-up notes, including the simplest way Khan Academy can improve itself:
One. Several Khan Academy employees have commented on the analysis, both here and at Hacker News.
Justin Helps, a content specialist, confirmed one of my hypotheses about Khan Academy:
One contributor to the prevalence of numerical and multiple choice responses on KA is that those were the tools readily available to us when we began writing content. Our set of tools continues to grow, but it takes time for our relatively small content team to rewrite item sets to utilize those new tools.
But as another commenter pointed out, if the Smarter Balanced Assessment Consortium can make interesting computerized items, what’s stopping Khan Academy? Which team is the bottleneck: the software developers or the content specialists? (They’re hiring!)
Two. In my mind, Khan Academy could do one simple thing to improve itself several times over:
Ask questions that computers don’t grade.
A computer graded my responses to every single question in eighth grade.
That means I was never asked, “Why?” or “How do you know?” Those are seriously important questions, but computers can’t grade them, so Khan Academy didn’t ask them.
At one point, I was even asked how m and b (of y = mx + b fame) affected the slope and y-intercept of a graph. It’s a fine question, but there was no place to answer it. After all, how would the computer know if I was right?
So if a Khan Academy student is linked to a coach, make a space for an answer. Send the student’s answer to the coach. Let the coach grade or ignore it. Don’t try to do any fancy natural language processing. Just send the response along. Let the human offer feedback where computers can’t. In fact, allow all the proficiency ratings to be overridden by human coaches.
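To make the proposal concrete, here’s a minimal sketch of what that plumbing might look like. Everything here is invented for illustration — the names `GradeBook`, `submit_free_response`, and so on are not Khan Academy’s actual API — but it shows how little machinery the idea requires: no natural language processing, just a forwarded answer and a human override that outranks the machine’s rating.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FreeResponse:
    """A student's answer to a 'Why?' or 'How do you know?' question."""
    student: str
    question: str
    answer: str
    coach_grade: Optional[str] = None  # stays None if the coach ignores it

@dataclass
class GradeBook:
    # Computer-assigned proficiency, keyed by (student, skill).
    machine_ratings: dict = field(default_factory=dict)
    # Human overrides take precedence whenever present.
    coach_overrides: dict = field(default_factory=dict)
    # The coach's inbox of ungraded free responses.
    inbox: list = field(default_factory=list)

    def submit_free_response(self, response: FreeResponse) -> None:
        """No fancy NLP: just forward the raw answer to the coach."""
        self.inbox.append(response)

    def coach_review(self, response: FreeResponse, grade: Optional[str]) -> None:
        """The coach may grade the response or ignore it (grade=None)."""
        response.coach_grade = grade

    def override_proficiency(self, student: str, skill: str, rating: str) -> None:
        """Let a human coach override the computerized proficiency rating."""
        self.coach_overrides[(student, skill)] = rating

    def proficiency(self, student: str, skill: str) -> str:
        """Human judgment wins; fall back to the machine's rating."""
        key = (student, skill)
        return self.coach_overrides.get(key, self.machine_ratings.get(key, "unrated"))
```

In use, the student’s answer just lands in the coach’s inbox, and the coach’s rating, if they choose to give one, outranks the computer’s:

```python
book = GradeBook()
book.machine_ratings[("dan", "slope")] = "practiced"
book.submit_free_response(
    FreeResponse("dan", "How does m affect the graph?", "It tilts the line.")
)
book.override_proficiency("dan", "slope", "proficient")
book.proficiency("dan", "slope")  # the human's "proficient" now wins
```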
Khan Academy does loads of A/B testing, right? So A/B test this. See if teachers appreciate the clearer picture of what their students know or if they prefer the easier computerized assessment. I can see it going either way, though my own preference is clear.