SRI’s report on Khan Academy usage, released earlier this month, has the potential to make us all a lot wiser. They studied Khan Academy use at nine sites over two years, recording field notes, survey results, usage logs, and achievement measures, all well-specified in an 82-page implementation report and summarized in a shorter briefing. Their report has sharpened some of my concerns about the use of Khan Academy in math classrooms while blunting others.
First, there is irony to be found in SRI’s reporting of usage rather than efficacy. The Gates Foundation underwrote the SRI report, and while Gates endorses value-added models of teacher quality, it doesn’t extend the same scrutiny to its portfolio company here. After reading SRI’s report, though, I’m convinced this exploratory study was the right study to run. SRI found enormous variation in Khan Academy use across the nine sites. We gain a great deal of insight from their study of that variation, and we’d be much poorer had they chosen to study one model exclusively.
SRI found some results that are favorable to the work of Khan Academy. Others are unfavorable, and some seem to contradict each other. You can find many of the favorable results summarized at Khan Academy’s blog. I intend to summarize, instead, the concerns and questions the SRI report raises.
It isn’t clear which students benefit from Khan Academy.
Over the two years of the study, 74% of teachers (63 teachers in SY 2011-12 and 60 teachers in SY 2012-13) said Khan Academy was “very effective” at meeting the learning needs of “students whose academic work is ahead of most students their age.” Meanwhile, only 25% of teachers gave Khan Academy the same rating for students who are behind most students their age.
One teacher reports that “the same students who struggled in her classroom before the introduction of Khan Academy also struggled to make progress in Khan Academy.” She adds that those students “were less engaged and less productive with their time on Khan Academy [than their peers].”
Participating teachers don’t hold out much hope that Khan Academy can close an achievement gap directly, though they do seem to think it enhances the learning opportunities of advanced learners.
But that hypothesis is contradicted by the surveys from Site 1, which SRI says “had some of the highest test scores in the state [of California], even when compared with other advantaged districts.” In question after question regarding Khan Academy’s impact on student learning, Site 1 teachers rated it lower than teachers at the other, less-advantaged sites in the study. For example, 21% of Site 1 teachers reported that Khan Academy had “no impact” on “students’ learning and understanding of the material.” No teachers from the less-advantaged sites shared that rating.
SRI writes: “Whatever the reason, teachers in sites other than Site 1 clearly found greater value in their use of Khan Academy to support their overall instruction.” SRI is strangely incurious about that reason. Until we learn more, we should file this report alongside notices of Udacity’s struggles to serve lower-achieving students in its 2013 pilot course with San Jose State University. The two struggles are likely related.
Khan Academy use is negatively associated with math interest.
Let me quickly clarify that a) Khan Academy use was positively associated with anxiety reduction, self-concept, and self-efficacy, b) all of these non-achievement measures are correlational, not causal, and c) the negative association with interest isn’t statistically significant.
But I’m calling out this statistically insignificant, non-causal negative association between Khan Academy use and interest in math because that measure matters enormously to me (as someone with a lot of interest in math) and its downward direction should concern us all. It’s entirely possible to get very good at something while wishing never to have anything to do with it again. We need to protect against that possibility.
Teachers don’t use the videos.
While Khan Academy’s videos get lots of views outside of formal school environments, “more than half the teachers in SY 2011-12 and nearly three-quarters in SY 2012-13 reported on the survey that they rarely or never used Khan Academy videos to support their instruction.”
One teacher explains: “Kids like to get the interaction with me. Sal is great at explaining things, but you can’t stop and ask questions, which is something these kids thrive on.”
Khan Academy seems to understand this and has recently tried to shift focus from its videos to its exercises. In a recent interview with EdSurge, Sal Khan explains this shift as a return to roots. “The original platform was a focus on interactive exercises,” he says, “and the videos were a complement to that.”
Elizabeth Slavitt, Khan Academy’s math content lead, shifts focus in a similar direction. “For us, our goal isn’t necessarily that Khan introduces new concepts to students. We want to give practice.”
Khan Academy is moving the goalposts here, but we should all welcome the move. In his TED talk, in his 60 Minutes interview, and in my own experiences working with their implementation team, Khan Academy’s expressed intent was for students to learn new concepts by watching the video lectures first. Only 10% of the teachers in SY 2012-13 said that “Khan Academy played a role in introducing new concepts.” Khan Academy seems to have received this signal and has aligned its rhetoric with reality.
The exercises are Khan Academy’s core classroom feature, but teachers don’t check to see how well students perform them.
73% of teachers in SY 2012-13 said “Khan Academy played its greatest role by providing students with practice opportunities.” Over both years of the study, SRI found that 85% of all the time students spent on Khan Academy was spent on exercises.
Given this endorsement of the exercises, SRI’s strangest finding is that 59% of SY 2012-13 teachers checked Khan Academy’s reports on those exercises “once a month or less or not at all.” If teachers find the exercises valuable but don’t check to see how well students perform them, what value do the exercises add? Students have a word for work their teachers assign and don’t check. Are Khan Academy’s exercises more than busywork?
SRI quotes one teacher who says the exercises are valuable as a self-assessment tool for students. Another teacher cites the immediate feedback students receive from the exercises as the “most important benefit of using Khan Academy.” But at Site 2, SRI found “the teachers did not use the Khan Academy reports to monitor progress,” electing instead to use their own assessments of student achievement.
SRI’s report is remarkably incurious about this difference between the value teachers perceive in a) the exercises and b) the reports on those exercises, leaving me to speculate:
Students work on individualized material, exercises that aren’t above their level of expertise. They find out immediately how well they’re doing, so they get stuck less often. That makes classroom management easier for teachers, and that’s valuable. But in the same way that teachers prefer their own lectures to Khan’s videos, they prefer their own assessments to Khan’s reports.
One hypothesis here is that teachers are simply clinging to their tenured positions, refusing to give way to the obvious superiority of computers. My alternative hypothesis is that teachers simply know better: that computers aren’t a natural medium for lots of math, and that teacher lectures and assessments have many advantages over Khan Academy’s lectures and assessments. In particular, handwritten student work reveals much about student learning that Khan Academy’s structured inputs and colored boxes conceal.
My hypothesis that teachers don’t trust Khan Academy’s assessment of student mastery is, of course, extremely easy to test. Just ask all the participating teachers something like, “When Khan Academy indicates a student has attained mastery on a given concept, how does your assessment of the student’s mastery typically compare?”
Which it turns out SRI already did.
Unfortunately, SRI didn’t report those results. As of this posting, SRI hasn’t responded to my request for comment.
Conclusion
It isn’t surprising to me that teachers would prefer their own lectures to Khan Academy’s. Their lectures can be more conversational, more timely, and better tailored to their students’ specific questions. I’m happy those videos exist for the sake of students who lack access to capable math teachers, but that doesn’t describe the majority of students in formal school environments.
I’m relieved, then, to read Elizabeth Slavitt’s claim that Khan Academy doesn’t intend any longer for its video lectures to introduce new concepts to students. Slavitt’s statement dials down my anxiety about Khan Academy considerably.
SRI minimizes Khan Academy’s maximal claims to a “world-class education,” but Khan Academy clearly has a lot of potential as self-paced math practice software. It’s troubling that so many teachers don’t bother to check that software’s results, but Khan Academy is well-resourced, and lately it has expanded its pool of collaborators to include more math teachers, along with the Illustrative Mathematics team. Some of the resulting Common Core exercises are quite effective, and I expect more fruit from that partnership in the future.
But math practice software is a crowded field and, for totally subjective reasons, not one that interests me all that much. I wish Khan Academy well but going forward I suspect I’ll have as much to say about them as I do about Cognitive Tutor, TenMarks, ALEKS, ST Math, and others, which is to say, not all that much.
BTW. Read other smart takes on SRI’s report: