SRI’s report on Khan Academy usage, released earlier this month, has the potential to make us all a lot wiser. They studied Khan Academy use at nine sites over two years, recording field notes, survey results, usage logs, and achievement measures, all well-specified in an 82-page implementation report and summarized in a shorter briefing. Their report has sharpened some of my concerns about the use of Khan Academy in math classrooms while blunting others.
First, there is irony to be found in SRI’s reporting of usage rather than efficacy. The Gates Foundation underwrote the SRI report, and while Gates endorses value-added models of teacher quality, it doesn’t extend the same scrutiny to its portfolio company here. After reading SRI’s report, though, I’m convinced this exploratory study was the right study to run. SRI found enormous variation in Khan Academy use across the nine sites. We gain a great deal of insight from their study of that variation, and we’d be much poorer had they chosen to study one model exclusively.
SRI found some results that are favorable to the work of Khan Academy. Others are unfavorable, and some seem to contradict each other. You can find many of the favorable results summarized at Khan Academy’s blog. I intend to summarize, instead, the concerns and questions the SRI report raises.
It isn’t clear which students benefit from Khan Academy.
Over the two years of the study, 74% of teachers (63 teachers in SY 2011-12 and 60 teachers in SY 2012-13) said Khan Academy was “very effective” at meeting the learning needs of “students whose academic work is ahead of most students their age.” Meanwhile, only 25% of teachers gave Khan Academy the same rating for students who are behind most students their age.
One teacher reports that “the same students who struggled in her classroom before the introduction of Khan Academy also struggled to make progress in Khan Academy.” She adds that those students “were less engaged and less productive with their time on Khan Academy [than their peers].”
Participating teachers don’t seem to have a great deal of hope that Khan Academy can close an achievement gap directly, though they seem to think it enhances the learning opportunities of advanced learners.
But that hypothesis is contradicted by the surveys from Site 1, a site that SRI states “had some of the highest test scores in the state [of California], even when compared with other advantaged districts.” In question after question regarding Khan Academy’s impact on student learning, Site 1 teachers issued lower ratings than teachers at the study’s less-advantaged sites. For example, 21% of Site 1 teachers reported that Khan Academy had “no impact” on “students’ learning and understanding of the material.” No teachers from the less-advantaged sites shared that rating.
SRI writes: “Whatever the reason, teachers in sites other than Site 1 clearly found greater value in their use of Khan Academy to support their overall instruction.” SRI is strangely incurious about that reason. Until further revelation there, we should file this report alongside notices of Udacity’s struggles in serving the needs of lower-achieving students in their pilot course with San Jose State University in 2013. Their struggles likely relate.
Khan Academy use is negatively associated with math interest.
Let me quickly clarify that a) Khan Academy use was positively associated with anxiety reduction, self-concept, and self-efficacy, b) all of these non-achievement measures are correlational, not causal, and c) the negative association with interest isn’t statistically significant.
But I’m calling out this statistically insignificant, non-causal negative association between Khan Academy and interest in math because that measure matters enormously to me (as someone who has a lot of interest in math) and its downward direction should concern us all. It’s very possible to get very good at something while simultaneously wishing to have nothing to do with that thing ever again. We need to protect against that possibility.
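To make that distinction concrete, here’s a minimal sketch, with invented data, of how a correlation and its significance get computed. Nothing below is SRI’s data or analysis; the numbers exist only to illustrate what “negative but not statistically significant” looks like.

```python
# A minimal sketch, with invented data, of a negative association that
# isn't statistically significant. These numbers are NOT SRI's data;
# they exist only to illustrate the computation.
from scipy import stats

usage_hours = [2, 5, 1, 8, 3, 6, 4, 7]               # hypothetical time on Khan Academy
interest = [3.9, 3.2, 3.5, 3.4, 4.1, 3.0, 3.8, 3.6]  # hypothetical interest-survey scores

r, p = stats.pearsonr(usage_hours, interest)
print(f"r = {r:.2f}, p = {p:.2f}")  # r is about -0.49, p is about 0.22

# r < 0 says interest trends downward as usage rises, but with p > 0.05
# we can't rule out chance. And even a significant r wouldn't be causal.
```

A downward-sloping r with a p-value that large is a caution flag, not a verdict, which is how I’m treating SRI’s result here.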
Teachers don’t use the videos.
While Khan Academy’s videos get lots of views outside of formal school environments, “more than half the teachers in SY 2011-12 and nearly three-quarters in SY 2012-13 reported on the survey that they rarely or never used Khan Academy videos to support their instruction.”
One teacher explains: “Kids like to get the interaction with me. Sal is great at explaining things, but you can’t stop and ask questions, which is something these kids thrive on.”
Khan Academy seems to understand this and has recently tried to shift focus from its videos to its exercises. In a recent interview with EdSurge, Sal Khan explains this shift as a return to roots. “The original platform was a focus on interactive exercises,” he says, “and the videos were a complement to that.”
Elizabeth Slavitt, Khan Academy’s math content lead, shifts focus in a similar direction. “For us, our goal isn’t necessarily that Khan introduces new concepts to students. We want to give practice.”
Khan Academy is shifting its goal posts here, but we should all welcome that shift. In Sal Khan’s TED talk, in his 60 Minutes interview, and in my own experience working with their implementation team, Khan Academy’s expressed intent was for students to learn new concepts by watching the video lectures first. Only 10% of the teachers in SY 2012-13 said that “Khan Academy played a role in introducing new concepts.” Khan Academy seems to have received this signal and has aligned their rhetoric to reflect reality.
The exercises are Khan Academy’s core classroom feature, but teachers don’t check to see how well students perform them.
73% of teachers in SY 2012-13 said “Khan Academy played its greatest role by providing students with practice opportunities.” Over both years of the study, SRI found that 85% of all the time students spent on Khan Academy was spent on exercises.
Given this endorsement of exercises, SRI’s strangest finding is that 59% of SY 2012-13 teachers checked Khan Academy reports on those exercises “once a month or less or not at all.” If teachers find the exercises valuable but don’t check to see how well students are performing them, what’s their value? Students have a word for work their teachers assign and don’t check. Are Khan Academy’s exercises more than busywork?
SRI quotes one teacher who says the exercises are valuable as a self-assessment tool for students. Another teacher cites the immediate feedback students receive from the exercises as the “most important benefit of using Khan Academy.” But at Site 2, SRI found “the teachers did not use the Khan Academy reports to monitor progress,” electing instead to use their own assessments of student achievement.
SRI’s report is remarkably incurious about this difference between the value teachers perceive of a) the exercises and b) the reports on the exercises, leaving me to speculate:
Students are working on individualized material, exercises that aren’t above their level of expertise. They find out immediately how well they’re doing, so they get stuck less often on those exercises. That makes classroom management easier for teachers, which is valuable. But in the same way that teachers prefer their own lectures to Khan’s videos, they prefer their own assessments to Khan’s reports.
One hypothesis here is that teachers are simply clinging to their tenured positions, refusing to give way to the obvious superiority of computers. My alternative hypothesis is that teachers simply know better: computers aren’t a natural medium for lots of math, and teacher lectures and assessments have lots of advantages over Khan Academy’s lectures and assessments. In particular, handwritten student work reveals much about student learning that Khan Academy’s structured inputs and colored boxes conceal.
My hypothesis that teachers don’t trust Khan Academy’s assessment of student mastery is, of course, extremely easy to test. Just ask all the participating teachers something like, “When Khan Academy indicates a student has attained mastery on a given concept, how does your assessment of the student’s mastery typically compare?”
Which it turns out SRI already did.
Unfortunately, SRI didn’t report those results. As of this posting, SRI hasn’t responded to my request for comment.
Conclusion
It isn’t surprising to me that teachers would prefer their own lectures to Khan Academy’s. Their lectures can be more conversational, more timely, and better tailored to their students’ specific questions. I’m happy those videos exist for the sake of students who lack access to capable math teachers but that doesn’t describe the majority of students in formal school environments.
I’m relieved, then, to read Elizabeth Slavitt’s claim that Khan Academy doesn’t intend any longer for its video lectures to introduce new concepts to students. Slavitt’s statement dials down my anxiety about Khan Academy considerably.
SRI minimizes Khan Academy’s maximal claims to a “world-class education,” but Khan Academy clearly has a lot of potential as self-paced math practice software. It’s troubling that so many teachers don’t bother to check that software’s results, but Khan Academy is well-resourced and lately they’ve expanded their pool of collaborators to include more math teachers, along with the Illustrative Mathematics team. Some of the resulting Common Core exercises are quite effective and I expect more fruit from that partnership in the future.
But math practice software is a crowded field and, for totally subjective reasons, not one that interests me all that much. I wish Khan Academy well but going forward I suspect I’ll have as much to say about them as I do about Cognitive Tutor, TenMarks, ALEKS, ST Math, and others, which is to say, not all that much.
BTW. Read other smart takes on SRI’s report:
19 Comments
Ken Tilton
March 31, 2014 - 11:58 am
I was surprised but pleased by the news that teachers do not dive into the reports that practice software generates on student activity.
As a developer of one such tool myself, I am much more interested in students driving the process, for in the end, if they are just so many passive cattle to be herded unwillingly… Besides, I hope the teacher is now using the time freed up by practice software to cruise the room, working intimately with students, such that they do not need the reports.
I started working on reports for one teacher, by the way, and although I knock off such stuff in my sleep as a corporate programmer, doing so for my student-centered app was depressing.
We seem to have left compulsion behind in schooling. Great! Now it is up to the students to make the effort. Interesting that that is what Reich highlighted from the study, though disparagingly: effort leads to success.
Steve Leinwand
March 31, 2014 - 1:25 pm
Thanks for a very fair, helpful and insightful review of the SRI report. There are many takeaways for all of us trying to get blended learning right in both the report and your review. Thanks!
timfc
March 31, 2014 - 2:11 pm
Let me offer a slightly more pessimistic reason that teachers don’t use KA’s dashboard:
They don’t really look at student exercises in general, and this usage of KA is consistent with their daily practice with written assignments.
If it’s not part of your normal practice, why would you make an effort to do something different?
Kevin Hall
March 31, 2014 - 6:46 pm
The reason most teachers didn’t look at the reports is that they were almost completely useless during the period of the study, SY 2011-12 and SY 2012-13. During that time, KA’s method of determining student proficiency in a skill was fundamentally flawed. Here’s an example of the amount of progress (shown in green on the progress bar) a student would have earned for getting 7 questions in a row correct and then making a mistake on the last question.
It was REALLY hard to earn proficiency in a skill until this year, which simultaneously de-motivated students who made occasional mistakes and meant that teachers had no real information on who was proficient in what. This year, KA has a new system that only requires 3-5 correct answers in a row, plus accuracy on delayed retention quizzes (“mastery challenges”), and I find that students and I look at the reports much more often.
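To make the difference concrete, here’s a rough sketch of the two rules as I understand them. The streak lengths come from my description above; the function names and thresholds are otherwise hypothetical, not KA’s actual code.

```python
# A rough sketch of the old and new proficiency rules described above.
# Thresholds and structure are hypothetical, not Khan Academy's code.

def proficient_old(answers, streak_needed=8):
    """Old model: one long unbroken streak of correct answers.
    A single mistake resets the streak, wiping out visible progress."""
    streak = 0
    for correct in answers:
        streak = streak + 1 if correct else 0
        if streak >= streak_needed:
            return True
    return False

def proficient_new(answers, mastery_challenges, streak_needed=4):
    """Newer model: a shorter streak (3-5 correct in a row) plus
    accuracy on delayed-retention 'mastery challenges'."""
    streak = 0
    streak_met = False
    for correct in answers:
        streak = streak + 1 if correct else 0
        if streak >= streak_needed:
            streak_met = True
    retained = bool(mastery_challenges) and all(mastery_challenges)
    return streak_met and retained

# Seven right, then one wrong: no proficiency under the old rule, but the
# newer rule credits the earlier streak as long as retention holds up.
answers = [True] * 7 + [False]
print(proficient_old(answers))                # False
print(proficient_new(answers, [True, True]))  # True
```

That resetting behavior is why students who made occasional mistakes found the old progress bar so discouraging.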
Andy Mitchell
April 1, 2014 - 4:19 am
I passed along a link to this analysis to the president of our district’s Board of Education. He’s a former teacher and a very smart guy.
A balanced review of the good, bad, and ugly of any well-resourced education initiative is very important.
All things in moderation!
Denis Roarty
April 1, 2014 - 5:10 am
So, Khan Academy aside, one aspect of the results intrigues me. If I am reading this correctly, any self-paced, self-help type of software seems to benefit only a certain type of student, one who is motivated (for a variety of reasons) and capable of learning in this way. And those students tend to be in the upper quartiles of student performance.
Dan, you mentioned “interest” as an important factor here, and your blog has been great about putting the onus of developing student “interest” on teachers. I wonder if there are other aspects of this gap between the self-helpers and the non-self-helpers that are worth tending to… not so they can help Khan’s survey numbers, but so that students are in a position to help themselves independently.
Sammy Lindgren
April 1, 2014 - 7:07 am
I appreciate your thoughtful reading and synthesis of SRI’s report. I share the same concerns, and am also relieved that Khan Academy has shifted its goals and rhetoric toward practice, not concept introduction.
I’ve reblogged your post at:
http://blog.mste.illinois.edu
Andrew Coulson
April 1, 2014 - 10:04 am
Not all digital math content is just for practice, by which I mean parsing sentences, recognizing a problem type from a lecture or book, and then recalling and repeating memorized standard procedures, again as shown by lecture and book.
Like you, I agree that this is not of great interest. Nothing transformational can come of it.
However, I work at MIND Research Institute, which publishes ST Math, so I know how one can design to get past “just practice”. I know how one can design a rigorous math learning environment which, with the help of a teacher, defies meaningless memorization, and which motivates students who have neither interest in math per se nor confidence in their abilities. This also benefits the high-scoring students who excel at memorization and too often don’t mind that math is ‘just’ meaningless recognitions and procedures.
I invite you to take another look at ST Math’s conceptual, non-language-based, visual, interactive puzzles design, ideally in a classroom and through the eyes of a teacher.
Dan Meyer
April 2, 2014 - 4:22 am
@Kevin Hall, thanks for passing along your perspective. I’m curious if Khan’s newer mastery algorithm has resulted in more usage across the board or just in your case. The assessments themselves haven’t changed, to the best of my knowledge, which also influences a teacher’s trust in Khan’s evaluations. In any case, these hypotheses can and should be tested.
Andrew Coulson:
Not what I said. I’m specifically referring to “math practice software” here, not “all digital math content.”
Ken Tilton
April 2, 2014 - 6:06 am
[Disclaimer: I certainly do have a horse in the crowded practice field.]
Wow, “practice” sure is taking a beating in this blog post and thread. :) And it sounds like it is being equated with memorization. If I see (a^2 - b^2) and without thinking also see its factorization, (a - b)(a + b), that is a math skill built up over long practice. That is not recall from memory.
Anyone adept at algebra is not sitting there summoning up rules from some memorized lookup table; they are simply recognizing patterns and applying the rule for transforming each pattern.
How is the right rule associated with the right pattern? How are patterns even recognized as the concrete terms vary? The same way one gets to Carnegie Hall: practice, practice, practice.
Prior to that we need an engaging teacher who loves math for itself, and instruction that goes deep, deriving rules from more fundamental rules. But then we need practice at anything we wish to learn that does not yield to simple rote learning.
If my horse does not fare well in the practice derby, I’ll live. What concerns me here is the prospect of our long-suffering young (beginning with me and others who endured the Sputnik-inspired New Math) being short-changed by educators who would eschew practice, thinking practice must be unpleasant.
In fact, again, anyone who did well in Algebra enjoyed every problem for what it was: a transformation puzzle.
Kids do not need less practice, they need better instruction and, as the study noted, immediate feedback when they do the practice. Looks like Khan understands that now so I better get back to work. :)
Andrew Coulson
April 2, 2014 - 9:43 am
@Ken Tilton – good points! The nuances of practice merit a discussion, for sure. Oft heard about practice: Practice doesn’t make Perfect, Perfect Practice makes Perfect.
Those few adept at algebra are practicing in a different way, I posit, than those many just struggling to get through math without failing. It is of course possible to have students productively practice solving thousands of problems per year, many of them very challenging, with quality instructional design of the digital learning environment and content. On ST Math, we saw thousands of students firing up their school-based digital math homework on new iPads on Dec 25th to ‘play’.
Software can be good for good math practice. I agree that lots of good math practice is a requirement for math literacy. As @Dan Meyer notes in his comment above, digital math content also extends beyond practice, of course. And that i.m.o. is a very interesting place to look for how digital content can help bring about transformative outcomes in student learning.
Marie Bjerede
April 6, 2014 - 7:30 pm
I’ve found Khan to be a great reference book in homeschooling. When we (my kids and I) want to remember or figure out some particular thing we already have context for (wait… how do torque vectors work again?), it is a thing of beauty.
To learn something new from it, or from ANY OTHER VIDEO/ONLINE INDEPENDENT STUDY TOOL… we start the video (or text or multimedia, whatever), run it until the problem is posted, then stop the video and THINK. Maybe for a minute, maybe for a day, maybe for a week. Once we appreciate the problem (and probably have some understanding of the solution), listening to a clean top-down description is kind of cool and helpful.
So yeah, helpful. Transformational? Not so much, but really, that doesn’t stop us from using it. It is better than what we were using before. (Not as good as AoPS, ST Math, or Wuzzit Trouble, though.)
Kelly
April 9, 2014 - 8:41 pm
I love Khan Academy. I teach advanced honors 6th-grade math. Many of my students come in already having mastered 6th-grade math content and beyond. I have found Khan Academy to be supremely beneficial in challenging these students to delve more deeply into concepts far beyond their grade level. Recently, my students completed the math MAP (Measures of Academic Progress) test. The two students who made the most growth over the course of our school year were the two who devoted the most time to Khan Academy outside of school. I do not use the video tutorials for instruction, nor do I check to see who has mastered what content. I see it as a self-paced, extra-practice, remediation, reinforcement, and enrichment activity. I am very thankful for this site.
Ian
July 19, 2014 - 3:00 am
It is no surprise that students who are already performing at a higher level benefit the most from this type of ‘instruction’; after all, they are more likely to be self-motivated, self-directed learners. To these students, the Khan Academy is therefore ‘another resource’. In its absence, they would likely seek out others without prompting.
So a valid issue remains: how do you engage the under-performers? That is far more difficult, as they typically lack motivation. Also, high achievers almost always get instant rewards, while under-performers tend to struggle along behind, often with negative reinforcement more than offsetting ‘rewards’.
With regard to the latter, I also dislike the apparent need for the ‘gamification’ of these things. Yes, I understand incentives for achievement, but is this what we are really deriving here? I fully understand comments from teachers who say that students race through the tests racking up badges without really absorbing very much. The game tokens thus become more important than the real objective.
Much remains to be done in the field of online tuition, therefore.