First, Michael Goldstein:

Khan Academy alone gives the following information: time spent per day on each skill, total time spent each day, time spent on each video, time spent on each practice module, level of mastery of each skill, which ‘badges’ have been earned, a graph of skills completed over number of days working on the site, and a graphic showing the total percentage of time spent by video and by skill.

Second, Jose Ferreira, CEO of Knewton:

So Knewton and any platform built on Knewton can figure out things like, “You learn math best in the morning between 8:32 and 9:14 AM. You learn science best in 40-minute bite-sizes. At the 42-minute mark your clickrate always begins to decline. We should pull that and move you to something else to keep you engaged. That thirty-five minute burst you do at lunch every day? You’re not retaining any of that. Just hang out with your friends and do that stuff in the afternoon instead when you learn better.”

I don’t have a lot of hope for a system that sees learning largely as a function of time or time of day, rather than as a function of good instruction and rich tasks. It isn’t useless. But it’s the wrong diagnosis. For instance, if a student’s clickrate on multiple-choice items declines at 9:14 AM, one option is to tell her to click multiple-choice items later. Another is to give her more to do than click multiple-choice items.

These systems report so much time data because time is easy for them to measure. But what’s easy to measure and what’s useful to a learner aren’t necessarily the same thing. What the learner would really like to know is, “What do I know and what don’t I know about what I’m trying to learn here?” And adaptive math systems have contributed very little to our understanding of that question.

For example, a student solves “x/2 + x/6 = 2” and answers “48,” incorrectly. How does your system help that student, apart from a) recommending another time in the day for her to do the same question or b) recommending a lecture video for her to watch, pause, and rewind?
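An editorial aside, not part of the original post: the student’s answer fails a direct substitution check, and the correct solution is x = 3. A minimal sketch using exact rational arithmetic:

```python
from fractions import Fraction

def lhs(x):
    """Left-hand side of x/2 + x/6 = 2, computed exactly."""
    return Fraction(x, 2) + Fraction(x, 6)

# The student's answer fails a direct substitution check:
print(lhs(48))  # 32, not 2

# The correct solution: x/2 + x/6 = 3x/6 + x/6 = 4x/6 = 2, so x = 3.
print(lhs(3))   # 2
```

This is exactly the self-check a system (or a student) could run on any proposed answer, and it is notable that it catches the error without knowing anything about how the error was made.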

Meanwhile, the trained meatsacks in the comments of that mathmistakes thread have accurately diagnosed the student’s misunderstanding and proposed specific follow-ups. That’s the kind of adaptive learning that interests me most.

Featured Comments

Chris Lusto:

But then we’d need like an entire army of trained meatsticks, each assigned to a manageably small group of students, possibly even personally invested in their success, with real-time access to their brains and associated thoughts, perhaps with a bank of research-based strategies to help guide those students toward a deeper understanding of…something.

That seems an awful lot like a world without clickrates, and I’m not sure it’s a world I want to live in. Or maybe I’m just cynical between 11:30 and 12:00, on average, and should think about it later.

Dan Anderson:

A big advantage with meatsacks over computers is the ability of a human to look at the work. Computers can only indirectly evaluate where the student went wrong; they can only look at the shadow on the ground to tell where the flyball is going. Meatsacks can evaluate directly where the student is going awry.

37 Responses to “What Do Adaptive Math Systems Really Know About What You Know?”

  1. on 29 Oct 2012 at 7:11 am cb1601ej

    The system should say: you did A because you probably thought B. And scaffold:
    - Review this resource
    - Get a hint for the next step
    - Study a worked-out example

    And then try another one.
    This is feasible, but not in Knewton or Khan. Go to Europe.

  2. on 29 Oct 2012 at 7:22 am Jason Dyer

    Automated error analysis at the content level of the kind you describe has certainly been done before. I’m not sure why the current systems aren’t using it more. Surely they have a rich data bank of wrong answers now, even more so than when this was first being done back in the 70s.

  3. on 29 Oct 2012 at 7:24 am Bob Lochel

    Dan, unfortunately you are mostly on the mark in your assessment of adaptive systems. In my middle school, we have about 50 pre-algebra students using ALEKS as a complement to classroom instruction. The positive aspect I see is that students can now self-identify their strengths and weaknesses, and lead their own parent-teacher conferences. BUT, do we have any evidence that these students will develop better algebra skills as a result? The jury is out on that. The negative aspect is what you describe here for remediation: students who are not “proficient” are only asked to repeat the lesson and then do more problems. Let’s beat them with the stick again, and perhaps eventually they will get it.

    It would be fascinating work to take the best aspect of that thread, that teachers can diagnose and discuss the nature of errors, and bring it into an adaptive system. I wonder how that student who made the error with the rational equation would react to the information provided by teachers in the mathmistakes thread.

    Imagine a student who works through a problem on an iPad. Incorrect solutions are sorted, perhaps by the solution the student provides, or by looking for landmark errors, and are opened for discussion. Not only could teachers contribute their ideas, but proficient students could also peer-assess the work and provide guidance.

  4. on 29 Oct 2012 at 7:31 am Chris Robinson

    Therein lies the problem with adaptive math systems. Computers are relatively cheap and easy to deploy, but they can’t give rich feedback to the student. A large-scale implementation of Michael’s approach would certainly provide rich feedback for students, but would require a substantial force of teachers as the number of students grew. Not something to give up on, but we are just not there. The small-scale classroom of a good teacher and students is still the best place for learning and growth to take place.

  5. on 29 Oct 2012 at 7:36 am Chris Lusto

    But then we’d need like an entire army of trained meatsticks, each assigned to a manageably small group of students, possibly even personally invested in their success, with real-time access to their brains and associated thoughts, perhaps with a bank of research-based strategies to help guide those students toward a deeper understanding of…something.

    That seems an awful lot like a world without clickrates, and I’m not sure it’s a world I want to live in. Or maybe I’m just cynical between 11:30 and 12:00, on average, and should think about it later.

  6. on 29 Oct 2012 at 7:45 am Timfc

    To link this a bit to the meatsacks…

    It’s a bit strange that our measure for whether the meatsacks are highly qualified is to check off that they’ve taken a certain set of courses and then give them a multiple-choice exam, with maybe a few short-answer questions thrown in. Even a lot of the very good ones are concentrated on making a better test, it seems…

    If only there was a way to determine if said meatsacks were good at, oh, I dunno, error analysis and remediation?

  7. on 29 Oct 2012 at 7:51 am Chris Robinson


    If only there was a way to determine if said meatsacks were good at, oh, I dunno, error analysis and remediation?

    Have you even read the comments on the mathmistakes thread?

  8. on 29 Oct 2012 at 8:02 am Chris Lusto

    What we can determine right now: meatsticks are better at error analysis and remediation than “not at all,” which is roughly the level of current adaptive systems, at least any that are in wide use. Even at that ridiculously low bar, it’s not even a contest.

  9. on 29 Oct 2012 at 8:16 am Timfc

    Sorry… it’s clear from the subsequent two comments that irony didn’t translate to print.

    I’m on the meatsack side here…

  10. on 29 Oct 2012 at 8:27 am Ashli

    A teacher friend of mine dug into the back-programming of a common math test generator and created his own multiple-choice questions, rigged so that the student’s response told him the error they most likely made. From looking at the errors, he could figure out where to go next with the student/class.
    If one person with a small slice of programming knowledge could do that, why aren’t all the ‘adaptive learning’ folk on that bandwagon and advertising it loudly? Shouldn’t understanding student misconceptions be one of the main drives of assessment? I would love an online assignment that came back and told me “based on these errors, the student seems to be forgetting to distribute negatives.” The data will be a bit fuzzy, since you don’t really know why an answer was entered without talking to the kid, but it’s still more helpful than ‘time spent on a topic’. Knewton seems to do a lot more than ‘time of day’ stuff, but that video only hints at some of its adaptability instead of hammering out exactly what the system is doing. I’ve used Khan in class and didn’t find the metrics helpful in the breakdown.
    So much technology; why are people still so focused on punching a timecard? Time is a variable for baking, not student learning.
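Ashli’s rigged-distractor idea is worth sketching: if each wrong choice is generated from a known error pattern, the response itself carries the diagnosis. A minimal sketch, assuming a hypothetical item for “solve x/2 + x/6 = 2” (the distractors and error labels here are illustrative assumptions, not any vendor’s actual design):

```python
from fractions import Fraction

# Each distractor is produced by applying a known error pattern to the
# item "solve x/2 + x/6 = 2", so the chosen answer points back at the
# likely misconception. Labels are illustrative assumptions.
DISTRACTORS = {
    Fraction(3): "correct",
    Fraction(12): "made denominators alike without adjusting numerators",
    Fraction(16): "added the denominators: treated x/2 + x/6 as x/8",
}

def diagnose(response):
    """Map a student's response to a likely error, or flag it for a human."""
    return DISTRACTORS.get(Fraction(response), "unanticipated error -- flag for a teacher")
```

The point is not the three-line dictionary but the design discipline behind it: every distractor has to be reverse-engineered from a misconception before the item ships.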

  11. on 29 Oct 2012 at 8:27 am Chris Robinson


    No problem. I can definitely agree that teacher education programs need to start incorporating this type of instruction for preservice teachers. I know that I had to learn how to evaluate student errors on the fly, and then come up with appropriate strategies to remediate those errors. This is probably the most important part of teaching, and it takes time to learn, especially when it’s not part of your teacher ed program.

  12. on 29 Oct 2012 at 8:33 am Timfc


    -> I was aiming as much at the policy makers who chose easy-to-measure traits to determine “Highly Qualified” rather than traits that might lead to “high quality” instruction.

    But, the same holds true for teacher ed programs, yep…

    Really though, how hard is it for uni faculty to add some systematic “why might a kid make X error” to their content and methods classes?

  13. on 29 Oct 2012 at 8:35 am Dennis Ashendorf

    Dan, you really need to spend some time with ST Math (MIND Research) and ALEKS. Both of these are “old,” but they set benchmarks in our business, especially ALEKS.

    ALEKS’s network and AI engine adapt well for students trying to achieve “procedural” math fluency. It is very carefully constructed.

    ST Math entices students to “adapt” to it, like a good video game does. It’s an immersive world.

    I realize your frustrations, especially with the limitations of Khan, but as assessment evolves beyond simple multiple choice (even “Every Answer Counts” would be an improvement), you will get better software. Furthermore, I believe you’re starting to go down the road of “the best is the enemy of the better.” Reflection is a good thing.

  14. on 29 Oct 2012 at 8:44 am Kate Nowak

    +1000000 Lusto’s comment #5

    Except stop saying meatstick.

  15. on 29 Oct 2012 at 9:00 am Jesse Duffey

    OK, regarding the final paragraphs, why can’t the learner simply have a teacher diagnose errors missed by a computer? It’s called blended learning, after all.

  16. on 29 Oct 2012 at 9:06 am Chris Robinson


    It could also be possible that there is a conceptual misunderstanding even when the answers are correct. You really need to take the problem set (and work) as a whole to diagnose and remediate these misunderstandings. At least IMO.

  17. on 29 Oct 2012 at 10:07 am mr bombastic

    Not a fan of the adaptive systems I have seen, but I don’t think the meatsack example you provided is helping your case.

    The meatsacks give some partial explanations involving some possible mechanical issues. However, each of the issues would indicate a very poor understanding of equivalent fractions and/or equivalent equations, and a possible inability to compute 48/2 + 48/6. So, identifying the specific mechanical issues is probably not important anyway – major remediation is probably required.

    The following phrases in the comments are concerning to me: “combine factors”, “get rid of”, “put a three on top”, “ignore the bottom”. All of these phrases move us away from an understanding in terms of equivalent fractions & equations and towards some sort of gimmick to remember.

    Finally, one meatsack suggests that the student would not be able to do 48/2 + 48/6, but then suggests seeing if the student understands what they are trying to solve and why. Not claiming I haven’t done this sort of thing, but I hope we all recognize the absurdity involved.

  18. on 29 Oct 2012 at 10:39 am Dan Anderson

    A big advantage with meatsacks over computers is the ability of a human to look at the work. Computers can only indirectly evaluate where the student went wrong; they can only look at the shadow on the ground to tell where the flyball is going. Meatsacks can evaluate directly where the student is going awry.

  19. on 29 Oct 2012 at 1:42 pm torusmug101

    This is a fascinating discussion. I have no programming or hands-on adaptive systems experience.

    HOWEVER, one thing I noticed in the original post is that some of these programs attempt to guide student study habits based on partial evidence of student thinking cycles (e.g., time of day, endurance). Developing study habits is good! And they should to a certain extent match student endurance and students’ ‘natural’ schedule of concentration.

    But always matching study habits to pre-existing abilities elides the possibility of challenging the limits, strengthening endurance, developing new capacities to think about math at different times of the day etc. Balance!

    I wanted to post this comment because we should adapt to students while students also work to adapt to new challenges and develop new capacities. I am thankful for the periods during which I pushed beyond my comfortable thinking habits and grew.

  20. on 29 Oct 2012 at 2:03 pm Jason Dyer

    @Kate Nowak:

    Except stop saying meatstick.

    How about “bags of mostly water”?

  21. on 29 Oct 2012 at 3:30 pm louise

    Why did Dan call us meatsacks in the first place? I felt as if he was saying we were stupid, and personally I was offended because it was a triumph of LaTeX for me. I have a sack of meat in the fridge, along with the sheep’s brain in formaldehyde, and I can assure you it can’t do LaTeX at all – I have tried so hard to fob off this work, but to no avail.
    All of this computer measurement strikes me as looking for your keys under the streetlight, where you can see, rather than across the road in the dark, where you actually dropped them.
    I don’t care at all about my students’ “click rate.” I do care about how they can translate between real life and math statements. Some of that might be better when the click rate is lower – learners who actually read the question, for example.

  22. on 30 Oct 2012 at 1:26 am Julia Tsygan

    As far as I’m aware, based on some research I read last year but don’t have handy for citation right now, student homework/practice effort (not time) is positively correlated with achievement, so why obsess with time at all?

    Also, I’m thinking that part of the digital instruction problem is that machines are just not intelligent enough to understand human learning (hell, as if WE understand human learning on anything other than an intuitive level!) and give relevant feedback. For a machine to do that effectively, I suspect it would have to pass the Turing test and qualify for the status of artificial consciousness/intelligence. Our machines are still machines – while we are to some extent unpredictable, social creatures. How long will it be until we find talking to a machine as rewarding as talking to another human? That’s how long until machines can replace good teachers. But of course they could become more adaptive, give more relevant feedback than currently, and be a good complement, though not a replacement, to human teachers.

  23. on 30 Oct 2012 at 4:49 am Mary Bourassa

    So is this the personalized learning of the future? “I have to take my math test at 8:32 and only for 40 minutes.”

  24. on 30 Oct 2012 at 5:11 am Jim Doherty


    What you just described is the reality (sad, though it may be) of many students’ daily life, isn’t it?

  25. on 30 Oct 2012 at 5:15 am Belinda Thompson

    I’ve always been more interested in a student’s wrong answers and figuring out the kernel of it. In the student’s work in the link, my first concern is that 48 doesn’t make the equation true, and that should bother the student (the scratching out of work may be a clue that she’s over it anyway). I also noticed a very common adding fractions error with the denominators that’s rooted in an equivalent fractions bug. I’ve had the good fortune to look at lots of incorrect work on fractions, and I feel like I add something to my repertoire each time. I have seen some common wrong answers for which I still can’t figure out the underlying misconception (which I define differently from “mistake”). But, those stick in my head, and I keep thinking about them.
    I know that there are people working to create diagnostic assessments or games that “learn” from students’ input. The goal of these systems is to get better at identifying strategies, and they’re set up by knowledgeable meatsacks who are looking to diagnose particular misconceptions at particular points in the work. This is really hard work because very few of us approach even very similar problems with the same strategy every time. That said, there is a finite number of things you can do on any problem. There’s a lot of probability involved, and a lot of design expertise in the set up. I’m working (very slowly) on one for comparing fractions, and my ultimate goal is to provide useful information to teachers.

  26. on 30 Oct 2012 at 6:14 am josh g.

    mr bombastic: Yes, major remediation would be great. But you (we) only know that now *because* we can see the extent of the problem in that small sample of work.

    KA’s practice sessions wouldn’t figure that out until they’d hammered you with another 20 of the exact same type of problem, and even then … does KA ever suggest moving you down a level from where you’ve started? From what I’ve seen of it, I assume it would only do so by hammering you with even more repetition of things you don’t really get until it blindly found something you can do.

    Also, woo, I’m a meatsack.

  27. on 30 Oct 2012 at 8:33 am mr bombastic

    @Belinda, what is the common adding fractions error that you see? In your experience does this error tend to be persistent or easily remedied?

    @Josh, The fact that this student needs major remediation is why I think it is a poor example for the value of a meatsack. If a meatsack can isolate the aspect of the problem creating issues and quickly remediate it, great! Big advantage for meatsacks over computers. In this case, though, the meatsacks haven’t “pinpointed” the issue because it isn’t a pinpoint – it is likely a giant gaping hole in the student’s understanding of fractions. Maybe the meatsack determines that major remediation is needed a little quicker than the computer, but that is not much of an advantage in my mind.

  28. on 30 Oct 2012 at 9:24 am Belinda Thompson

    Hi Mr. Bombastic :) I was referring to making like denominators, but not addressing the numerators. It’s a bit different with this student (we might expect 2x/12), so I would want to see what the student did on a problem with just numbers. I would expect them to add 2/3 and 4/5 and get 2/15 + 4/15 = 6/15. The tricky part from a diagnosis standpoint is whether to address the 2/3 = 2/15 part or the part where the result should be more than 1. I would probably address the equivalence issue because it’s related to equivalent equations. Of course with this student it might be necessary to go way, way back both with fractions and solving equations.
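Belinda’s predicted bug (make the denominators alike but leave the numerators untouched) is regular enough to model, which is exactly what would let a system recognize it in student answers. A sketch under that assumption:

```python
from fractions import Fraction

def buggy_add(a, b, c, d):
    """Model of the predicted bug when adding a/b + c/d: use the common
    denominator b*d but keep the original numerators unchanged."""
    return Fraction(a + c, b * d)

def correct_add(a, b, c, d):
    return Fraction(a, b) + Fraction(c, d)

# Belinda's example, 2/3 + 4/5:
print(buggy_add(2, 3, 4, 5))    # 2/5 (i.e., 6/15 before reducing)
print(correct_add(2, 3, 4, 5))  # 22/15
```

A system that finds a student’s answer matching the buggy model’s prediction, rather than merely not matching the right answer, can hypothesize the specific misconception, which is the diagnostic move the thread keeps asking for.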

  29. on 30 Oct 2012 at 10:06 am Jim Doherty

    @ Belinda #25
    The idea of a student being bothered by a wrong answer has nagged at me my whole teaching career (25+ years). The experience of a student getting a paper back and berating themselves for an answer that makes NO sense is reassuring at a certain level – it’s nice that they see the impossibility – but it sure would be a WHOLE lot better if some alarm went off while they were still taking the test. How do we help instill that sort of early warning system in kids so that they self-check more effectively during the assessment process?

  30. on 30 Oct 2012 at 12:05 pm Zachary Wissner-Gross

    To be truly adaptive, a system would have to collect a lot more data about the user than right/wrong answers and time spent on each problem. The richer the user experience, the higher the quality of data collected will be, and the more useful the feedback can potentially be.

  31. on 01 Nov 2012 at 12:40 pm josh g.

    @mr bombastic: Okay, you’re right, there are better examples. But I think it’s still an indictment that the computer-based system can’t even tell the difference between small flaws and major, gaping holes. I’d suggest that “a little quicker” is an understatement.

  32. on 02 Nov 2012 at 11:22 am Kevin Hall

    Curious what you all think of the Carnegie Learning software, which does track students throughout their work (it doesn’t just check the answer).

  33. on 02 Nov 2012 at 11:34 am Kevin Hall

    Here is a link to the Carnegie Learning demo page in case you haven’t seen it.

  34. on 02 Nov 2012 at 11:41 am Dennis Ashendorf

    Kevin, Carnegie Learning and ALEKS were the two early leaders in sophisticated math software. Carnegie used to charge well over $100 ($140 if memory serves) per license, so I stopped using it. Also, while Carnegie claimed to be adaptive, I found that all of my students went through the same pathway. (Carnegie disagreed with me at the time.) In short, I didn’t see it adapt, although I appreciated its design greatly.

    At its lower price ($15!!!!), it may be a great buy. The remaining issue is one of “student choice.” It was linear in presenting the next topic. If a student was stuck, he or she stayed stuck. This was a problem that had no resolution.

    My above response may be obsolete. It was five years ago. Also, Carnegie may be falling on tough times.

  35. […] of thousands of users. That clickstream can tell a teacher how many hints the learner requested, how long she spent on a given problem, whether she's more apt to score well on machine-scored exercises in the morning or evening. But […]

  36. on 11 Nov 2012 at 5:32 pm louise

    Re. Carnegie Learning
    It was too expensive for our school. It’s cheaper for us to have failing students. We tried it for a semester in my classroom, but we didn’t have the part to reduce the level down to where our students are (about 4th grade for our 9th graders).
    The cost is in having non-reusable books and licenses. If you buy a traditional textbook you can use it for 15 years. The Carnegie course has to be paid for annually.
    I saw a major improvement among students who used it, but there were also a lot of students who decided they would simply do nothing (Facebook, Facebook, Facebook). The gap got larger. The course is very traditional, and our district decided they wanted to do “discovery learning.”
    One thing I did like about Carnegie – they were honest. They told our school district that we were in no way ready for common core, and pretending we had students ready for common core in high school was setting kids up for failure. Nobody listened, but it’s the first time I’ve heard an honest sales team.

  37. […] fail to develop mathematical intuition and appreciation for the beauty of the subject.  In his second post, Meyer referenced quotes about two developers that built adaptive engines around analysis of […]