October 29th, 2012 by Dan Meyer
First, Michael Goldstein:
Khan Academy alone gives the following information: time spent per day on each skill, total time spent each day, time spent on each video, time spent on each practice module, level of mastery of each skill, which ‘badges’ have been earned, a graph of skills completed over number of days working on the site, and a graphic showing the total percentage of time spent by video and by skill.
Second, Jose Ferreira, CEO of Knewton:
So Knewton and any platform built on Knewton can figure out things like, “You learn math best in the morning between 8:32 and 9:14 AM. You learn science best in 40-minute bite-sizes. At the 42-minute mark your clickrate always begins to decline. We should pull that and move you to something else to keep you engaged. That thirty-five minute burst you do at lunch every day? You’re not retaining any of that. Just hang out with your friends and do that stuff in the afternoon instead when you learn better.”
I don’t have a lot of hope for a system that sees learning largely as a function of time or time of day, rather than as a function of good instruction and rich tasks. It isn’t useless. But it’s the wrong diagnosis. For instance, if a student’s clickrate on multiple-choice items declines at 9:14 AM, one option is to tell her to click multiple-choice items later. Another is to give her more to do than click multiple-choice items.
These systems report so much time data because time is easy for them to measure. But what’s easy to measure and what’s useful to a learner aren’t necessarily the same thing. What the learner would really like to know is, “What do I know and what don’t I know about what I’m trying to learn here?” And adaptive math systems have contributed very little to our understanding of that question.
For example, a student solves “x/2 + x/6 = 2” and answers “48,” incorrectly. How does your system help that student, apart from a) recommending another time in the day for her to do the same question or b) recommending a lecture video for her to watch, pause, and rewind?
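(For reference, here is the arithmetic itself, just so the size of the student's miss is clear; this is my working, not anything an adaptive system reports.)

```latex
\begin{align*}
\frac{x}{2} + \frac{x}{6} &= 2 \\
\frac{3x}{6} + \frac{x}{6} &= 2 && \text{common denominator of 6} \\
\frac{4x}{6} &= 2 \\
x &= 3
\end{align*}
```

The correct answer is 3, so an answer of 48 suggests a structural misunderstanding of fractions, not a slip the next multiple-choice item will catch.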
Meanwhile, trained meatsacks (human teachers, that is) have accurately diagnosed the student's misunderstanding and proposed specific follow-ups. That's the kind of adaptive learning that interests me most.
But then we’d need like an entire army of trained meatsacks, each assigned to a manageably small group of students, possibly even personally invested in their success, with real-time access to their brains and associated thoughts, perhaps with a bank of research-based strategies to help guide those students toward a deeper understanding of…something.
That seems an awful lot like a world without clickrates, and I’m not sure it’s a world I want to live in. Or maybe I’m just cynical between 11:30 and 12:00, on average, and should think about it later.
A big advantage of meatsacks over computers is the human ability to look at the work itself. Computers can only evaluate indirectly where the student went wrong, like judging a fly ball by watching its shadow on the ground. Meatsacks can see directly where the student is going awry.