Stephanie H. Chang, one of Khan Academy’s software engineers:
I observed how some students made progress in exercises without necessarily demonstrating understanding of the underlying concepts. The practice of “pattern matching” is something that Ben Eater and Sal had mentioned on several occasions, but seeing some of it happening firsthand made a deeper impression on me.
The question of false positives looms large in any computer adaptive system. Can we trust that a student knows something when Khan Academy says the student knows that thing? (Pattern matching, after all, was one of Benny’s techniques for gaming Individually Prescribed Instruction, Khan Academy’s forerunner.)
It is encouraging that Khan Academy is aware of the issue, but machine-scorers remain susceptible to false positives in ways that skilled teachers are not. If we ask richer questions that require more than a selected response, teachers get better data, leading to better diagnoses. That’s not to say we shouldn’t put machines to work for us. We should. One premise of my work with Dave Major is that the machines should ask rich questions but not assess them, instead sending the responses quickly and neatly over to the teacher who can sequence, select, and assess them.
BTW. Also from Chang’s blog: a photo of Summit San Jose’s laptop lab, a lab which seems at least superficially similar to Rocketship’s Learning Lab. My understanding is that Summit’s laptop lab is staffed with credentialed teachers, not hourly-wage tutors as with Rocketship. Which is good, but I’m still uncomfortable with this kind of interaction between students and mathematics.
[via reader Kevin Hall]
Stephanie H. Chang responds:
We think the work you’re doing with Dave Major is really exciting and inspiring. Open-ended questions and peer- or coach-graded assignments are incredibly powerful learning tools, and my colleagues at KA don’t disagree. We definitely have plans to incorporate them in the future.
My old school last year relied on a teaching model where the students had to try to teach themselves a lot of math by using classroom resources. Much of the practice was through Khan Academy or through students completing practice problems with accessible answer keys. Ultimately, the students only looked for patterns and had no conceptual understanding of the math at all. Even worse, students who had “mastered” a concept were encouraged to teach the other students how to solve problems, but they could only do so in the most superficial manner possible.
One way sites like Khan (and classroom teachers) can deal with this is by retesting: say, three months later, can a student solve the same problem they solved today? If not, they clearly had only a surface-level understanding, or worse.
I’d like to see Khan or other sites force students to retest on topics that were marked as “completed”. But then again, I feel pretty much the same way about miniquiz-style Standards-Based Grading.
Reminds me of the story about the tank-recognizing computer. I doubt we’ll have worthwhile computer scoring that isn’t susceptible to pattern-matching until we have genuine artificial intelligence.
And then the computers will want days off, just as teachers do.
KA does force review of concepts after mastery is achieved, generally a few weeks after completion. The problem is, it doesn’t take students long to start pattern matching again.
We instituted a policy where students must make their own KA-style videos explaining how to solve a set of problems they struggled with. It’s the best way we’ve found to deal with the issue.
Zack Miller comments on the laptop lab at Summit, where he teaches math:
Our math model, described as concisely as possible: students spend two hours per day on math, one hour in breakout rooms and one hour in the big room (seen in your picture), where students work independently. In the breakout rooms, students work on challenging tasks and projects (many of which we can thank you for) that develop the standards of math practice, often in groups and with varying amounts of teacher structure. Development of cognitive skills via frequent exposure to these types of tasks is paramount to our program. It is also in the breakout rooms where students’ independent work, which is mostly procedural practice, is framed and put in context. Students know that their work in the big room supports what they do in the seminar rooms and vice versa.