October 3rd, 2013 by Dan Meyer
I took machine-graded learning to task earlier this week for obscuring interesting student misconceptions. Kristen DiCerbo at Pearson’s Research and Innovation Network picked up my post and argued I was too pessimistic about machine-graded systems, posing this scenario:
Students in the class are sitting at individual computers working through a game that introduces basic algebra concepts. Ms. Reynolds looks at the alert on her tablet and sees four students with the “letters misconception” sign. She taps “work sample” and the tablet brings up their work on a problem. She notes that all four seem to be thinking that there are rules for determining which number a letter stands for in an algebraic expression. She taps the four of them on the shoulder and brings them over to a small table while bringing up a discussion prompt. She proceeds to walk them through discussion of examples that lead them to conclude the value of the letters change across problems and are not determined by rules like “c = 3 because c is the third letter of the alphabet.”
My guess is we’re decades, not years, away from this kind of classroom. If it’s possible at all. Three items in this scenario seem implausible:
- That four students in a classroom might assume “c = 3 because c is the third letter of the alphabet.” I taught Algebra for six years and never saw this conception of variables. (Okay, this isn’t a big deal.)
- That a teacher has the mental bandwidth to manage a classroom of thirty students and keep an eye on her iPad’s Misconception Monitor. Not long ago I begged people on Twitter to tell me how they were using learning dashboards in the classroom. Everyone said the dashboards were too demanding to monitor during class, so they used them at home for planning purposes instead. This isn’t because teachers are incapable but because the job demands too much attention.
- That the machine grading is that good. The system DiCerbo proposes is scanning and analyzing handwritten student work in real time, weighing it against a database of misconceptions, and pairing each match with a scripted discussion. Like I said: decades, if ever.
This also means you have to anticipate all the misconceptions in advance, which is tough under the best of circumstances. Take Pennies. Even though I’ve taught it several times, I still couldn’t anticipate all the interesting misconceptions.
The Desmos crew and I had students using smaller circles full of pennies to predict how many pennies fit in a 22-inch circle.
But I can see now we messed that up. We sent students straight from filling circles with pennies to plotting them and fitting a graph. We closed off some very interesting right and wrong ways to think about those circles of pennies.
Some examples from reader Karlene Steelman via e-mail:
They tried finding a pattern with the smaller circles that were given, they added up the 1-inch circle 22 times, they combined the 6, 5, 4, 3, 2, 1, and 1 circles to equal 22 inches, they figured out the area of several circles and set up proportions between the area and the number of pennies, etc. It was wonderful for them to discuss the merits and drawbacks of the different methods.
Adding the 1-inch circle 22 times! I never saw that coming. Our system closed off that path before students had the chance even to express their preference for it.
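The gap between those two student strategies is worth making concrete. Here’s a rough sketch, using invented penny counts (not the actual Desmos data), of the “add the 1-inch circle 22 times” path versus the area-proportion path:

```python
import math

# Hypothetical penny counts for the smaller practice circles:
# diameter in inches -> pennies. These numbers are made up for illustration.
counts = {1: 3, 2: 12, 3: 27, 4: 48}

# Strategy A: add the 1-inch circle 22 times (linear reasoning).
strategy_a = counts[1] * 22

# Strategy B: set up a proportion between area and penny count
# (quadratic reasoning): pennies per square inch should stay roughly constant.
density = counts[4] / (math.pi * (4 / 2) ** 2)   # pennies per square inch
strategy_b = density * math.pi * (22 / 2) ** 2   # scale up to a 22-inch circle

print(strategy_a)         # linear estimate
print(round(strategy_b))  # area-based estimate
```

With these toy numbers the two strategies land wildly far apart (66 pennies versus about 1,452), which is exactly the kind of comparison students lose when the system routes everyone straight to plotting and fitting.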
So everyone has a different, difficult job to do here, with different criteria for success. The measure of the machine-graded system is whether it makes those student ideas invisible or visible. The measure of the teacher is whether she knows what to do with them or not. Only the teacher’s job is possible now.
This doesn’t even touch the students who get questions RIGHT for the wrong reasons.
Dashboards of the traditional ‘spawn of Satan & Clippy the Excel assistant’ sort throw way too much extremely specific information straight to the surface for my liking (and brain). That information is almost always whatever is easy for machines (read: programmers) to work out, and likely hard or time-consuming yet dubiously useful for humans to act on. I wonder how many teachers, when frozen in time mid-lesson and placed in the brain deli slicer, would be thinking “Jimmy has 89% of this task correct and Sally has only highlighted four sentences on this page.”