In math education, the fields of handwriting recognition and adaptive feedback are stuck. Maybe they’re stuck because the technological problems they’re trying to solve are really, really hard. Or maybe they’re stuck because they need some crank with a blog to offer a positive vision for their future.

I can’t help with the technology. I can offer my favorite version of that future, though. Here is a picture of the present and the future of handwriting recognition and adaptive feedback, along with some explanation.

**In the future, the computer will recognize my handwriting.**

Here I am trying hopelessly to get the computer to understand that I’m trying to write 24. This is low-hanging fruit. No one needs me to tell them that a system that recognizes my handwriting more often is better than a system that doesn’t.

But I don’t worry about a piece of paper recognizing my handwriting. If I’m worried about the computer recognizing my handwriting, that worry goes in the cost column.

**In the future, I won’t have to learn to speak computer while I’m learning to speak math.**

In this instance, I’m learning to express myself mathematically – hard enough for a novice! – but I also have to learn to express myself in ways that *the computer will understand*. Even when the computer recognizes my numbers and letters, it doesn’t recognize the way I have *arranged* them.

Any middle school math teacher would recognize my syntax here. I’ll wager most would sob gratefully for my aligned operations. (Or that I bothered to show operations *at all*.) If the computer is confused by that syntax, that confusion goes in the cost column.

**In the future, I’ll have the space to finish a complete mathematical thought.**

Here I am trying to finish a mathematical thought. I’m successful, but only barely. That same mathematical thought requires only a fraction of the space on a piece of paper that it requires on a tablet, where I always feel like I’m trying to write with a bratwurst. That difference in space goes in the cost column.

That’s a lot in the cost column, but lots of people eagerly accept those costs in other fields. Computer programmers, for example, eagerly learn to speak unnatural languages in unusual writing environments. They do that because the costs are dwarfed by the benefits.

What is the benefit here?

Proponents of these handwriting recognition systems often claim their benefit is *feedback* – the two-sigma improvement of a one-on-one human tutor at a fraction of the cost. But let’s look at the feedback they offer us and, just as we did for handwriting recognition, write a to-do list for the future.

**In the future, I’ll have the time to finish a complete mathematical thought.**

If you watch the video, you’ll notice the computer interrupts my thought process incessantly. If I pause to consider the expression I’m writing for more than a couple of seconds, the computer tries to convert it into mathematical notation. If it misconverts my handwriting, my mathematical train of thought derails and I’m thinking about notation instead.

Then I have to check every mathematical thought before I can write the next one. The computer tells me if that step is mathematically correct or not.

It offers *too much* feedback *too quickly*. A competent human tutor doesn’t do this. That tutor will interject if the student is catastrophically stuck or if the student is moving quickly on a long path in the wrong direction. Otherwise, the tutor will let the student work. *Even if the student has made an error*. That’s because a) the tutor gains more insight into the nature of the error as it propagates through the problem, and b) the student may realize the error on her own, which is great for her sense of agency and metacognition.

No one ever got fired in edtech for promising immediate feedback, but in the future we’ll promise *timely* feedback instead.

**In the future, computers will give me useful feedback on my work.**

I have made a very common error in my application of the distributive property here.

A competent human tutor would correct the error *after* the student finished her work, let her revise that work, and then help her learn the more efficient method of dividing by four first.

But the computer was never programmed to anticipate that anyone would use the distributive property, so its feedback only confuses me. It tells me, “Start over and go down an entirely different route.”

The computer’s feedback logic is brittle and inflexible, which teaches me the untruth that *math* is brittle and inflexible.

**In the future, computers will do all of this for math that matters.**

I’ve tried to demonstrate that we’re a long way from the computer tutors our students need, even when they’re solving equations, a highly structured skill that should be *very* friendly to computer tutoring. Some of the most interesting problems in K-12 mathematics are *far less* structured. Computers will need to help our students there also, just as their human tutors already do.

We want to believe our handwriting recognition and adaptive feedback systems result in something close to a competent human tutor. But competent tutors place little extraneous burden on a student’s mathematical thinking. They’re patient, insightful, and their help is timely. Next to a competent human tutor, our current computer tutors seem stuttering, imposing, and a little confused. But that’s the present, and the future is bright.

**Need A Job?**

I work for Desmos where we’re solving some of the biggest problems in math edtech. Teachers and students love us and we’re hiring. Come work with us!

## 20 Comments

## Howard Phillips

February 4, 2016 - 11:20 am -

Hello Dan

First thing is that any computer handwriting system for math might encourage the kids to write legibly. I am assuming that the writing is done with a stylus, not a finger – correct?

Secondly, a complete redesign of what is being tested in each problem may be required, and in this case would be a good idea.

I am thinking of asking for actions to be performed by the computer, such as “expand the bracket” or “multiply out the bracket by the four”, and these commands could be typed in (boring) or taken in as voice commands. Alternatively the mathematical statements themselves could be offered by voice to the machine. There would then be no need for handwriting recognition.

Thirdly, yes, the feedback is useless. For example, “What is required to isolate the x?” is a) unnecessary, as it presumes that the user doesn’t know what “solve the equation” actually means, and b) for an ELL student a better choice of words than “isolate” or “reverse the operation” is in order.

On a lighter note it seems that the Queen of England is no longer the only one who refers to herself as “we”.

Keep up the work!

Oh, and “adaptive” used to mean that the way the computer interacts with the user develops over time in the light of earlier interactions.

ps. I am retired and won’t be applying for a job, but I will be very happy to offer my opinions on work in progress.

## Kenneth Tilton

February 4, 2016 - 11:23 am -

[Full disclosure: I am a vendor in the same space.]

I agree with *a lot* of the issues raised, but then a lot of them are amenable to a little engineering.

The fact that the recognition jumps in while you are still thinking could be addressed with a user-specifiable delay, or quick-keys that say “parse now” (in which case the app would never recognize until asked) or “lose that parse”.
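A user-controlled delay like that is straightforward to sketch. Here is a minimal, hypothetical version in Python; the `recognize` callback, the UI tick loop, and all names are assumptions for illustration, not any real app’s API:

```python
import time

class ParseScheduler:
    """Debounce handwriting recognition so it doesn't interrupt thinking.

    Hypothetical sketch: `recognize` stands in for an ink-to-math parser.
    The idle delay is user-specifiable, auto-parsing can be disabled
    entirely, and a "parse now" quick-key always works.
    """

    def __init__(self, recognize, delay_seconds=5.0, auto=True):
        self.recognize = recognize
        self.delay = delay_seconds
        self.auto = auto
        self.last_stroke_at = None

    def on_stroke(self, now=None):
        # Every new pen stroke resets the idle timer.
        self.last_stroke_at = now if now is not None else time.monotonic()

    def on_tick(self, ink, now=None):
        # Called periodically by the UI loop; parse only after the student
        # has been idle for the full delay, and only in auto mode.
        if not self.auto or self.last_stroke_at is None:
            return None
        now = now if now is not None else time.monotonic()
        if now - self.last_stroke_at >= self.delay:
            self.last_stroke_at = None
            return self.recognize(ink)
        return None

    def parse_now(self, ink):
        # The explicit "parse now" quick-key bypasses the timer.
        return self.recognize(ink)
```

The point of the design is that a short pause to think never triggers recognition; only sustained idleness (or an explicit request) does.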

And (no longer defending recognition) as the Desmos calculator demonstrates, a WYSIWYG maths editor is a fine solution. We have never heard one complaint about ours, and such editors have an added benefit over pencil and paper: the students can now reliably read their own work.

The immediate feedback similarly could be put under the user’s control. We just did that to support a study involving our product — well, what we did was put it under the teacher’s control, but the engineering we did would support the user controlling that.

Now let us look closer at when help should be offered. When I tutored I did not wait until they were done when a new skill was being learned. I did not even wait until they finished a step. Why watch them slog through the rest of the problem (burning up tutoring time) when a teachable moment has already appeared? But, yes, when mastery seems to be at hand we give them less support, throw them curves, etc.

This question of when a good tutor steps in brings us to a surprising benefit of automated tutoring: we humans are not necessarily all that good at it (VanLehn, 2011). http://www.tandfonline.com/doi/abs/10.1080/00461520.2011.611369

VanLehn found on average that human and computer tutoring yielded 0.79 and 0.76 sigma improvements. One way human tutors go wrong? They treat sessions as if they were 1-to-1 chalk talks. You can also search YouTube for “algebra” to see some rather unfortunate human instruction. As we continue to refine our computer tutors, we are building a permanent asset of quality tutoring available to all 24×7 at negligible cost compared to human tutoring.

Finally, it is not clear that machine tutoring must address fuzzier problems. Millions drop out of high school and two-year colleges because they cannot learn clearly structured Algebra. Fixing that seems like a substantial standalone win, especially since some of us old-skoolers see that as a prerequisite to tackling the fuzzy stuff. But this is a different debate.

## Dave

February 4, 2016 - 11:39 am -

This post seems to look at current K-12 math as The Math, Period. Really, current math is more like “math communicated in a way that evolved specifically for the strengths and limitations of physical writing.”

You’re looking forward to when computers can handle {math ed that evolved for physical writing} as well as physical writing can. That will always be somewhat of a square peg in a round hole.

I think we have to look forward to modifying math ed to become “math evolved for a digital world”.

Maybe in “math for digital world”, we don’t write “+24” under both sides of the equation. Maybe we use a new type of touch-based user interface to tell the device that we want to add 24 to both sides, and the computer displays it for us. Maybe we tap the “-24” term and a ring menu with many choices appears, which includes “+24 to both sides”. Maybe it’s something else entirely.

## Dan Meyer

February 4, 2016 - 2:02 pm -

@Dave, I think it’s wise to ask, “What math should students learn today?” I don’t think it’s wise to ask, “What math should students learn today *assuming* a digital medium?” There is lots of important math thinking that, in 2015, is easier to express on paper than on computers and easier for humans to assess than for computers. Natural text arguments, sketches, diagnosis of reasoning, etc.

## Chester Draws

February 5, 2016 - 1:06 am -

“Maybe we tap the ‘-24’ term and a ring menu with many choices appears, which includes ‘+24 to both sides’.”

This, to me, is the worst possible option.

We’re trying to teach them to solve a problem by themselves. Giving them prompts like that will teach them to rely on the system, not themselves.

## Kenneth Tilton

February 5, 2016 - 5:05 am -

“We’re trying to teach them to solve a problem by themselves. Giving them prompts like that will teach them to rely on the system, not themselves.”

Agreed. DragonBox Algebra has a big problem here in that it prompts the user for missing elements once a tool has been selected, then prevents the student from misapplying it. Too much help!

We considered bestowing students with, say, an “add to both sides” power (as in a fantasy game) once they had demonstrated proficiency, but worried their proficiency would atrophy.

Instead, we again turned the keyboard into an asset via engineering: hitting single-quote duplicates the expression above (think “ditto”) and then a key-combo (control-+) puts the editor in “both sides” mode, putting a + at the end of both and then duplicating the typing on both sides.
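That “both sides” mode can be sketched in a few lines. This is a hypothetical illustration, not the product’s actual code; it assumes the equation is entered as plain text with a single `=`:

```python
def add_to_both_sides(equation, term):
    """Sketch of a 'both sides' editor mode: append the same addition
    to each side of an equation entered as 'lhs = rhs' text."""
    lhs, rhs = (side.strip() for side in equation.split("="))
    return f"{lhs} + {term} = {rhs} + {term}"
```

For example, `add_to_both_sides("4x - 24 = 36", "24")` returns `"4x - 24 + 24 = 36 + 24"` – the student still chooses the operation; the editor only spares her the duplicate typing.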

Mind you, many today deplore repetition and are happy to let students move on after getting a few problems right, but we think something useful gets wired up in our heads when we do more problems in a greater variety.

Again, a different debate, but one that puts pressure on software designers to make entry easy if they decide on the side of more practice.

## Jocelyn Dagenais

February 5, 2016 - 8:49 am -

Hi Dan

Just curious, what’s the name of the app you use in the video?

Thanks

## Demetrius Olsen

February 5, 2016 - 10:22 am -

After a brief but grateful sob and a twenty-minute drop-in tutoring session with five middle school students and three unique needs, I can see that having a “feedback machine” would have helped me to juggle the discussions a little better. Although, I’m still wondering about what point in the process the feedback is most appropriate? And how much? And what would this teach students about problem solving? Will they come to expect feedback at the touch of a button? Which exists already via the web… but only if you know how to ask the right questions. And what about those problems that are “worth solving” — that don’t have a set of algorithmically generated steps, problems that we want our students to go out into the world to solve for us someday? At what point will the transition happen between immediate feedback with suggestions to failure without suggestions… only more questions… Anyways, thank you for sharing one of your visions for the future. It’s a good one, especially after I move beyond the selfish part of me that doesn’t like the idea of something taking my morning tutoring/feedback time because it’s one of the precious times in my day where I get to connect with my Title 1 students who care enough about learning to stop by and get help… albeit slowly and inefficiently.

## Kevin Hall

February 5, 2016 - 11:09 am -

The feedback in these examples is terrible. It’s also terrible in the Carnegie Learning software, which actually has the capacity to follow a student down any possible solution path. Even though Carnegie Learning can detect that you’ve chosen the solution path of distributing, for example, its hints are so canned they don’t even reference anything you wrote.

Will Desmos be trying to integrate some feedback into its activities? I think Central Park could be made much better by making it a little adaptive.

## Kevin Hall

February 5, 2016 - 12:24 pm -

To be specific, when students are trying to write (w-3p)/4, they often write w-3p/4. I think some automated feedback there might be productive.
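Kevin’s example is an operator-precedence trap: typed linearly, w-3p/4 parses as w - (3p/4), not (w-3p)/4. One way a system might detect it, sketched here in Python with hypothetical callables standing in for the two parsed expressions:

```python
def precedence_trap(entered, intended, samples):
    """Return True when the expression the student typed disagrees with
    the likely intended expression on any sample input: a sign the
    missing-parentheses trap was sprung."""
    return any(entered(*s) != intended(*s) for s in samples)

# w - 3p/4 (what gets typed) versus (w - 3p)/4 (what was meant)
entered = lambda w, p: w - 3 * p / 4
intended = lambda w, p: (w - 3 * p) / 4

needs_feedback = precedence_trap(entered, intended, [(8, 4), (10, 2), (0, 1)])
```

When the two functions disagree on any sample, the system has grounds to ask the student whether she meant to group the numerator, which is gentler than marking the whole step wrong.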

## Dan Meyer

February 5, 2016 - 2:26 pm -

Kevin Hall: Thanks for the suggestion, Kevin. We’re starting to make some tentative steps into computer feedback. We are just really skittish about making false claims about student learning. We’d rather our system make no evaluation at all than make a false or misleading evaluation of the sort you see in this video.

## Carl Malartre

February 6, 2016 - 11:59 pm -

I like this post. I’d like to add a twist.

In the future, computers will connect me to humans when there is no added value in guessing what I mean.

In the future, humans will learn when a computer can add value to a task. I like Audrey Watters on that topic.

I love how handwriting is getting interesting again. People are still clamoring “you don’t need to learn cursive writing, we are in a keyboard world”.

Some think they are even more progressive, saying “you don’t need to learn the keyboard, we are in a touch keyboard world”.

Should humans adapt to input devices? It’s the other way around and the pen will make a comeback.

And next year, No UI is the New UI:

https://medium.com/swlh/no-ui-is-the-new-ui-ab3f7ecec6b3

And the year following, it’s going to be teaching with VR. And after that, they are going to wire your brain. Etc., etc.

But until it thinks like a human, could it be a far better approach to connect people and give them better communication/sharing tools?

Do we overinvest in autonomy and not enough in collaboration between peers? I have that feeling.

Thanks for the generous posting Dan!

## Xavier B. (@somenxavier)

February 7, 2016 - 3:32 am -

Joking: the future is now. We (teachers) are, at least, universal Turing machines and we fit this behaviour. The only problem is the ratio (1 teacher – 20 students) and our part-time availability ;-)

## Kevin Hall

February 7, 2016 - 11:28 am -

@Dan: In that case, maybe making Desmos a teensy bit more like Pear Deck would make sense. With Pear Deck the teacher advances all students to the same slide and can provide feedback on everyone’s answers immediately (via class discussion), before letting everyone try the next question. You don’t need any adaptive software–you just need everyone on the same question at the same time so that when you display student answers for discussion, everyone feels the value in tuning in. With Desmos, I find myself running around the room to help groups that are stuck, and when I pause in the middle of the lesson to display selected answers for a particular question, I always have 1 group that’s not even on that question yet and 3 more that are way past it and don’t want to tune in. Could you add a feature to Desmos that lets teachers advance all students through a lesson synchronously?

Let me back up and say that as I see it, there are two very different types of Desmos lessons: 1) conceptual explorations like Central Park, Function Carnival, Penny Circle; and 2) skills practice that’s more open-ended than a typical worksheet, like Des-Man, Polygraph, and the Marbleslides. You might think automated feedback would fit best in the latter category, but I think it would run into all the problems your blog post describes.

Instead, I’ve longed for little tutorials that I can turn on to assist specific groups who are stuck during conceptual explorations. For example, let’s say a group is stuck making the mistake I mentioned above: writing w-3p/4 when either they need to write (w-3p)/4 or they need to hit the “division” button first to create numerator and denominator textboxes before starting to enter their expression. As teacher, I’m busy going around to different groups when I notice that Sammy’s group is stuck on this. I type in a little teacher code and activate the Desmos tutorial that intervenes for precisely this misconception; then I skitter off to the next group with a plan to circle back to Sammy’s group later. I think the intervention itself could have lots of closed-ended question-answer loops, ideal for automated feedback, as long as a human teacher certifies that this intervention is the one that’s needed. In fact, I think teachers wouldn’t mind writing the interventions themselves in the activity builder… you could probably crowd-source them!

## Dan Meyer

February 7, 2016 - 2:39 pm -

Carl: +1

Love these values. I’m curious how you put them to work in BuzzMath, Carl.

Kevin:On the way, and for exactly the use case you’re describing.

I like your faith in humans. We share it. Over time, I’d like those tutorials to be suggested automatically, but right now we just don’t have faith that computers are *smart* enough to make the right suggestion.

Taking the future you’re describing and running with it: we’d know every intervention that a teacher ran, and when she ran it, and what student work *preceded* that intervention. At whatever point we feel confident enough to train a computer to suggest the intervention, we can run that suggestion algorithm back through all those interventions and see how many times the computer gets it right. Which is an a priori assumption that the *teacher* got it right originally. I feel more comfortable making that assumption, though, than the a priori assumption that computers will get it right out of the box.

## Kevin Hall

February 7, 2016 - 5:15 pm -

@Dan, wow, that sounds awesome. You just made my day/week/month/year!

Actually, I don’t think you’ll need to make any kind of a priori assumptions at all. Rather than assuming that all humans make good judgments, you should be able to use your data stream to identify the teachers who have the best judgment. Then you can train your algorithm just on their decisions. Very exciting – I wish your team all the best.
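The replay evaluation Dan and Kevin describe is easy to make concrete. A minimal sketch, assuming interventions are logged as (student work, teacher’s chosen intervention) pairs; every name and the toy suggester here are hypothetical:

```python
def replay_accuracy(suggest, logged_interventions):
    """Replay a suggestion algorithm over logged teacher decisions and
    report how often it agrees, treating the teacher's choice (or that
    of a vetted subset of teachers) as ground truth."""
    if not logged_interventions:
        return 0.0
    hits = sum(
        1 for student_work, teacher_choice in logged_interventions
        if suggest(student_work) == teacher_choice
    )
    return hits / len(logged_interventions)

# Toy log: what the student wrote, and what the teacher activated.
log = [
    ("w-3p/4", "parentheses-tutorial"),
    ("(w-3p)/4", None),  # teacher chose not to intervene
    ("w-3p/4", "parentheses-tutorial"),
]

# Hypothetical rule-based suggester to score against the log.
suggest = lambda work: "parentheses-tutorial" if "(" not in work else None
accuracy = replay_accuracy(suggest, log)  # 1.0 on this toy log
```

Because the log accumulates before any algorithm exists, the evaluation costs nothing extra, and restricting `log` to the best-judgment teachers is just filtering the pairs before scoring.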

## Andrea Patelis

February 8, 2016 - 12:39 pm -

In response to comment 6, by Kenneth Tilton, that “DragonBox Algebra has a big problem here in that it prompts the user for missing elements once a tool has been selected, then prevents the student from misapplying it. Too much help!”, my own thoughts as a high school math teacher who has experience in Educational Technology but is only employed in the class are:

It all depends.

I have used DragonBox Algebra with students that haven’t mastered solving complicated equations (i.e. their Dragon Box interest outpaced their math studies) and have been able to flash up, in the middle of instruction, a screen from DragonBox Algebra and ask “why did the DragonBox program prompt look like this?” or “What would Dragonbox prompt you to do?”

I personally play DragonBox Algebra as a leisure activity, much like I do Sudoku puzzles. I don’t think anyone thinks DragonBox is going to replace mathematics education, but I think of it as laying the groundwork for math education… the way picture books set the groundwork for teaching reading.

Regarding DragonBox math… a simple reality that all developers need to understand is that many school computers aren’t running Windows 8 or higher yet. I have purchased personal apps for interested students to use on their personal electronic devices, but too many times their devices do not have enough space to use the app with all the other images/videos they have on them already that they “simply can’t live without.”

But I will forever fully support any video games that support student engagement!

## Maureen Sikora

October 18, 2016 - 8:05 pm -

Does Desmos understand set builder notation? If so, how can I type this notation into Desmos? Does it understand interval notation?

I am looking for the headache to give my students here: What is a good headache that shows why set builder notation or interval notation is useful?

## Dan Meyer

October 19, 2016 - 2:19 pm -

Hi Maureen, thanks for the note. My team just finished an activity called Domain and Range Introduction which tries to problematize why set notation is important.

## Maureen Sikora

October 20, 2016 - 10:25 am -

Hi Dan, thanks for getting back to me. I had my students do the Desmos activity “Smiley Face,” where they learned to restrict the domain and range, and they learned that what they typed affected their picture. Computers are picky about reacting to the directions that we type. This was a great lead into the importance of being able to express our ideas in math symbols, and I talked about Set Builder Notation.

But… my “wonder” here is this: Does Desmos have a way to type in set builder notation to restrict a domain (or a range) to only integer values, as we would in a sequence, or to restrict it to rational numbers, as we would in a context where values deal with money (which are always rational numbers)? Is there a way to connect Fawn Nguyen’s Visual Patterns website to a Desmos activity where they have to restrict the domain to integer values? This would allow them to really see how the graphs are either discrete or continuous. If there is already something in Desmos that can do that, can you let me know? Thanks for all the amazing things you do!