## These Horrible Adaptive Math Systems

October 3rd, 2012 by Dan Meyer

Annie Murphy Paul, describing systems that attempt to adapt to what you know and don’t know about math:

Tyler breezed through the first part of his homework, but 10 questions in he hit a rough patch. “Write the equation in function form: 3x-y=5,” read the problem on the screen. Tyler worked the problem out in pencil first and then typed “5-3x” into the box. The response was instantaneous: “Sorry, wrong answer.” Tyler’s shoulders slumped. He tried again, his pencil scratching the paper. Another answer — “5/3x” — yielded another error message, but a third try, with “3x-5,” worked better. “Correct!” the computer proclaimed.

S.H. Erlwanger [pdf] *forty years ago*:

Through using IPI, learning mathematics has become a “wild goose chase” in which [Benny] is chasing particular answers. Mathematics is not a rational and logical subject in which he can verify his answers by an independent process.

See if this describes your adaptive learning startup:

A basic assumption in [your startup’s name here] is that pupils can make progress in individualized learning most effectively if they proceed through sequences of objectives that are arranged in a hierarchical order so that what a student studies in any given lesson is based on prerequisite abilities that he has mastered in preceding lessons.

I don’t have anything against personalization *per se*. But the technology that enables that personalization defines and constrains the math we can personalize. Currently it defines that math very, very narrowly.

Individualization in [your startup’s name here] implies permitting him to cover the prescribed mathematics curriculum at his own rate. But since the objectives in mathematics must be defined in precise behavioral terms,

important educational outcomes, such as learning how to think mathematically, appreciating the power and beauty of mathematics, and developing mathematical intuition are excluded.

Look, if you’re building one of these systems, you *have* to read and understand “Benny’s Conception of Rules and Answers in IPI Mathematics.” Ask questions here. Let’s figure this out. We’d all love for you to make some interesting new mistakes. Right now you’re just repeating mistakes that are forty years old.

**BTW**. In Education’s Digital Future last night, I said I felt the next Kasparov v. Deep Blue competition would be between a grandmaster teacher and an adaptive learning engine. Give them both some *written* student work. Which one can accurately identify what the student knows and doesn’t know and what to do next?

I said this in a small group and a couple of technologists razzed me. One said that he doesn’t even get that kind of feedback in the lecture halls at Stanford, which is totally fair, though that isn’t the model I’m defending.

Another said, “Actually, computers are already better.” He told me that adaptive systems can tell you the best time of the day for you to study, how much time you spend on problems, the answer you choose most often when you’re stuck, and a bunch of other metrics that are simple enough to parse from a student’s clickstream. Of course, not one of them addresses the student’s most pressing question, “Why am I getting this answer wrong?” So, like Benny, the student clicks a different answer and the wild goose chase begins again.
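The bar here is low. A grader that checks *equivalence* by sampling both expressions at random points, rather than matching strings, would accept any correct form of Tyler’s answer and could even name his sign error. A minimal sketch (mine, not any vendor’s; the feedback strings are invented):

```python
import random

def evaluate(expr, x):
    # Toy evaluator: trusts eval with a restricted namespace.
    # A real system would parse the student's input properly.
    return eval(expr, {"__builtins__": {}}, {"x": x})

def equivalent(a, b, trials=20):
    # Heuristic equivalence: the two expressions agree at many random points.
    for _ in range(trials):
        x = random.uniform(1, 10)
        if abs(evaluate(a, x) - evaluate(b, x)) > 1e-9:
            return False
    return True

def grade(student, correct="3*x - 5"):
    # Grade an answer to "write 3x - y = 5 in function form" (y = 3x - 5).
    if equivalent(student, correct):
        return "Correct!"
    # Diagnose one common slip: the student solved for -y instead of y.
    if equivalent(student, f"-({correct})"):
        return "Close: you found -y. Multiply both sides by -1."
    return "Not yet. Check each step of your rearrangement."
```

With this, Tyler’s “5 - 3x” draws the sign-error hint instead of a bare “Sorry, wrong answer.”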

on 03 Oct 2012 at 5:22 pm, #1 Chris Robinson: When a computer/adaptive learning system/online video learning lesson can build the relationships I can with my students, and understand the small nuances in their lives that affect their learning, I will gladly step aside from teaching. That said, I’m pretty sure I’ll be doing what I’m doing for a while.

on 03 Oct 2012 at 6:15 pm, #2 Kirsten Silverman: Yes! Teachers talk about struggling students and administrators respond by talking about the latest and greatest system. We roll our eyes… the last one didn’t work, why will this one? How will the computer provide a learning environment where connections can be made?

on 03 Oct 2012 at 6:45 pm, #3 Craig: I really like this, Dan, and I think I read it in a different way. The personalization aspect always seems to end in students doing solo work, instead of building knowledge through collective memory and action. These online educational resources, classroom flipping or what have you, always end up in the realm of solo work. In your example, the kid was doing homework (it appears) on his own, without the input of others to bounce ideas off of. This bumping of ideas is, to me, central to all learning.

You might like this article:

Davis, B., & Simmt, E. (2003). Understanding learning systems: Mathematics teaching and complexity science. Journal for Research in Mathematics Education, 34(2), 137-167.

PDF here: https://sites.google.com/site/brentdavisyyc/articles/200304JRME.pdf

on 03 Oct 2012 at 6:56 pm, #4 Jeremy Wadhams: I found the book “The Arithmetic of Computers” (1960, ASIN B0006AWMPU) to be an interesting positive example. It’s a choose-your-own-adventure book to teach you binary arithmetic. What it does well is list *interesting* wrong answers and offer corrective advice for *every* wrong answer to get you back on track. (Well, every one of the three wrong answers the author could imagine in advance. It’s not magic.) Obviously it would have been a software package instead of a brick-thick book at any point from 1984 on, but as it stands it’s a fascinating paper relic.

52 years later, it’s still a better formative evaluator than Coursera.

on 03 Oct 2012 at 7:51 pm, #5 Paul Gitchos: Thanks for the post. This reminds me of Seymour Papert’s characterization of computer-aided instruction: “Teaching computers to program children.”

on 03 Oct 2012 at 7:53 pm, #6 Ana: I agree that current adaptive systems are unable to give background, hints, or help in solving the problems – the way a teacher might. That is a problem with the content they are pushing. Your assumption is that the content will not get better over time. So far, that seems to be true. There is more focus on the system than the content. But I hope that will shift at some point.

If you look at game systems and game design, many successful games are able to get users from very simple to very complex tasks by increasing difficulty in very small increments. There is definitely something to learn about teaching and content there.

on 03 Oct 2012 at 9:12 pm, #7 Dennis Ashendorf: It’s easy to make the best the enemy of the better. Look for improvement instead.

Consider ALEKS, one of the first, if not still the best, adaptive teaching system for procedural school math. I’ve used dozens of math products, but I always keep a few licenses of ALEKS around for students who need “applied mastery” quickly. Nothing works better. It is carefully structured with millions of data points. Make fun of this all you like, but beware hypocrisy. It is truly data-driven instruction with substantial feedback to the programmers – far more than the anecdotes that drive a blogger.

on 04 Oct 2012 at 5:10 am, #8 Matt E: On first thought, it seems to me that while a computer might be readily able to teach and assess students on, for example, the CCSS content standards, it would have significant difficulty doing so with the mathematical-practice standards. And I’d argue that it’s the latter that contains the heart of what it means to do mathematics.

on 04 Oct 2012 at 5:51 am, #9 Brian: I can see using an adaptive system as a resource, but I shudder to think of them replacing teachers. Even as a resource, you’ll still run into issues like the ones you mentioned in this post, but you will also run into a few good things as well.

* For some students, working alone on the computer may feel like a safer environment.

* Getting feedback from the computer that you’ve “mastered” a skill can be motivating, especially if there is some visual display in the system to see how you are progressing.

* Depending on the system, the student might be practicing skills in their ZPD most of the time. (Obviously not all of the time as illustrated in the anecdote you shared.)

* The computer has the ability to give students instantaneous feedback about whether they are correct.

* Again, depending on the system, the student might get useful feedback to either help them understand their mistake or guide them to a correct solution.

* As a previous comment pointed out, computer systems can provide teachers a wealth of data about student performance. The data still needs to be judged by a human, but it’s nice to get a report where you didn’t have to manually score all the student work and input it into the system yourself.

on 04 Oct 2012 at 6:29 am, #10 David Petro: I have mixed feelings about adaptive learning. I am totally intrigued by it because I think there is tremendous potential given the technologies that we have at our fingertips now. But I also think that currently it falls short of that potential.

For mathematics I think there is definitely a place for it when we are talking about basic math skills or algorithms and practice. Currently I am not sure that there is a system that does a good job of giving helpful feedback, but give it time.

But as @Ana pointed out, there is a lot we can gain from looking at how games teach players how to succeed. Typically, mastery is required of various basic levels before the real problem solving begins. In the general sense I think this is done so well in the game Portal (http://store.steampowered.com/app/400/) and more specifically to math with the algebra teaching app Dragon Box (http://dragonboxapp.com/).

And lastly, to piggyback onto what @Brian said, adaptive systems shouldn’t replace teachers. However, they can move the rote aspect of mathematics to the virtual world and leave the problem solving to us. I was reminded of this TED talk where Arthur C. Clarke is quoted (at about 4:15) as saying “A teacher that can be replaced by a machine, should be” (http://www.ted.com/talks/sugata_mitra_the_child_driven_education.html). With that in mind, I think that eventually (perhaps not yet) these systems will be a perfect addition to the class.

on 04 Oct 2012 at 7:10 am, #11 Dan Meyer: Matt E: Even with the content standards, though, their understanding of *why* you got a particular content question wrong is so dim.

on 04 Oct 2012 at 7:58 am, #12 Christopher Danielson: Well put. The only thing I’ve seen that is even addressing the issues you raise is xyalgebra. It’s a pet project of a professor at City College, CUNY. To hear him tell it, he has been able to get no traction at all in convincing big publishers to consider his idea of giving meaningful feedback to students based on a database of known errors. Right/wrong is all they are interested in, and this seems to pervade edtech in math more broadly as well.
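A database of known errors is not exotic technology. Here is a hypothetical shape for it (the question key, answers, and feedback strings below are all invented for illustration; this is not xyalgebra’s code):

```python
# A per-question table of anticipated answers. A correct answer maps
# to None; each anticipated wrong answer maps to feedback that names
# the misconception instead of just saying "wrong".
KNOWN_ERRORS = {
    "write 3x - y = 5 in function form": {
        "y = 3x - 5": None,
        "y = 5 - 3x": "Sign slip: you solved for -y, not y.",
        "y = 3x + 5": "Check the sign on 5 when it crosses the equals sign.",
    },
}

def feedback(question, answer):
    """Look up targeted feedback for a student's answer."""
    table = KNOWN_ERRORS.get(question, {})
    if answer not in table:
        # Unanticipated response: the honest fallback.
        return "We haven't seen this error before; flag it for the teacher."
    msg = table[answer]
    return "Correct!" if msg is None else msg
```

The hard part is not the lookup; it is the research and authoring effort behind each anticipated error, which is exactly what publishers are declining to pay for.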

on 04 Oct 2012 at 8:43 am, #13 Brian: To Ana, David, and Christopher – I’ve been writing digital math lessons for three years now (as part of a blended curriculum a teacher uses with her kids, not an adaptive system) and we’ve struggled with the issue of how to provide feedback. There are a couple of factors at play:

1. Cost. For every question I write, each piece of specific feedback that accompanies it costs time and money. It is even more expensive if there are graphics or narration involved. No matter how well intentioned I want to be, I have to weigh cost vs. reward, and when you’re writing lots of questions, the costs quickly add up. (As an educator I wish money didn’t dictate as much as it does, but sadly companies do not have endless resources.)

2. Assumptions. When a student gets an answer wrong, we are assuming why they got it wrong. If I provide them specific feedback tailored to an assumed mistake, I might cause more confusion if the student got the answer wrong for a different reason.

More often than not I provide guiding feedback rather than focusing on the mistake I think a student is making. I do provide specific feedback for presumed mistakes from time to time, but I’ve found that in many cases it isn’t necessary. I can steer the student with a guiding question or I can point out a Help or Hint that they can access to help them try the question again without having to get bogged down in specifics. If the student is really stuck, then they can always turn to a neighbor or call on the teacher.

on 04 Oct 2012 at 9:33 am, #14 Elaine Watson: Wow! I took the time to read Benny’s story and my reaction was sadness at how a child’s learning could be so screwed up by an anonymous program with no adult interaction. That’s malpractice!

on 04 Oct 2012 at 9:53 am, #15 James Key: I think you guys have made a convincing case for the *limitations* of computers. But I would submit that there is tremendous potential there if they are used judiciously. Computers could be great for *routine, mechanical practice.*

A possible fix for the problem Dan points up is for the program to generate a screen-size stop sign that says, “STOP! SEE YOUR TEACHER ASAP” when the student enters more than one wrong answer in a given practice set, indicating that they have not yet mastered the given skill(s).

on 04 Oct 2012 at 11:49 am, #16 Tim Hunt: Benny’s story just reminds me of the advice to eat a healthy balanced diet, and avoid extreme fad diets. Which is hardly profound advice.

So, don’t expect to completely replace teachers with an adaptive math system, but a good maths assessment system can be a very nourishing and healthy part of a balanced maths instruction diet. (OK, analogy stretched too far now.)

@Ana When you say “current adaptive systems are unable to give background, hints, or help to solve the problems” you are just wrong. I don’t know what systems you have been looking at.

I have had a very enjoyable year working with Chris Sangwin, a lecturer at Birmingham University, UK, on his maths assessment system called STACK. (Probably the best link at the moment is http://moodle.org/mod/forum/discuss.php?d=197507.) Also, Chris has just written a book about the educational ideas in STACK, and I have had the pleasure of reading a draft. Once it is published I strongly recommend it to everyone here. The book has some interesting thoughts about what it means to teach and assess maths, to what extent computers can help, and then the approach STACK takes.

One of the principles in STACK is that when grading the student’s response, what you are actually doing is establishing certain mathematical properties of what was entered. You are absolutely not trying to match the form of the student’s answer against a key. So, in the example where the answer was 1/2, what properties are we interested in?

* Numerically equal to 1/2? – certainly.

* Expressed as a fraction in lowest terms? – well, that is more interesting. Depending on the context, that may or may not be a relevant property. The teacher needs to decide.

* Expressed as a fraction, not a decimal? Again depending on the context, you might or might not want to accept 0.5 or 0.50.

That last property raises another issue. Suppose we really want fractions, and don’t want decimals, and the student types 0.5. Do we consider that a mathematical error and grade it wrong, or do we just consider that the student has not understood the rules of the game, and give them some feedback like “Actually, we want you to type a fraction, not a decimal. Please try again. You have not been penalised for that response.” Again, it is a choice for the teacher.

But, ultimately, there is a part of maths education that involves practicing skills until you can execute them without making mistakes. For that practice, you have a choice: either the student writes the answers on paper, hands them to the teacher, and gets the outcomes and feedback some time later; or the student interacts with a computer system that has had high-quality content loaded into it, which can grade the student’s work immediately and give feedback, and, when the student is wrong or just not entirely correct (e.g. forgot the constant of integration), can let them correct the mistake themselves by editing their answer. The latter option is clearly better to me – but it is just one part of maths instruction.
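The property list above is concrete enough to sketch with Python’s standard-library `fractions` module. This is my paraphrase of the idea, not STACK’s actual implementation (STACK uses a full computer algebra system); each property is established independently, and the teacher decides how the properties combine into a grade:

```python
from fractions import Fraction

def check_half(raw):
    """Independent property checks on a student's answer to a
    question whose correct value is 1/2. Accepts strings like
    "1/2", "3/6", or "0.5"."""
    value = Fraction(raw)  # Fraction parses both "1/2" and "0.5"
    props = {
        "numerically_equal": value == Fraction(1, 2),
        "written_as_fraction": "/" in raw,
        "in_lowest_terms": False,
    }
    if props["written_as_fraction"]:
        num, den = (int(p) for p in raw.split("/"))
        reduced = Fraction(num, den)
        # Lowest terms iff reducing changes nothing.
        props["in_lowest_terms"] = (reduced.numerator == num
                                    and reduced.denominator == den)
    return props
```

So “3/6” is numerically equal but not in lowest terms, and “0.5” is numerically equal but not written as a fraction – and whether either of those matters is the teacher’s call, not the software’s.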

on 05 Oct 2012 at 6:19 am, #17 mr bombastic: I think some of the folks commenting on procedural material may not have read the “Benny” article. It is much easier to give questions and feedback that train the brain to complete a process than it is to give questions and feedback that produce understanding.

I hate to cite an education guru, but Grant Wiggins had a recent post on transfer that I think relates to this. Essentially it is the difference between telling the student the type of question (Pythagorean theorem) and seeing if they can do it vs. asking a question that involves the Pythagorean theorem and seeing if the student recognizes the need for it.

I am also a little skeptical of overreliance on standards-based grading for the above reasons. It is difficult to say what you are really measuring, in my un-humble opinion.

on 05 Oct 2012 at 6:44 am, #18 Christopher Danielson: Got a link for us, Mr. Bombastic? Love to read the Wiggins piece. I’m with you on the SBG thing. I do think we tend to treat it as though the S stood for Skill. But that’s true of much mathematics assessment, so I don’t consider it an SBG critique per se. It is challenging to design a principled *ideas-based* SBG scheme. The science folks are way ahead of us here. I’m looking at you, Jason Buell.

on 05 Oct 2012 at 6:47 am, #19 Doug Smith: The adaptive math system idea strikes me as uber-reductionism, which frankly speaking can’t be a good thing.

How does adaptive math fit in with the idea of problem or project based learning? Is there any other way that math could be further de-contextualized?

on 05 Oct 2012 at 10:26 am, #20 Dan Meyer: Tim Hunt: Math already seems too much like a game to too many students, full of rules they have to memorize and maybe understand. Now we’re discussing new rules. These rules, unlike most in mathematics, exist only for the convenience of an algorithm which didn’t understand that 3/6 and 1/2 and 0.5 and “one-half” are all the same. We’re bending over backwards to accommodate these horrible adaptive math systems for too little upside, IMO.

on 05 Oct 2012 at 1:18 pm, #21 mr bombastic: Grant Wiggins on transfer:

http://grantwiggins.wordpress.com/

on 05 Oct 2012 at 1:26 pm, #22 Tim Hunt: Dan, I think you are being overly simplistic. 3/6 and 1/2 and 0.5 and “one-half” are not always the same.

In scientific contexts, sometimes 0.5 means “between 0.45 and 0.55” and 1/2 does not mean that. Similarly, “half a metre” implies a different degree of precision than “500 mm”.

If you were reading an article, you would be very surprised to see a number written as 3/6. You would expect the author to have reduced fractions to their lowest terms.

One of the points of math is to allow humans to communicate with each other on quantitative matters. Therefore, we should (at the appropriate time) be teaching these subtle shades of meaning in how humans use numbers (and other mathematical constructs). These are not arbitrary rules that have been invented as a game (my wording was poor before). They may be arbitrary rules, but many language rules are, and students have to learn them if they want to participate fully in the community of users of that language.

It is similar to the way you would expect an English teacher to correct any spelling mistake they saw, even if they were grading an essay about something, not grading a spelling test. Of course, the effect of the ‘mistake’ or just ‘unconventional usage’ on the numerical grade will be different in the different contexts. Relatedly, you obviously would not let students use a spell checker during a spelling test, but you might encourage them to use one when writing an essay, which is similar to the old chestnut in maths education about when you should or should not let students use a calculator, or Wolfram Alpha.

So, anyway, my arguments were nothing about “We’re bending over backwards to accommodate these horrible adaptive math systems.” I do not wish to defend that. I am interested in what it takes to build a useful computer tool to help in teaching the maths that students should be learning.

on 06 Oct 2012 at 9:23 am, #23 George Bigham: How about if the kids program the computers instead of the computers programming the kids? Sounds too hard? If it’s programming in C, Java, Matlab, etc., it’s too hard, but if advanced programs can parse natural language into correlated code (they can do this currently with varying degrees of success), then students can be assigned to make their own set of instructions for a computer and have it do the work of testing their accuracy.

For example, Tyler could write “move the x term to the opposite side from the y term, then …, then …” Susie could write “divide everything by 3…”, and Jonnie writes, “get y by itself.” The computer could parse these rules and test them, or respond by asking for more detail, like: how do you “get y by itself”? Students would already have to have developed methods on their own the old-fashioned way, but after they master them they can teach a computer to do the dirty work, such as having it finish a test of 100 problems at 100% correct in one minute if they taught it correctly.
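George’s “ask for more detail” loop can be sketched in a few lines. Every phrase and response below is invented for illustration – a real system would need genuine natural-language parsing:

```python
# Rules the computer has already been "taught", mapped to an
# acknowledgement of the concrete operation each one names.
KNOWN_MOVES = {
    "divide everything by": "OK. I can divide both sides by a number.",
    "move the x term": "OK. I can add or subtract a term on both sides.",
}

def respond(instruction):
    """Match a student's natural-language rule against known moves,
    or ask the student to decompose it further."""
    for phrase, ack in KNOWN_MOVES.items():
        if phrase in instruction.lower():
            return ack
    return f'How do you "{instruction}"? Break it into smaller steps.'
```

So Susie’s rule is accepted, while Jonnie’s “get y by itself” bounces back as a request for smaller steps – which is itself a useful prompt to articulate the method.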

on 06 Oct 2012 at 4:04 pm, #24 blaw0013: Brian makes positive observations in response 9; compelling until we recognize what Paul Gitchos reminds us Seymour Papert observed about the fundamental trouble with allowing education to become training: “teaching computers to program children.”

on 06 Oct 2012 at 9:51 pm, #25 Tim Hunt: @George Bigham – you mean a bit like the Mathpert system that Michael Beeson at San Jose State University started developing in 1985? It is now marketed as MathXpert: http://www.helpwithmath.com/about.php?include=stepbystep.html.

on 07 Oct 2012 at 11:11 am, #26 John Edelson: I’m pleased to see some realistic criticism of the “adaptive” learning systems. I’ve been trying to deploy them for a while and I keep turning off the so-called intelligence and instead letting the student determine what they need to learn next. Even if their answer is right.

One of my pet peeves is the lack of sophistication in the granularity of the “helpful lesson” that they dish up. It’s a really hard thing to do right and it’s awful being caught within a software system that feels like a bank’s bad call management system.

on 08 Oct 2012 at 9:07 am, #27 Tom: It’s interesting to see the Annie Murphy Paul example and to think back on the last instant-computer-grading-for-writing product I was pitched.

on 09 Oct 2012 at 10:09 am, #28 cb1601ej: Tools are a means, not an end, so a critical stance is always needed. This, however, shouldn’t mean we shouldn’t try out anything: a good teacher will remain a good teacher when using tools (like 3-act videos) like adaptive systems; a bad teacher remains a bad teacher. Also, the state of the art is further along than is painted here. It’s less either/or and more both/and.

on 11 Oct 2012 at 1:25 am, #29 Tim Knight: I think this debate is on the bleeding edge and it is difficult. However, the discussion very quickly becomes didactic and polarized. There is a huge difference between a “blended” or “flipped” environment and a student who only studies with an adaptive math system. The flipping and blending should increase contact time with a teacher and make the math far more interactive and “human”. Why do I see Charlie Brown’s teacher whenever I think about the traditional face-to-face classroom?

Take the traditional classroom lesson. Once the teacher has finally started interacting with individual students there may only be 30 minutes left in the lesson. Divide that number by 25 and you get less than 90 seconds face time with your teacher. That is an appalling deal. Technology and flipping buys time to work with students individually and in groups on understanding.

I teach online and I can honestly say that my knowledge of the students is as good as, and often better than, in a face-to-face environment. There are numerous reasons for this that I do not have to go into here, but one example is how introverted students (traditionally seen as problematic in Western education systems) get a fair chance to ask questions and engage with their teacher. I was shocked by this, and by the lack of data I had on my interactions with my students. Further to this, assessment in a good online environment increases the amount and quality of feedback, as a lot of the instructional “heavy lifting” can be leveraged by good content written ahead of time.

on 11 Oct 2012 at 5:25 am, #30 Dan Meyer: Tim Knight: Let’s just pause it there, though. There are enormous swathes of educators who manage not to lecture for half the classroom period, who pose interesting problems and work with small groups of students all the way through the hour.

Flipped advocates (how did we get on flipping in this thread?) would like me to believe that everybody is either lecturing for a half hour in the classroom or lecturing for a half hour on video at home. That obscures a lot of other, better alternatives.

on 13 Oct 2012 at 4:15 pm, #31 A Brief Change Of Mind On Adaptive Learning | Stanford EDF 403X: […] depress me (more here) and after my conversation in our small group yesterday, they still depress me. But a couple of my […]

on 27 Nov 2012 at 5:32 am, #32 “Adaptive” Learning Technologies: Pedagogy Should Drive Platform | edtechdigest.com: […] Meyer posted two criticisms of “adaptive” technologies. In the first, he drew comparisons to Stanley Erlwanger’s research on the failures of Individually Prescribed Instruction (IPI). Meyer appropriately lamented the […]