The Scary Side Of Immediate Feedback

Mathspace is a startup that offers both handwriting recognition and immediate feedback on math exercises. Their handwriting recognition is extremely impressive but their immediate feedback just scares me.

My fear isn’t restricted to Mathspace, of course, which is only one website offering immediate feedback out of many. But Mathspace hosts a demo video on their homepage and I think you should watch it. Then you can come back and tell me my fears are unfounded or tell me how we’re going to fix this.

Here’s the problem in three frames.

First, the student solves the equation and finds x = -48. Mathspace gives the student immediate feedback that her answer is wrong.

[Screenshot: Mathspace marks the student’s answer x = -48 wrong.]

The student then changes the sign with Mathspace’s scribble move.

[Screenshot: the student scribbles out the negative sign, changing -48 to 48.]

Mathspace then gives the student immediate feedback that her answer is now right.

[Screenshot: Mathspace marks the revised answer x = 48 correct.]

The student thinks she knows how to solve equations. The teacher’s dashboard says the student knows how to solve equations. But quiz the student just a little bit — as Erlwanger did a student named Benny under similar circumstances forty years ago — and you see just how superficial her knowledge of solving equations really is. She might just be swapping signs because that’s why her answers have been wrong in the past.

Everyone walks away feeling like a winner but everyone is losing and no one knows it. That’s the scary side of immediate feedback.

One possible solution.

When a student pulls a scribble move like that, throw a quick text input that asks, “Why did you change your answer?” The student who is just guessing will say something like, “Because it told me I was right.” Send that text along to the teacher to review. The solution is data that can’t be autograded, data that can’t receive immediate feedback, but better data just the same.
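A minimal sketch of what that intervention could look like, in Python. The event hook, the prompt, and the review queue are hypothetical names for illustration, not any vendor’s API:

    teacher_review_queue = []

    def prompt_student(question: str) -> str:
        # Stand-in for a UI text input; here it just reads from the console.
        return input(question + " ")

    def on_answer_change(old: str, new: str, was_marked_wrong: bool) -> None:
        # A sign flip right after a "wrong" mark is the suspicious case:
        # the student may be pattern-matching on feedback, not reasoning.
        sign_flip = old != new and old.lstrip("-") == new.lstrip("-")
        if was_marked_wrong and sign_flip:
            justification = prompt_student("Why did you change your answer?")
            # Deliberately not autograded: queue the free text for the teacher.
            teacher_review_queue.append(
                {"old": old, "new": new, "student_says": justification}
            )

    # The scribble move from the demo:
    on_answer_change("-48", "48", was_marked_wrong=True)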

Related Awesome Quote

If you can both listen to children and accept their answers not as things to just be judged right or wrong but as pieces of information which may reveal what the child is thinking you will have taken a giant step towards becoming a master teacher rather than merely a disseminator of information.

JA Easley, Jr. & RE Zwoyer

Featured Comment

Justin Lanier:

I would want to emphasize that the issue is that Mathspace (and tech folks generally) tries to give immediate, “personalized” feedback in a fast, slick, cheap, low/no-labor kind of way. And, not surprisingly, ends up giving crappy feedback.

Daniel Tu-Hoa, a senior vice president at Mathspace, responds:

[T]eachers can see every step a student writes, so they can, as you suggest, then go and ask the student: “why did you change your answer here?” For us, technology isn’t intended to replace the teacher, but to empower teachers by giving them access to better information to inform their teaching.

2014 Sep 4. I’ve illustrated here a false positive — the adaptive system incorrectly thinks the student understands mathematics. Fawn Nguyen illustrates another side of bad feedback: false negatives.


65 Comments

  1. I agree with your analysis, Dan. If the student, the teacher, and the Great Big Gradebook in the Cloud think that learning is happening here, they are probably mistaken. But is it the immediacy of the feedback that’s the issue? This same feedback given a day later, and the same correction made, and the same “now you’re right” feedback would seem to be just as bad.

    And just as much, there’s lots of great immediate feedback that I could give a kid who’s working a problem right in front of me.

    So I would want to emphasize that the issue is that Mathspace (and tech folks generally) tries to give immediate, “personalized” feedback in a fast, slick, cheap, low/no-labor kind of way. And, not surprisingly, ends up giving crappy feedback.

  2. It’s a shame that the creators of these sorts of technology haven’t embraced the idea of embedding metacognition and self-regulation right into the exercises, since it seems like it would be easy to do.

  3. Great points here, Dan and Justin.

    I’m not surprised a post like this follows your “Developing the Question” post series, since we are again too focused on the answer. Here, instant feedback is great for a procedure-based solution, but not so great at measuring any conceptual understanding.

    Could they improve by having students jot down (in point form) why they perform each step in the solution, rather than only when making a correction?

    I used to do a lot of “warm ups” using google forms, but do so much less often for this same reason. I’d always think “great, they get it!” but probably should have been saying “great, they can get the right answer” (but unsure if they really get it). I may start doing this again with a justification textbox for each solution to give me more insight.

  4. Also, and perhaps this is why we use pencils for so long, math requires a bit of grace and wait time. Immediate feedback is nice if it’s, like, the next day, but giving students feedback before they’ve fully gotten the chance to look at something is an awful way for kids to sharpen their metacognitive skills. Secondly, the feedback looks procedural and not conceptual, as if that’s the aim. Asking a question instead of giving them the answer is our aim; why are ed-tech start-ups different from us?

  5. Dear Dan and others –

    Thanks for your thoughts on Mathspace. Feedback is something we value highly – both feedback from math teachers, as well as the feedback we give to students. It is something we think about every day, and you can see this in our philosophy: we give students feedback at every step of every question. In our opinion, this is a better pedagogical approach than other digital products which only give this feedback at the end of a question, when students have written a single right or wrong answer, or chosen a right or wrong multiple choice response. Immediate feedback is also in our opinion preferable to feedback given a week later, or whenever a teacher gets to grade the work.

    Your feedback has certainly given us a lot of food for thought – for example, how should we give feedback – with a comment? Or instead, by prompting the student with a question to get them thinking? Also, we know there are times when immediate feedback is not such a good idea – so we work every day to find that balance with educators. All good things for us to work through.

    However, we never intended Mathspace to be used without teachers – in fact, quite the opposite, we think it is a tool to aid teachers and students in their teaching and learning. I’d like to offer you (and anybody else who’d like one) a full demo of Mathspace so you can see how Mathspace works, and make a more informed commentary – I think you’ll be in a better position to comment on Mathspace after seeing our full product, than just based on a short GIF.

    You’ll get to see that, for example, teachers can see every step a student writes, so they can, as you suggest, then go and ask the student: “why did you change your answer here?” For us, technology isn’t intended to replace the teacher, but to empower teachers by giving them access to better information to inform their teaching. My offer stands for all interested, please get in touch at dtuhoa@mathspace.co

    Regards,
    Daniel Tu-Hoa

  6. Think I’m missing something here, as this strikes me as a remarkable tool and a leap forward.

    Imagine a class with 15 minutes of independent practice. Teacher circulates and gets to 20 out of 30 kids, averaging a little more than 40 seconds per kid. Gets a few kids unstuck, asks a few others ‘what’s the next step,’ draws out a few misconceptions, and then walks back to front of the classroom where she uses this data to orchestrate a whole class wrap-up. This all sounds reasonable — it’s what I’ve seen skilled teachers do during independent practice — but 1/3 of the kids didn’t get teacher feedback on their work, and this system sort of ignores that different kids needed different kinds of feedback (some just a quick prompt, others a conceptual question, etc.). I’ve also seen the circulation process get derailed in a ton of ways, and the cost is almost always feedback at the individual kid level. i.e. She stays with the kid in the front for 3 minutes to get him going.

    I think it’s fair to ask whether this tool improves this sub-optimal process.

    Even if kids are getting imperfect feedback, that strikes me as (much) better than the alternative of either a) nothing at all or b) a short prompt when you really needed more support.

  7. @Daniel, I appreciate what you’ve shared about your goals and perspective at Mathspace.

    @Sean, your comment drew out for me another problem raised by automatic feedback–namely, where mathematical authority resides within a classroom. In your description of the teacher circulating and clarifying student misconceptions (etc.), it sounds like mathematical authority resides in the teacher. They’re in charge of knowing what’s right and wrong and why, and to help students accordingly. But something that isn’t a part of the tableau you described is the authority and sense-making that happens within and especially among the students who are doing the mathematics. It’s not just the teacher who can prompt, redirect, and explain–it’s every person who’s in the room. And of course everyone’s at a different place and can contribute different things, etc.

    And so this problem emerges: does Mathspace or something similar become an authority in a classroom? What kind of authority? Does its presence undermine students’ own authority, their capacity to make sense of mathematics? Dan broaches this question in his post–is the kid who erases that negative sign doing so because they’ve explained to themselves why it should be erased, or reflexively because the computer told them to?

    I don’t mean to make the best the enemy of the better, and I agree that the alternatives of “either a) nothing at all or b) a short prompt when you really needed more support” are poor options. But I do think that certain kinds of feedback delivery can undermine the building up of mathematical agency and can radically determine the mathematical culture within a classroom. I hope for all students something better.

  8. I know nothing about Mathspace, but you mentioned this in passing:

    > The teacher’s dashboard says the student knows how to solve equations.

    Do you know for sure that the teacher’s dashboard shows the answer as right in this case? Not sure whether you were assuming. One might also design the teacher dashboard to record the answer as wrong, or right (but with hints used and/or answers changed), or something else.

  9. I would propose that when a student gets an incorrect answer, the software creates checkboxes next to each step and requires the student to check the step where the error occurred. Then Mathspace would provide the student a workspace to rework the problem from where the error occurred. As a teacher I would then want the student to check his work with another student. Tools with immediate feedback work best when the teacher is able to immediately review the student’s work. Otherwise you have a situation where students, as in this example, continue down the wrong path without understanding their error.

  10. Weighing in from the constructivist corner, I would say that @sean and @daniel are very close. There are two fixes that spring to my mind. First, and Daniel/Mathspace might be on this already, change the “tip” to a question (in this case, “What happens when you multiply a negative by a negative?”). Next, the teacher should be able to save and print out a compendium of mistakes, % above guess etc. so that things that were difficult for more than one student (possibly grabbing the number without its sign, possibly not understanding why a negative times or divided by a negative yields a positive) could be addressed in lecture. This would be the perfect use of the flipped classroom model, yes?

    I’m interested in the opinion of this august group on the 8th/9th grade algebra games at the site I write for, http://www.mathnook.com. Both I and my twelve-year-old son get hooked on the game in which you solve two-step equations en masse in your head. All you get in feedback is instant knowledge that your answer is correct or incorrect. My son freaked for a while until he asked me to tell him what was going wrong. It was exactly the issue in the problem above. He only saw one negative sign, as if the other didn’t matter. We went right to my change jar so that we could construct the reason that a negative times a negative produced a different result than a negative times a positive.

  11. Further to #10 ZLBrown: Could the working be submitted a line at a time as a matter of course so that the student receives feedback all the way through, rather than just at the answer? Along with an affirmation (“Yes, you need to collect the terms by adding 8 to both sides” [framed in appropriate language]; “Yes, you need to multiply both sides by -3”) or a warning (“No: you multiplied by +3; this was not right”) this would encourage checking every line.

    Admittedly laborious but perhaps good for early trials and could be switched off in later examples?

    Not subtle enough to provide suggestions but would indicate immediately when the error had occurred which should mean errors don’t get ingrained. But I don’t know how tricky that level of feedback would be to implement. Once implemented it might be a boon!

  12. Jennifer Potier:

    This type of feedback uses nothing more than operant conditioning to ‘train’ a student how to get a correct answer to an abstract problem with no context! Not dissimilar to rote learning of the times table.
    We have used this type of immediate feedback for years; it is nothing new just because it is technologically based.
    What it does emphasise, however, is that technology will NEVER replace a true teacher, in a classroom or in life, who must guide a student through real challenges, not just inert simulations.
    I think there is a small place for this type of immediate feedback, but god forbid the young person who grows up and changes his financial bottom line from red to black, ‘to make it right’, just because he was conditioned to do so.

    Just as with the training of guide dogs, human interaction and intervention is needed to ensure safe and correct evaluation of a situation….otherwise we are all just a bunch of rats chasing the cheese!

  13. The example presented here shows clearly why technology elements can’t replace humans. However, I’m a little less alarmed perhaps about this specific example. This student *does* know how to solve this particular equation… and many, many algebra students *do* know the algebra but still make careless sign mistakes. A simple intervention from a human teacher would put this student back on track in a jiffy. None of us really knows *why* the student changed the solution from -48 to 48. Maybe the immediate feedback for that very mistake was enough of a reminder for the kiddo and now he/she won’t make that error again.

    I’d love to explore this software more. One example doesn’t affirm the tool or make me fear it. I’d argue that some of the (amazing) Desmos labs also affirm incorrect student thinking (students are able to create graphs over time that are not functions in “Function Carnival” for example, and the immediate feedback/animations that result are a fun reward for messing around… ha!)

  14. Daniel Tu-Hoa:

    For us, technology isn’t intended to replace the teacher, but to empower teachers by giving them access to better information to inform their teaching.

    Thanks for the response, Daniel. Certainly, I don’t think you’re trying to replace a teacher. That may be a response to somebody else’s criticism. It isn’t mine. And I’m glad to know you see your role as empowering a teacher with better information.

    I don’t think that this solution empowers teachers with better information, though:

    You’ll get to see that, for example, teachers can see every step a student writes, so they can, as you suggest, then go and ask the student: “why did you change your answer here?”

    This asks teachers to dig through the history of every student’s response to every problem looking for those moments where the sign shifts from a negative to a positive. That doesn’t seem empowering to me. That seems like a lot of work.

    An empowering approach would surface those students for the teacher, and perhaps ask students to write briefly why they changed the sign so those teachers could decide whether or not they needed to intervene.

    Maybe this sounds tough, like a lot of investment of programmer-hours into data that doesn’t have sharp edges. Good data costs! Keep us all posted.

    Cathy Yenca:

    I’d argue that some of the (amazing) Desmos labs also affirm incorrect student thinking (students are able to create graphs over time that are not functions in “Function Carnival” for example, and the immediate feedback/animations that result are a fun reward for messing around… ha!)

    Great example! We have a filter in that lesson for students whose graphs have “multiple values.” We know that’s a common error in the construction of functions so we try to empower teachers to find it.

    @Sean, I don’t think you’re wrong. I think Mathspace is fine (as are others) but could be better.

  15. Appearances can be deceiving — especially in math where the longer I teach the more amazed I am at the inventiveness of students when it comes to working out their own systems of understanding and problem solving. I’m not a fan of this kind of ‘deus ex machina’ approach to helping students do better in math, but I don’t suppose it’s going away any time soon, so it’s critical that educators know how to choose a package that will get them more than better statistics.

    This kind of oversimplified feedback in online teaching packages reinforces in students the importance of being right and moving on. It also strengthens their tendency to ‘game’ the system. I think an appropriate response to that kind of rapid error correction might be to integrate at least 2 more similar questions into the exercise on the fly so that if the original exercise had 10 questions the adapted one would have 12. The idea that quick fixes result in more questions to do might motivate students to check that their original answers make sense before submitting them. If one of those additional questions was answered incorrectly, the system could send a flag to the teacher or even freeze the exercise to further curtail students’ inclination to just keep guessing at changes and hitting ‘submit’. If, on the other hand, a student made an error which required more comprehensive correction (changes in the work as well as the answer), that might result in only 1 additional question for practice.

  16. I think the fundamental way students learn math is through thinking about wrong answers. Hence why math discussions are so powerful.

    Computer programs are very good at determining correctness: you are right or you are wrong. Thus students know instinctively that with a computer it is usually fastest to just throw answers at the wall and see what sticks.

    I think JSV had it right: we need programs that ask questions, not tell us if we are right or wrong. (By the way, I was dropped from an edtech company because my first hint was always “go back and read the question again.” For some reason they felt that was a useless hint.)

  17. Harry O'Malley:

    In James Paul Gee’s incredibly insightful talk about learning in video games, he brings up the vital point that our symbol systems (written language and mathematical notation) are referents for real, tangible phenomena, and that if we haven’t had ample experience living in the worlds that the words and numbers refer to, the symbols are meaningless. He illustrates: courts have ruled that two people have the same opportunity to learn if they received the same book. He disagrees, pointing out that if one of the two people has, say, 10,000 hours of experience working with the stuff the book is about, then that person has an enormously better opportunity to learn from the book than someone who has 5 minutes of experience with the same stuff.

    The same is true of feedback. The feedback that Mathspace gives has vastly different effects on different people depending on their level of experience with the material being worked on. If I put my 6 year old daughter on the 3-step equation worksheet, it really is nothing more than a frustrating, boring wild goose chase.

    When I played with the 3-step equations worksheet, on the other hand, I started to wonder about how the computer was checking the answer. I started with (-7x)/5+2=37. Next I typed (-7x)/5+2=((37)), just putting two parentheses around the thirty-seven. It told me that it was a correct step. I conjectured that, so long as I wrote an equation whose single solution was x=25, it would tell me that I had made a correct step. So next I wrote 2x=50 and, yep, correct step. This made me perceive the interesting fact that all equations whose single solution is x=25 are equivalent and I imagined using the tool to build new, extravagant equations getting immediate feedback on whether or not they were still equivalent to the original.

    So, to start, I decided to multiply both sides by x, which resulted in 2x^2=50x, and as I thought it would (because I obeyed the laws of balancing both sides, I vaguely said to myself in my head), it told me that I had taken a valid step. But then I realized that, although x=25 was still a solution, so was x=0. So now I conjectured that it checks to see if 25 is a solution, and if it is, then it verifies your step as a correct one. But more interestingly, I perceived the idea that the set of all the equations whose single solutions were 25 was a subset of a larger set of equations: those where 25 is one of the solutions. This then immediately connected to my knowledge of nested function ideas (e.g. that a line can be thought of as a parabola whose “a” value in standard form is zero). Bam! Now I’m really learning, generating new knowledge structures and assimilating them into my current ones. All because of this new type of feedback tool.
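    (A sketch of the acceptance rule Harry conjectures here: accept a rewritten equation whenever the target solution still satisfies it numerically. This is a guess at the mechanism, not Mathspace’s actual code.)

        def step_accepted(equation: str, solution: float, tol: float = 1e-9) -> bool:
            # Accept the step if substituting the target solution makes
            # both sides of the student's equation (numerically) equal.
            lhs, rhs = equation.split("=")
            scope = {"x": solution}
            return abs(eval(lhs, scope) - eval(rhs, scope)) < tol

        # Using Harry's working value x = 25 (the thread later corrects the sign):
        print(step_accepted("2*x=50", 25))       # True
        print(step_accepted("2*x**2=50*x", 25))  # True, though x = 0 now also works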

    So, my daughter uses the tool and learns nothing memorable. I use the same tool and have a deep aha moment about equations being able to be organized into nested sets based on their solutions. Clearly the feedback has drastically different effects on each of our experiences as well as what each of us learn.

    Of course these are two extreme examples that fall outside of the designers’ intention, but they help us see the range. I hope we can more readily imagine the infinitely many in-between cases such as a student who has only learned to solve one-step equations working on Mathspace’s three-step equation worksheet vs. someone who has completed the unit that includes three-step equations and is using the tool to review for a final exam.

    The point: the idea of *inherent* problems with certain feedback systems is potentially misguided because of the wild variety of possible prior experiences.

    @Justin: Good to see you on here. Not sure if you remember, but we took a course together at Exeter a few years back. This has implications for your authority angle as well. The way I was using the tool, I was using the feedback mechanism as a servant of my investigation. I certainly saw it as an authority, or expert, on the topic of knowing whether or not an equation has a solution of 25 (it is much better at that than we will ever be), but I was engaging with this authority as a collaborative peer. My point here is that the type of belittling student experience you are warning against, the type that makes students think that they can’t make sense of mathematics themselves, is not an inherent property of any feedback system (including Mathspace), but has more to do with the students’ beliefs in the area of mathematical autonomy before their interaction with the feedback mechanism begins. If they believe that they can’t make sense of mathematics, Mathspace may reinforce that. If they think they can make sense of it, as I do, then Mathspace reinforces that.

  18. Thanks everyone for all the comments.

    Dan Meyer:

    “I think Mathspace is fine, but could be better”

    @Dan – we agree. We’re trying to improve Mathspace every day. Right now, we flag every incorrect step for teachers to review. We also collate the common mistakes to make it easier for teachers to identify the troublesome areas. We collect data at every step of a problem, not just at the end, when students write their final answer — and we believe this is a giant leap in the right direction. The form and the frequency of the feedback is something we will continuously improve on.
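    (For illustration only: the collation Daniel describes could be as simple as a tally of flagged steps by mistake type. The categories and field names below are invented.)

        from collections import Counter

        flagged_steps = [
            {"student": "A", "mistake": "dropped a negative sign"},
            {"student": "B", "mistake": "dropped a negative sign"},
            {"student": "C", "mistake": "did not divide both sides"},
        ]

        # Collate the flagged steps so the troublesome areas surface first.
        common_mistakes = Counter(step["mistake"] for step in flagged_steps)
        for mistake, count in common_mistakes.most_common():
            print(f"{count} student(s): {mistake}")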

    Dan Meyer:

    “When a student pulls a scribble move like that, throw a quick text input that asks, “Why did you change your answer?” The student who is just guessing will say something like, “Because it told me I was right.” Send that text along to the teacher to review. The solution is data that can’t be autograded, data that can’t receive immediate feedback, but better data just the same.”

    @Dan – a timely suggestion as just last week we introduced a new feature allowing students to connect with a teacher immediately on any question, giving teachers the full context you are talking about.

    https://mathspace.co/blog/new-feature-respond-directly-your-students-feedback/

    Not perfect, but a step closer to connecting teachers and students at the right time, so thanks for all the suggestions.

    @robbie — yes, our teacher dashboard shows more than just right/wrong, but also “answers changed” and “hints used” as you suggest — i.e. more information to help teachers decide what to address in class

    @justin @jose – we’re not just tech folk – almost half our team are current math teachers, former math teachers or math teachers in training, and we highly value teacher feedback, especially from teachers using Mathspace in the classroom every day

    @jenniferpotier- we all agree that technology will NEVER replace a good teacher.

  19. Harry O’Malley wrote:
    “This made me perceive the interesting fact that all equations whose single solution is x=25 are equivalent and I imagined using the tool to build new, extravagant equations getting immediate feedback on whether or not they were still equivalent to the original.”

    There’s a neat activity called ‘clouding the picture’ in the old UK National Numeracy Strategy guidance (still available from http://www.nationalstemcentre.org.uk/elibrary/resource/4645/algebra-study-units ) that relates to your idea. I’ve used it many times and find it an effective way to introduce solving linear equations. It has the benefit of allowing massively differentiated responses too.

  20. Harry O'Malley:

    Pete:

    Thanks for the resource. I’ve read through the activities and am thinking about them.

  21. Where to begin?

    1. Not all software providing feedback is alike. I agree much of it could be much better. Certainly providing feedback for just the answer is of little help to strugglers, especially since often the key step is the first with the rest just mopping up. I believe MathSpace and tiltontec.com are the only ones checking intermediate steps. (maybe also Cognitive Tutor?)

    2. Software can evolve, with feedback from users hoping to improve it (not so much from users who just hate the genre — no useful conversation to be had there).

    3. What happened to the generally acknowledged value of continuous formative assessment?

    4. Feedback a day later is not feedback. Feedback is immediate.

    5. Why does this blog and every comment presume the teacher and not the student is responsible for their learning? Are students patients lying in hospital beds waiting for the doctor to come around and fix them? I prefer giving kids multiple learning resources and letting them figure out how they will master Algebra. We need them active, not passive.

    6. If mistakes are visible to the teacher kids become self-conscious. Math anxiety is a serious problem, let’s give them some privacy as they flail about. My software does not even record failed “missions” because I want them to try, try again. If they get frustrated, why cannot they be the ones to ask the teacher or other students for a conceptual reset?

    7. Why cannot the software offer a forum where students and teachers can collaborate, one allowing embedded videos and or easily edited math?

    8. Some ed tech developers were/are teachers. I suspect one can tell which by their products.

    9. Benny averaged over 80% applying a cleverly concocted but wholly bizarre rule system? The story did not say much about the software, but towards the end suggests it was multiple choice. Not good.

    10. The Gran Turismo driving game offers several modes of play. In arcade mode one is free to crash competitors off the road and plow on to victory. In “license” mode (with more advanced licenses required to open up certain races and buy certain cars) if one makes one mistake it’s “game over”. Moral for math software left as an exercise.

    11. In the example given, note that most likely the student made a careless error. If they are in fact weak on signed arithmetic, they will continue to get quality practice despite the deficit. If they are not interested in getting better, they will not get far in “license” mode. Oops, I gave it away.

    12. This has nothing to do with rote memorization. Number facts are stimulus-response and can be memorized by rote. In Algebra the problems all look different and one must see past the particulars to recognize which of dozens of valid transformations can be applied and applied usefully in which order. This recognition of key abstract features offering an opportunity to simplify (or factor) is a learned ability developed only through substantial practice.

    13. A surprise for many commenters (and me when I heard it): teachers like my software because kids cannot continue without correcting mistakes. Apparently without the software struggling students happily filled a page with guesswork so they had something to “turn in”. Another reason feedback must be immediate.

    14. @Harry O’Malley: Great stuff. You meant “x = -25” was correct, yes? Anyway, I tried the ((37)) variant on my software and got a yellow “warning” background on that step and the avatar tutor said “I was thinking in a different direction.” Multiplying both sides by “x” I expected my software to say it only handled linear equations but apparently I make that confession only when a new problem is being entered, so it marked it as wrong. Gotta fix that. My software does fall back on numerical methods to avoid saying “wrong” to correct work (something I dread) so I have some research to do. Thx for the test case!

  22. Kenneth Tilton wrote: “Harry…You meant x=-25 was correct, right?” <– I blame this on the fact that Dan's comment text input box doesn't have enough immediate feedback.

  23. Hi Everyone,

    Just wanted to let you all know that I took up Daniel Tu-Hoa on his offer for a more detailed tour of Mathspace and I was really impressed with where this tool is and where they want to go with it. We discussed some areas where improvements could be made such as allowing the teacher to “toggle” a justification box for students to explain their thinking as well as a “stream/feed” that highlights areas where students were struggling. If/when those are added, that could help take care of some of the original concerns Dan and others had mentioned.

    I did also mention that while Mathspace is really advanced compared to many other online learning tools, I still have concerns that this tool may not improve the motivation of students. We all know that many students who have struggled in math for some time have likely lost motivation to improve. If motivated to learn, it seems as though a student could learn as much as they desire.

    The real question here could be: How can Mathspace be improved to bring out that curiosity that Dan and so many others continue to search for in effective math tasks?

    If students are being assigned Mathspace problems, but are still doing the work “to get it done,” will they be any further ahead?

  24. @Kyle: the demotivation comes from failure, which in turn comes from low quality practice (no feedback). With the feedback students get over a certain psychological barrier preventing them from succeeding at what is in the end fairly simple stuff (thinking Algebra here).

    Given a little feedback (and a good teacher!) students quickly “get it” and then are able to enjoy math the same way others of us did. One teacher reported putting the kids who had fallen off the pace on my software while she continued with the others. In a matter of weeks the kids who had fallen behind had caught up with and rejoined the rest of the class.

    One thing we should not forget: everyone enjoys success, and the most turned-off student gets off on doing good work if we can arrange the conditions. The motivation comes from succeeding at something important, which they understand math to be.

    They do not like it because they cannot do it. Math is cumulative so a 90% on signed numbers and a 90% on fractions becomes a train wreck on simple equations involving both. Pretty soon they are saying they are not “math persons”. Nahh, they just need a quality practice environment.

    One advantage of automated practice is that students who need more practice to internalize math can now do two or three times as many problems. Problems are generated and scored as fast as they can do them. This also frees the teacher to wander the room working with individuals or small groups for support or on enrichment tasks.

  25. Kyle:

    We discussed some areas where improvements could be made such as allowing the teacher to “toggle” a justification box for students to explain their thinking as well as a “stream/feed” that highlights areas where students were struggling.

    Right! That’s the kind of blended learning software we need:

    Combine algorithmically-assessed data (answers to the question “Did the student get the problem right?”) with human-assessed data (answers to the question “Why did you change the sign?”). Send teachers the second data set without attempting to grade it, but flag students who failed the first dataset.

    So now the teacher has a pile of justifications for wrong answers. Such a valuable teaching tool. But for whatever reason, it’s so rare to find blended learning developers who are willing to collect data they don’t assess. To so many developers, a data set is valuable if and only if a computer can assess it.
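    (A sketch of that blend, with invented field names: the machine keeps its right/wrong grade, the justification rides along ungraded, and only the failing attempts get surfaced for the teacher.)

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Attempt:
            student: str
            autograded_correct: bool             # assessed by the computer
            justification: Optional[str] = None  # read by the teacher, never graded

        def teacher_flags(attempts):
            # Flag the attempts that failed autograding; pass the free text through.
            return [a for a in attempts if not a.autograded_correct]

        pile = [
            Attempt("Benny", False, "Because it told me I was right."),
            Attempt("Ana", True),
        ]
        for a in teacher_flags(pile):
            print(a.student, "->", a.justification)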

  26. @Harry O’Malley: “I blame this on the fact that Dan’s comment text input box doesn’t have enough immediate feedback.”

    Priceless.

    So I fixed my software in light of your (from memory) “3x^2=15x” exploration.

    The first fix as advertised was not to say “wrong” when unable to process a student step. Instead: “I am afraid that expression is beyond my abilities. Can you approach the solution more simply?”

    (Note that it did not attempt numerical methods because it did not make enough progress with the problem to fall back on that.)

    The second fix obviates the need for the first (in this case, but the fix stays for the more general case) by catching the expression when the student (or software evaluator) enters it: “This version of Tilton’s Algebra supports only linear equations, but this step is not linear (nor was the problem statement). Can you find another path to the solution?”

    Thanks for the RFE. -k

  27. I understand what you’re saying 100%. The immediate feedback can keep our kids from understanding.

    BUT…….The way they turn writing into mathematical equations!!!! This is awesome! And they just started – no need for the Khan-like hate mail yet. Even if nothing comes out of their startup except the licensing of whatever magic they created that turns my ridiculous scribbles into beautiful equations, let’s give credit to the genius that created this.

    This could help launch a lot of other really interesting (and non-immediate feedback) apps for our students!

  28. @Tim, there’s an app called “MyScript Calculator” that serves as a handwriting recognition calculator that’s pretty neat. Also, there’s an app called “MyScript MathPad” that does the same thing, but without the calculator part – a handwriting recognition equation editor, if you will. My students have used MyScript MathPad to take notes since it makes their messy handwriting readable. Nifty. Here are links to both freebies:

    https://itunes.apple.com/us/app/myscript-calculator-handwriting/id578979413?mt=8

    https://itunes.apple.com/us/app/myscript-mathpad-handwriting/id674996719?mt=8

  29. “When you make a mistake in math, your brain grows. Synapses fire in your brain. In fact, your brain grows when you make a mistake, but when you get work right, no brain growth happens…”

    …says Jo Boaler here:
    https://www.youtube.com/watch?v=qD5QR5R6b8E

    Dylan Wiliam also referenced this phenomenon in a recent workshop I attended, calling it the “hyper-correction effect.”

    All this to say, technology now, more than ever, can equip teachers and their students to receive instant feedback through formative assessment tools. Making mistakes is part of that process. Knowing about those mistakes instantly and having the opportunity to correct them and/or receive an intervention from the teacher would seem to aid the sort of brain growth Boaler and Wiliam are talking about.

    No tool is perfect, no teacher is perfect, and no student is perfect. However, using tools that interrupt the learning process when students start to head the wrong direction has been very empowering for me, and more importantly, for my students. They don’t know what they don’t know, and formative tools can really help in a right-on-time way.

  30. @Tim Hartman: They might be using the VisionObjects library, which can indeed be licensed.

    As for Khan, the anti-feedback crowd has to deal with the SRI study (which led more to product redesign than assessment):

    “Although the YouTube videos are the best known aspect of Khan Academy, we found that teachers and students in the classrooms in our sample were attracted to other aspects of the system. These included problem sets that helped students practice newly learned skills and that provided them with immediate feedback and hints when needed;…”

    I wish I could google it up, but there was even a comment from the Khan folks saying they never meant the videos to be a big deal (!) and they were intensifying their work on the problem sets and feedback.

    @Cathy Yenca: are you sure no brain growth happens on success? I have always wondered why I can make twenty bad shots in tennis but somehow my neural net strengthens the one good shot I make. Seems to me the endorphin rush off the success translates into reinforcement. But I agree 100% that success works only after it follows struggle: when things come too easily because of whatever excess assistance is provided, learning does not occur. “Make ’em sweat,” I like to say.

  31. I disagree with the idea that if mistakes are visible to the teacher kids become self-conscious. While that may be true in too many classes, that is a teacher mistake and not a general rule.

  32. Feedback that is just something along the lines of “good job” or “wrong” is not useful feedback.

    Maybe not on every change, but at least on some, the program or the teacher has to stop the child and ask, “Why are you making that change?” I think even if the program gave a drop-down with multiple choice answers (limit IDK to two per period), that might work.

    I think the real question we need to determine here is what is good feedback and how can a program such as this deliver that in a timely fashion to a student?

  33. Regarding visibility of errors to teachers, maybe it was just me, but I always freaked out if the teacher stopped while wandering around the class and looked over my shoulder to watch me work. I seem to recall kids basically refusing to continue working until I moved on when I did the same as a teacher.

    I think if a teacher says to a kid “I see it took you ten tries to figure out 2/3 divided by 3/7” it will have a chilling effect on their willingness to take risks.

    Somewhat related: I am reminded of what happened when Cognitive Tutor took points off if a student asked for a hint while solving a problem: kids stopped asking for hints and instead asked the teacher. Doh!

    So Dan is right: the designs of these tools must be all over the human engineering aspect or a train wreck is possible.

    As for right/wrong not being useful, two things: when contrasted with a silent piece of paper staring back at the kid it seems useful. As for something more specific than “wrong”, I have considered diagnosing the student’s likely mistake and saying, “Not quite. You might want to recheck your signed multiplication.” But then I think we are giving too much help: kids just sail through their work fixing mistakes as prompted and never go through the pain and suffering of figuring out what they did wrong.

    This, btw, is where I deviate from the adaptive crowd. Cognitive Tutor (last I looked) provided the answer after the student failed on three tries. I guess the idea is to take them back to easier problems for a while to rebuild their momentum instead of leaving them flailing helplessly at something for which they are not ready.

    I like to see them flail, albeit with several forms of help available besides trial and error, including a last resort internal forum where kids can ask other humans for help when they are home alone.

    Anyway, Dan’s original concern was for us devs to make sure we are making new mistakes. Not to worry, we are.

  34. To anyone still subscribed to this thread, I illustrated here one side of lousy feedback: false positives. Mathspace thinks this student knows how to solve the equation when really the student may just be fudging the signs.

    Fawn Nguyen illustrates another side of lousy feedback: false negatives. She knows the answer and how to get it, but adaptive systems require inputs that are structured just-so, so Mathspace has told her she’s wrong. The problem here is one of formatting and not mathematical understanding. We’ve told someone who understands math that they don’t understand math. A minimally qualified teacher could give better feedback.

    As Justin Lanier points out above, the speed with which students receive feedback (immediately or a day later or a week later) is only one variable to consider. Another is the quality of the feedback. I’d rather a student receive feedback that’s delayed but accurate than immediate and inaccurate.

  35. Harry was definitely right: these comment boxes should be catching mistakes such as “Mathspace has told her she’s wrong.” Actually, it was quite careful to say “Uh-oh…you must enter an equation.”

    The reason it does that is because both sides of this debate (but not all software developers) worry quite a bit about giving false information. I dread it, myself, and have quite a bit of code focused only on that.

    For example, before saying something is wrong based on my own symbolic verification I use numerical methods to see if maybe the kid is plausibly right. If the numerical approach supports their work, I do not say it is wrong. (I forget if I still say, “Hmm, you may know more than I do. Carry on.”)
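    (Kenneth doesn’t spell out his numerical methods; one standard fallback of this kind spot-checks the student’s rewrite against the original expression at random sample points before declaring it wrong. A sketch under that assumption:)

        import random

        def plausibly_equivalent(f, g, trials: int = 25, tol: float = 1e-6) -> bool:
            # f, g: callables for the original and rewritten expressions.
            # If they agree at many random points, withhold the "wrong" verdict.
            for _ in range(trials):
                x = random.uniform(-50, 50)
                try:
                    if abs(f(x) - g(x)) > tol * max(1.0, abs(f(x))):
                        return False
                except (ZeroDivisionError, ValueError):
                    continue  # sample point outside the domain; try another
            return True

        # e.g. a student rewrites 2(x + 3) as 2x + 6:
        print(plausibly_equivalent(lambda x: 2 * (x + 3), lambda x: 2 * x + 6))  # True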

    In this case, where they just type “3”, much like Mathspace I say “Typo? I ask because the problem statement involved an equation and this does not.” (I do not offer a Q&A format like Mathspace’s, so I just entered “n+7=10” and then “3”.)

    I was surprised to see my app marked “3+7=10” wrong, but then I remembered it supports problems in which students are asked to identify whether an equation is an identity, contradiction, or a conditional expression (and this is a conditional equation with solution n=3).

    Now if this were a system-generated practice problem Fawn could have asked to see it solved, and discovered the expected format. But I had entered the problem myself. I was able then to ask to see a similar problem solved and it came up with “n-6=7” (the random selection of the same variable will completely confuse kids, I will fix that) which it solved via “n-6+6=7+6” and then “n=13”.

    So what do I have to do to piss off you Feedback Haters badly enough to QA my site for free?! Mathspace will indeed get from this some ideas for improvement, such as saying instead “You must enter an equation, e.g. one in the form n = …”.

    Meanwhile: the FH crowd has no problem with kids working 10-15 problems without feedback? Have you ever tutored one-on-one? Did you correct them as soon as they made a mistake?

  36. “So what do I have to do to piss off you Feedback Haters badly enough to QA my site for free?!”

    Maybe not condescending to critics would be a start. Just thinking out loud.

  37. Condescending? You mean the questions about how you all want practice handled? Sorry, I honestly meant to raise the question of whether you all think substantial student practice is still important, because I think for us to understand each other that is necessary.

    Perhaps the answer is that a lot of practice solving problems is not desired. I certainly hear a lot of denunciation of “rote learning” and “drill & kill”. But if it turns out we agree that substantial practice is necessary (I said “if”) then the next question would be, from where and when should the feedback come? Perhaps the answer is traditional: in a group setting where everyone goes over the work. I taught successfully that way, and I hear a lot of discussion nowadays about kids explaining rationales so maybe that is what is desired.

    Otherwise, I should think the conversation should be about how to help us improve automatic feedback.

    So really, I was just trying to move the debate out of what seemed to be a deadly embrace. Perhaps we are so far apart I should just ignore this blog.

    If you meant my general tone, yeah, I am a piece of work. But given your characterization of serious systems produced by serious math educators as “these terrible systems” and “lousy feedback”, and Fawn’s suggestion that we were “shitting” her, I thought it was OK to join in on the fun.

    ps. I was joking about the feedback. We are not yet even in beta and the first feedback I will seek will be from those who respect what we have done and want to fix the new mistakes we are making. Peace.

  38. Christian Bokhove:

    My belated 50p

    * There’s still a lot to improve on feedback. However, the claim that it’s all crappy seems to me like a caricature, just like #13, which to me gives a false dichotomy (at least in contrast with ‘normal’ practice, where this also is the case). The comparison imo should be whether there is a ‘feedback alternative’, with teachers providing feedback for the number of tasks and exercises that they used to assign for homework or individually in a classroom. I think it’s unrealistic to think that teachers can cope with that. This is sort of what Sean seems to be saying in comment #7. I would always keep in mind that all the elements need to complement each other.

    * @Kenneth I would suggest that the DME from the Freudenthal Institute (Peter Boon) and Aplusix (France) have had very sophisticated feedback mechanisms for quite some years. For the former (which might have even inspired some others): the feedback can be customized and authored, which means that it can be just the way one likes it (like #12 suggests). Note: this does cost quite some time to author, of course, because every alternative response from a student has to be anticipated. Feedback also works for non-algebra stuff, for example geometric constructions (in Geogebra, e.g. see http://bokhove.net/2012/07/27/storing-student-work-and-checking-geometry-tasks/) or the well-known building blocks applets. See http://www.fi.uu.nl/dwo/en/ (needs Java, I know they’re in the process of getting it to HTML5).
    On Mathspace: I agree with the people who say that it’s at the front of what is currently possible.

    * ‘Gaming the system’ based on automatic feedback is indeed a problem. Some years ago as a maths teacher I once used software with feedback (see later on which) and students scored 100%. Then at a test they performed really badly. Upon querying why, it became clear students were too dependent on feedback. Hence I proposed (also in a paper for the task design ICMI study) ‘fading’ away feedback (e.g. Renkl): a tool starts off with a lot of feedback (per step) and gradually decreases feedback up to a point where there is none (almost ‘test like’). In the DME these ‘modes’ (sort of Gran Turismo ;-) are available.

    * There’s not much wrong in letting students make mistakes. You could even intentionally confront their demons and provide a ‘too difficult’ question. Just as long as there is no sword that falls down, and there is support and formative feedback.

    * I completely agree with Harry #18 in that there are different effects for different people. Just like any lesson (a complex collaborative group task can fall flat with struggling students as well!). So I prefer a multi-modal approach where different approaches complement each other.

    * Handwriting: afaik Mathspace indeed uses Visionobjects. With a PhD student we tried to get a license to do research with the handwriting stuff but it proved to be too expensive. So she (Mandy Lo) went and took the freely available Starpad (Windows XP; the ‘history’ behind this is fascinating stuff. She suspects Visionobjects might also be based on it) and rewrote it for HTML5. Intention still to get it out as GPL, but not trivial. http://www.mathpen.co.uk/

    Cheers

  39. Ah, yes, Aplusix. Thx. I had found that, then forgot the name, and my google fu failed me trying to find it again. I had remembered it as the only other site that had a nice maths editor. (Now there are a few.)

    As for intelligently reducing the help, I cannot argue: as a tutor I used to downright pretend further factorization was possible if they glanced up at me tentatively to see if they were done.

    As for feedback being a problem, yep: I remember the client who instantly corrected every mistake as soon as I raised an eyebrow, but then made another mistake straightaway. I pointed to twenty problems and said: you have $5 from my fee and lose one for every problem you get wrong. The cash on the table set him on fire. He would do a problem and then put his hand in my face and say, wait, I am still checking. Where he would have gotten 18 wrong based on his error rate he ended up making $2 — and I never saw the slapdash behavior again.

    DME sounds interesting with the gradual withdrawal of help, but I prefer Gran Turismo: it lets me decide how to learn.

    I think adaptive (where the system decides how I will learn) is a mistake. I got schooled one day watching a student use my software. She had set the difficulty level at easy and was doing problem after problem after problem correctly. I said, hey, go for medium. She shut me down: “I want to do a few more until I am ready to move up.” ha-ha, right, learner differences.

    Too much of what I read on ed blogs these days is about teachers having the information they need to micromanage to perfection the learner experience. The image springs to mind of Nurse Ratched behind the bulletproof glass dispensing medication to the unwilling patients. “OMG, if I do not know what they are thinking they are doomed!!”

    Far better would be to get the student engaged in figuring out how best to master a skill. Just having them aware they are the active and responsible agent is a huge win. Second, hey, they know how they like to learn.

    Me, I cannot for the life of me read a computer library specification. It would be great if I could, but I cannot. Now get out of my way, I am going to write some code in this language I do not know. I will read JIT (just in time) when I have a reason to read.

    The approach Gran Turismo and I take is a lot more trusting of the student: (1) here is an unbending standard that will adjust neither to your issues nor your parents’ threats; (2) here are four different kinds of help you can have to master the material; and (3) knock yourself out.

    The Benny horror is simple: too low a threshold, too much help during the test. I would blame the teacher for not following the program and spending time circulating amongst the kids, but teachers, too, are human: the free time afforded to them by automated feedback disappeared in the time-energy deficit every teacher, um, enjoys.

    -kt

  40. Dan:
    “Right! That’s the kind of blended learning software we need:

    Combine algorithmically-assessed data (answers to the question “Did the student get the problem right?”) with human-assessed data (answers to the question “Why did you change the sign?”). Send teachers the second data set without attempting to grade it, but flag students who failed the first dataset.

    So now the teacher has a pile of justifications for wrong answers. Such a valuable teaching tool. But for whatever reason, it’s so rare to find blended learning developers who are willing to collect data they don’t assess. To so many developers, a data set is valuable if and only if a computer can assess it.”

    I guess it is really hard to market “work” for the teacher. Advertising that “the product will provide the teacher with a bunch of justifications from students to read after school” doesn’t sound nearly as fun as “everything is marked for you.” Although I don’t have any statistics to back it up, I would bet that teachers using blended learning systems that provide reports indicating what learning goals students are having trouble with aren’t even using that data to better inform their teaching.

    For Mathspace, I doubt their target market is all-star teachers like Dan, Fawn, or others like them. There are so many teachers (specifically in math) that are still tossing examples up on the board and assigning the odd-numbered questions. If the only form of feedback students are receiving when they practice is an answer key, moving to Mathspace for independent practice is a huge improvement, even if there are areas still requiring some attention.

  41. My daughter just tried the sine rule on a question and was asked to give the answer to one decimal place. She wrote down the correct answer (formatted properly using the editor):

    x = 19/sin(91)*sin(36) = 11.2

    and it was marked wrong. But it is correct!!!

    No feedback given, just: it’s wrong. She is now distraught, sure that all her friends and her teacher will think she is stupid.

    I don’t understand! It’s not clear at all how to write down the answer – does it have to be over at least two lines?

    My daughter gets the sine rule but is very upset by this software.

  42. @Ian: Can you tell me what section that is in? I’d like to dive in and see what I can see.

    btw, I do have the free MS trial and on a different problem after I made a mistake the “Hint” button led to the next step in the solution. You might be able to figure out what is going on that way.

    To avoid this sort of thing (not even knowing why the software says “wrong”) on missions (no second chances) my app marks the work and shows the solution.

    On practice, you can ask to see the problem solved and explained in steps.

    Only on student-entered homework do I not show solutions.

    As for the answer being marked wrong, I am sure MS dreads doing that as much as I, for exactly this reason.

    I’ll go buy a year license now and see what I can see, but a pointer on getting to that kind of problem might help.

    ps. No way those sin arguments are meant to be in radians, is there? 91 sure does not look like radians.

  43. Ian – I have a test account at Mathspace and never tried any trig yet, so I had to give it a go.

    My question had different values, but I encountered the same problem. I’ll send it on to Daniel, as they seem really interested in making it better. I’ve tried in a number of cases to “break” it, but it usually does pretty well. In this case, providing no feedback at all was something unusual.

    I tried:

    12 / sin64 * sin38 = 8.22 (round to 2 decimal places)

    I then erased the 8.22, and I got this message:

    “Uh oh… you must enter an equation” with a link to a video clip explaining what that means.

    When I re-wrote:

    12 / sin64 * sin38 = a

    I got the “check” of approval.

    I then wrote:

    8.22 = a

    And it said “Continue…”

    While it is clear that the Mathspace team wants to ensure students don’t forget how to “write” mathematics properly, this approach puts students in a tough spot if the developers don’t accurately predict all of the different attempts students will make. If we go back to a point made earlier, this system should be intended to help the students who need assistance most. Sending a struggling student into frustration on their first Sine Law question definitely won’t help us keep them practicing.
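    For what it’s worth, the rule those error messages imply seems simple enough. Here is a guess at it (illustrative Python only, certainly not Mathspace’s actual code):

        # A guess at the rule behind "Uh oh... you must enter an equation":
        # every line must be an equation with something on both sides.
        def validate_line(line):
            left, sep, right = line.partition("=")
            if not sep or not left.strip() or not right.strip():
                return "Uh oh... you must enter an equation"
            return "OK"

        print(validate_line("8.22"))                    # rejected: no equals sign
        print(validate_line("12 / sin64 * sin38 = a"))  # accepted
        print(validate_line("8.22 = a"))                # accepted

    A rule that simple is cheap to enforce, and just as cheap to trip over if nobody explains it.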

    Again, I still think this company has an opportunity to really do something great with this tool. I’m already amazed at much of what it does, but there are areas that need attention quickly.

  44. I’m not sure of the question number, but it was in the sine rule section.

    I just tried it myself with the free version. I have to be honest: it is completely painful and awful to use! All the time and effort is spent on trying to input the expressions, and it is really unintuitive.

    It took me ages to do a simple sine rule calculation. One has to adopt its formatting conventions, which are really odd.

    I think I found the problem which prompted my query.
    I was told I got another question wrong when in fact it was correct! I had two equals signs on the same line, which was marked as wrong. When I put the same steps on separate lines, it was marked correct.

    So, to give a simple example:

    I think:

    x = sin(30) = 0.5

    would be marked wrong!

    But

    x = sin(30)

    x = 0.5

    would be marked correct.

    Obviously both are OK, but a confident person would just write the first answer and be mystified as to why it was marked wrong.

  45. @Ian: Thanks, I found it. Agreed on the painfulness, more below.

    Odd side note: at first I thought the chained “=”s on one line might be the problem, so I went back to an easy algebra problem and entered something like x=4/2=2 on one line, and MS helpfully broke it into two lines for me!

    That it did not do so in your case makes me wonder if they have different code handling different sections of the app. Ouch. The good news: one programmer just needs to go talk to another programmer. :)

    My software just does not let you type in a chained derivation. If the people demand it, I will adjust. There are other popular edit tricks I do not support, such as writing a line “-2 = -2” under “x + 2 = 4” and then combining them into a third line. I hope the people do not make me do that. :)
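    If I do give in, the split itself is the easy part. A rough sketch (illustrative only; the hard part, as always, is parsing real student input):

        # Splitting a chained line such as "x = 4/2 = 2" into the pairwise
        # equations "x = 4/2" and "4/2 = 2".
        def split_chain(line):
            parts = [p.strip() for p in line.split("=")]
            return [f"{a} = {b}" for a, b in zip(parts, parts[1:])]

        print(split_chain("x = 4/2 = 2"))
        # ['x = 4/2', '4/2 = 2']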

    Along these lines, I happened to make Fawn’s mistake of just entering 8.21 instead of x=8.21. I spotted a little video icon that was part of the “Uh-oh…enter an equation” message and hovered over it. The tooltip said “See a video on this error”. I clicked thereupon and got an impeccable explanation of exactly what was going on.

    As I suggested earlier, I also had the option of seeing the “Next Step” done for me as a way of finding out what the application wanted.

    There are two things to consider here.

    First, does anyone remember Benny? Did anyone read that paper to find out how he fell through the cracks? Benny happened because he had an IQ of 115; *did* actually learn some things correctly; was tested using multiple choice or with a sample solution to ape; and because they considered a too-low 80% to be mastery.

    Relevant here is the multiple choice and other kinds of implicit help provided by the test format, a format necessary if dumb software is going to collect student work and check it. Ironically, the data input issues behind all the reports in this thread derive precisely from what MS (and I) went out of our way to solve: how can we capture and parse arbitrary student input so we can bring to the surface any and all knowledge gaps? Believe me, the code gets interesting, especially because, as Ian discovered, it can be quite painful doing math this way. So not only must we let them enter and edit arbitrary math incrementally, we have to make it easy!

    And we really do have to pull this off. For software like this to succeed, the math entry and the larger UI experience have to be nearly transparent, given (my second point) a little practice.

    My second point is that one has to be careful about one’s perspective while testing. Simple example: let’s see how fast this site comes up. Wow, that is slow. No, it actually came up faster than the sound control pops up on my desktop, but once my perspective is “how fast?”, everything seems slow.

    A user checking to confirm that MS is terrible Benny-ware will find exactly what they seek, as will a community college math chair under pressure to get more kids past the Algebra requirement so they do not drop out altogether.

    I myself struggled with MS’s demands for precise input format and with trying to understand exactly what it wanted, but I could guess where the help was, and I had the feeling a good hour or two would be enough for me to learn the app. At the same time I saw dozens of impressive tricks making the software transparent, so I know MS will get even easier as time goes by: they grok the issues.

    The unanswered question for the anti-feedback crowd is whether they are content with struggling math students working on paper that just stares back at them. Because the majority of them are absolutely miserable doing that.

    I am just starting to explore this site, but check out what the kid says at 2:04 in the control group (no feedback) video.

    Please find me anybody around here who thinks “no feedback” is an ideal condition. You’re arguing with straw men.

    The unanswered question for the anti-feedback crowd is whether they are content with struggling math students working on paper that just stares back at them. Because the majority of them are absolutely miserable doing that.

    Being discontent with the status quo doesn’t mean we should be content with any alternative. The status quo’s imperfections don’t make every alternative better. Straw men again.

  47. Thanks for the helpful comments.

    My daughter has got the hang of it now! Phew – what a relief, both to her and to her parents! She writes the derivation down on a piece of paper first, then is careful to input it into the computer line by line, and finally gives the answer to the required number of decimal places. I’m not sure what happens if you provide more decimal places than required. Hopefully it lets you carry on and write another line with the correct number of places. She doesn’t want to risk trying too many decimal places now that she understands what the program is looking for, and she is still scared of typing it in wrongly despite having clearly understood everything.

    I’m a bit old-fashioned, but I think you should never be afraid to make a mistake, especially when learning things for the first time. Indeed, making a mistake is really useful for learning how to do something. I don’t like to see my daughter petrified of inputting the wrong thing, even though she seems quite capable of doing all the questions.

    I guess the software would be useful to students once they have gained confidence in using the editor. But displaying the scores so that her friends can also see them makes it all seem like very public summative assessment, which understandably scares the poor kids too much!

    Well, it sounds like we are converging on (a) appreciating immediate feedback in principle while (b) having concerns in practice with many if not all existing solutions.

    I think we developers are OK with that, and indeed are eager for pilot opportunities so we can make our solutions even better. My pilot search begins in earnest at AMATYC 2014 in Nashville.

    This may not always be the case — some tools are long in the tooth and are better rewritten from scratch — but the new kids on the block, such as Mathspace, Khan, and I, are actively revising our systems.

    Bug, usability, and pedagogy problem reports are welcomed by all of us, and we might be able to fix things in a day.

    Another thing I don’t like about the questions with angles, lengths, etc. is the appallingly inaccurate diagrams! A recent question had an angle marked as 6 degrees that looked more like 60 degrees! I’ve noticed this in other software, books, etc. that she occasionally asks me about. Also, some questions are impossible unless you assume a particular angle is a right angle or two side lengths are equal, and yet there is no indication in the diagram or the question wording that this is the case.

    It doesn’t take much effort for software to draw an accurate triangle with angles of the right size (or reasonably close). I know concepts are the important thing, but when questions have rather weird angles in integer degrees and it’s quite easy to draw the diagrams properly, it strikes me as just lazy to draw a geometrically dissimilar triangle – and potentially confusing for the students.
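    To show how little effort it takes, here is a toy sketch (any plotting library could take it from here; the angles are just ones from her questions):

        import math

        # Toy sketch: place a triangle to scale from its two base angles (in
        # degrees) and a base length. A sits at the origin, B on the x-axis,
        # and C is located via the sine rule.
        def triangle_vertices(angle_a, angle_b, base=1.0):
            a = math.radians(angle_a)
            b = math.radians(angle_b)
            c = math.pi - a - b                    # third interior angle
            ac = base * math.sin(b) / math.sin(c)  # sine rule: AC/sin(B) = AB/sin(C)
            return (0.0, 0.0), (base, 0.0), (ac * math.cos(a), ac * math.sin(a))

        print(triangle_vertices(6, 43))  # a 6-degree angle actually drawn as 6 degrees

    If a toy like that can respect a 6-degree angle, production software certainly can.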

    Feedback from tools does not sit uncomfortably alongside any other classroom strategy. In my book, a false dichotomy is being presented.

    @Ian, although I fear this is indeed laziness, I want to make a case for inaccurate angles. There were cases in my maths lessons where I didn’t want students to measure an angle but to use basic properties of triangles and parallel lines to calculate it. An accurate picture would have allowed students simply to measure.

    @Francisco Ah yes, good point. Perhaps grossly inaccurate pictures should be avoided if possible, but I see the problem. Perhaps that is another reason why the angles seem to be rather precise – 6 degrees, 43 degrees, etc. I think the angles were rounded to the nearest 5 or 10 degrees when I learned the stuff – a long time ago!

    [Aside: for a glimpse at the downside of feedback, google “MyMathLab”. Mind you, prolly not a representative sample. Anyway, there is a recurring theme of error messages like “You said X. The answer is X.” They at least have a channel for reporting those.]

    @Francisco Yes, a reasonable triangle constitutes help a bit like multiple choice: if I compute an obtuse result when the diagram (known to be accurate) shows acute… OTOH, I should think a wildly distorted angle off by an order of magnitude, while well-intentioned, has a problem: it would get in the way of my thinking. I would have to struggle to put the conflicting visual evidence out of mind.

    I am accustomed to diagrams off by a little with a prominent warning to that effect.

    @Ian I was/am a hypersensitive learner myself so I feel for your daughter. I am curious to see if she makes a transition to where she becomes comfortable with failure, simply because there are no consequences. Well, if MS works that way.

    My program makes no record of failed missions, nor does it record practice sessions. If the student enters their own problems, they just have to leave problems unfinished, perhaps with the last mistaken step in place.

    ie, all assessment is formative. As with a video game, one simply has more or fewer medals than someone else, should one care to compare. And a student having trouble with Algebra just needs to make more attempts before earning a medal — they do not have to fall behind in the medal chase.

    While they are failing, the system keeps track of their personal best, and during a mission it shows them which problem would constitute a new personal best so they can see their goal. (In Gran Turismo I have the option of seeing a ghost car representing my best time to date — if I see it too far ahead I hit “Restart”.)

    I think the fancy term is “competency-based”?
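    In code, the state for that is tiny. A sketch of the idea (hypothetical names, not my actual implementation):

        # Sketch of "no record of failure": keep only the personal best per
        # mission; worse attempts update nothing and leave no other trace.
        def record_attempt(bests, mission, score):
            if score > bests.get(mission, 0):
                bests[mission] = score  # the only trace an attempt leaves
            return bests

        bests = {}
        record_attempt(bests, "linear-equations", 6)
        record_attempt(bests, "linear-equations", 4)  # worse try, no trace
        print(bests)  # {'linear-equations': 6}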

    Does MS have some public scorecard showing the student rank?

  53. @Kenneth

    > I was/am a hypersensitive learner myself so I feel for your daughter. I am curious to see if she makes a transition to where she becomes comfortable with failure, simply because there are no consequences. Well, if MS works that way.

    Thanks. In terms of how MS works – it may depend on how MS is set up for the students by the teacher. I have no idea, and as usual, now that she has the hang of Mathspace, I don’t get to see anything! I’m only called in at times of complete disaster in her mind (even though it’s usually a minor problem in reality).

    > Does MS have some public scorecard showing the student rank?

    It sounds like it might, but it’s not entirely clear. She does talk about the ‘scores’ of other students, but it seems more like a random selection of the others rather than a leaderboard. I’ll have to try to check, if she’ll let me! I don’t think it is clear to her what exactly the scores are, which may not be a bad thing.

  54. @Ian, I suspect I speak for everyone when I say how relieved I am this crisis has been resolved! :) You have me wondering if the teacher spent an hour introducing them to the software.

    Back in the ’90s, when I sold a desktop version, the 800 number for support was tied to my home number. An older guy had decided to learn Algebra, bought my app, and had called to tell me how useless it was. The manual was no help, it seemed.

    I walked him through the app for like thirty minutes and he felt better. He called back the next day to tell me how amazing the application was.

    We can make learning curves short, and I am trying to create an app now that does not have one (by warning, prompting, and catching mistakes — as does MS), but I will not achieve that until the end of the pilot process at best.

    Speaking of prepping students for math software, here is a CC professor introducing distance learning students to MyMathLab: https://www.youtube.com/user/billwitte111 It goes on forever, but I imagine it helps kids over the initial hurdle of a new way of doing things.

    I think in three years it will seem strange to do math without software at your back, but for now it is essential these products be introduced properly to unsuspecting students.

  55. @Kenneth Thanks! Yes – this crisis is now officially over!

    You are right – it does look like a good introduction is key to the whole thing, and, as with your guy on the phone, the more sophisticated and human-like the advice, the better.

    @Ian, glad to see your daughter has got the hang of entering one line at a time, as Mathspace expects. Her input was never graded incorrect; rather, she was told to input her answer in a format that Mathspace understands. The good news is that, thanks to your feedback, we’ve now made a fix: Mathspace can now accept many lines at one time, then break these up into multiple lines of math work:

    http://somup.com/c2QoiN1Qc

    As you can see, we’ll now break such an answer down into individual lines to maintain best mathematical practice, while at the same time making sure first-time users don’t get frustrated.

    We have some very talented people, very committed to the cause of improving math education through best pedagogy and best use of technology. So please do keep the feedback coming (preferably through the feedback button in our app so we don’t hijack this post). We’re listening and we move fast.

    Just to let you know how we’ve acted on the feedback in this blog, in the last two weeks Mathspace has added these features:

    * Students can communicate directly with teachers from within a problem – check
    * We prefix the introduction to Equations with n= [ ] so students still coming to grips with the topic are not confused by the input requirements – check
    * We allow users to enter equations in a single line and still be able to grade all their input in correct mathematical format – check
    * Students entering an irrelevant step such as 1+1=2 are now given the message “This step is not needed” instead of being graded as incorrect – check (a sketch of the idea follows below)
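
    To give a flavour of that last rule, here is a simplified sketch (illustrative Python only, not our production code):

        import sympy

        # Simplified sketch of the "step not needed" rule: a statement that
        # is true but never mentions the unknown is flagged as unnecessary
        # rather than wrong.
        def classify_step(step, unknown="x"):
            lhs, rhs = (sympy.sympify(s) for s in step.split("="))
            if sympy.Symbol(unknown) not in lhs.free_symbols | rhs.free_symbols:
                if sympy.simplify(lhs - rhs) == 0:
                    return "This step is not needed"
                return "Incorrect"
            return "Continue..."

        print(classify_step("1 + 1 = 2"))  # This step is not needed
        print(classify_step("x + 2 = 5"))  # Continue...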

    And as always, teachers can see all a student’s shown work – all intermediate steps that a student writes – and this data is summarized into simple, insightful reports, identifying problem areas so teachers can intervene with students as they see fit.

    Please keep the feedback coming, more than anything else this is what helps us improve every day.

    Regards
    Daniel Tu-Hoa, on behalf of Mathspace

  57. Mr. Tu-Hoa,

    Six-day turnaround? Is that the best you can do?

    (Uh-oh. I have some competition.)

    Seriously, that is wonderful work. Six days proves you have a great team and a great code base. I already knew you were dedicated to quality math instruction.

    I *seriously* have some competition.

    For any educator who wants to know what it is like developing hard software that worries a lot about the user, yesterday I decided to chronicle just one of the bug reports I got from QA: http://stuckonalgebra.blogspot.com/2014/09/bug-story.html

    Still recovering. :)

    -kt

  58. @Daniel T-H That’s great! Very nice to see the issue addressed so quickly and effectively. So, many good things have come together here… this blog, the ability to comment, great follow-up comments from very knowledgeable people, the calming down of a daughter, the very quick implementation of changes as a result of feedback (not even direct feedback). Everyone is to be congratulated on jobs well done!

  59. @IanD: Wait! We’re still learning!

    I am dying to know if your daughter still first does everything on paper (I am guessing double-checking her work) and *then* enters it into MS.

    Here is why I am so curious: my app differs from MS in that I leave (almost*) no trace of student errors. My reasoning is that I want kids to feel completely free to err.

    Freedom to err does two things. First, it should reduce math anxiety. I waffle with “should” because I thought your daughter’s experience showed the opposite, but now I recall that MS keeps a record of mistakes, so maybe it confirms my hunch. Second, and somewhat related, some of us like to learn by trial and error. (You should see me coding.) Would I still have trial and error in my learner’s toolkit if all my thrashing were recorded? I might not have had math anxiety when I started, but I might develop it!

    In my corporate work I have seen users take two months to trust the software after a serious screwup caused it to start delivering seriously bad answers. I was the director of development but patiently fielded double-checks from agents until they themselves believed things were fixed.

    So I am curious if your daughter decides to give MS a second chance (at direct entry of solutions) or if she is just not going to go there again.

    * (I just thought of one — when they pass a mission, the medal they get reflects whether they got a perfect score or got one or two wrong (you know, gold/silver/bronze). But no one can see their mistakes.)

    @Kenneth I just checked with my daughter. She is indeed writing it out nicely in an exercise book, and then she types it into MS. So I think it is still a good use of the software, to check that all is OK in the end.

    I have always tried to encourage her to write everything down in gory detail on paper, with variable levels of success. I think in the past she was too impatient and often tried to leap ahead to the answer (making mistakes). At last she is doing what I have always wanted her to do, going slowly line by line, and she seems very happy about it!

    I expect it would be different if she were attacking the more elementary problems; then she may well have typed it in directly. She has only started using it at the sine rule and cosine rule section, which I guess is towards the more advanced end of things.

    My daughter is very fortunate to have someone (me!) to check her results at times of doubt, but obviously by far the majority of children are on their own when at home. I can really see the value of software like yours and MS in giving reassurance.