Students are receiving more feedback from computers this year than ever before. What does that feedback look like, and what does it teach students about mathematics and about themselves as mathematicians?
Here is a question we might ask math students: what is this coordinate?
Let’s say a student types in (5, 4), a very thoughtful wrong answer. (“Wrong and brilliant,” one might say.) Here are several ways a computer might react to that wrong answer.
1. “You’re wrong.”
This is the most common way computers respond to a student’s idea. But (5, 4) receives the same feedback as answers like (1000, 1000) or “idk,” even though (5, 4) arguably involves a lot more thought from the student and a lot more of their sense of themselves as a mathematician.
This feedback says all of those ideas are the same kind of wrong.
2. “You’re wrong, but it’s okay.”
The shortcoming of evaluative feedback (these binary judgments of “right” and “wrong”) isn’t just that it isn’t nice enough or that it neglects a student’s emotional state. It’s that it doesn’t attach enough meaning to the student’s thinking. The prime directive of feedback is, per Dylan Wiliam, to “cause more thinking.” Evaluative feedback fails that directive because it doesn’t attach sufficient meaning to a student’s thought to cause more thinking.
3. “You’re wrong, and here’s why.”
It’s tempting to write down a list of all possible reasons a student might have given different wrong answers, and then respond to each one conditionally. For example, here we might program the computer to say, “Did you switch your coordinates?”
Certainly, this makes an attempt at attaching meaning to a student’s thinking that the other examples so far have not. But the meaning is often an expert’s meaning and attaches only loosely to the novice’s. The student may have to work as hard to understand the feedback (the word “coordinate” may be new, for example) as to use it.
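To make the contrast concrete, here is a minimal sketch of what that kind of conditional, diagnostic feedback might look like in code. The target point, the message strings, and the single anticipated misconception are all illustrative assumptions on my part, not Desmos’s actual implementation.

```python
# Hypothetical sketch of diagnostic ("you're wrong, and here's why") feedback.
# TARGET, the messages, and the swapped-coordinates rule are illustrative
# assumptions, not Desmos's actual code.

TARGET = (4, 5)

def diagnostic_feedback(answer):
    """Return a canned message for a few anticipated answers."""
    if answer == TARGET:
        return "Correct!"
    if answer == (TARGET[1], TARGET[0]):
        # The one misconception we anticipated: reversed order.
        return "Did you switch your coordinates?"
    # Every other wrong answer falls through to the same generic judgment.
    return "You're wrong."
```

Note how the approach scales: every misconception we want to name costs another branch, and every answer we didn’t anticipate falls back to the bare evaluative message.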
4. “Let me see if I understand you here.”
Alternately, we can ask computers to clear their throats a bit and say, “Let me see if I understand you here. Is this what you meant?”
We make no assumption that the student understands what the problem is asking, or that we understand why the student gave their answer. We just attach as much meaning as we can to the student’s thinking in a world that’s familiar to them.
“How can I attach more meaning to a student’s thought?”
This animation, for example, attaches the fact that the point’s relationship to the origin has horizontal and vertical components. We trust students to make sense of what they’re seeing. Then we give them an opportunity to use that new sense to try again.
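For contrast with the diagnostic sketch above, interpretive feedback might look something like this: rather than judging the answer, it describes the student’s point in terms of its horizontal and vertical relationship to the origin and invites another try. The function name and wording are my assumptions, not Desmos’s actual text.

```python
# Hypothetical sketch of interpretive feedback: describe what the student's
# point looks like in the world of the coordinate plane, without judging it.
# The wording is an illustrative assumption, not Desmos's actual text.

def interpretive_feedback(x, y):
    """Reflect the student's answer back as horizontal and vertical
    distance from the origin, then invite them to reconsider."""
    horiz = f"{abs(x)} {'right' if x >= 0 else 'left'}"
    vert = f"{abs(y)} {'up' if y >= 0 else 'down'}"
    return (f"Your point is {horiz} and {vert} from the origin. "
            "Is this what you meant?")
```

The same two lines of description cover (5, 4), (1000, 1000), and everything in between; no anticipated misconceptions are required.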
This “interpretive” feedback is the kind we use most frequently in our Desmos curriculum, and it’s often easier to build than the diagnostic feedback of #3, which requires images, conditionality, and more programming.
Honestly, “programming” isn’t even the right word to describe what we’re doing here.
We’re building worlds. I’m not overstating the matter. Educators build worlds in the same way that game developers and storytellers build worlds.
That world here is called “the coordinate plane,” a world we built in a computer. But even more often, the world we build is a physical or a video classroom, and the question, “How can I attach more meaning to a student’s thought?” is a great question in each of those worlds. Whenever you receive a student’s thought and tell them what interests you about it, or what it makes you wonder, or you ask the class if anyone has any questions about that thought, or you connect it to another student’s thought, you are attaching meaning to that student’s thinking.
Every time you work to attach meaning to student thinking, you help students learn more math and you help them learn about themselves as mathematical thinkers. You help them understand, implicitly, that their thoughts are valuable. And if students become habituated to that feeling, they might just come to understand that they are valuable themselves, as students, as thinkers, and as people.
BTW. If you’d like to learn how to make this kind of feedback, check out this segment on last week’s #DesmosLive. It took four lines of programming using Computation Layer in Desmos Activity Builder.
BTW. I posted this in the form of a question on Twitter, where it started a lot of discussion. Two people made very popular suggestions for different ways to attach meaning to student thought here.
I wonder if there is option 6, that plots a diff point like, shows the coordinates, and asks if they want to revise their (4,5). This could actually be cool for Ss who plots it correctly the first time as a double check.
— Kristin Gray (@MathMinds) December 10, 2020
Unpopular opinion (apparently) from someone who’s seen many Ss start switching coordinates AFTER they’ve learned slope. Since coordinates represent location, not movement, I’d prefer #4 or better yet, “the meeting of the x&y” pic.twitter.com/mxoz8gM6Sv
— Ms. (Lauren) Beitel (@ms_beitel) December 10, 2020
Chris Heddles, December 15, 2020 - 2:44 pm
I’m super-excited to see how this approach to computer-based feedback progresses but, for now, my expectations are low. Not because I doubt the ability of the designers or programmers but because the code is working with minimal data. Even a real human who has only just met a student (relief teacher, tutor, etc.) can struggle to read a student’s thought process from their answers to a question or few. Good human feedback relies on knowing the student and their previous learning, reading their face/body language and asking a few clarifying questions. Automatic systems can’t do that (yet).
With the current state of automated feedback, my approach is to teach the students how to work within the limitations of the computers. The guidance goes something like this:
– automatic feedback is only useful for mechanical/procedural practice so don’t expect it to cover all you need for maths
– if you get a question wrong, try to figure out *why* using whatever information is available to you (“correct answer”, video, hints, etc.)
– if you think you know why you got it wrong then check your understanding by trying another question
– if you still don’t know why you got it wrong (and how to get it right) then stop and ask a real human for help.
Computers don’t have to be perfect to be useful. Work with students to automate what we can while being keenly aware of what remains in the “needs a human” realm. This post outlines a significant improvement on what is currently offered up by some systems as “hints” so I can’t wait to see how it develops :-)
Dan Meyer, December 17, 2020 - 3:07 pm
Thoroughly agree here, which is why our aspirations (best indicated by #5) are considerably more modest. We don’t presume to know why a student answered the way they did. We’re just going to reflect the student’s answer back to them and see if they like what they see.
I’m looking forward to seeing more studies of the results as well.
Gloria Huezo, December 15, 2020 - 6:40 pm
Thanks for this post, Dan. My students are getting more online feedback yet they are struggling more. The feedback is mostly the message “wrong”. I do have the option of adding personalized feedback.
I am giving feedback like, “Great! Next, do…” when their work is on the right track but falls short.
I don’t always get the chance to personalize the feedback but I am adjusting to our new tools and I think in 2021, I will be able to give more feedback that is constructive.
Thanks for the nudge and the explanation of why more feedback wasn’t getting the results I had hoped.
Dan Meyer, December 17, 2020 - 3:09 pm
Thanks for this perspective here, Gloria. Students are getting an abundance of online feedback, but it isn’t clear whether it’s actually a resource, or whether it’s helping.
Sam, December 16, 2020 - 3:38 pm
I’d like to share this – but I think under “You’re wrong”, you meant to write: But (5, 4) receives the same feedback as answers like (1000, 1000) or “idk,” even though (5, 4)
I wouldn’t ordinarily care, but I suspect the people I share it with will not engage further otherwise…
Kevin Hall, December 17, 2020 - 5:24 pm
I agree with your main thrust. I do think automated feedback can be really helpful in the refinement phase of learning, which has been less of a focus for Desmos so far (I think) than the introductory concept-building phase.
When I design activities, I include automated feedback that can pop up repeatedly, and my plan is to read the feedback together with the student and collaboratively interpret it with them the first time it pops up. Let’s say a student is drawing shapes to represent 4² and 4³ and they see a message that says, “Hey, what you drew looks more like 4(2), but the question said 4².” As I think Gloria suggests above, the student might not understand what the message means. They might think, why does my drawing look like 4(2)? Or, aren’t 4(2) and 4² the same thing? As long as the teacher gets an alert that the student is seeing that kind of error message, the teacher can check in and co-create understanding at that moment. Or ask the student’s partner to help digest the message.
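As a toy illustration, the check behind a message like Kevin’s might compare the student’s number against the two candidate computations, so that the one error we anticipate (multiplying instead of exponentiating) gets its own targeted message. The function name and message text here are my assumptions, not his actual screens.

```python
# Hypothetical sketch of a targeted error message for exponent practice.
# The function name and wording are illustrative assumptions.

def power_feedback(base, exponent, student_answer):
    """Flag the specific error of computing base*exponent
    instead of base**exponent."""
    if student_answer == base ** exponent:
        return "Correct!"
    if student_answer == base * exponent:
        # The anticipated misconception: repeated addition, not
        # repeated multiplication.
        return (f"Hey, what you found looks more like {base}({exponent}), "
                f"but the question asked about {base}^{exponent}.")
    return "Not quite. Try again."
```

As Kevin notes, the message only pays off once a teacher or partner has helped the student understand what it means the first time it appears.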
Once the student understands the message, then they’re ready to recognize and apply it the next time they get the same message. Of course, that assumes you’re doing a Desmos activity that features repeated practice. (I do a lot of these). When you’re doing one, these feedback messages promote the cognitive process of “chunking” by helping the student recognize and chunk together the error they made, what they need to do to fix it, and (hopefully) some kind of visual or verbal explanation of why it’s an error.
So tl;dr: I agree with you, but student errors are also when the cognitive “headache” is most acute, so it’s great to deliver content in response to those headaches.
(FWIW, folks may be interested in these 2 screens to show what I mean:
Kevin Hall, December 17, 2020 - 5:25 pm
And here is the full activity from which those 2 screens are drawn:
Stei Schreiber, December 17, 2020 - 5:42 pm
I saw this in the Desmos live. I do think it helps, if students are to understand the world of the coordinate plane, that they play, experiment, and tinker with the math. Even feedback like “that moved it too far right” or “too high” helps. Or seeing what happens when you change just one of the numbers.
Karim, December 25, 2020 - 7:15 pm
This is wonderful, and I think it demonstrates just how thoughtful Desmos is about creating tools that honor student thinking rather than forcing a trajectory from A to B. The quality that I admire most about the “interpretative feedback” approach is the objective non-judgment; the computer is effectively acting as a “dumb box” that doesn’t try to influence the response as much as reflect it. If a student gets frustrated — no, that’s not the point that I intended! — the frustration can only be with the response itself, which is to say, with the kid’s own reasoning. In addition to coordinates, I can see this feedback method being helpful with other concepts for which misconceptions are common, e.g. proportions, linear equations, etc. (Marcellus will look misshapen. The airplane will land in the wrong place. Etc.)
The question I have is, What types of problems will this method not work for? It seems that for the feedback to work, there has to be some pre-existing answer to evaluate against, e.g. a dot that’s already plotted at (4,5) or a giant whose general look is already established. But what if the coordinate plane were blank? Or what if the task involved no context at all? In situations like this, do you default to a different kind of feedback, e.g. aggregating lots of responses and allowing the class conversation to serve as the nudge? Or do you try to only write tasks that lend themselves to some kind of automated feedback (in which case how do you avoid allowing the evaluation tail to wag the inquiry dog)?
But to the larger point, this is cool. In addition to being humane — as in, not treating students like pegs to be hammered into holes the way that many tech tools do — the approach also does a nice job of demonstrating that mathematics is conventional, i.e. something that humans built rather than something that fell from the sky. That’s helpful.
Dan Meyer, December 31, 2020 - 2:42 pm
Karim! Nice of you to interrupt your sojourn from the world of math ed to offer us this interesting question:
I think interpretive feedback only works for questions that are asked inside a world that is familiar to the student. That’s one reason why we give students early experiences with Marcellus (“write in words what a scale giant means”) and Land the Plane (“first drag the point”). Those early experiences make the world familiar enough to students that our interpretive feedback is useful.
And I can’t think of a reason why that category of questions would be any smaller than “all of math itself.”
The challenging work is to make mathworld familiar to students, to ask questions that invite student thinking, and to find ways to interpret that thinking in mathworld again.
A toy example – a student solves 2x – 4 = 10 with x = 3. If the idea of equivalence is familiar to that student – i.e., what an equation represents – then the teacher can say, “Okay, what you’re telling me here is that 2*3 – 4 should be 10. But I’m getting 2. Let me know what you want to do next.”
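That substitution move can be sketched mechanically: evaluate the left side of 2x – 4 = 10 at the student’s proposed x and reflect the result back to them. The wording below is illustrative, not a real Desmos or classroom script.

```python
# Toy sketch of interpretive feedback by substitution for 2x - 4 = 10.
# Evaluate the left side at the student's proposed x and reflect it back.
# The message wording is an illustrative assumption.

def reflect_solution(x):
    left = 2 * x - 4  # substitute the student's x into the left side
    return (f"Okay, you're telling me that 2*{x} - 4 should be 10. "
            f"But I'm getting {left}. Let me know what you want to do next.")
```

The computer makes no claim about why the student chose x = 3; it just shows them the consequence of their answer inside the familiar world of equivalence.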
Computers and people giving students interpretive feedback all over mathworld. I think that’s the project.
William Carey, January 2, 2021 - 1:33 pm
I’ve been thinking about this a lot. Two things jump out at me. One is that I think you correctly identify meaning as central to mathematics. This is a place where I think mathematics teachers have lots to talk to foreign and classical language teachers about. The debate about what kind of language learning feedback is effective is expansive and old.
The idea of reflecting a student’s thoughts back at them is really important. The more students engage with their thoughts and the thoughts of others, the more they’re *humans* as opposed to machines. The example you give is subtle. Presumably (?) what the student meant when they wrote (5,4) was to correctly plot the indicated point. So what the computer’s doing is not actually reflecting their thoughts back at them, or even interpreting their thoughts. The computer is interpreting their thoughts through the lens of a mathematician who has a particular understanding of the coordinate plane that *is different from that of the student*. When the computer metaphorically asks the student “is this what you meant?” the computer is willfully *misunderstanding* the student so as to change the way the student constructs the meaning of coordinate points. That might be one strategy in a human teacher’s toolbox, but they’d also be able to converse at the level of meaning with their students.
When you say, “we just attach as much meaning as we can to the student’s thinking in a world that’s familiar to them,” that’s *kind of* true. You’re attaching a particular meaning that is not the one that the student intended. That’s a little weird. Absent some much more direct instruction, it moves the task of syllogizing the order of coordinate pairs from the teacher to the student. If the students aren’t articulating the result of that inference to a human being who can interpret it, my experience has been that all sorts of weird stuff happens. From your example here, a student might infer that the greater coordinate is the horizontal one. So you give them more examples. But young people are, as you rightly point out, very creative. Students will infer complex and wrong rules in ways that computers can’t detect.
Beyond that, I wonder if this sort of feedback is effective at teaching grammar, but not logic or rhetoric. For example, I can imagine this sort of feedback being pretty aces at teaching a student how to translate between summation notation and arithmetic notation. And that’s an important first step. But the really interesting mathematical questions tend to be about argumentation and proof, not about notation. I’m reminded of Paul Lockhart’s example of the creative arguments young people produce to prove Thales’s theorem. Or, to lean on summation notation, how would interpretive feedback help students craft an argument that the sum of the first n numbers is half the product of the nth number and the (n+1)th number? I genuinely don’t know. And would teaching the grammar in this way also make it more difficult for students to play with and reason about those more rhetorical questions? I also don’t know!
I think, Dan, you’re working the right project — it’s all about getting students to cogently articulate meaning while making mathematical arguments. I’m really curious to see where the ceiling is!
Dan Meyer, January 3, 2021 - 1:55 pm
Yeah, I think this is dead on and gets at the different capacities of machines and humans.
My least favorite version of feedback for learning and identity formation is where the student gives significant thought to a question and is told (by machine or human) “you’re wrong, purely and totally, and also here is an adult who will explain how to be right without any reference to anything right in your own thinking.”
My favorite version is where a teacher seeks first to understand the student’s thinking, points out its value, and offers feedback that references the parts that were right as much it references the parts that need more development.
Computers … can’t do that.
As you point out, our feedback in the example in the OP doesn’t speak as specifically as a human can to the student’s thinking. So we don’t try. We don’t say, “Here’s what I think you were thinking about.” Instead, we say, “Here’s what your thinking makes me think about. Is this useful for you?”
Yeah, this is a useful contrast. I think something important about the model we’re building for collaboration between humans and computers is that our computers offer interpretive feedback whenever we can imagine it, and everywhere else – especially for answers to questions like yours above, where student thinking evolves so quickly and fitfully and artfully that computers can’t and don’t deserve to parse it – we make that thinking visible to teachers and support them in giving interpretive feedback. The centaur model for human-computer interaction is conceptually messier than lots of ways people use computers in math class, but I think we’re seeing how effective it can be.
Thanks for all your comments and questions here—all very effective interpretive feedback on my thinking about interpretive feedback.
William Carey, January 3, 2021 - 2:34 pm
“Quickly and fitfully and artfully” is a wonderful turn of phrase – can I steal that to use with parents? It really describes the dynamic I’m shooting for in my classes.
A thing that complicates your project, I think, is that most math *teachers* have never gotten to do mathematics with good interpretive feedback. So I think an adjacent project is inculturation of math teachers to *do* the sort of math we want to teach our students to do which means putting teachers face to face with unfamiliar mathematical ideas (here not being a formally trained mathematician is helpful to me!).
The snapshot thingy looks really interesting. Is there a way with desmos to have multiple people working on the same answer in that framework? The mathematical discussions among teachers I’ve been having this year use zoom as our communication tool, and the face to face is awesome, but boy does the whiteboard suck as a vehicle for communicating mathematical ideas to one another. We usually end up working on paper and taking photos to share with one another, which we then annotate and argue about. I wish there were a really freeform converse.desmos.com that was like Zoom, only with tools to communicate about math that didn’t suck.
Dan Meyer, January 4, 2021 - 9:22 pm
100%. Encountering a productive teaching practice is one challenge. Supporting teachers in adopting it is another.
Dick Fuller, January 2, 2021 - 5:50 pm
How about a back channel so I can submit my answer if I don’t think the machine knows what it is talking about? If I’m right I get rewarded.
What is this coordinate? (1) It’s where the red dot is, or (2) it’s the intersection of the lines x = 4 and y = 5, under the red dot.
I don’t react well to convention and notation questions, especially from computers, which can’t understand the irony.
Let’s all get the new year we need.
Melissa Vrankovich, May 27, 2021 - 7:16 am
Thank you for this post! This year, 91% of my students are fully remote, and I rely heavily on technology to assess their level of understanding. My students are receiving online feedback regularly, but it is typically in the form of correct or incorrect. I have found that leaving personalized, constructive feedback for each student has helped to improve their overall understanding. This year has been a learning curve for both me and my students, but we have persevered!