*[This is my contribution to The Virtual Conference on Mathematical Flavors, hosted by Sam Shah.]*

In the early 20th century, Karl Groos claimed in *The Play of Man* that “the joy in being a cause” is fundamental to all forms of play. One hundred years later, Phil Daro would connect Groos’s theory of play to video gaming:

> Every time the player acts, the game responds [and] tells the player your action causes the game action: you are the cause.

Most attempts to “gamify” math class learn the wrong lessons from video games. They import leaderboards, badges, customized avatars, timed competitions, points, and many other *stylistic* elements from video games. But gamified math software has struggled to import this *substantial* element:

> Every time the player acts, the game responds.

When the math student acts, how does math class respond? And how is that response different in video games?

Watch how a video game responds to your decision to jump off a ledge.

Now watch how math practice software responds to your misinterpretation of “the quotient of 9 and c.”

The video game *interprets* your action in the world of the game. The math software *evaluates* your action for correctness. One results in the joy in being the cause, a fundamental feature of play according to Groos. The other results in something much less joyful.

To see the difference, imagine if the game *evaluated* your decision instead of *interpreting* it.

I doubt anyone would argue with the goals of making math class more joyful and playful, but those goals are more easily adapted to a poster or conference slidedeck than to the actual experience of math students and teachers.

**So what does a math class look like that responds whenever a student acts mathematically, that interprets rather than evaluates mathematical thought, that offers students joy in being the cause of something more than just evaluative feedback?**

“Have students play mathematical or reasoning games,” is certainly a fair response, but bonus points if you have recommendations that apply to core academic content. I will offer a few examples and guidelines of my own in the comments later tomorrow.

**Featured Comments**

I feel like a lot of the best Desmos activities do that, because they can interpret (some of) what the learner inputs. When you do the pool border problem, it doesn’t tell you that your number of bricks is wrong – it just makes the bricks, and you can see if that is too many, too few, or just right.

In general, a reaction like “Well, let’s see what happens if that were true” seems like a good place to start.

My favorite example of this is when Cannon Man’s body suddenly multiplies into two or three bodies if a student draws a graph that fails the vertical line test.

I am so intrigued by the word interpret. “Interpret” is about translating, right? Sometimes when we try to interpret, we (unintentionally) make assumptions based on our own experiences. Recently, I have been pushing myself to linger in observing students as they work, postponing interpretations. I have even picked up a pencil and “tried on” their strategies, particularly ones that are seemingly not getting to a correct solution. I have consistently been joyfully surprised by the math my students were playing with. I’m wondering how this idea of “trying on” student thinking fits with technology. When/how does technology help us try on more student thinking?

I think that many physical games give clear [evaluative] feedback as well, insofar as you test out a strategy, and see if you win or not. Adults can ruin these for children by saying, “are you sure that’s the right move?” rather than simply beating them so they can see what happens when they make that move. The trick there is that some games you improve at simply by losing (I’d put chess in this column, even though more focused study is essential to get really good), where others require more insight to see what you actually need to change.

## 27 Comments

## swi

August 8, 2018 - 5:45 pm

Fantastic ideas here. Looking forward to your further examples. One thing occurs to me: you didn’t show the SMB feedback on the math software, a “bummer sound effect” and being thrust back to the beginning of the problem, forced to run through perhaps dozens of excruciatingly routine steps simply to get back to the point of error. Curious to see how these two conceptual frameworks actually function when the design elements match up with the contexts. I get that the idea here is to think of what the interpretive feedback would be in the math software, but in this day and age video games are massively complex, and I’m guessing there are scores of games that have pretty serious evaluative feedback after a FAIL. And feedback that players appreciate, devour, learn from, and completely value. Still, cool to differentiate between the two as possible axes out there in the ecology of learning, playful or otherwise.

## Dan Meyer

August 8, 2018 - 7:48 pm

I’m not enough of a gamer to foreclose the possibility. Let’s just say I’d be really surprised, though.

FWIW, the interpretative feedback version of the math software (in my head) is a sentence that says, “Oh you’ve written the *product* of 9 and c, not the quotient. Try again.”

## James Cleveland

August 8, 2018 - 6:47 pm

**Featured Comment**

In general, a reaction like “Well, let’s see what happens if that were true” seems like a good place to start.

## Martin Smith

August 12, 2018 - 2:27 pm

I don’t play many games, but in Halo 4, after you die they replay your death on the “Killcam” while you wait to respawn.

## Dan Meyer

August 12, 2018 - 3:41 pm

Oo interesting reference, Martin. I’m trying to find a connection to interpretative and evaluative feedback. Any help?

## Kevin Hall

August 8, 2018 - 6:56 pm

**Featured Comment**

## Mike Pac

August 8, 2018 - 8:39 pm

The “world of the game” is one possible starting point. The physical world, or pseudo-physical world, offers probably the most intuitive interpretive feedback. Mario, the pool border problem, cannon man, marbleslides, all are based within a system that models at least some aspects of the physical world, making the actions within each “world” lead to familiar outcomes.

So a starting point would be to take any intriguing physical task/problem and make it a challenge. Something like giving kids some elastic to use as a slingshot, with a challenge of developing a function that would allow them to calculate how far back and how far down you’d need to pull the elastic so that the ball could land on any given target. The feedback from the system is intuitive, which would allow kids to self-correct. And depending on how you’d want them to analyse, you could tie it to some of the “Vector and Matrix Quantities” standards, quadratic standards, and/or creating functions standards.

Along the same physical constraint, you could create a system from knot (un)tying. Given a knot and certain allowed moves, figure out how to untie it. Then eventually have them be able to generalize their analyses to be able to untie any knot on the first attempt. Again, this system allows immediate and intuitive feedback.

And even if you wanted to make the world/system slightly less intuitive (possibly in desmos), you could change the geometry — something as simple as making the world a torus like in pac man or allowing only discrete moves on a weirdly connected discrete plane.

## Scott Farrar

August 8, 2018 - 11:53 pm

I like the evaluative and interpretive labels. Dylan Wiliam quoted others in a succinct paragraph about how this is a problem even separate from software: “Teachers who listen evaluatively to their students’ answers learn only whether their students know what they want them to know. If the students cannot answer correctly, then the teachers learn only that the students didn’t get it and that they need to teach the material again, only, presumably, better.” I wrote more on this quote a couple years ago: http://scottfarrar.com/blog/evaluative-listening-and-khan-academy/

Some background on “try again”: this was put in when KA moved away from needing streaks of getting questions right (N in a Row). So instead of the feedback saying “wrong,” this was a new, more forgiving feature that says “try again.”

*Insider’s perspective!*

At its core, the math practice software isn’t well set up to listen interpretively because (1) it doesn’t do (simulate) the math at hand, and (2) given the first problem, it asks narrowly answered questions. In contrast, the Mario engine simulates jumps, and Mario can do many different kinds of jumps as it interprets the various combinations of buttons pushed and the surrounding obstacles onscreen. Or: a well-crafted Desmos/Geogebra graph will simulate some scenario, and any *valid* student input can be interpreted within that scenario. (Like Kevin noted in Function Carnival’s Cannon Man.)

But I think math practice software’s constraints and behaviors tie into one of Dan’s “Classic Hits”: Clever Hans, the horse that learned how to “do math” by carefully interpreting the evaluative feedback of its trainer. (Stomp out 2+3: stomp, stomp, stomp, stomp, *oooh* stomp *ahhh* “see, he stopped at 5.”)

Students “Clever Hans” their human teachers in this way all the time: play the teacher’s unwitting feedback to the student’s advantage to pursue the goal of answering teacher questions correctly. Acquiring understanding in this way is beside the point.

And I believe students practicing on software can Clever Hans the evaluative feedback to similarly sidestep learning in favor of rewards. They learn the feedback-space of the math practice software, pick up patterns of question authoring, and can efficiently move through the work to achieve their goal: complete the assignment. I explored Khan Academy answer data a little more formally recently, and I developed another hypothesis: many of the wrong answers given on Khan Academy could come from a basic heuristic, “input the first thing that comes into your head.” (The paper is here: http://people.ischool.berkeley.edu/~zp/papers/ICLS_distributed_misconceptions.pdf)

The student enters “9c” and the software can’t interpret it, so it gives feedback of “wrong” or “try again”. But the student now must try to use that information. They move onto their second guess without much additional thought. It’s a very thin “conversation” between learner and software because neither one is really listening to each other. The student is doing some “success finding” heuristic and is not necessarily thinking about the math at hand– *because* the system cannot contribute anything about the math at hand.

Dan wrote in another comment, “interpretative feedback version of the math software (in my head) is a sentence that says, ‘Oh you’ve written the product of 9 and c, not the quotient. Try again.’” Regardless of whether the system should give that full sentence, the fact that a system *could* do so is the key there. The system must have the capability to interpret the user’s input in order to hold a meaningful conversation.

## Sarah Caban

August 9, 2018 - 2:03 am

**Featured Comment**

## Dan Meyer

August 9, 2018 - 5:40 pm

Nice! I’m really interested in non-tech examples of interpretive feedback too. I mean, they should be easy to come by, right? Conversation is *loaded* with interpretative feedback. Whenever we’re chatting, I’m picking up all kinds of interpretative feedback from you. Thoughtful mm-hmms. A skeptical eyebrow. Etc.

“Trying on” a student’s strategy is a really interesting way to separate the student’s ego from her idea. It’s flattering that the teacher is taking my work so seriously. Let’s see what she does with it.

I find it useful sometimes to take some student work and say, “what this makes me think about is…” and then something like “how this would work for larger numbers” or “how the answer would change if this was a negative,” etc. Maybe this is in a different category than interpretative *or* evaluative, and maybe I just need to read Talk Moves already.

## Kevin Hall

August 9, 2018 - 6:00 am

*Lots of potential here.* The same thing can be done with stacking cups and other scenarios. There are ways to stack cups proportionally and non-proportionally. Rather than telling kids that their expressions for the height of a stack are wrong, you can show them what kind of stack their expression does represent.

## Dan Meyer

August 9, 2018 - 5:56 pm

Like it a lot. We talk on my team about “connecting representations.” Connecting a graph to a table is one form that can take, but we love to be able to connect a mathematical representation to one that’s more contextual, exactly like you describe.

## Rachel

August 9, 2018 - 6:40 am

My thought was similar to Kevin’s. Rather than immediately evaluate whether the response is correct, show a picture. You could use a concrete example (like Kevin’s pizzas), or more abstract representations. In this case, an array model would be a good abstract representation that would show whether you were multiplying or dividing.

*Nice move!* That would give you good information: does the student understand what they are even looking for? The students who see the incorrect model and continue anyway would be the ones you want to provide more instruction to (in person or virtually). The ones who can adjust based on the visual feedback would be able to figure it out from the program.

On a video game note, my nephew was frustrated the other day because he was playing Fortnite and after making 4 kills (his PR right now is 5, so he was happy about that), he accidentally walked his character off a cliff and died. In response, he walked out to the living room to complain to his mom and hung out in the non-virtual world for a while. A few hours later, he was playing again. So, instead of being forced back into the game when he wasn’t ready, he had time to process what happened and think about something else for a while. When he felt up to the challenge, he was able to return. That is something that I find missing from most educational environments (virtual or not): the right to set aside a problem for a while and return later. How can we give students that freedom? How can we help them use it wisely?

## Dan Meyer

August 9, 2018 - 5:51 pm

Whoa – I really like the part where students see the representation and then decide whether to keep it or revise.

## Ben Hambrecht

August 9, 2018 - 6:43 am

Very poignant distinction! I couldn’t put it any better. Show, don’t tell! The joy of being a cause drives some to programming as a creative outlet. Real rewarding work instead of school’s pretend-work. But the hurdles for creative empowerment, in math and programming, are still way too high.

## Dan Meyer

August 9, 2018 - 5:52 pm

Programming is a great reference here. The joy of being the cause is all over it. Barring syntactical issues, it’s totally non-evaluative. “Look, you told me to run this script so I’m going to run this script.”

## Genevieve

August 9, 2018 - 6:53 am

Have you checked out MANGAHIGH?

https://www.mangahigh.com/en-us/

I’m interested to hear what you think. I hope to incorporate it into my students’ learning this year.

## Dan Finkel

August 9, 2018 - 8:33 am

Fantastic demonstration of Mario seizing up in midair. A physical reaction grips me when I’m denied seeing the natural consequence of the action.

**Featured Comment**

Backgammon was an example of the latter sort for me: because so much luck is involved, and winning is about knowing when you have a 55% chance of success and pressing your advantage, you often get punished for good decisions and rewarded for bad decisions, which confounds the learning process unless you have the right lens.

I’m not a big fan of these practice kinds of programs, but if I wanted to improve this one specifically, I might include a couple of “checks,” of the type I might do when thinking about translating some description into algebra. For example:

“The quotient of 9 and c”

Input: 9c

Let’s check: if c = 2, the quotient of 9 and c would be 4.5.

Your answer: 9c = 18 when c = 2.

If c = 27, the quotient of 9 and c would be 1/3.

Your answer: 9c = 243 when c = 27.

Something weird is going on…
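The check above is easy to sketch in code. Here is a minimal, hypothetical Python sketch (the function name and sample values are my own, not from any real practice software) of interpretive feedback by substitution: show the student what the target and their answer each give at a couple of sample values, and let the mismatch speak for itself.

```python
# Hypothetical sketch of substitution-based interpretive feedback.
# Expressions are plain Python callables. Note: agreement at a few
# sample values is evidence, not proof, that two expressions match.
def substitution_feedback(student, target, var_name="c", samples=(2, 27)):
    """Compare the student's expression to the target at sample values."""
    lines = []
    for value in samples:
        lines.append(
            f"If {var_name} = {value}, the target gives {target(value)}; "
            f"your answer gives {student(value)}."
        )
    if all(student(v) == target(v) for v in samples):
        lines.append("Those match. Looks right!")
    else:
        lines.append("Something weird is going on...")
    return lines

# "The quotient of 9 and c," answered with the product 9c:
for line in substitution_feedback(lambda c: 9 * c, lambda c: 9 / c):
    print(line)
```

The point of the design is that the software never says “wrong”; it reports what the student’s expression actually does and leaves the evaluation to the student.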

## Dan Meyer

August 9, 2018 - 5:59 pm

Uf. I had my own physical reaction to “Are you sure that’s the right move?” Evaluative feedback trying to pass as interpretative.

## Alexandra

August 9, 2018 - 8:52 am

I think the response mostly depends on the goal of the task. If the goal is to find the right answer (some math is actually about finding right answers), then the best response to a mistake is a hint or “let’s do it together.” Or even “try again.”

Interpreting seems appropriate only for “research” or “explore” tasks.

I tried interpreting in “find the right answer” type of tasks and mostly it was awful.

Here are two examples (it’s Russian, but math is obvious):

Interpreting: https://youtu.be/129O34TyMNQ

Evaluating + some hints: https://youtu.be/qsytahwyvqY

Isn’t the second one more student friendly?

## Dan Meyer

August 9, 2018 - 6:02 pm

These are both extremely compelling, Alexandra. I need to think about both some more. I found the first video more useful on a first look, the one where we tell students, “Here is what that expression actually is.” I don’t know if that’s just preference, though.

## Evan Rushton

August 10, 2018 - 9:39 am

I agree with James, Kevin, and Rachel in terms of interpreting student inputs: when a player acts, show them the natural consequence of their action in the game world. Let the player evaluate their action and plan a new strategy.

As swi and Scott point out, the world (context) of SMB is very distant to that of KA, so there are many other game features confounding the comparison. Alexandra gave us concrete math practice software feedback alternatives: (1) interpretive and (2) evaluative+hint. It seems to me that (2) has potential to show students the underlying structure, but is very likely to just walk them to the answer and leave them unsure how to do it on their own, while (1) shows real promise, especially if the software can interpret student input against multiple representations, so students can connect their mistake to a representation they are more comfortable with.

The point I agree with most in the original post is, ‘Most attempts to “gamify” math class learn the wrong lessons from video games. They import leaderboards, badges, customized avatars, timed competitions, points, and many other stylistic elements from video games. But gamified math software has struggled to import this substantial element: Every time the player acts, the game responds.’

Serious games have contextualized feedback: https://medium.com/@E_Rushton/next-generation-learning-games-part-1-eafd616e138b

There isn’t a teacher popping up disguised as a dialogue box who breaks the fourth wall and reminds players that they are taking a test, or completing flying worksheets with shiny objects.

Thanks for calling this out Dan. For folks interested in fun/play, I recommend Raph Koster’s Theory of Fun and Scott McCloud’s Understanding Comics.

## Dan Meyer

August 12, 2018 - 3:54 pm

Nice! Adding those to my reading list.

## suehellman

August 9, 2018 - 12:14 pm

My first thought was to shape the feedback more positively so it didn’t simply say ‘you’re wrong’ & follow up by showing ‘here’s why’. Trying to avoid the verbal equivalent of a big red X, I came up with: “9c would be the answer if the question was asking for a way to show multiply. Try again.” That seemed to address your big idea of being interpretive rather than evaluative. After some reflection, however, I now see 2 big problems with it.

First, the phrase ‘try again’ can be taken by some learners as an invitation to game the system. They will devote their efforts to figuring out how to outsmart the program so they can get through the activity as quickly as possible. Beating the system can seem a lot more rewarding than doing the math.

More importantly, though, I realized I’d fallen into the assumption trap. Without first asking why a learner might choose the raised dot, I’d just assumed that the underlying concept being tested was which operation was correct. What if a student simply didn’t remember the meaning of the word ‘quotient’, or didn’t recognize that a/b is a way to represent division, or was on a smartphone and had touched the 3rd button instead of the 4th, or, or….? My feedback would only be meaningful and useful if the learner’s difficulty matched my assumption.

We need a lot more information from students about what underlies the errors we observe. Only then can we address learners’ issues as they see them and stop ourselves from jumping to solutions based on our (often unconscious) assumptions.

So here’s my second try. After the arrows are suggestions for corrective follow-up. The original question would have to be completed correctly in order for the student to move on.

9c would be the answer if the question was asking for a way to show multiply. Where do you think you might have gone wrong?

(a) I really wanted a different button. –> “OK, you have 1 more try.”

(b) I’m not sure what ‘quotient’ means. –> activate prior learning by reminding them when/in what context this was previously learned; if you want to just give the definition, interleave similar questions in future activities to reinforce learning.

(c) None of the answers looked right, so I guessed. –> provide micro-lesson (5 min or less) on ways to show multiply & divide.

(d) Other: _________________________________________ –> “That’s interesting. Please share it with your teacher.”

(e) This is my 2nd try, & it’s still not right. What now? –> “You have 2 options. See the teacher now & then finish the activity, OR do all the questions you can & get help with the rest at the end.”

## Harry O'Malley

August 9, 2018 - 12:36 pm

Math tools that interpret rather than evaluate? That offer students the joy of being the cause of something? I’ve created an instructional framework and set of tools with this concept as a central theme that spans kindergarten through calculus. Here’s a video outlining the key ideas and resources:

https://www.youtube.com/watch?time_continue=648&v=cqwgPkdJrzk

In short, the framework generalizes the strengths inherent in the Cannon Man activity and Kevin Hall’s pizza delivery proposal. The problem with those activities is that they are too specific. Students need hundreds (thousands? millions?) of experiences interacting with the concrete interpretations of the mathematics they write. The Manifest Framework outlines an efficient, systematic way of accomplishing this.

James Gee has been very influential, here. This video, in particular:

https://www.edutopia.org/video/james-paul-gee-learning-video-games

Big take away? There is no such thing as academic language, only language for worlds we haven’t lived in. We need tools that allow learners to concretely interact with the worlds that mathematics describes.

## William Carey

August 16, 2018 - 1:25 am

The best of these mathematical games that I’ve seen is Project Euler (projecteuler.net). When you answer a question wrong, it just gives you a big red x. When you answer a question right, a nice green check. There’s no prompt to get help or skip the question.

Always woven into the question is a simpler particular case with a given answer, which allows you to check your own reasoning as you work. It’s marvelously effective at allowing a mathematician to work towards an answer from a place of discovery.

The reward for answering correctly (!) is access to a discussion forum where other people who have answered the question correctly can discuss their answers.

Socially it’s totally different from most online (and classroom) pedagogies, and marvelously effective.

## Evan Rushton

August 16, 2018 - 11:30 am

While I agree that PE has a solid feedback mechanism, I would argue that the site is geared toward mathematically inclined learners who are intrinsically motivated to get answers to obscure math questions, and the feedback fits the needs of this audience. Would a more general audience get tired of being wrong and just rage quit? How do we cultivate the persistence to keep at a problem and try multiple solution pathways before giving up?

It has become a more common social mechanic to open up discussion for folks that get the answer correct (or in some cases, who opt to see the solution, e.g. HackerRank or Brilliant). I agree that this is effective and seems to promote more discussion.

I’ve come to feel that the feedback offered in TypingClub is ideal feedback for deliberate practice. It shows me the expert model as I attempt to perform in real time. I can refer to it if I get stuck, but I can keep moving forward if I make a mistake. It is both non-intrusive and immediate. I am not sure how to mimic that with something much less mechanical and rote than typing, namely problem solving, or even particular skills like adding fractions… but I hope it illustrates a contrast to the feedback on PE, which in my opinion, more reluctant learners need in order to eventually build the confidence to hit ‘b’ with their left pointer finger and ‘x’ with their left ring finger… and by kludgy analogy, properly distribute the negative when multiplying polynomials, or find a common denominator when adding fractions.

After writing that out, it really doesn’t seem a fitting analogy. Now I am wondering why multiplying polynomials or finding a common denominator are different in structure than simply remembering which finger to use for a keystroke… thanks for getting me to ponder this William.