I’ve heard dozens of variations on that recommendation in my task design workshops. I heard it at Twitter Math Camp this summer. That statement measures tasks along one axis only: the realness of the world of the problem.

But teachers report time and again that these tasks don’t measurably move the needle on student engagement in challenging mathematics. They’re real world, so students are disarmed of their usual question, “When will I ever use this?” But the questions are still boring.

That’s because there is a second axis we focus on less. That axis looks at *work*. It looks at what students *do*.

That work can be real or fake also. The fake work is narrowly focused on precise, abstract, formal calculation. It’s necessary but it interests students less. It interests the world less also. Real work – interesting work, the sort of work students might like to do later in life – involves problem formulation and question development.

That plane looks like this:

We overrate student interest in doing fake work in the real world. We underrate student interest in doing real work in the fake world. There is so much gold in that top-left quadrant. There is much less gold than we think in the bottom-right.

**BTW**. I really dislike the term “real,” which is subjective beyond all belief. (e.g. What’s “real” to a thirty-year-old white male math teacher and what’s real to his students often don’t correlate at all.) Feel free to swap in “concrete” and “abstract” in place of “real” and “fake.”

**Related**. Culture Beats Curriculum.

*This is a series about “developing the question” in math class.*

**Featured Tweet**

@ddmeyer @mpershan my kids working with http://t.co/ig98dLeSGS are doing real work, fake world, and loving it.

— Pamela Rawson (@rawsonmath) September 17, 2014

**Featured Comment**

I would add that tasks in the bottom-right quadrant, those designed with a “SIMS world” premise, provide less transfer to the abstract than teachers hope during the lesson design process. This becomes counter-productive when a seemingly “progressive” lesson doesn’t produce the intended result on tests, then we go back not only to square 1, but square -5.

I love this distinction between real world and real work, but I wonder about methods for incorporating feedback into real work problems. In my experience, students continue to look at most problems as “fake” so long as they depend on the teacher (or an answer key or even other students) to let them know which answers are better than others. We like to use tasks such as “Write algebraic functions for the percent intensity of red and green light, r=f(t) and g=f(t), to make the on-screen color box change smoothly from black to bright yellow in 10 seconds.” Adding the direct, immediate feedback of watching the colors change makes the task much more real and motivating.
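One pair of functions satisfying the commenter’s color-box task might look like the sketch below. This is an assumption on my part, not the commenter’s answer key: I’m taking “percent intensity” to run from 0 to 100 and ramping red and green linearly over the 10 seconds, which fades black to bright yellow.

```python
# A hypothetical answer to the color-box task quoted above (an assumption,
# not the commenter's answer key): ramp red and green linearly from 0% to
# 100% over 10 seconds, leaving blue at 0%, so black fades to bright yellow.
def r(t):
    return 10 * t  # percent intensity of red at time t seconds (0 <= t <= 10)

def g(t):
    return 10 * t  # percent intensity of green at time t seconds

for t in (0, 5, 10):
    print(t, r(t), g(t))  # 0 0 0 / 5 50 50 / 10 100 100
```

The appeal of the task is that students don’t check these functions against a key; they watch the box and see whether the color sweeps smoothly to yellow.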

My daughter just tried the sine rule on a question and was asked to give the answer to one decimal place. She wrote down the correct answer and it was marked wrong. But it is correct!!! No feedback given, just: it’s wrong. She is now distraught that all her friends and teacher will think she is stupid. I don’t understand! It’s not clear at all how to write down the answer – does it have to be over at least two lines? My daughter gets the sine rule but is very upset by this software.

My skin crawls – seriously. Math involves enough *intrinsic* difficulty and struggle. We don’t need our software tying extraneous weight around our students’ ankles.

Enter Classkick. Even though I’m somewhat curmudgeonly about this space, I think Classkick has loads of promise and it charms the hell out of me.

Five reasons why:

- **Teachers provide the feedback. Classkick makes it faster.** This is a really ideal division of labor. In the quote above we see the computer fall apart over an assessment a novice teacher could make. With Classkick, the computer organizes student work and puts it in front of teachers in a way that makes smart teacher feedback faster.
- **Consequently, students can do more interesting work.** When computers have to assess the math, the math is often trivialized. Rich questions involving written justifications turn into simpler questions involving multiple choice responses. Because the teacher is providing feedback in Classkick, students aren’t limited to the kind of work that is easiest for a computer to assess. (Why the demo video shows students completing multiple choice questions, then, is befuddling.)
- **Written feedback templates.** Butler is often cited for her finding that certain kinds of written feedback are superior to numerical feedback. While many feedback platforms only offer numerical feedback, with Classkick, teachers can give students freeform written feedback and can also set up written feedback templates for the remarks that show up most often.
- **Peer feedback.** I’m very curious to see how much use this feature gets in a classroom but I like the principle a lot. Students can ask questions and request help from their peers.
- **A simple assignment workflow for iPads.** I’m pretty okay with these computery things and yet I often get dizzy hearing people describe all the work and wires it takes to get an assignment to and from a student on an iPad. Dropbox folders and WebDAV and etc. If nothing else, Classkick seems to have a super smooth workflow that requires a single login.

Issues?

Handwriting math on a tablet is a chore. An iPad screen stretches 45 square inches. Go ahead and write all the math you can on an iPad screen – equations, diagrams, etc – then take 45 square inches of paper and do the same thing. Then compare the difference. This problem isn’t exclusive to Classkick.

Classkick doesn’t specify a business model though they, like everybody, think being free is awesome. In 2014, I hope we’re all a little more skeptical of “free” than we were before all our favorite services folded for lack of revenue.

This isn’t “instant student feedback” like their website claims. This is feedback from humans and humans don’t do “instant.” I’m great with that! Timeliness is only one important characteristic of feedback. The quality of that feedback is another far more important characteristic.

In a field crowded with programs that offer mediocre feedback instantaneously, I’m happy to see Classkick chart a course towards offering good feedback just a little faster.

**2014 Sep 17**. Solid reservations from Scott Farrar and some useful classroom testimony from Adrian Pumphrey.

**2014 Sep 21**. Jonathan Newman praises the student sharing feature.

**2014 Sep 21**. More positive classroom testimony, this entry from Amy Roediger.

**2014 Sep 22**. Mo Jebara, the founder of Mathspace, has responded to my initial note with a long comment arguing for the place of adaptive math software in the classroom. I have responded back.

Then the blogosphere’s intrepid Clayton Edwards extracted an answer from the manufacturers of the duck, which gave us all some resolution. For every lot of 300 ducks, the Virginia Candle Company includes one $50, one $20, one $10, one $5, and the rest are all $1. That’s an expected value of $1.27, netting them a neat $9.72 profit per duck on average.
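The arithmetic behind those figures is worth a quick check. Per lot of 300 ducks, the bills sum to $50 + $20 + $10 + $5 + 296 × $1 = $381, so the expected value per duck is $381/300 = $1.27. The retail price isn’t stated in this excerpt; a price of about $10.99 is inferred from the $9.72 average profit.

```python
# Expected value of one duck from the stated lot of 300:
# one $50, one $20, one $10, one $5, and the rest $1 bills.
payouts = {50: 1, 20: 1, 10: 1, 5: 1}
payouts[1] = 300 - sum(payouts.values())  # 296 ducks hold a $1 bill

expected_value = sum(bill * count for bill, count in payouts.items()) / 300
print(round(expected_value, 2))  # 1.27

# The $9.72 average profit implies a retail price near $10.99
# (inferred, since the price isn't stated in this excerpt).
price = 10.99
print(round(price - expected_value, 2))  # 9.72
```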

That’s a pretty favorable distribution:

They’re only able to get away with that distribution because competition in the animal-shaped cash-containing soap marketplace is pretty thin.

So after developing the question and answering the question, we then *extended* the question. I had every group decide on a) an animal, b) a distribution of cash, c) a price, and put all that on the front wall of the classroom – our marketplace. They submitted all of that information into a Google form also, along with their rationale for their distribution.

Then I told everybody they could buy any three animals they wanted. Or they could buy the same animal three times. (They couldn’t buy their own animals, though.) They wrote their names on each sheet to signal their purchase. Then they added that information to another Google form.

Given enough time, customers could presumably calculate the expected values of every product in the marketplace and make really informed decisions. But I only allowed a few minutes for the purchasing phase. This forced everyone to judge the distribution against price on the level of intuition only.

During the production and marketing phase, people were practicing with a purpose. Groups tweaked their probability distributions and recalculated expected value over and over again. The creativity of some groups blew my hair back. This one sticks out:

Look at the price! Look at the distribution! You’ll walk away a winner over half the time, a fact that their marketing department makes sure you don’t miss. And yet their expected profit is *positive*. Over time, they’ll bleed you dry. Sneaky Panda!

I took both spreadsheets and carved them up. Here is a graph of the number of customers a store had against how much they marked up their animal.

Look at that downward trend! Even though customers didn’t have enough time to calculate markup exactly, their intuition guided them fairly well. Question here: which point would you most like to be? (Realization here: a store’s profit is the area of the rectangle formed around the diagonal that runs from the origin to the store’s point. Sick.)

So in the *mathematical* world, because all the businesses had given themselves positive expected profit, the *customers* could all expect negative profit. The best purchase was no purchase. Javier won by losing the least. He was down only $1.17 all told.

But in the real world, chance plays its hand also. I asked Twitter to help me rig up a simulator (thanks, Ben Hicks) and we found the *actual* profit. Deborah walked away with $8.52 because she hit an outside chance just right.
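The simulator itself can be sketched in a few lines. This is my own illustrative version, not Ben Hicks’s actual code, and the distribution shown is the duck’s, not any student group’s entry: build the lot of bills, draw one at random, and subtract the purchase price to get the buyer’s profit on that purchase.

```python
import random

# Illustrative sketch of the classroom simulator (not the actual code):
# draw one animal from a store's cash distribution and compare the payout
# to the purchase price.
def simulate_purchase(distribution, price, rng=random):
    """distribution: list of (bill_value, count) pairs for one lot."""
    lot = [bill for bill, count in distribution for _ in range(count)]
    return rng.choice(lot) - price  # buyer's profit on one purchase

duck_lot = [(50, 1), (20, 1), (10, 1), (5, 1), (1, 296)]
expected = sum(b * c for b, c in duck_lot) / sum(c for _, c in duck_lot)
print(round(expected, 2))  # 1.27: only a price below this favors the buyer

# One random purchase at the inferred $10.99 price:
print(simulate_purchase(duck_lot, 10.99))
```

Run enough purchases and the average buyer profit converges toward expected value minus price, which is why, mathematically, the best purchase was no purchase.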

Profit Penguin was the winning store for both expected and actual profit.

Keep the concept simple and make winning $10s and $20s fairly regular to entice buyers. All bills – coins are for babies!

So there.

We’ve talked already about *developing the question* and *answering the question*. Daniel Willingham writes that we spend too little time on the former and too much time rushing to the latter. I illustrated those two phases previously. We could reasonably call this post: extending the question.

To extend a question, I find it generally helpful to a) flip a question around, swapping the knowns and unknowns, and b) ask students to *create* a question. I just hadn’t expected the *combination* of the two approaches to bear so much fruit.

I’ve probably left a lot of territory unexplored here. If you teach stats, you should double-team this one with the economics teacher and let me know how it goes.

*This is a series about “developing the question” in math class.*

Math students : Answer-getting :: Math teachers : Resource-finding.

— Dan Meyer (@ddmeyer) September 3, 2014

Math students : "What's the formula for __ ?" :: Math teachers : "Who's got a good lesson for __ ?"

— Dan Meyer (@ddmeyer) September 3, 2014

Math students : Understanding math :: Math teachers : Understanding what makes a good lesson good.

— Dan Meyer (@ddmeyer) September 3, 2014

“Answer-getting” sounds pejorative but it doesn’t have to be. Math is full of interesting answers to get. But what Phil Daro and others have criticized is our fixation on getting answers at the expense of understanding math. Ideally those answers (right or wrong) are means to the ends of understanding math, not the ends themselves.

In the same way, “resource-finding” isn’t necessarily pejorative. Classes need resources and we shouldn’t waste time recreating good ones. But a quick scan of a teacher’s Twitter timeline reveals lots of talk about resources that worked well for students and much less discussion overall about *what it means for a resource to “work well.”*

My preference here may just mean grad school has finally sunk its teeth into me but I’d rather fail trying to answer the question, “What makes a good resource good?” than succeed cribbing someone else’s good resource without understanding why it’s good.

**Related**

- I felt the same way about sessions at Twitter Math Camp.
- Kurt Lewin: “There is nothing so practical as a good theory.”
- Without agreeing or disagreeing with these specific bullet points, everyone should have a bulleted list like this.

**Featured Comment**

Mr K:

This resonates strongly.

I shared a lesson with fellow teachers, and realized I had no good way to communicate what actually made the lesson powerful, and how charging in with the usual assumptions of being the explainer in chief could totally ruin it.

Really worthwhile comments from Grace Chen, Bowen Kerins, and Fawn Nguyen also.

Really, we need to literally go back to questions such as ‘Why am I teaching this?’ ‘Where does this fit into the student’s learning journey?’ and ‘How am I going to structure the learning so that the student wants to learn this?’ before we even think about where resources fit into our lesson. This takes a lot of time to think about and process. Time and space many teachers just don’t have.

Early on I would edit resources and end up reducing cognitive demand in the interest of making things clearer for students. Now I edit resources to remove material and increase cognitive demand. Or even more often, I’m taking bits and pieces because I have a learning goal, learning process goal and study skills goal that I have to meet with one lesson.

Great lessons in the context of learning around mindset and methods are the instruments we use to “do” our work. But the reflection and coaching conversations where we “learn” about our work are critical as well. Without them, we use scalpels like hammers.

But this work is much harder, much more personal, much more in the moment of the classroom. Can we harness the power of tech to share this work as well as we have to share the tools?

**2014 Sep 8**. Elissa Miller takes a swing at “what makes a good lesson good?” Whether or not I agree with her list is beside my point. My point is that her list is better than dozens of good resources. With a good list, she’ll find them eventually and she’ll have better odds of dodging lousy ones.

The blogger at Simplify With Me posts two interesting activities with dice, one involving *blank* dice, and the other involving space battles:

Once you have your ships, place one die on the engine, one on the shield, and the other two on each weapon. Which die on which part, you ask? That’s the magic of this activity. Each person gets to decide for themselves.

**Kathryn Belmonte** posts five more uses for dice in her math classroom.

**Kate Nowak** set the tone for her school year with debate about a set of shapes:

Then I said, okay, so here’s a little secret: what we think of as mathematics is just the result of what everyone has agreed on. We could take our definition of “the same” and run with it. In geometry there’s a special word “congruent” where specific things, that everyone agrees to like a secret pact, are okay and not okay. Then, I erased “the same” and replaced it with “congruent,” and made any adjustments to the definition to make it correct. They had heard the word congruent before, and had the perfectly reasonable middle school understanding that congruent means “same size and shape.” I said that that was great in middle school, but in high school geometry we’re going to be more precise and formal in our language.

**Hannah Schuchhardt** isn’t happy with how her game of Transformation Telephone worked but I thought the premise was great:

I love this activity because it gives kids a way to practice together as a group and self-assess as they go through. Kids are competitive and want their transformations to work out in the end!

**Featured Comments**

Kate does a great job connecting all the dots by focusing on the learning target at the end of the lesson. It appears all great classroom action positions the learning target there. Now to convince our administrators.

My fear isn’t restricted to Mathspace, of course, which is only one website offering immediate feedback out of many. But Mathspace hosts a demo video on their homepage and I think you should watch it. Then you can come back and tell me my fears are unfounded or tell me how we’re going to fix this.

Here’s the problem in three frames.

First, the student solves the equation and finds x = -48. Mathspace gives the student immediate feedback that her answer is wrong.

The student then changes the sign with Mathspace’s scribble move.

Mathspace then gives the student immediate feedback that her answer is now right.

The student thinks she knows how to solve equations. The teacher’s dashboard says the student knows how to solve equations. But quiz the student just a little bit – as Erlwanger did a student named Benny under similar circumstances forty years ago – and you see just how superficial her knowledge of solving equations really is. She might just be swapping signs because that’s why her answers have been wrong in the past.

Everyone walks away feeling like a winner but everyone is losing and no one knows it. That’s the scary side of immediate feedback.

**One possible solution.**

When a student pulls a scribble move like that, throw a quick text input that asks, “Why did you change your answer?” The student who is just guessing will say something like, “Because it told me I was right.” Send that text along to the teacher to review. The solution is data that can’t be autograded, data that can’t receive immediate feedback, but better data just the same.

**Related Awesome Quote**

If you can both listen to children and accept their answers not as things to just be judged right or wrong but as pieces of information which may reveal what the child is thinking you will have taken a giant step towards becoming a master teacher rather than merely a disseminator of information.

**Featured Comment**

I would want to emphasize that the issue is that Mathspace (and tech folks generally) tries to give immediate, “personalized” feedback in a fast, slick, cheap, low/no-labor kind of way. And, not surprisingly, ends up giving crappy feedback.

Daniel Tu-Hoa, a senior vice president at Mathspace responds:

[T]eachers can see every step a student writes, so they can, as you suggest, then go and ask the student: “why did you change your answer here?” For us, technology isn’t intended to replace the teacher, but to empower teachers by giving them access to better information to inform their teaching.

**2014 Sep 4**. I’ve illustrated here a false positive – the adaptive system incorrectly thinks the student understands mathematics. Fawn Nguyen illustrates another side of bad feedback: false negatives.

First, a video I made with the help of some workshop friends at Eanes ISD. They provided the video. I provided the tracking dots.

To develop the question you could do several things your textbook likely won’t. You could pause the video before the bicycle fades in and ask your students, “What do you think these points represent? Where are we?”

Once they see the bike you could then ask them to rank the dots from fastest to slowest.

It will likely be uncontroversial that A is the fastest. B and C are a bit of a mystery, though, loudly asking the question, “What do we mean by ‘fast’ anyway?” And D is a wild card.

I’m not looking for students to correctly invent the concepts of angular and linear velocity. They’ll likely need our help! I just need them to spend some time looking at the deep structure in these contrasting cases. That’ll prepare them for whatever explanation of linear versus angular velocity follows. The controversy will generate *interest* in that explanation.

Compare that to “rushing to the answer”:

How are you supposed to have a productive conversation about angular velocity without a) seeing motion or b) experiencing conflict?

See, we originally came up with these two different definitions of velocity (linear and angular) in order to resolve a conflict. We’ve lost that conflict in these textbook excerpts. They fail to develop the question and instead rush straight to the answer.

**BTW**. Would you do us all a favor? Show that video to your students and ask them to fill out this survey.

*This is a series about “developing the question” in math class.*

**Featured Comment**:

Bob Lochel, with a great activity that helps students *feel* the difference between angular and linear velocity:

I keep telling myself that I would love to try this activity with 50 kids on the football field, or even have kids consider the speed needed to make it happen.

Without some physical activity, some sense of the motion and what it is that is actually changing, then the problems become nothing more than plug and chug experiences.

I agree with everything you say here. However, I think you will get silent resistance on this because teachers don’t know what to do next if their students can’t sketch a graph. But they know their students can follow mechanical instructions, so they’ll fall back on that.

Waitaminit. Is that *you*? Is Kate talking about *you*? Let’s talk about this.

Let’s say you’re working on Barbie Bungee. You’re tempted to jump your students straight to the mechanics of collecting and graphing precise data but you decide to develop that question a little bit first. You ask them for a sketch and the results come back:

A is (basically) correct. With zero rubber bands, Barbie falls her height and no further. Every extra rubber band adds a fixed amount to the distance she falls.

So what would you do with each of these sketches? Me, I think I’d say the same thing to each student.

**BTW**. Kate is back in the classroom after a short hiatus so there’s never been a better time to watch her think about teaching.

**Featured Comments**:

I’d need to think about it in context of the lesson and course flow. What happened before? What was done to orient them to the problem; do they have any concrete experience of the situation or is this more like just get something down, and then what kinds of things would they be basing their response on? What were your reasons for anticipating these 4? Are these kids in Algebra 2 or 8th grade? So I have more questions than answers.

I’m an engineering professor, not a math teacher, and my courses are built around design projects. What I’d tell the students is probably what I usually tell the students in the lab: “Try it and see!”

All four of these kids appear to have slightly different models for understanding how this graph relates to Barbie falling. I’m assuming that we are just asking for a rough sketch here, as per your previous post.

#1 seems to indicate some important understandings of the relationship between the two variables. It is hard to come up with that graph by accident. My feedback to this kid would be to ask her what else could be modeled with this graph.

#2 seems to know that the more rubber bands there are, the longer the distance is. This is a pretty key understanding. I am curious about why they chose to start their graph at the origin, and I would ask them to explain their reasoning behind their creation of this graph. Either they will notice their mistake themselves, or I will have more information with which to ask a better question. One possible response would be to ask kid #2 and kid #3 to justify their graphs and defend them.

#3 seems to be confusing the graph as a map of the actual fall itself, but there could be other explanations for their choice of graph. For example, they could be interpreting distance fallen as just distance, in which case they might be thinking that this means the distance from the ground. I need more information about their thinking, and so I would ask them to explain to me what they have done, and then depending on their response, I ask another question.

#4 did not do the question. There are many reasons why this could be true. They could not be able to read, they could not have a starting place for figuring this out, they could be unwilling to make a mistake, they could be still thinking about the problem by the time I get near them, and more. I need to know more information. Is this a typical pattern from this student? Have they produced similar graphs in the past? What socio-emotional concerns do I need to be aware of? Based on my understandings of these questions, I would ask a question like “Can you explain to me what the problem is asking?” Ideally I have already spent enough time clarifying the problem before everyone started that this particular question will not give me much information (e.g. the student does know how to explain the problem) and I will likely need to ask another question. Maybe I need to ask them to describe the relationship between rubber bands and distance fallen in words first.

My second reaction, when I read a few of the Barbie PDFs is that these things are so longgg …. I was a middle school science teacher and my ideal worksheet was a one pager. We did a lot of context building by talking through the prompt, what we needed to know and the experimental design. I didn’t always pull it off well, but I also didn’t have kids mechanically following my directions.

*This is a series about “developing the question” in math class.*

Here is a resolution: ask your students for a sketch first.

I’ve been a bit obsessed with “Barbie Bungee,” a lesson on linear regression which you’ll find all over the Internet. It’s the kind of lesson that doesn’t seem to have any original mother or father, only descendants. (Here is NCTM’s version as well as a video from the Teaching Channel.)

Search the Internet for “Barbie Bungee” handouts. I have. Invariably, the handout asks students to collect data for how far Barbie falls given a number of rubber bands tied around her ankles and then graph the results *precisely*. Often those handouts include a blank graph with precise units and labeled axes.

Developing the question means starting from a more informal place. It means asking the students, “What do you *think* the relationship looks like between the number of rubber bands and Barbie’s distance? Sketch it.”

Asking students to sketch the graph serves so many useful purposes.

- **It helps us clarify assumptions.** What do we mean by “distance”? Barbie’s distance off the ground? The distance Barbie has fallen?
- **Predicting the relationship makes it easier to answer questions about it later.** This is from Lisa Kasmer’s research. It’s productive for students to decide if they think the relationship is linear, constant, increasing, decreasing, etc. What is its general shape? How do these quantities covary? As rubber bands increase, what happens to distance? Later, when students start to graph data precisely, the fact that the shape of their data matches their sketch will help confirm their results.
- **It’s great formative assessment.** Do your students even know what a graph represents? Find out by asking for a sketch. If they can’t *sketch* a graph, their later precise graphing is likely only going to be mechanical and instrumental. (i.e. “First number right, second number up.”)
- **Comparing informal sketches, which may vary widely, will likely make for better debate than comparing precise graphs, which will largely look the same.** And controversy generates interest.

Which would make for a more interesting classroom debate? These three precise graphs?

Or these three imprecise sketches?

If the answer is “make a precise graph of a real-world relationship,” then developing the question means asking for a sketch first. That’s my resolution.

I’m proud of Graphing Stories. That was the first math lesson that drew in any serious way on my video editing hobby. That was the first math lesson that alerted me to the enormous value in sharing curriculum with teachers on the Internet.

I’m unhappy with the project now. I look at it and see the product of a math teacher who is eager to get to the answer of *how to graph a real-world relationship* and less interested in *developing the question* that leads to that answer.

If you watch Adam Poetzel’s graphing story, in which he slides down a playground slide, here’s what you’ll see:

- A title announcing the quantity we’ll be recording: “height of waist off ground.”
- A gridded graph that shows the scale you’ll use. It runs up to 10 feet.
- A video of Adam sliding.

None of this is Adam’s fault, of course. That’s my editing.

Here’s how I’ve been doing a better job developing the question lately in workshops.

- I play the video of Adam sliding.
- I ask participants to tell their neighbors everything they saw. “Don’t miss a detail,” I say, and I’m always surprised by the details participants recall.
- I play the video again and I ask the participants to tell their neighbors their answer to the question, “What quantities could we measure throughout the video?” People suggest all kinds of possibilities. Speed, distance from the left side of the screen, height, temperature.
- Then I tell them I’d like them to focus on Adam’s height. I ask them to tell their neighbors *in words* what happens to his height over time.
- We share some descriptions. People compliment and critique one another. Then I point out how difficult it is to describe his height over time *in words alone*.
- Only then do I pass out the graphs.

The difference is immense. It takes an extra five minutes but participants are much better prepared to make the graph because they’ve spent so much time thinking about the relationship in so many informal ways. So many more participants walk away from the experience feeling like valued contributors to our group because the questions we’ve asked require a wider breadth of skills than just “graph relationships precisely.”

That’s the benefit. Again the cost was only five minutes of class time.

The most productive assumption I can make about any question I pose to a student is that a) there are questions I could have asked *earlier* to develop that main question, b) there are interesting ways I can *extend* that main question. In other words, I try to assume the question I was going to ask is only a thin middle slice of the corpus of interesting questions I *could have* asked. Tell yourself that. Maybe it’s a fiction. Maybe you use the entire question buffalo every time. It’s a useful fiction in any case.

**Next**: Let’s make a resolution.

**BTW**: Kyle Pearce got here first.

**Featured Comment**