
WTF Math Problems

As I mentioned on Twitter earlier this week, I find a particular kind of math problem extremely exciting now. Here are five of them. I want to know what to call them. I want to know what are their essential features. I want more of them and I want to read more about them.

Here is one of the five, taken from Scott Farrand’s presentation at CMC-North.

Here are some points in the plane:

(4, 1), (17, 27), (1, -5), (8, 9), (13, 19), (-2, -11)
(20, 33), (7, 7), (-5, -17), (10, 13)

Choose any two of these points. Check with your neighbor to be sure that you didn’t both choose the same pair of points. Now find the rate of change between the first and the second point. Write it on the board. What do you notice?

From Henri Picciotto’s review of Farrand’s session:

Students are stunned to learn that everyone in the class gets the same slope. This sets the stage for proving that the slope between any two points on a given line is always the same, no matter what points you pick.
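The surprise is easy to check mechanically. Here is a minimal sketch (Python, mine, not from Farrand's session) that computes the rate of change for every possible pair of the ten points:

```python
from itertools import combinations

# The ten points from Farrand's problem.
points = [(4, 1), (17, 27), (1, -5), (8, 9), (13, 19),
          (-2, -11), (20, 33), (7, 7), (-5, -17), (10, 13)]

def slope(p, q):
    """Rate of change between two points (rise over run)."""
    return (q[1] - p[1]) / (q[0] - p[0])

# Collect the slope of every distinct pair. All ten points lie on
# the line y = 2x - 7, so every pair produces the same slope.
slopes = {slope(p, q) for p, q in combinations(points, 2)}
print(slopes)  # {2.0}
```

Forty-five pairs, one slope: exactly the moment the class is asked to notice at the board.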

In an email conversation, Farrand proposed the term “WTF Problems” because they all, ideally, involve a moment where the student exclaims “WTF”:

Set up a surprise, such that resolution of that becomes the lesson that you intended. Anything that makes students ask the question that you plan to answer in the lesson is good, because answering questions that haven’t been asked is inherently uninteresting.

These seem like essential features:

  • These problems are all brief. They slot easily into an opener.
  • They look forward and backward. They fit right in the gap between an old concept and the new. They review the old (slope in this case) while setting up the new (collinearity).
  • Students encounter an unexpected result. The world is either more orderly (the slope example above) or less orderly (see problem #2) than they thought.

And the weirdest feature:

  • They require the teacher to be cunning, actively concealing the upcoming WTF, assuring students that, yes, this problem is as trivial as you think it is, knowing all the while that it isn’t.

When did they teach you that in your teacher training?

It’s striking to me that the history of mathematics is driven by the explanations following these WTF moments:

  • We knew how to divide numbers. We didn’t know how to divide by zero. Enter Newton and Leibniz’s explanation of calculus.
  • We knew how to find the square roots of positive numbers, but not of negative numbers. Enter Euler’s explanation of imaginary numbers.
  • We knew what Euclid’s geometry looked like, but what if parallel lines could meet? Enter the explanation of hyperbolic, spherical, and other non-Euclidean geometries.
  • There are lots of WTF moments that haven’t yet been explained.

In school mathematics, though, we simply give the explanations, without paying even the briefest homage to the WTFs that provoked them.

What Farrand and you and I are trying to do here is restore some of that WTF to our math curriculum, without forcing students to re-create thousands of years of intellectual struggle.

So help me out:

  • Have you seen other problems like these?
  • Who else has written about these problems? I believe we’re talking about disequilibrium here, which is Piaget’s territory, but I’m looking for writing local to mathematics.

Featured Comments

David Wees cautions us that the effect of these problems depends on a student’s background knowledge. If you don’t know how to calculate slope, the problem above won’t surprise, just confound. I agree, but the same is true of textbooks and nearly every other resource.

Michael Pershan worries that the “twist” in these problems will become overused, that students will become bored or expectant. (Clara Maxcy echoes.) I demur.

Dan Anderson offers other examples. As do Mike Lawler, Federico Chialvo, Kyle Pearce, Jeff Morrison, and Michael Serra.

Franklin Mason critiques my math history without (I think) critiquing my main point about math history.

Scott Farrand, whose presentation at CMC-North inspired this post, elaborates.

Ben Orlin summarizes the design of these problems in four useful steps.

Terri Gilbert summarizes this post in a t-shirt.

December Remainders

Happy New Year. This ThinkUp outfit told me which of my tweets were “biggest” in each of the months of 2014. Twelve “big” tweets, in other words.

Here were my new blog subscriptions in December 2014, some of which might interest you.

What did you fill your head with in December?

Michael Cieply for The New York Times:

The Interview generated roughly $15 million in online sales and rentals during its first four days of availability, Sony Pictures said on Sunday. Sony did not say how much of that total represented $6 digital rentals versus $15 sales. The studio said there were about two million transactions over all.

We gotcha covered, Mike!
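The split Sony didn't release can be recovered from the two numbers it did. Assuming every transaction was either a $6 rental or a $15 sale (a simplification; the published figures are themselves approximate), a quick sketch in Python:

```python
# Sony's released figures (approximate).
revenue = 15_000_000       # total online revenue, in dollars
transactions = 2_000_000   # total number of transactions
rental_price, sale_price = 6, 15

# Two equations in two unknowns:
#   rentals + sales = transactions
#   6*rentals + 15*sales = revenue
# Eliminating rentals: (15 - 6) * sales = revenue - 6 * transactions.
sales = (revenue - rental_price * transactions) / (sale_price - rental_price)
rentals = transactions - sales

print(round(rentals), round(sales))  # roughly 1,666,667 rentals and 333,333 sales
```

So rentals outnumbered sales about five to one, which is exactly the kind of answer a class could argue about before computing it.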


Featured Comment

Stas, with a zinger for the ages:

Probably they took advice from this guy.

Angela Ensminger:

Maybe an opening question to this problem would be: Do you think Sony had more rentals or more sales? This could lead to some interesting discussions before actually solving the problem.

[h/t Math Curmudgeon]

Here is the talk I gave at CMC-North last weekend: Video Games & Making Math More Like Things Students Like.

Students generally prefer video games to our math classes and I wanted to know why. So I played a lot of video games and read a bit about video games and drew some conclusions. I also asked my in-laws to play two video games in front of a camera so we could watch their learning process and draw comparisons to our students.

These are the six lessons I learned:

  1. Video games get to the point.
  2. The real world is overrated.
  3. Video games have an open middle.
  4. The middle grows more challenging and more interesting at the same time.
  5. Instruction is visual, embedded in practice, and only as needed.
  6. Video games lower the cost of failure.

Featured Comments

Tim brings storytelling to the conversation:

As one of those weird AP Lit and AP Calc teachers – and a gamer – I think “story” is key in video gaming. Psychologists (like Willingham) and sociologists talk about the “story bias” of the brain. Nearly all long video games have a heavy story element. You are a character embedded in a story, be it open-ended or scripted. So often when I’m frustrated with bad game design I’ll push through because I’m committed to the story. So often when I finish the “missions” I give up on the well-designed “side-quests” because the story has rushed out of the game and it’s just a task-garden again.

I’ll play Angry Birds for a few minutes. I’ll play Temple Run till I beat my friend’s score. But I won’t put 20 hours into a game until I find a story I want to be invested in. (In the same breath, I’ll say that – in the sense of “story” that Willingham uses it – Angry Birds and Temple Run have their stories, too. Far more than many “story” problems in math books like to pretend they have.)

Not sure how you get rich story into math. How to become characters whose adventures we become invested in, not the scripted Jane who is trying to maximize the area of her pasture or the open-ended John who is trying to find a good way to estimate the number of people in a photo.

Anyway – the first lesson I learn from video games is: humans will spend hours on a good yarn.

My Panama Canal metaphor was just a joke from the outset so I had to admire Joshua Greene’s continued debunking.

My analysis of Khan Academy’s eighth-grade curriculum was viewed ~20,000 times over the last ten days. Several math practice web sites have asked me to perform a similar analysis on their own products. All of this gives me hope that my doctoral work may be interesting to people outside my small crowd at Stanford.

Two follow-up notes, including the simplest way Khan Academy can improve itself:

One. Several Khan Academy employees have commented on the analysis, both here and at Hacker News.

Justin Helps, a content specialist, confirmed one of my hypotheses about Khan Academy:

One contributor to the prevalence of numerical and multiple choice responses on KA is that those were the tools readily available to us when we began writing content. Our set of tools continues to grow, but it takes time for our relatively small content team to rewrite item sets to utilize those new tools.

But as another commenter pointed out, if the Smarter Balanced Assessment Consortium can make interesting computerized items, what’s stopping Khan Academy? Which team is the bottleneck: the software developers or the content specialists? (They’re hiring!)

Two. In my mind, Khan Academy could do one simple thing to improve itself several times over:

Ask questions that computers don’t grade.

A computer graded my responses to every single question in eighth grade.

That means I was never asked, “Why?” or “How do you know?” Those are seriously important questions but computers can’t grade them and Khan Academy didn’t ask them.

At one point, I was even asked how m and b (of y = mx + b fame) affected the slope and y-intercept of a graph. It’s a fine question, but there was no place for an answer because how would the computer know if I was right?

So if a Khan Academy student is linked to a coach, make a space for an answer. Send the student’s answer to the coach. Let the coach grade or ignore it. Don’t try to do any fancy natural language processing. Just send the response along. Let the human offer feedback where computers can’t. In fact, allow all the proficiency ratings to be overridden by human coaches.

Khan Academy does loads of A/B testing right? So A/B test this. See if teachers appreciate the clearer picture of what their students know or if they prefer the easier computerized assessment. I can see it going either way, though my own preference is clear.
