September Remainders

Awesome Internetting from the last month.

New Blog Subscriptions

  • Tracy Zager has been one of my favorite math voices on Twitter this school year, and she’s now blogging. She has also recently announced a fight with breast cancer and has requested that we “Please help me remember that I have thinking and ideas to share, and am involved in a world bigger than this right now.”
  • Annie Fetter’s work at the Math Forum has always been impressive, and it’s a total oversight on my part that I didn’t realize until now that she writes a blog.
  • Tim McCaffrey and I share a lot of the same enthusiasms. He helps districts run lesson studies around three-act tasks and just started blogging about it.
  • Matt Bury had positively invaluable commentary during last month’s adaptive learning discussions.
  • Dan Burf, a/k/a Quadrant Dan, is a new teacher who has been using my old, old lessons, which is kind of fun to watch.
  • Amy Roediger, whose writing on Classkick was extremely useful.
  • Julie Wright is full of promise.
  • Just Mathness is full of promise.

New Twitter Follows

Multimedia Math

I make an open offer to my workshop participants to help them with their video editing. A couple of newcomers to multimedia modeling came up with these two tasks:

Great Tweets

Max Goldstein:

Proofs are social documents not compiled code.

Press Clippings

  • The Ontario Ministry of Education filmed an interview series with me and other math education-types in Toronto.
  • An interview with a teen writer from The Santa Fe New Mexican.
  • An interview with AFEMO, a Francophone group of math educators.

Bryan Anderson and Joel Patterson simply subtracted elements from printed tasks, added them back in later, and watched their classrooms become more interesting places for students.

Anderson took a task from the Shell Centre and delayed all the calculation questions, making room for a lot of informal dialog first.

[Image: Anderson’s adaptation of the Shell Centre task, with the calculation questions delayed]

Patterson took a Discovering Geometry task and removed the part where the textbook specified that the solution space ran from zero to eight.

[Image: Patterson’s adaptation of the Discovering Geometry task, with the given solution interval removed]

“It turns out that by shortening the question,” Joel Patterson said, “I opened the question up, and the kids surprised themselves and me!”

I believe EDC calls these “tail-less problems.” I call it being less helpful.

BTW. These are great task designers we’re talking about here. I spent the coldest winter of my life at the Shell Centre because I wanted to learn their craft. Discovering Geometry was written by friend-of-the-blog Michael Serra. This only demonstrates how unforgiving the print medium is to interesting math tasks. Print has to add everything at once, which is like asking Picasso to paint with a toilet plunger.

Mo Jebara, the founder of Mathspace, has responded to my concerns about adaptive math software in general and his in particular. Feel free to read his entire comment. I believe he has articulated several misconceptions about math education and about feedback that are prevalent in his field. I’ll excerpt those misconceptions and respond below.

Computer & Mouse v. Paper & Pencil

Jebara:

Just like learning Math requires persistence and struggle, so too is learning a new interface.

I think Mathspace has made a poor business decision to blame their user (the daughter of an earlier commenter) for misunderstanding their user interface. Business isn’t my business, though. I’ll note instead that adaptive math software here again requires students to learn a new language (computers) before they find out if they’re able to speak the language they’re trying to learn (math).

For example, here is a tutorial screen from software developed by Kenneth Tilton, a frequent commenter here who has requested feedback on his designs:

[Image: a tutorial screen from Kenneth Tilton’s software showing how to enter an expression]

Writing that same expression with paper and pencil, by contrast, is more intuitive by an order of magnitude. Paper and pencil is an interface that is omnipresent and easily learned, one that costs a bare fraction of the computer that Mathspace’s interface requires, and one that never needs to be plugged into a wall.

None of this means we should reject adaptive math software, especially not Mathspace, the interface of which allows handwriting. But these user interface issues pile high in the “cost” column, which means the software cannot skimp on the benefits.

Misunderstanding the Status Quo

Jebara:

Does a teacher have time to sit side by side with 30 students in a classroom for every math question they attempt?

[…]

But teachers can’t watch while every student completes 10,000 lines of Math on their way to failing Algebra.

[…]

I talk to teachers every single day and they are crying out for [instant feedback software].

Existing classroom practice has its own cost and benefit columns, and Jebara makes the case that the classroom’s costs are exorbitant.

Without adaptive feedback software, to hear Jebara tell it, students wander in the dark from problem to problem, completely uncertain whether they’re doing anything right. Teachers are beleaguered and unsure how they’ll manage to review every student’s work on every assigned problem. Thirty different students will reveal thirty unique misconceptions on each of thirty problems. That’s 27,000 unique responses teachers have to make in a 45-minute period. That’s ten responses per second! No wonder all these teachers are crying.
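
Spelling out the arithmetic behind that hyperbole:

$$\frac{30 \text{ students} \times 30 \text{ problems} \times 30 \text{ misconceptions}}{45 \times 60 \text{ seconds}} = \frac{27{,}000}{2{,}700} = 10 \text{ responses per second}$$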

This is all Dickens-level bleak and misunderstands, I believe, the possible sources of feedback in a classroom.

There is the textbook’s answer key, of course. Some teachers also make a regular practice of posting all the answers in advance of an exercise set, so students have a sense that they’re heading in the right direction and can focus on process, not product.

Commenter Matt Bury also notes that a student’s classmates are a useful source of feedback. Since I recommended Classkick last week, several readers have tried it out in their classes. Amy Roediger writes about the feature that allows students to help other students:

… the best part was how my students embraced collaborating with each other. As the problems got progressively more challenging, they became more and more willing to pitch in and help each other.

All of these forms of feedback exist within their own webs of costs and benefits too, but the idea that without adaptive math software the teacher is the only source of feedback just isn’t accurate.

Immediate v. Delayed Feedback

Most companies in this space make the same set of assumptions:

  1. Any feedback is better than no feedback.
  2. Immediate feedback is better than delayed feedback.

Tilton has written here, “Feedback a day later is not feedback. Feedback is immediate.”

In fact, Kluger & DeNisi found in their meta-analysis of feedback interventions that feedback reduced performance in more than one third of studies. What evidence do we have that adaptive math software vendors offer students the right kind of feedback?

The immediate kind of feedback isn’t without complication either. With immediate feedback, we may find students trying answer after answer, watching for the red x to change to a green check mark, learning little more than systematic guessing.

Immediate feedback also risks underdeveloping a student’s own answer-checking capabilities. If I get 37 as my answer to 14 + 22, immediate feedback doesn’t give me any time to reflect on my knowledge that the sum of two even numbers is always even and to make the correction myself. Along those lines, Cope and Simmons found that restricting feedback in a Logo-style environment led to better discussions and higher-level problem-solving strategies.
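
To make that tradeoff concrete, here is a minimal sketch of what a reflection prompt in place of an instant verdict might look like. This is hypothetical code, not any vendor’s actual behavior, and the parity check mirrors the 14 + 22 example above:

```python
# A hypothetical sketch of delayed feedback: rather than returning an
# immediate right/wrong verdict, nudge the student to run their own
# sanity check before any grading happens.

def respond(a: int, b: int, answer: int) -> str:
    if a % 2 == 0 and b % 2 == 0 and answer % 2 != 0:
        # Even + even is always even, so an odd answer deserves a
        # second look from the student, not a red x.
        return "You added two even numbers. Can their sum be odd?"
    return "Answer recorded. We'll review these together tomorrow."

print(respond(14, 22, 37))  # triggers the parity nudge
```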

What Computers Do To Interesting Exercises

Jebara:

Can you imagine a teacher trying to provide feedback on 30 hand-drawn probability trees on their iPad in Classkick?

[…]

Can you imagine a teacher trying to provide feedback on 30 responses for a Geometric reasoning problem letting students know where they haven’t shown enough of a proof?

I can’t imagine it, but not because that’s too much grading. I can’t imagine assigning those problems, because I don’t think they’re worth a class’s limited time and I don’t think they do justice to the interesting concepts they represent.

Bluntly, they’re boring. They’re boring, but that isn’t because the team at Mathspace is unimaginative or hates fun or anything. They’re boring because a) computers have a difficult time assessing interesting problems, and b) interesting problems are expensive to create.

Please don’t think I mean “interesting” week-long project-based units or something. (The costs there are enormous also.) I mean interesting exercises:

Pick any candy that has multiple colors. Now pick two candies from its bag. Create a probability tree for the candies you see in front of you. Now trade your tree with five students. Guess what candy their tree represents and then compute their probabilities.

The students are working five exercises there. But you won’t find that exercise or exercises like it on Mathspace or any other adaptive math platform for a very long time because a) they’re very hard to assess algorithmically and b) they’re more expensive to create than the kind of problem Jebara has shown us above.
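
To make the assessment difficulty concrete, here is a sketch of just the computational tail of that exercise. The bag below is hypothetical; every student’s bag, and therefore every student’s tree, will differ, which is part of what makes the exercise hard to grade algorithmically:

```python
from fractions import Fraction

# A hypothetical bag of candies, counted by color.
bag = {"red": 5, "green": 3, "yellow": 2}
total = sum(bag.values())

# Each branch of the two-level probability tree: draw one candy,
# then a second, without replacement.
for first, n1 in bag.items():
    for second, n2 in bag.items():
        remaining = n2 - 1 if first == second else n2
        p = Fraction(n1, total) * Fraction(remaining, total - 1)
        print(f"{first} then {second}: {p}")
```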

I’m thinking Classkick’s student-sharing feature could be very helpful here, though.

Summary

Jebara:

So why don’t we try and automate the parts that can be automated and build great tools like Classkick to deal with the parts that can’t be automated?

My answer is pretty boring:

Because the costs outweigh the benefits.

In 2014, the benefits of that automation (students can find out instantly if they’re right or wrong) are dwarfed by the costs (see above).

That said, I can envision a future in which I use Mathspace, or some other adaptive math software. Better technology will resolve some of the problems I have outlined here. Judicious teacher use will resolve others. Math practice is important.

My concerns are with the 2014 implementations of the idea of adaptive math software and not with the idea itself. So I’m glad that Jebara and his team are tinkering at the edges of what’s possible with those ideas and willing, also, to debate them with this community of math educators.

Featured Comment

Mercy – all of them. Just read the thread if you want to be smarter.

Can Sports Save Math?

A Sports Illustrated editor emailed me last week:

I’d like to write a column re: how sports could be an effective tool to teach probability/fractions/even behavioral economics to kids. Wonder if you have thoughts here….

My response, which will hopefully serve to illustrate my last post:

I tend to side with Daniel Willingham, a cognitive psychologist who wrote in his book Why Students Don’t Like School, “Trying to make the material relevant to students’ interests doesn’t work.” That’s because, with math, there are contexts like sports or shopping but then there’s the work students do in those contexts. The boredom of the work often overwhelms the interest of the context.

To give you an example, I could have my students take the NBA’s efficiency formula and calculate it for their five favorite players. But calculating – putting numbers into a formula and then working out the arithmetic – is boring work. Important but boring. The interesting work is in coming up with the formula, in asking ourselves, “If you had to take all the available stats out there, what would your formula use? Points? Steals? Turnovers? Playing time? Shoe size? How will you assemble those in a formula?” Realizing you need to subtract turnovers from points instead of adding them is the interesting work. Actually doing the subtraction isn’t all that interesting.
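
For reference, the league’s efficiency formula is usually given in roughly this form (quoted from memory, so treat the exact terms as approximate). Note that missed shots and turnovers are subtracted rather than added, which is exactly the kind of decision described above:

$$\text{EFF} = \frac{\text{PTS} + \text{REB} + \text{AST} + \text{STL} + \text{BLK} - (\text{FGA} - \text{FGM}) - (\text{FTA} - \text{FTM}) - \text{TO}}{\text{GP}}$$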

So using sports as a context for math could surely increase student interest in math, but only if the work they’re doing in that context is interesting also.

Featured Email

Marcia Weinhold:

After my AP stats exam, I had my students come up with their own project to program into their TI-83 calculators. The only one I remember is the student who did what you suggest — some kind of sports formula for ranking. I remember it because he was so into it, and his classmates got into it, too, but I hardly knew what they were talking about.

He had good enough explanations for everything he put into the formula, and he ranked some well known players by his formula and everyone agreed with it. But it was building the formula that hooked him, and then he had his calculator crank out the numbers.

Real Work v. Real World

“Make the problem about mobile phones. Kids love mobile phones.”

I’ve heard dozens of variations on that recommendation in my task design workshops. I heard it at Twitter Math Camp this summer. That statement measures tasks along one axis only: the realness of the world of the problem.

[Figure: tasks measured along a single axis running from fake world to real world]

But teachers report time and again that these tasks don’t measurably move the needle on student engagement in challenging mathematics. They’re real world, so students are disarmed of their usual question, “When will I ever use this?” But the questions are still boring.

That’s because there is a second axis we focus on less. That axis looks at work. It looks at what students do.

That work can be real or fake also. The fake work is narrowly focused on precise, abstract, formal calculation. It’s necessary, but it interests students less. It interests the world less also. Real work – interesting work, the sort of work students might like to do later in life – involves problem formulation and question development.

That plane looks like this:

[Figure: the real world v. fake world axis crossed with a second axis, real work v. fake work, forming four quadrants]

We overrate student interest in doing fake work in the real world. We underrate student interest in doing real work in the fake world. There is so much gold in that top-left quadrant. There is much less gold than we think in the bottom-right.

BTW. I really dislike the term “real,” which is subjective beyond all belief. (E.g., what’s “real” to a thirty-year-old white male math teacher and what’s real to his students often don’t correlate at all.) Feel free to swap in “concrete” and “abstract” in place of “real” and “fake.”

Related. Culture Beats Curriculum.

This is a series about “developing the question” in math class.

Featured Tweet

Featured Comment

Bob Lochel:

I would add that tasks in the bottom-right quadrant, those designed with a “SIMS world” premise, provide less transfer to the abstract than teachers hope during the lesson design process. This becomes counter-productive when a seemingly “progressive” lesson doesn’t produce the intended result on tests, then we go back not only to square 1, but square -5.

Fred Thomas:

I love this distinction between real world and real work, but I wonder about methods for incorporating feedback into real work problems. In my experience, students continue to look at most problems as “fake” so long as they depend on the teacher (or an answer key or even other students) to let them know which answers are better than others. We like to use tasks such as “Write algebraic functions for the percent intensity of red and green light, r=f(t) and g=f(t), to make the on-screen color box change smoothly from black to bright yellow in 10 seconds.” Adding the direct, immediate feedback of watching the colors change makes the task much more real and motivating.
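
For what it’s worth, one correct answer to Thomas’s task is a pair of linear ramps. Black is 0% red and 0% green, bright yellow is 100% of both, so (a sketch, with only the 10-second duration taken from the task itself):

```python
# A sketch of one possible answer to the color-box task: ramp red and
# green linearly from 0% (black) to 100% (bright yellow) over 10
# seconds, leaving blue at 0% throughout.

def r(t: float) -> float:
    """Percent intensity of red at time t, in seconds."""
    return 10 * t  # 0% at t = 0, 100% at t = 10

def g(t: float) -> float:
    """Percent intensity of green at time t, in seconds."""
    return 10 * t  # yellow = full red + full green, no blue

# Print the ramp at a few checkpoints; on screen, these values would
# drive the color box Thomas describes.
for t in range(0, 11, 2):
    print(f"t = {t:2d}s: red = {r(t):5.1f}%, green = {g(t):5.1f}%")
```

The feedback Thomas describes comes from rendering those values as color, of course, not from printing them, but the functions themselves are this simple.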
