Can Sports Save Math?

A Sports Illustrated editor emailed me last week:

I’d like to write a column re: how sports could be an effective tool to teach probability/fractions/even behavioral economics to kids. Wonder if you have thoughts here….

My response, which will hopefully serve to illustrate my last post:

I tend to side with Daniel Willingham, a cognitive psychologist who wrote in his book Why Students Don’t Like School, “Trying to make the material relevant to students’ interests doesn’t work.” That’s because, with math, there are contexts like sports or shopping but then there’s the work students do in those contexts. The boredom of the work often overwhelms the interest of the context.

To give you an example, I could have my students take the NBA’s efficiency formula and calculate it for their five favorite players. But calculating – putting numbers into a formula and then working out the arithmetic – is boring work. Important but boring. The interesting work is in coming up with the formula, in asking ourselves, “If you had to take all the available stats out there, what would your formula use? Points? Steals? Turnovers? Playing time? Shoe size? How will you assemble those in a formula?” Realizing you need to subtract turnovers from points instead of adding them is the interesting work. Actually doing the subtraction isn’t all that interesting.

So using sports as a context for math could surely increase student interest in math but only if the work they’re doing in that context is interesting also.
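To make that distinction concrete, here is a minimal sketch in Python. This is not the NBA’s actual efficiency formula and the stat lines are invented; it only shows how little work is left once the formula-building is done:

```python
# A simplified efficiency-style rating (assumed for illustration, not the
# NBA's official formula): reward points, rebounds, assists, and steals,
# penalize turnovers, and average the total per game.
def efficiency(points, rebounds, assists, steals, turnovers, games):
    return (points + rebounds + assists + steals - turnovers) / games

# "Calculate it for your five favorite players" is just plugging in numbers.
# These stats are made up.
print(efficiency(points=2280, rebounds=590, assists=480, steals=110,
                 turnovers=250, games=76))
```

Every interesting decision, like which stats to include, which to subtract, and whether to divide by games played, happens before that function is ever typed.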

Featured Email

Marcia Weinhold:

After my AP stats exam, I had my students come up with their own project to program into their TI-83 calculators. The only one I remember is the student who did what you suggest — some kind of sports formula for ranking. I remember it because he was so into it, and his classmates got into it, too, but I hardly knew what they were talking about.

He had good enough explanations for everything he put into the formula, and he ranked some well known players by his formula and everyone agreed with it. But it was building the formula that hooked him, and then he had his calculator crank out the numbers.

“I would eat the extra meatball.”

Simon Terrell recaps his lesson study trip to Japan with Akihiko Takahashi, who was the subject of Elizabeth Green’s article on American math education last week:

In one case, a teacher was teaching a lesson about division with remainders and the example was packaging meatballs in packs of 4. When faced with the problem of having 13 meatballs and needing 4 per pack, one student’s solution was “I would eat the extra meatball and then they would all fit.” It was so funny and joyful to see that all thinking was welcomed and the teacher artfully led them to the general thinking that she wanted by the end of the lesson.

I can trace my development as a teacher through the different reactions I would have had to “I would eat the extra meatball,” from panic through irritation to some kind of bemusement.

BTW. The comments here have been on another level lately, team, including Simon’s, so thanks for that. I’ve lifted a bunch of them into the main posts of Rand Paul Fixes Calculus and These Tragic “Write An Expression” Problems.

“Think About Your Favorite Problem From A Unit”

Bob Lochel, responding to commenter Jenni who wondered how, when, and where to integrate tasks into a unit:

In my years as math coach, the most efficient piece of advice I would give to teachers is this: think about your favorite problem from a unit, the problem you look forward to, or that problem which is number 158 in the last section which you know will generate all kinds of discussion. Without fail, this problem is often done last, as the summary of all ideas in the unit. Okay, why not do it first? Keep it simmering in the background, flesh it out as ideas are developed and practice occurs. It often doesn’t take a sledgehammer to make a good unit great.

Pennies, Pearson, And The Mistakes You Never See Coming

I took machine-graded learning to task earlier this week for obscuring interesting student misconceptions. Kristen DiCerbo at Pearson’s Research and Innovation Network picked up my post and argued I was too pessimistic about machine-graded systems, posing this scenario:

Students in the class are sitting at individual computers working through a game that introduces basic algebra concepts. Ms. Reynolds looks at the alert on her tablet and sees four students with the “letters misconception” sign. She taps “work sample” and the tablet brings up their work on a problem. She notes that all four seem to be thinking that there are rules for determining which number a letter stands for in an algebraic expression. She taps the four of them on the shoulder and brings them over to a small table while bringing up a discussion prompt. She proceeds to walk them through a discussion of examples that lead them to conclude the values of the letters change across problems and are not determined by rules like “c = 3 because c is the third letter of the alphabet.”

My guess is we’re decades, not years, away from this kind of classroom. If it’s possible at all. Three items in this scenario seem implausible:

  • That four students in a classroom might assume “c = 3 because c is the third letter of the alphabet.” I taught Algebra for six years and never saw this conception of variables. (Okay, this isn’t a big deal.)
  • That a teacher has the mental bandwidth to manage a classroom of thirty students and keep an eye on her iPad’s Misconception Monitor. Not long ago I begged people on Twitter to tell me how they were using learning dashboards in the classroom. Everyone said the dashboards were too demanding to monitor during class; they used them at home for planning purposes instead. This isn’t because teachers are incapable but because the job demands too much attention.
  • That the machine grading is that good. The system DiCerbo proposes is scanning and analyzing handwritten student work in real time, weighing it against a database of misconceptions, and pairing the matches with a scripted discussion. Like I said: decades, if ever.

This also means you have to anticipate all the misconceptions in advance, which is tough under the best of circumstances. Take Pennies. Even though I’ve taught it several times, I still couldn’t anticipate all the interesting misconceptions.

The Desmos crew and I had students using smaller circles full of pennies to predict how many pennies fit in a 22-inch circle.


But I can see now we messed that up. We sent students straight from filling circles with pennies to plotting them and fitting a graph. We closed off some very interesting right and wrong ways to think about those circles of pennies.

Some examples from reader Karlene Steelman via e-mail:

They tried finding a pattern with the smaller circles that were given, they added up the 1-inch circle 22 times, they combined the 6, 5, 4, 3, 2, 1, and 1 circles to equal 22 inches, they figured out the area of several circles and set up proportions between the area and the number of pennies, etc. It was wonderful for them to discuss the merits and drawbacks of the different methods.

Adding the 1-inch circle 22 times! I never saw that coming. Our system closed off that path before students had the chance even to express their preference for it.
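For the curious, here is a rough sketch of how two of those student methods diverge. The penny counts below are invented, not data from the lesson; the point is that adding up the 1-inch circle 22 times scales with the diameter, while a proportion on area scales with the diameter squared:

```python
# Invented starting count: suppose the 1-inch circle held a single penny.
pennies_in_1_inch_circle = 1
target_diameter = 22  # inches

# Method 1: "add up the 1-inch circle 22 times" -- grows linearly with diameter.
linear_estimate = pennies_in_1_inch_circle * target_diameter            # 22

# Method 2: a proportion on area -- pennies cover area, which grows with the
# square of the diameter (the pi/4 factor cancels out of the ratio).
area_estimate = pennies_in_1_inch_circle * (target_diameter / 1) ** 2   # 484

print(linear_estimate, area_estimate)
```

Neither estimate is exactly right, since packing and edge effects matter, which is exactly the merits-and-drawbacks conversation Karlene describes.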

So everyone has a different, difficult job to do here, with different criteria for success. The measure of the machine-graded system is whether it makes those student ideas invisible or visible. The measure of the teacher is whether she knows what to do with them or not. Only the teacher’s job is possible now.

Featured Comments

Sue Hellman:

This doesn’t even touch the students who get questions RIGHT for the wrong reasons.

Dave Major:

Dashboards of the traditional ‘spawn of Satan & Clippy the Excel assistant’ sort throw way too much extremely specific information straight to the surface for my liking (and brain). That information is almost always things that are easy for machines (read: programmers) to work out, and likely hard or time-consuming yet dubiously useful for humans to do. I wonder how many teachers, when frozen in time mid-lesson and placed in the brain deli slicer, would be thinking “Jimmy has 89% of this task correct and Sally has only highlighted four sentences on this page.”

[Mailbag] Direct Instruction V. Inquiry Learning, Round Eleventy Million

Let me highlight another conversation from the comments, this time between Kevin Hall, Don Byrd, and myself, on the merits of direct instruction, worked examples, inquiry learning, and some blend of the three.

Some biography: Kevin Hall is a teacher as well as a student of cognitive psychology research. His questions and criticisms around here tend to tug me in a useful direction, away from the motivational factors that usually obsess me and closer towards cognitive concerns. The fact that both he and Don Byrd have some experience in the classroom keeps them from the worst excesses of cognitive science, which is to see cognition as completely divorced from motivation and the classroom as different only by degrees from a research laboratory.

Kevin Hall:

While people tend to debate which is better, inquiry learning or direct instruction, the research says sometimes it’s one and sometimes the other. A recent meta-study found that inquiry is on average better, but only when “enhanced” to provide students with assistance [1]. Worked examples actually can be one such form of assistance (e.g., showing examples and prompting students for explanations of why each step was taken).

One difficulty with just discussing this topic is that people tend to disagree about what constitutes inquiry-based learning. I heard David Klahr, a main researcher in this field, speak at a conference once, and he said lots of people considered his “direct instruction” conditions to be inquiry. He wished he had just labelled his conditions as Condition 1, 2, and 3 because it would have avoided lots of controversy.

Here’s where Cognitive Load Theory comes in: effectiveness with inquiry (minimal guidance) depends on the net impact of at least 3 competing factors: (a) motivation, (b) the generation effect, and (c) working memory limitations. Regarding (a), Dan often makes the good point that if teachers use worked examples in a boring way, learning will be poor even if students’ cognitive needs are being met very well.

The generation effect says that you remember better the facts, names, rules, etc. that you are asked to come up with on your own. It can be very difficult to control for this effect in a study, mainly because it’s always possible that if you let students come up with their own explanations in one group while providing explanations to a control group, the groups will be exposed to different explanations, and then you’re testing the quality of the explanations and not the generation effect itself. However, a pretty brilliant (in my opinion) study controlled for this and verified the effect [2]. We need more studies to confirm. Here is a really important paragraph from the second page of the paper: “Because examples are often addressed in Cognitive Load Theory (Paas, Renkl, & Sweller, 2003), it is worth a moment to discuss the theory’s predictions. The theory defines three types of cognitive load: intrinsic cognitive load is due to the content itself; extraneous cognitive load is due to the instruction and harms learning; germane cognitive load is due to the instruction and helps learning. Renkl and Atkinson (2003) note that self-explaining increases measurable cognitive load and also increases learning, so it must be a source of germane cognitive load. This is consistent with both of our hypotheses. The Coverage hypothesis suggests that the students are attending to more content, and this extra content increases both load and learning. The Generation hypothesis suggests that load and learning are higher when generating content than when comprehending it. In short, Cognitive Load Theory is consistent with both hypotheses and does not help us discriminate between them.”

Factor (c) is working memory load. The main idea is found in this quote from the Sweller paper Dan linked to above, Why Minimal Guidance During Instruction Does Not Work [3]: “Inquiry-based instruction requires the learner to search a problem space for problem-relevant information. All problem-based searching makes heavy demands on working memory. Furthermore, that working memory load does not contribute to the accumulation of knowledge in long-term memory because while working memory is being used to search for problem solutions, it is not available and cannot be used to learn.” The key here is that when your working memory is being used to figure something out, it’s not actually being used to learn it. Even after figuring it out, the student may not be quite sure what they figured out and may not be able to repeat it.

Does this mean asking students to figure stuff out for themselves is a bad idea? No. But it does mean you have to pay attention to working memory limitations by giving students lots of drill practice applying a concept right after they discover it. If you don’t give the drill practice after inquiry, students do worse than if you just provided direct instruction. If you do provide the drill practice, they do better than with direct instruction. This is not a firmly-established result in the literature, but it’s what the data seems to show right now. I’ve linked below to a classroom study [4] and a really rigorously-controlled lab study [5] showing this. They’re both pretty fascinating reads… though the “methods” section of [5] can be a little tedious, the first and last parts are pretty cool. The title of [5] sums it up: “Practice Enables Successful Learning Under Minimal Guidance.” The draft version of that paper was actually subtitled “Drill and kill makes discovery learning a success”!

As I mentioned in the other thread Dan linked to, worked examples have been shown in year-long classroom studies to speed up student learning dramatically. See the section called “Recent Research on Worked Examples in Tutored Problem Solving” in [6]. This result is not provisional, but is one of the best-established results in the learning sciences.

So, in summary, the answer to whether to use inquiry learning is not “yes” or “no”, and people shouldn’t divide into camps based on ideology. The still-unanswered question is when to be “less helpful”, as Dan’s motto says, and when to be more helpful.

One of the best researchers in the area is Ken Koedinger, who calls this the Assistance Dilemma and discusses it in this article [7]. His synthesis of his and others’ work on the question seems to say that more complex concepts benefit from inquiry-type methods, but simple rules and skills are better learned from direct instruction [8]. See especially the chart on p. 780 of [8]. There may also be an expertise reversal effect in which support that benefits novice learners of a skill actually ends up being detrimental for students with greater proficiency in that skill.

Okay, before I go, one caveat: I’m just a math teacher in Northern Virginia, so while I follow this literature avidly, I’m not as expert as an actual scientist in this field. Perhaps we could invite some real experts to chime in?

Dan Meyer:

Thanks a mil, Kevin. While we’re digesting this, if you get a free second, I’d appreciate hearing how your understanding of this CLT research informs your teaching.

Kevin Hall:

The short version is that CLT research has made me faster in teaching skills, because cognitive principles like worked examples, spacing, and the testing effect do work. For a summary of the principles, see this link.

But it’s also made me persistent in trying 3-Acts and other creative methods, because it gives me more levers to adjust if students seem engaged but the learning doesn’t seem to “stick”.

Here’s a depressing example from my own classroom:

Two years ago I was videotaping my lessons for my master’s thesis on Accountable Talk, a discourse technique. I needed to kick off the topic of inverse functions, and I thought I had a good plan. I wrote down the formula A = s^2 for the area of a square and asked students what the “inverse” of that might mean (just intuitively, before we had actually defined what an inverse function is). Student opinions converged on S = SqRt(A). I had a few students summarize and paraphrase, making sure they specifically hit on the concept of switching input and output, and everyone seemed to be on board. We even did an analogous problem on whiteboards, which most students got correct. Then I switched the representations and drew the point (2, 4) on a coordinate plane. I said, “This is a function. What would its inverse be?” I expected it to be easy, but it was surprisingly difficult. Most students thought it would be (-2, -4) or (2, -4), because inverse meant ‘opposite’. Eventually a student, James (not his real name), explained that it would be (4, 2) because that represents switching inputs and outputs. Eventually everyone agreed. Multiple students paraphrased and summarized, and I thought things were good.

Class ended, but I felt good. The next class, I put up a similar problem to restart the conversation. If a function is given by the point (3, 7), what’s the inverse of that function? Dead silence for a while. Then one student (the top student in the class) piped up: “I don’t remember the answer, but I remember that this is where James ‘schooled’ us last class.” Watching the video of that as I wrote up my thesis was pretty tough.

But at least I had something to fall back on. I decided it was a case of too much cognitive load–they were processing the first discussion as we were having it, but they didn’t have the additional working memory needed to consolidate it. If I had attended to cognitive needs better, the question about (2, 4) would have been easier, and I should NOT have switched representations from equations to points until it seemed like the switch would be a piece of cake.

I also think knowing the CLT research has made me realize how much more work I need to do to spiral in my classroom.
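As an aside, the mathematics in Kevin’s anecdote is small but worth pinning down: an inverse swaps a function’s input and output. A quick sketch of that idea (mine, not Kevin’s):

```python
import math

def area(side):
    """A = s^2: the square-area function written on the board."""
    return side ** 2

def side_from_area(a):
    """The inverse the class built: s = sqrt(A), with input and output swapped."""
    return math.sqrt(a)

# As a point on a graph: area(2) == 4 puts (2, 4) on the function,
# so the inverse contains (4, 2), not (-2, -4) or (2, -4).
print(area(2))            # 4
print(side_from_area(4))  # 2.0
```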

Then in another thread on adaptive math programs:

Kevin Hall:

My intention was to respond to your critique that a computer can’t figure out what mistake you’re making, because it only checks your final answer. Programs with inner-loop adaptivity do, in fact, check each step of your work. Before too long, I think they might even be better than a teacher at helping individual students identify their mistakes and correct them, because as a teacher I can’t even sit with each student for 5 min per day.

Don Byrd:

I have only a modest amount of experience as a math teacher; I lasted less than two years — less than one year, if you exclude student teaching — before scurrying back to academic informatics/software research. But I scurried back with a deep interest in math education, and my academic work has always been close to the boundary between engineering and cognitive science. Anyway, I think Kevin H. is way too optimistic about the promise of computer-based individualized instruction. He says “It seems to me that if IBM can make Watson win Jeopardy, then effective personalization is also possible.” Possible, yes, but as Dan says, the computer “struggles to capture conceptual nuance.” Success at Jeopardy simply requires coming up with a series of facts; that’s highly data based and procedural. The distance from winning Jeopardy to “capturing conceptual nuance” is much, much greater than the distance from adding 2 and 2 to winning Jeopardy.

Kevin also says that “before too long, [programs with inner-loop adaptivity] might even be better than a teacher at helping individual students identify their mistakes and correct them, because as a teacher I can’t even sit with each student for 5 min per day.” I’d say it’s likely programs might be better than teachers at that “before too long” only if you think of “identifying a mistake” as telling Joanie that in _this_ step, she didn’t convert a decimal to a fraction correctly. It’ll be a very long time before a computer will be able to say why she made that mistake, and thereby help her correct her thinking.

2013 Aug 14. Christian Bokhove passes along an interesting link summarizing criticisms of CLT.