So you have here a fairly straightforward carnival estimation game, which I decided to complicate by filling up a smaller container with the same kind of (horrid) malted eggs and making that quantity known.
I surveyed my students, my math-department colleagues, some of their students, my principal, and the central office staff. A little over 100 guesses all told. I tagged each guess with the following metadata:
- guess type (gut check, visual estimate, math computation),
- job description (student, math teacher, staff member, principal),
- current math class (e.g., Algebra 1, Geometry, AP Calculus),
- grade level (freshman, sophomore, junior, senior).
I showed my students the raw data and asked them what they wanted to know. I wrote their questions on the board.
- who won?
- who guessed worst?
- what was the ranking of everyone in between?
- what type of people used math computation for their guesses?
- were there any tied guesses?
- what was the highest/lowest guess?
- which grade level guessed the most?
- which grade level guessed the best?
I said I was offering a "bounty" for answers to those questions and asked them to define the term. Some kids had seen Dog the Bounty Hunter and explained it from that angle. I assigned each question a point value that corresponded roughly to a) the difficulty of the question and b) its relevance to my objective — how are absolute value and percent error useful for calculating accuracy? I offered 20 points for a picture of an interesting fact. (See "Interesting Pictures" below.)
They had to scrape together 100 points for the day and I offered extra credit for initiative, divergent thinking, etc.
Students worked in pairs on laptops. They downloaded an Excel sheet with all this data, including the real name of every guesser. Naturally, they were into that.
The great part about a sample size of one hundred guesses is how easy it was to tell which groups were taking a tedious, manual approach to these questions and which were using Excel's built-in capabilities for sorting and calculating. I circulated the classroom and knew a group was ready to learn more about Excel when I saw them using hash marks to count up every freshman, sophomore, junior, and senior. Those students were wandering the desert on foot, ready for the water, compass, and camels I could offer them.
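The hash-mark tally is a one-formula job in Excel (a COUNTIF per grade level). Here is a minimal sketch of the same count in Python, using an invented handful of rows in place of the real spreadsheet column:

```python
from collections import Counter

# Hypothetical sample of the grade-level column; the real
# spreadsheet had a little over 100 rows.
grades = ["freshman", "senior", "junior", "freshman", "sophomore",
          "senior", "freshman", "junior", "senior", "sophomore"]

# One pass replaces all the hash marks -- the spreadsheet
# equivalent is =COUNTIF(range, "freshman") and so on.
counts = Counter(grades)
print(counts["freshman"])  # 3
```

The point isn't the tool; it's that the tool counts all four grade levels in one pass, however many rows you add.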
Likewise, I saw another group of students subtracting all one hundred guesses from the actual answer (1831) one at a time on cell phones. It didn't take much to convince them to experiment with another approach.
My favorite conversations with students centered around a definition of "accuracy," as in, "who were the top ten most accurate guessers?" Our earlier trick of just subtracting the guesses from the answer messed with Excel's sort mechanism, unhelpfully stacking positives on top of negatives, when, really, we didn't care if you guessed 100 eggs too high or 100 eggs too low. For our purposes, those two people tied.¹
Two students were so close to constructing that operation themselves I had to bite off my tongue to keep from spelling the whole thing out ("ABSOLUTE VALUE! SUBTRACT AND THEN TAKE THE ABSOLUTE VALUE!!") and then the bell rang. We didn't graph anything. We didn't get to percent error. Half the groups got to absolute value.
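The ranking those groups were circling looks like this in miniature. This is a Python sketch with invented guessers and guesses (only the actual count, 1831, comes from the post), showing both the absolute-value step half the groups reached and the percent-error step we never got to:

```python
ACTUAL = 1831  # the actual egg count

# A handful of invented guesses; the real sheet had ~100.
guesses = {"Ana": 1931, "Ben": 1731, "Cam": 2500, "Dee": 1800}

# Sorting on the signed difference stacks positives on top of
# negatives; absolute value treats 100 too high and 100 too low
# as the tie they ought to be.
abs_error = {name: abs(g - ACTUAL) for name, g in guesses.items()}

# Percent error: absolute error as a fraction of the actual count,
# which lets you compare accuracy across different containers.
pct_error = {name: 100 * e / ACTUAL for name, e in abs_error.items()}

ranking = sorted(abs_error, key=abs_error.get)
print(ranking)  # ['Dee', 'Ana', 'Ben', 'Cam'] -- Ana and Ben tie at 100
```

In Excel the same move is `=ABS(guess - 1831)` filled down the column, then a sort on that column.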
Off moments like this, I have determined my constructivism multiplier to be four, which is to say it takes me four times longer to bring a student to conceptual understanding through conversation and questioning in a social situation the student helped create than it does to get up in front of the class and simply give it to them straight, no chaser, through direct instruction and a handout of questions I wrote.
What I find maddening about conversations with committed constructivists (cf. the conversation here) is the reflexive assumption that educators choose direct instruction because they're either power-drunk or self-obsessed or because they lack faith, courage, or high expectations. I can't, personally, wave so dismissively at the massive institutional impediments to student-constructed learning.
Percent Error By Grade Level
Percent Error By Guess Type
It's worth pointing out here that "Math Computation" isn't the same thing as "Correct Math Computation." The most accurate guessers verified their correct math computation with a visual estimate.
Percent Error By Math Class
Percent Error By Job Description
That last graph is what I meant at TEDx when I said that math gives your intuition a certain vocabulary. The math teachers have a more descriptive vocabulary for expressing their own intuitions than the students do. This is also a fair answer to the question, "when will I ever use math?" You might not. You can live without it. But it makes a lot of intuitive tasks a lot easier. And you should also understand the risk that you'll one day be fleeced by or passed over for those who know how to speak with that vocabulary.
The Creative Feedback Loop Of Teaching
Where else can you get this? In all of the creative fields that have ever tempted me professionally — I'm talking about graphic design, screenwriting, and filmmaking — ideas often take months to generate and refine, years to produce, and, in many cases, you can't do anything with the feedback except hope it's good enough to get you your next job.
With teaching, you can get any old harebrained idea on Friday, challenge your students with it Monday morning, then adapt it for your afternoon class based on feedback from the morning. The feedback loop is fast enough to give you whiplash. It's so much fun, this job, it seems impossible sometimes that anyone could ever walk away from classroom teaching.
The Grand Prize
Not those horrid malted eggs, that's for sure.
¹ This is a good companion exercise, in that sense, to How Old Is Tiger Woods?