**Introduction**

My dissertation will examine the opportunities students have to learn math online. In order to say something about the current state of the art, I decided to complete Khan Academy’s eighth grade year and ask myself two specific questions about every exercise:

- **What am I asked to *do*?** What are my *verbs*? Am I asked to solve, evaluate, calculate, analyze, or something else?
- **What do I *produce*?** What is the end result of my work? Is my work summarized by a number, a multiple-choice response, a graph that I create, or something else?

I examined Khan Academy for several reasons. First, they're well-capitalized and they employ some of the best computer engineers in the world. They have the human resources to create some novel opportunities for students to learn math online. If *they* struggle, it is likely that *other* companies with equal or lesser human resources struggle also. I also examined Khan Academy because their exercise sets are publicly available online, without a login. This will energize our discussion here and make it easier for you to spot-check my analysis.

My data collection took me three days and spanned 88 practice sets. You’re welcome to examine my data and critique my coding. In general, Khan Academy practice sets ask that you complete a certain number of exercises in a row before you’re allowed to move on. (Five, in most cases.) These exercises are randomly selected from a pool of item types. Different item types ask for different student work. Some item types ask for *multiple* kinds of student work. All of this is to say, you might conduct this exact same analysis and walk away with slightly different findings. I’ll present only the findings that I suspect will generalize.

After completing my analysis of Khan Academy’s exercises, I performed the same analysis on a set of 24 released questions from the Smarter Balanced Assessment Consortium’s test that will be administered this school year in 17 states.

**Findings & Discussion**

**Khan Academy’s Verbs**

The largest casualty is argumentation. Out of the 402 exercises I completed, I could code only three of their prompts as “argue.” (You can find all of them in “Pythagorean Theorem Proofs.”) This is far out of alignment with the Common Core State Standards, which prioritize constructing and critiquing arguments as one of the eight practice standards that cross all of K-12 mathematics.

Notably, 40% of Khan Academy’s eighth-grade exercises ask students to “calculate” or “solve.” These are important mathematical actions, certainly. But as with “argumentation,” I’ll demonstrate later that this emphasis is out of alignment with current national expectations for student math learning.

The most technologically advanced items were the 20% of Khan Academy’s exercises that asked students to “construct” an object. In these items, students were asked to create lines, tables, scatterplots, polygons, angles, and other mathematical structures using novel digital tools. Subjectively, these items were a welcome reprieve from the frequent calculating and solving, nearly all of which I performed with either my computer’s calculator or with Wolfram Alpha. (Also subjective: my favorite exercise asked me to construct a line.) These items also appeared frequently in the Geometry strand where students were asked to transform polygons.

I was interested to find that the most common student action in Khan Academy’s eighth-grade year is “analyze.” Several examples follow.

- Instead of just asking for the solution of a system of linear equations, for instance, Khan Academy asks the student to analyze how many solutions the system would have.
- Instead of just graphing a function, Khan Academy asks the student to draw conclusions from the graph of a function.
- Instead of just asking students to create a table, Khan Academy presents the table and asks students to draw conclusions.

**Khan Academy’s Productions**

These questions of analysis are welcome but the end result of analysis can take many forms. If you think about instances in your life when you were asked to analyze, you might recall reports you’ve written or verbal summaries you’ve delivered. In Khan Academy, 92% of the analysis questions ended in a multiple-choice response. These multiple-choice items took different forms. In some cases, you could make only one choice. In others, you could make multiple choices. Regardless, we should ask ourselves if such structured responses are the most appropriate assessment of a student’s power of analysis.

Broadening our focus from the “analysis” items to the entire set of exercises reveals that 74% of the work students do in the eighth grade of Khan Academy results in either a number or a multiple-choice response. No other pair of outcomes comes close.

Perhaps the biggest loss here is the fact that I constructed an equation exactly three times throughout my eighth grade year in Khan Academy. Here is one:

This is troubling. In the sixth grade, students studying the Common Core State Standards make the transition from “Number and Operations” to “Expressions and Equations.” By ninth grade, the CCSS will ask those students to use equations in earnest, particularly in the Algebra, Functions, and Modeling domains. Students need preparation *solving* equations, of course, but if they haven’t spent ample time *constructing* equations also, those advanced domains will be inaccessible.

**Smarter Balanced Verbs**

The Smarter Balanced released items include comparatively fewer “calculate” and “solve” items (they’re the least common verbs, in fact) and comparatively more “construct,” “analyze,” and “argue” items.

This lack of alignment is troubling. If one of Khan Academy’s goals is to prepare students for success in Common Core mathematics, they’re emphasizing the wrong set of skills.

**Smarter Balanced Productions**

Multiple-choice responses are also common in the Smarter Balanced assessment but the distribution of item types is broader. Students are asked to produce lots of different mathematical outputs including number lines, non-linear function graphs, probability spinners, corrections of student work, and other productions students won’t have seen in their work in Khan Academy.

SBAC also allows for the production of free-response text while Khan Academy doesn’t. When SBAC asks students to “argue,” in a majority of cases, students express their answer by just *writing* an argument.

This is quite unlike Khan Academy’s three “argue” prompts which produced either a) a multiple-choice response or b) the re-arrangement of the statements and reasons in a pre-filled two-column proof.

**Limitations & Future Directions & Conclusion**

This brief analysis has revealed that Khan Academy students are doing two primary kinds of work (analysis and calculating) and they’re expressing that work in two primary ways (as multiple-choice responses and as numbers). Meanwhile, the SBAC assessment of the CCSS emphasizes a different set of work and asks for more diverse expression of that work.

This is an important finding, if somewhat blunt. A much more comprehensive item analysis would be necessary to determine the nuanced and important differences between two problems that this analysis codes identically. Two separate “solving” problems that result in “a number,” for example, might be of very different value to a student depending on the equations being solved and whether or not a context was involved. This analysis is blind to those differences.

We should wonder why Khan Academy emphasizes this particular work. I have no inside knowledge of Khan Academy’s operations or vision. It’s possible this kind of work is a *perfect* realization of their vision for math education. Perhaps they are doing exactly what they set out to do.

I find it more likely that Khan Academy’s exercise set draws an accurate map of the strengths and weaknesses of education technology in 2014. Khan Academy asks students to solve and calculate so frequently, not because those are the mathematical actions mathematicians and math teachers value most, but because those problems are easy to assign with a computer in 2014. Khan Academy asks students to submit their work as a number or a multiple-choice response, not because those are the mathematical outputs mathematicians and math teachers value most, but because numbers and multiple-choice responses are easy for computers to grade in 2014.

This makes the limitations of Khan Academy’s exercises understandable but not excusable. Khan Academy is falling short of the goal of preparing students for success on assessments of the CCSS, but that’s setting the bar low. There are arguably other, more important goals than success on a standardized test. We’d like students to enjoy math class, to become flexible thinkers and capable future workers, to develop healthy conceptions of themselves as learners, and to look ahead to their *next* year of math class with something other than dread. Will instruction composed principally of selecting from multiple-choice responses and filling numbers into blanks achieve those goals? If your answer is no, as mine is – if that narrative sounds exceedingly grim to you also – it is up to you and me to pose a compelling counter-narrative for online math education, and then re-pose it over and over again.

How many gifts did your true love receive on each day? If the song was titled “The Twenty-Five Days of Christmas,” how many gifts would your true love receive on the twenty-fifth day? How many total gifts did she or he receive on the first two days? The first three days? The first four days? How many gifts did she or he receive on all twelve days?

“The X Days of Christmas.” I like it.
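For anyone who wants to check the arithmetic, the gift counts reduce to triangular and tetrahedral numbers, assuming the standard reading of the song (on day n your true love receives one new kind of gift plus a repeat of every earlier day's gifts). A quick sketch:

```python
# Gift counts in "The Twelve Days of Christmas," assuming the standard
# reading: on day n your true love receives 1 + 2 + ... + n gifts.

def gifts_on_day(n):
    # 1 + 2 + ... + n, the nth triangular number
    return n * (n + 1) // 2

def gifts_through_day(n):
    # running total of the triangular numbers, the nth tetrahedral number
    return n * (n + 1) * (n + 2) // 6

print(gifts_on_day(25))       # day 25 of a "Twenty-Five Days" version: 325
print([gifts_through_day(d) for d in (2, 3, 4)])   # first 2, 3, 4 days: 4, 10, 20
print(gifts_through_day(12))  # all twelve days: 364
```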

All of that makes *your* blogging more useful to me than ever. Please keep posting your interesting classroom anecdotes.

Here are all the blogs I subscribed to during November 2014:

- It’s my loss that I only just now found **Cristina Milos’** excellent and evenhanded blogging on mathematics pedagogy and research. She blogs from the UK and tangles with educators across philosophical lines. “How to Argue with A Traditionalist – Ten Commandments” is one of her less evenhanded posts.
- **Zach Cresswell** wrote a great post about embodied cognition and the concept of a function – kids dancing around according to a graph.
- **Kevin Davis** asked for a shout-out for his new blog. All signs point to a blog about the flipped math classroom, which is a project – no offense, Kevin – I struggle to get excited about. In the first entry, Kevin assigns a video his students don’t watch. I’m curious what he does next.
- **Taylor Williams** is a statistics teacher who also knows how to program interesting computer simulations for his students. More, please.
- **Sandra Corbacioglu** is a former engineer turned math teacher in a 1:1 school. She also documents her practice with lots of pictures, so we’re all in luck. I see she also has excellent taste in graphing calculator technology.
- The **C. Kilbane** tag cloud would include #education, #design, and #making, with posts about 3D printing and video editing. So it would be awesome if he posted more.
- **Zach Coverstone** regularly blogs short, insightful posts about secondary math, recently asking What Makes An Engaging Task?
- **Ve Anusic** has exactly one post, but I think it indicates pretty good taste.
- **Quadrant Dan** is an older subscription but I bumped him onto my blogroll this month. Essential, fun reading.

Coral Connor’s students created 3D chalk charts to demonstrate their understanding of trig functions:

As a showcase entry we spent several lessons developing the Maths of perspective drawings of representations of comparisons between Australia and the mission countries- income, death rates, life expectancy etc, and finished by creating chalk drawings around the school for all to see.

Malke Rosenfeld assigned the Hundred-Face Challenge – make a face using Cuisenaire Rods that add up to 100 – and you should really click through to her gallery of student work:

Some kids just made awesome faces. Me: “Hmmm…that looks like it’s more than 100. What are you going to do?” Kid: “I guess we’ll take off the hair.”

One of my favorite aspects of Bob Lochel’s statistics blogging is how cannily he turns his students into interesting data sets for their own analysis:

Both classes gave me strange looks. But with instructions to answer as best they could, the students played along and provided data. Did you note the subtle differences between the two question sets?

Jonathan Claydon shows us how to cobble together a document camera using nothing more than a top of the line Mac and iPad.

Every lesson should begin by getting [students] to articulate something about what they already understand or know about something or their initial ideas. So you try and uncover where they’re starting from and make that explicit. And then when they start working on an activity, you try to confront them with things that really make them stop.

And it might be that you can do this by sitting kids together if they’ve got opposing points of views. So you get conflict between students as well as within. So you get the conflict which comes within, when you say, “I believe *this*, but I get *that* and they don’t agree.” Or you get conflict between students when they just have fundamental disagreements, when there’s a really nice mathematical argument going on. And they really do want to know and have it resolved. And the teacher’s role is to try to build a bit of tension, if you like, to try and get them to reason their way through it.

And I find the more students reason and engage like that then they can get quite emotional. But when they get through it, they remember the stuff really well. So it’s worth it.

**New Blog Subscriptions**

- I met Nicholas Patey at a workshop in San Bernardino. He wrote up a summary of some of our work that made him seem like a solid addition to my network.
- I added Amy Roediger to my blogroll (my short list of must-reads) because more than most bloggers I read she has an intuitive sense of how to create a cognitive conflict in a class. (See: two sets of ten pennies that weigh different amounts. WHAT?!)
- I subscribed to Dani Quinn. My subscription list skews heavily towards North American males and she helps shake me out of both bubbles. She also wrote a post about her motivations for teaching math I found resonant.
- In her most recent post, Leslie Myint wrote, “Apathy is the cancer of today’s classroom.” Subscribed.

**New Twitter Follows**

- I met Chris Duran in Palm Springs. Liked his vibe.
- Leah Temes plunked herself down at my empty breakfast table in Portland last month and started saying interesting things. Then she told me I should follow her on Twitter with the promise of more interesting things there. With only two tweets in the last week, though, I’m getting antsy.
- I subscribed to Peg Cagle because she understands the concerns of Internet-enabled math teachers and she also understands the politics that concern the NCTM board of directors.

**Press Clippings**

- I was interviewed for the New York Daily News about PhotoMath, which at one point in Fall 2014 was going to be the end of math teaching.
- An interview with some kind of education-related Spanish-language blog.

**ICYMI**

- My favorite post of the month was Raymond Johnson’s analysis of NCTM’s difficulties adapting to the present day.
- John Golden crowdsourced a list of free curricula.
- Michael Pershan hosted an open comments thread where he had a conversation with himself about the difficulties of carving out a *career* as a classroom teacher.
- Tim McCaffrey set up Agree or Disagree, which I hope will produce some interesting fight-starters.
- Kyle Pearce created the most interesting iBook I have ever seen for math class. It overclocks all the built-in features (video embeds, etc.) and then goes over the top, including collaborative student data displays. Awesome. Not easy.

Presenters’ sessions were rated on three statements:

- The speaker was well-prepared and knowledgeable.
- The speaker was an engaging and effective presenter.
- The session matched the title and description in the program book.

Attendees scored each statement on a scale from 0 (disagree!) to 3 (agree!). Attendees could also leave written feedback.

At the end of the conference, presenters were sent to a public site where they could access not just their own session feedback, but the feedback for other presenters also. This link started circulating on Twitter. I scraped the feedback from every single session and analyzed all the data.

This is my intention:

**To learn what makes sessions unpopular with attendees.** It’s really hard to determine what makes a session good. “Great session!” shows up an awful lot without elaboration. People were much more specific about what they disliked.

This *isn’t* my intention:

**To shame anybody.** Don’t ask me for the data. Personally, I don’t think it should have been released publicly. I hope the conference committee takes the survey results seriously in planning future conferences but I’m not here to judge anybody.

**Overall**

There were 2,972 total feedback messages. 2,615 were correctly formatted. With 3,600 attendees attending a maximum of eight sessions, that’s 28,800 feedback messages that *could have* been sent, for a response rate of about 10%.

**How Much Feedback Did You Get?**

Most presenters received between 10 and 20 feedback messages. One presenter received 64 messages, though that was across several sessions.

**How Did You Do On Each Of The Survey Questions?**

Overall the feedback is incredibly positive. I’m very curious how this distribution compares to other math education conferences. I attend a lot of them and Palm Springs is on the top shelf. California has a deep bench of talented math educators and Palm Springs is a great location which draws in great presenters from around the country. I’d put it on par with the NCTM regionals. Still, this feedback is surprisingly sunny.

The data also seem to indicate that attendees were more likely to critique the presenters’ *speaking skills* (statement #2) than their *qualifications* (statement #1).

**How Did You Do On A Lousy Measure Of Overall Quality?**

For each presenter, I averaged the responses they received for each of the survey questions and then summed those averages. This measure is problematic for loads of reasons, but more useful than useless I think. It runs from 0 to 9.

62 presenters received perfect scores from all their attendees on all measures. 132 more scored above an 8.0. Even granting the lousiness of the measure, it points to a very well-liked set of presenters.

So why didn’t people like your session? The following quotes are all verbatim.

**What People Said When You Weren’t “Well-Prepared Or Knowledgeable.”**

If someone rated you a 0 or a 1 for this statement, it was because:

- he was late and unprepared.
- frustrating that we spent an hr on ppl sharing rationale or venting. I wanted to hear about strategies and high leaveage activities at the school.
- went very fast
- information was scattered.
- A lot of sitting around and not do much.
- This presentation was scattered and seemed thrown together at the last minute.
- Unfortunately, the presenter was not focused. There was no clear objectives. Please reconsider inviting this presenter.

**What People Said When You Weren’t “An Engaging Or Effective Presenter.”**

If someone rated you a 0 or a 1 for this statement, it was because:

You didn’t offer enough practical classroom applications.

- I wanted things students could use.
- philosophy more than application. I prefer things I can go back with. I already get the philosophy.
- very boring. Too much talking. I wanted more in class material.

Your style needs work.

- very dry and ppt was ineffective
- very disappointing and boring
- Arrogant,mean & full of himself
- BORING. BORING. He’s knowledgable, but dry. Not very interactive.
- knowledgeable but hard to hear
- he spoke very quickly and did not model activities. difficult to follow and not described logistically.
- more confused after leaving

Not enough *doing*.

- not as hands-on as I would have hoped
- too much talking from participants and no information or leadership from the presenter. Everyone had to share their story; very annoying.
- I could do without the justification at the beginning and the talking to each other part. I already know why I’m here.
- I would have liked more time to individually solve problems.

Too *much* doing.

- while they had a good energy, this session was more of a work time than learning. It did not teach me how to facilitate activities
- I didn’t think I was going to a math class. I thought we would be teachers and actuall create some tasks or see student work not our work.
- it would be nice to have teachers do less math and show them how you created the tasks you had us do.

**What People Said When Your Session “Didn’t Match The Title And Description In The Program Book.”**

You were selling a product. (I looked up all of these session descriptions. None of them disclosed the commercial nature of their sessions.)

- The fact that it was a sales pitch should have been more evident.
- only selling ti’s topics not covered
- a sales pitch not something useful to take back to my classroom
- Good product but I was looking more for ideas that I can use without purchasing a product.
- I was hoping for some ideas to help my kids with fact fluency, not a sales pitch.
- didn’t realize it was selling a product
- this is was nothing more than a sales pitch. Disappointed that I wasted a session!
- More like a sales pitch for Inspire
- I would not have gone to this session if I had known it required graphing calculators.

You claimed the wrong grade band.

- good for college methods course not for math conference
- not very appropriate for 6,7 grade.
- disappointed as it was too specific and unique to the HS presented.
- Didn’t really match the title and it should have been directed to middle school only.
- This was not as good for elementary even though descript. said PreK-C / a little was relevant but my time would have been better used in another
- the session was set 2-6 and was presented at grades k-5.
- this was not a great session for upper elementary grade and felt more appropriate for 7-12.

You didn’t connect your talk closely enough to the CCSS.

- not related to common core at all. Disappointing
- unrelated to ccss

You decided to run a technology session, which is impossible at math ed conferences because half the crowd already knows what you’re talking about and is bored and half the crowd doesn’t know what you’re talking about and is overwhelmed.

- gave a few apps but talked mostly about how to teach instead of how to use apps or what apps would be beneficial
- good tutorial for a newbie or first time Geogebra user but I already knew how to use Geogebra so I found most of this pointless. Offer an advanced
- Good information, but we did not actually learn how to create a Google form. I thought we would be guided through this more. It doesn’t help to
- apps were geared to higher lever math not middle school

**People Whose Sessions Were Said to Be the “Best of the Conference!”**

23-way tie! Perhaps useful for your future conference planning, however.

- Armando Martinez-Cruz
- Bob Sornson
- Brad Fulton
- Cathy Seeley
- Cherlyn Converse
- Chris Shore
- David Chamberlain
- Douglas Tyson
- Eli Luberoff
- Gregory Hammond
- Howard Alcosser
- Kasey Grant
- Kim Sutton
- Larry Bell
- Mark Goldstein
- Michael Fenton
- Monica Acosta
- Nate Goza
- Patrick Kimani
- Rachel Lasek
- Scott Bricker
- Vik Hovsepian
- Yours Truly

**Conclusion**

The revelations about technology and hands-on math work interest me most.

In my sessions, I like to *do* math with participants and then *talk* about the math we did. Too much *doing*, however, and participants seem to wonder why the person at the front of the room is even there. That’s a tricky line to locate.

I would also like to present on the technology I use that makes teaching and learning more fun for me and students. But it seems rather difficult to create a presentation that differentiates for the range of abilities we find at these conferences.

The session feedback here has been extremely valuable for my development as a presenter and I only hope the conference committee finds it equally useful for their purposes. Conference committee chair Brian Shay told me via email, “Historically, we use the data and comments to guide our decision-making process. If we see speakers with low reviews, we don’t always accept their proposal for next year.”

Great, if true. Given the skew of the feedback towards “SUPER LIKE!” it seems the feedback would be most useful for scrutinizing poor presenters, not locating great ones. The strongest negative feedback I found was in reaction to unprepared presenters and presentations that were covertly commercial.

CMC has the chance to initiate a positive feedback loop here by taking these data seriously in their choices for the 2015 conference and making sure attendees know their feedback counted. More and more thoughtful feedback from attendees will result.

**Full Disclosure**

I forgot to tell the attendees at my session my PollEverywhere code. Some people still sent in reviews. My practice is to post my most recent twelve months of speaking feedback publicly.

**2014 Nov 4**. It seems I’ve analyzed an incomplete data set. The JSON file I downloaded for one presenter (and likely others) contains fewer responses than he actually received. I don’t have any reason to believe there is systematic bias to which responses were included and excluded but it’s worth mentioning this caveat.

Sam Shah’s blog has been a veritable teaching clinic the last two weeks, more than filling his own installment of Great Classroom Action.

With Attacks and Counterattacks, Sam asked his students to define common shapes as best as they could – triangle, polygon, and circle, for instance. They traded definitions with each other and tried to poke holes in those definitions.

When the counter-attacks were presented, it was interesting how the discussions unfolded. The original group often wanted to defend their definition, and state why the counter-attack was incorrect.

Trade the definitions back, strengthen them, and repeat.

Sam created some very useful scaffolds for the very CCSS-y question, “If you have a shape and its image under a rotation, how can you quickly and easily find its center of rotation?”

This is an awesome exercise (in my humble opinion) because it has kids use patty paper, it has them kinesthetically see the rotation, and it gives them immediate feedback on whether the point they thought was the center of rotation truly is the center of rotation. Simple, sweet, forces some thought.

Sam then pulls a move with a Post-It note that is a stunner, simultaneously useful for clarifying the concept of a variable and for finding the sum of recursive fractions:

Ready? READY? Flip. THAT FLIP IS THE COOLEST THING EVER FOR A MATH TEACHER. That flip was the single thing that made me want to blog about this.

Finally, Sam pulls a masterful move in the setup to his students’ realization that all the perpendicular bisectors of a triangle’s side meet in the same point. He has them first find those lines for pentagons (nothing special revealed) and quadrilaterals (nothing special revealed) before asking them to find them for triangles (something very special revealed).

There were gasps, and one student said, and I quote, “MIND BLOWN.”

I give out 5-6 sets of three dice. I have the students roll them and then add up all the numbers which cannot be seen (bottom, middles and middles). Once they have the sum, they sit back with the dice still stacked and I “read their minds” to get the sum.

So then I shuffled up the little slips of sequences and started saying, B, your sum is 210. C, your sum is 384. D, your sum is 2440. E, your sum is -24. They were astonished!

These moments seem infinitely preferable to just leaping into an explanation of the sums of arithmetic sequences.

Our friends who are concerned with cognitive load should be happy here because students are only accessing long-term memory when we ask them to roll dice, write down some numbers, and add them. It’s easy.
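The stacked-dice trick above comes down to the fact that opposite faces of a standard die sum to 7: the five hidden faces of a three-die stack must total 21 minus the one visible top face. A short simulation (my reading of the setup, not the submitter's exact script):

```python
import random

# Stacked-dice "mind reading": for a stack of n dice, the hidden faces
# (every bottom face plus the tops of the buried dice) sum to 7*n minus
# the single visible top face, because opposite faces of a die sum to 7.

def trick_prediction(n_dice, visible_top):
    return 7 * n_dice - visible_top

def actual_hidden_sum(tops):
    # tops[0] is the top die's upward face -- the only face visible
    hidden = sum(7 - t for t in tops)   # every die's bottom face
    hidden += sum(tops[1:])             # upward faces of the buried dice
    return hidden

random.seed(0)
for _ in range(1000):
    tops = [random.randint(1, 6) for _ in range(3)]
    assert actual_hidden_sum(tops) == trick_prediction(3, tops[0])
```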

Our friends who are concerned that much of math seems *needless* are happy here also. With The Necessity Principle, Harel and his colleagues described five needs that drive much of our learning about mathematics. Kate and Scott are exploiting one of those needs in particular:

The need for causality is the need to explain – to determine a cause of a phenomenon, to understand what makes a phenomenon the way it is.

[…]

The need for causality does not refer to physical causality in some real-world situation being mathematically modeled, but to logical causality (explanation, mechanism) within the mathematics itself.

Here are three more examples where the teacher appears to be a mind-reader, provoking that need for causality. Then I invite you to submit other examples in the comments so we can create a resource here.

**Rotational Symmetry**

Here is a problem from Michael Serra’s *Discovering Geometry*. No need for causality yet:

But at CMC in Palm Springs last weekend, Serra created that need by asking four people to come to the front of the room and hold up enlargements of those playing cards. Then he turned his back and asked someone else to turn one of the cards 180°. Then he played the mind-reader and figured out which card had been turned by exploiting the properties of rotational symmetry.

**Number Theory**

The Flash Mind Reader exploits a numerical relationship to predict which symbol students are thinking about. Prove the relationship.
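For what it's worth, the relationship tricks like this typically exploit (my assumption about this particular one) is that a two-digit number minus its digit sum is always a multiple of 9, since (10a + b) − (a + b) = 9a; the trick only needs to show the same symbol at every multiple of 9. A brute-force check:

```python
# Check: for every two-digit number, n minus its digit sum is a
# multiple of 9, because (10a + b) - (a + b) = 9a.

for n in range(10, 100):
    a, b = divmod(n, 10)
    assert (n - (a + b)) % 9 == 0
print("n - digitsum(n) is always a multiple of 9")
```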

Here is a little trick I like to call calculator magic. You will need a calculator, a 7-digit phone number and an unwitting bystander. Here goes:

- Key in the first three digits of your phone number
- Multiply by 80
- Add 1
- Multiply by 250
- Add the last 4 digits of your phone number
- Add the last 4 digits of your phone number again
- Subtract 250
- Divide the number by 2

Surprise! It is your phone number!
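The trick is straightforward algebra: writing x for the first three digits and y for the last four, the steps compute ((80x + 1) · 250 + 2y − 250) / 2 = (20000x + 2y) / 2 = 10000x + y, which is exactly the seven-digit number. A quick check in code:

```python
# The calculator trick above, step by step, with x = first three digits
# and y = last four digits of the phone number.

def trick(x, y):
    n = x * 80
    n += 1
    n *= 250
    n += y
    n += y
    n -= 250
    return n // 2

assert trick(555, 1234) == 5551234
assert all(trick(x, y) == 10000 * x + y
           for x in (100, 555, 999) for y in (0, 1234, 9999))
```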

A nice trick is this one with dice. A lot of dice. Let’s say 50 or so. You lay them on the ground like a long chain. The upward facing numbers should be completely random. Then you go from the one end to the other following the following rule. Look at the number of the die where you’re at. Take that many steps along the chain, towards the other end. Repeat.

If you’re lucky, you already end up exactly at the last die. You’ll be a magician immediately! But usually, that isn’t the case. What you usually have to do, is take away all those dice which you jumped over during the last step. Tell them that that is “the rule during the first round”.

Now the actual magic begins. You tell the audience that they can do whatever they want with the first half of the chain. They may turn around dice. Swap dice. Take dice away. Whatever. As long as they don’t do anything with the second half of the chain. [If you like risks, let them mess up a larger part of the chain.] What you’ll see, is that each and every time, they will end up exactly at the end of the chain!
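This dice-chain trick appears to be an instance of the Kruskal count: once two walks ever land on the same die, they agree forever after, and with steps of 1 through 6 over a 50-die chain, coalescence is very likely. A quick simulation (my sketch of the setup described above):

```python
import random

# Kruskal-count simulation: how often do walks started from each of the
# first six dice in a 50-die chain all finish on the same die?

def end_of_walk(chain, start):
    i = start
    while i + chain[i] < len(chain):
        i += chain[i]   # step forward by the value showing on this die
    return i

random.seed(1)
trials = 1000
same = 0
for _ in range(trials):
    chain = [random.randint(1, 6) for _ in range(50)]
    ends = {end_of_walk(chain, s) for s in range(6)}
    same += (len(ends) == 1)
print(f"{same}/{trials} chains: all six starting dice led to the same final die")
```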

A few years ago, I found this “trick” on a “maths” site, not sure which, but it was UK. You need 5 index cards. Number them 1, 2, 3, 4, 5 in red ink on the front. On the reverse side, number them 6, 7, 8, 9, 10 in blue ink. Be sure that 1 and 6 are on opposite sides of the same card…same with 2 and 7, etc.

Turn your back to the group of students. Have one of the students drop the 5 cards on the floor and tell you how many cards landed with the blue number face up (they don’t tell you the number, just “3 cards are written in blue”). Tell them the total of the numbers showing is 30.

The key is that each blue number is 5 more than its respective red number. Red numbers total 15. Each blue number raises the total by 5. So 3 blue numbers make it 15 (the basic sum) + 15 (3 times 5). Let them figure out how you are using the number of blue numbers to find the total of the exposed numbers.
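A small simulation of the index-card trick (mine, not the submitter's): red faces 1 through 5 sum to 15, and each blue face is exactly 5 more than its red face, so the faces showing always total 15 plus 5 per blue face up.

```python
import random

# Index-card trick: cards show red faces 1-5 or blue faces 6-10, where
# each blue face is its red face plus 5. The showing total is always
# 15 + 5 * (number of blue faces up).

def showing_total(blue_up):
    # blue_up[i] is True if the card with red number i+1 landed blue up
    return sum((red + 5) if blue else red
               for red, blue in zip(range(1, 6), blue_up))

random.seed(2)
for _ in range(1000):
    flips = [random.random() < 0.5 for _ in range(5)]
    assert showing_total(flips) == 15 + 5 * sum(flips)
```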

**Expressions & Equations**

I ran an activity with students I called “number tricks.” (Okay. Settle down. Give me a second.) I’d ask the students to pick a number at random and then perform certain operations on it. The class would wind up with the same result in spite of choosing different initial numbers. Constructing the expression and simplifying it would help us see the math behind the magic. (Handout and slides.)
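The actual tricks from the handout aren't reproduced here, so take a hypothetical one of the same flavor: pick any number, add 5, double, subtract 4, halve, then subtract your original number. Simplifying the expression shows why everyone lands on 3, and a short check confirms it:

```python
def trick(n):
    """Pick a number, add 5, double, subtract 4, halve, subtract the original."""
    return ((n + 5) * 2 - 4) / 2 - n

# The algebra behind the magic:
# ((n + 5) * 2 - 4) / 2 - n  =  (2n + 6) / 2 - n  =  (n + 3) - n  =  3.
results = {trick(n) for n in range(-100, 101)}
print(results)  # → {3.0}
```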

I do something called calendar magic, where I show a calendar of the month we're in and ask the students to select a day, then add it to the day after it, the day directly under it (so a week later), and the day diagonally under it to the right, effectively forming a 2×2 box. Then I ask them to give me the sum, and I tell them their day.

Always a bunch of students figure out the trick, but the hardest part is writing the equation. Every year I have students totally stumped, writing x + y + a + b. It's really a reframing for them to think about the relationship between the numbers and express it algebraically. Finally I ask them to write a rule for three consecutive numbers, but I don't say which number they should find, and inevitably someone has a rule for finding the first number and someone has one for finding the middle number. I love that!
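The algebra the students are reaching for in the calendar trick: the box holds d, d+1, d+7, and d+8, which sum to 4d + 16, so the chosen day is (sum − 16) / 4. A quick sketch (ignoring where in the week the box can legally sit):

```python
def reveal(total):
    """Recover the chosen day from the sum of the 2×2 calendar box:
    d + (d+1) + (d+7) + (d+8) = 4d + 16, so d = (total - 16) / 4."""
    return (total - 16) // 4

# Check every top-left day d for which the box fits in a 31-day month (d + 8 <= 31).
ok = all(reveal(d + (d + 1) + (d + 7) + (d + 8)) == d for d in range(1, 24))
print(ok)  # → True
```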

**Different Bases**

Andy Zsiga suggests this card trick involving base 2.

**Call for Submissions**

Where else have you seen mind-reading lead to math-learning? Are there certain areas of math where this technique cannot apply?

**2014 Oct 30**. Megan Schmidt points us to all the NRich tasks that are labeled “Card Trick.”

**2014 Oct 30**. Michael Paul Goldenberg links up the book *Magical Mathematics: The Mathematical Ideas That Animate Great Magic Tricks*.

PhotoMath is an app that wants to do your students’ math homework for them. Its demo video was tweeted at me a dozen times yesterday and it is a trending search in the United States App Store.

In theory, you hold your cameraphone up to the math problem you want to solve. It detects the problem, solves it, and shows you the steps, so you can write them down for your math teacher who insists *you always need to show your steps*.

We should be so lucky. The initial reviews seem to split between loads of people who are thrilled the app exists (“I really wish I had something like this when I was in school.”) and those who seem to have actually downloaded the app and come away underwhelmed. (“Didn’t work with anything I fed it.”) A glowing Yahoo Tech review includes, as evidence of PhotoMath’s awesomeness, an example of PhotoMath choking dramatically on a simple problem.

But we should wish PhotoMath abundant success – perfect character recognition and downloads on every student’s smartphone. Because the only problems PhotoMath could conceivably solve are the ones that are boring and over-represented in our math textbooks.

It’s conceivable PhotoMath could be great for problems with verbs like “compute,” “solve,” and “evaluate.” In some alternate universe where technology didn’t disappoint and PhotoMath worked perfectly, all the most fun verbs would then be left behind: “justify,” “argue,” “model,” “generalize,” “estimate,” “construct,” etc. In that alternate universe, we could quickly evaluate the value of our assignments:

“Could PhotoMath solve this? Then why are we wasting our time?”

**2014 Oct 22**. Glenn Waddell seizes this moment to write an open letter to his math department.

**2014 Oct 22**. David Petro posts a couple of pretty disastrous screenshots of PhotoMath in action.

**2014 Oct 23**. John Scammell puts PhotoMath to work on tests throughout grade 7-12. More disaster.

**2014 Oct 24**. New York Daily News interviewed me about PhotoMath.

**2014 Oct 27**. Jim Pai asked some teachers and students to download and use PhotoMath. Then he surveyed their thoughts.

**Featured Comment**

Kathy Henderson gets the app to recognize a problem but its solution is mystifying:

I find this one of the most convoluted methods to solve this problem! I may show my seventh graders some screen shots from the app tomorrow and ask them what they think of this solution – a teachable moment from a poorly written app!

If we are structuring this the right way, kids (a) won’t use the app when developing the concept, (b) will have a degree of comfort doing it themselves after developing the concept, and (c) will take the app out when they end up with something crazy like -16t² + 400t + 987 = 0, where factoring or solving by hand would take forever.
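For the record, the commenter's example falls to the quadratic formula in a couple of lines (a sketch of what the app is shortcutting, not an endorsement of either approach):

```python
import math

# The commenter's example: -16t^2 + 400t + 987 = 0.
# Quadratic formula: t = (-b ± sqrt(b^2 - 4ac)) / (2a).
a, b, c = -16, 400, 987
disc = b * b - 4 * a * c            # 160000 + 63168 = 223168
roots = sorted((-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1))
print(roots)  # two real roots, roughly -2.26 and 27.26
```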

The point in this case isn’t how good the character recognition is, or how correct the solutions are, because it’s only a matter of time before apps like these solve handwritten algebra problems perfectly in seconds, providing a clear description of all steps taken.

The point is: who provides the equation to be solved by the app? I have never seen an algebraic equation that presented itself miraculously to me in daily life.

P.S. Photomath is just a “stupid pet trick” they did to market their recognition engine.