1,000 Math Teachers Tell Me What They Think About Calculators in the Classroom

Yesterday, I asked teachers on Twitter about their classroom calculator policy and 978 people responded.

I wanted to know if they allow calculators a) during classwork, b) during tests, and also which kinds of calculators:

  • Hardware calculators (like those sold by Texas Instruments, Casio, HP, etc.).
  • Mobile phone calculators (like those you can download on your Android or iOS phone).

(Full disclosure: I work for a company that distributes a free four function, scientific, and graphing calculator for mobile phones and other devices.)

I asked the question because hardware calculators don’t make a lot of financial sense to me.

Here are some statistics for high-end HP and Texas Instruments graphing calculators along with a low-end Android mobile phone. (Email readers may need to click through to see the statistics.)

                             cost ($)   storage (MB)   memory (MB)   screen size
TI Nspire CX                   129.99            100            64      320 x 240
HP Prime                       149.99            256            32      320 x 240
Moto G Unlocked Smartphone     179.99         32,000         2,000    1920 x 1080

You pay less than 1.5x the price of the hardware calculator for the mobile phone, and you get hardware that is between 30x and 300x more powerful. And the mobile phone sends text messages, takes photos, and accesses webpages. In many cases, the student already has a mobile phone. So why spend the money on a second device that is much less powerful?
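
Those multiples fall straight out of the table above. A quick sketch of the arithmetic (the spec numbers are the ones quoted in the post; the exact retail figures may have shifted since):

```python
# Specs as quoted in the table above.
specs = {
    "TI Nspire CX": {"cost": 129.99, "storage_mb": 100,   "memory_mb": 64},
    "HP Prime":     {"cost": 149.99, "storage_mb": 256,   "memory_mb": 32},
    "Moto G":       {"cost": 179.99, "storage_mb": 32000, "memory_mb": 2000},
}

phone = specs["Moto G"]
for name in ("TI Nspire CX", "HP Prime"):
    calc = specs[name]
    # The phone costs ~1.2x-1.4x as much, with ~30x-320x the storage/memory.
    print(name,
          f"cost x{phone['cost'] / calc['cost']:.2f}",
          f"storage x{phone['storage_mb'] / calc['storage_mb']:.0f}",
          f"memory x{phone['memory_mb'] / calc['memory_mb']:.1f}")
```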

1,000 teachers gave me their answer.

The vast majority of respondents allow hardware calculator use in their classes. I suspect I’m oversampling for calculator-friendly teachers here, by virtue of drawing that sample from a digital medium like Twitter.

734 of those teachers allow a hardware graphing calculator but not a mobile phone on tests. 366 of those teachers offered reasons for that decision. They had my attention.

Here are their reasons, along with representative quotes, ranked from most common to least.

Test security. (173 votes.)

It’s too easy for students to share answers via text or picture.

Internet access capabilities and cellular capabilities that make it way too easy for the device to turn from an analysis/insight tool to the CheatEnable 3000 model.

School policy. (68 votes.)

School policy is that phones are in lockers.

It’s against school policy. They can use them at home and I don’t have a problem with it, but I’m not allowed to let them use mobile devices in class.

Distraction. (67 votes.)

Students waste time changing music while working problems, causing both mistakes due to lack of attention and inefficiency due to electronic distractions.

We believe the distraction factor is a negative impact on learning. (See Simon Sinek’s view of cell phones as an “addiction to distraction.”)

Test preparation. (54 votes.)

I am also preparing my students for an IB exam at the end of their senior year and there is a specific list of approved calculators. (Phones and computers are banned.)

Basically I am trying to get students comfortable with assessments using the hardware so they won’t freak out on our state test.

Access. (27 votes.)

Our bandwidth is sometimes not enough for my entire class (and others’ classes) to be online all at once.

I haven’t determined a good way so that all students have equal access.


These reasons all seem very rational to me. Still, it’s striking to me that “test security” dwarfs all others.
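
How much it dwarfs them is easy to see from the counts above (the five categories total 389 votes across 366 teachers, since some teachers gave more than one reason):

```python
# Vote counts from the five reason categories above.
votes = {"Test security": 173, "School policy": 68, "Distraction": 67,
         "Test preparation": 54, "Access": 27}

total = sum(votes.values())  # 389: some teachers offered several reasons
for reason, n in votes.items():
    print(f"{reason}: {n} ({100 * n / total:.0f}%)")
```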

That’s where it becomes clear to me that the killer feature of hardware calculators is their lack of features. I wrote above that your mobile device “sends text messages, takes photos, and accesses webpages.” At home, those are features. At school, or at least on tests, they are liabilities. That’s a fact I need to think more about.

Featured Comments

Jennifer Potier:

I work in a BYOD school. What I have learned is that the best way to disengage students from electronic devices is to promote learning that involves student sharing of discussion, planning, thinking, and solving problems. When the students are put “centre stage,” the devices start becoming less interesting.

Chris Heddles:

The restriction on calculation aids and internet connections still stems from a serious cultural issue we have in mathematics teaching – the type of questions that we ask. While we continue to emphasise the importance of numerical calculations and algebraic manipulation in assessment, electronic aids to these skills will continue to be an issue.

Instead, we should shift the focus to understanding the situation presented, setting up the equations and then making sense of the calculation results. With this shift, the calculations themselves are relatively unimportant so it doesn’t really matter how the student processes them. Digital aids can be freely used because they are of little use when addressing the key aspects of the assessment tasks.

In many ways our current mathematics assessment approach is equivalent to a senior secondary English essay that gave 80% of the grade for neat handwriting and correct spelling. If this were the case then they, too, would have to ban all electronic aids to minimise the risk of “cheating” by typing and using spell checking software.

If we change what we value in assessment then we can open up better/cheaper electronic aids for students.

2017 Mar 24. Related to Chris’s comment above, I recently took some sample SAT math tests and was struck by how infrequently I needed a calculator. Not because I’m any kind of mental math genius. Simply because the questions largely concerned analysis and formulation over calculation and solution.

Featured Tweets

Problems with Personalized Learning

A reader pointed me to this interesting article in the current Educational Leadership on “personalized learning.” She said it raised an alarm for her that she couldn’t quite put into words and she asked if I heard that same alarm and, if so, what words I’d use to describe it.

I hear a few alarms, some louder and faster than others. Let me point them out in the piece.

Here we describe a student’s typical day at a personalized learning school. The setting is the Waukesha STEM Academy-Saratoga Campus in Waukesha, Wisconsin.

You could be forgiven for not knowing, based on that selection, that one of the authors is the principal of the Waukesha STEM Academy and that his two co-authors have financial ties to the personalized learning industry: the Waukesha STEM Academy is a client of the other two authors [see this exchange. -dm]. What should be disclosed in the article’s first paragraph can only be inferred from the authors’ biographies in its footer. This minimal disclosure is consistent with what I perceive to be irresponsible self-promotion on the part of the personalized learning industry. (See also: “… this robot tutor can essentially read your mind.”)

(Full disclosure: I work in education technology.)

Then, in describing a student’s school experience before personalized learning, the authors write:

… [Cal’s] planner, which looked similar to those of the other 27 students in his class, told him everything he needed to know: Math: Page 122; solve problems 2–18 (evens). [..] Each week looked about the same.

If this is truly the case, if students didn’t interact with each other or their teacher at all, if they simply opened their books and completed a textbook assignment every day, every week, we really can’t do much worse. Most alternatives will look great. This isn’t a sober analysis of available alternatives. Again, this is marketing.

… [Cal] began to understand why he sometimes misses some of the things that he hears in class and finds more comfort in module-based courses, where he can fast forward and rewind videos and read instructions at his own pace.

Fast-forwarding, rewinding, and pausing instructional videos are often cited as advantages of personalized learning, not because this is necessarily good instruction, but because it’s what the technology permits.

And this isn’t good instruction. It isn’t even good direct instruction. When someone is explaining something to you and you don’t understand them, you don’t ask that person to “repeat exactly what you just said only slower.” You might tell them what you understand of what they were saying. Then they might back up and take a different approach, using different examples, metaphors, or illustrations, ideally responding using your partial understanding as a resource.

I’m describing a very low bar for effective instruction. I’m describing techniques you likely employ in day-to-day conversation with friends and family without even thinking about them. I’m also describing a bar that 2017 personalized learning technology cannot clear.

His students don’t report to class to be presented with information. Instead, they’re empowered to use a variety of learning tools. Some students, like Cal, prefer step-by-step videos; others prefer songs and catchy rhymes to help them learn concepts. [..] He opens a series of videos and online tutorials, as well as tutorials prepared by his teacher.

In the first sentence, we’re told that students like Cal aren’t presented with information. Then, in the following sentences, we’re told all the different ways that those students are presented with information.

Whether you learn concepts from a step-by-step video, a rap, or a written tutorial, you are being presented with information. And a student’s first experience with new information shouldn’t be someone on a screen presenting it, no matter the style of presentation.

Because there is work students can do before that presentation to prepare themselves to learn and enjoy learning from it.

Because the video presenter treats students as though they have the same existing knowledge and prior conceptions about that information, even though those conceptions vary widely, even though some of them are surprisingly durable and require direct confrontation.

Because these video presentations communicate to students the message that math is something you can’t make sense of unless some adult explains it to you, that learning is something you do by yourself, and that your peers have nothing to offer your understanding of that new information.

I like a lot of the ethos around personalized learning – increasing student agency and metacognition, for example – but the loudest, fastest alarm in the article is this:

The medium is the message. Personalized learning is only as good as its technology, and in 2017 that technology isn’t good enough. Its gravity pulls towards videos of adults talking about math, followed by multiple choice exercises for practice, all of which is leavened by occasional projects. It doesn’t matter that students can choose the pace or presentation of that learning. Taking your pick of impoverished options still leaves you with an impoverished option.

2017 Mar 22. There are too many interesting comments to feature them individually. I’ll single out two of them directly, however:

  • Todd Gray, the Superintendent of the School District of Waukesha.
  • Anthony Rebora, the Editor-in-Chief of Educational Leadership.

The Difference Between Math and Modeling with Math in Five Seconds

Jim Pardun sent me a video of a dog named Twinkie popping balloons in the pursuit of a world record. How you train a dog to do this, I don’t know. How there is a world record for this, I don’t know either.

What I know is that this video clearly illustrates the difference between math and modeling with math.

You can’t break math. Some people think they broke math but all they did was break ground on new disciplines in math where, for example, the angles of a triangle can sum to more than 180° and parallel lines can meet.

Our mathematical models, by contrast, arrive broken. “All models are wrong,” said George Box, “but some are useful.” And we see that in this video.

Twinkie pops 25 balloons in 5 seconds. How long will it take her to pop all 100 balloons? A purely mathematical answer is 20 seconds. That’s straightforward proportional reasoning.

But mathematical modeling is less than straightforward. It requires the re-interpretation of that answer through the world’s imperfections. The student who can quickly and confidently calculate 20 seconds may even be worse off here than the student who patiently thinks about how the supply of balloons is dwindling, adds time, and arrives at the actual answer of 37 seconds.
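
The proportional model itself takes one line; the gap between it and the world is the whole lesson:

```python
# Proportional model: assume a constant popping rate, estimated
# from the first 25 balloons.
popped, elapsed = 25, 5      # balloons, seconds
rate = popped / elapsed      # 5 balloons per second
predicted = 100 / rate       # "straightforward proportional reasoning"
actual = 37                  # what the video shows: the last balloons take longer

print(predicted)             # 20.0 seconds
print(actual - predicted)    # the model undershoots by 17 seconds
```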

Feel free to show your classes that question video, discuss, and then show them the answer video. Or if your class has access to devices, you can assign this Desmos activity, where we’ll invite them to sketch what they think happens over time as well.

The difference between the students who answer “20 seconds” and “37 seconds” is the same difference between the students who draw Sketch 1 and Sketch 2.

You might think you know how your students will sort into those two groups, but I hope you’ll be surprised.

That difference is the patience that modeling with math requires.

BTW. I’m very interested in situations like these where the world subverts what seems like a straightforward application of a mathematical model.

One more example is the story of St. Matthew Island, which turns the expectations of pure mathematics on their head at least twice.

Do you have any to trade?

2017 May 19. Steve Rein asked for the data set. Right here.

[Desmos Design] Why We’re Suspicious of Immediate Feedback

One of our design principles at Desmos is to “delay feedback for reflection, especially during concept development activities.” This makes us weird, frankly, in Silicon Valley where no one ever got fired for promising “immediate feedback” in their math edtech.

We get it. Computers have an enormous advantage over humans in their ability to quickly give students feedback on certain kinds of work. But just because computers can deliver immediate feedback doesn’t mean they always should.

For example, Simmons and Cope (1993) found that students were more likely to use procedural strategies like trial-and-error in a condition of immediate feedback than a condition of delayed feedback.

I think I can illustrate that for you with this activity, which has two tasks. You get immediate feedback on one and delayed feedback on the other.

I’ll ask you what I asked 500 Twitter users:

How was your brain working differently in the “Circle” challenge [delayed feedback] than the “Parabola” challenge [immediate feedback]?

Exhibit A:

The circle one was both more challenging and fun. I found myself squinting on the circle to visualize it in my head while with the parabola I mindlessly did trial and error.

Exhibit B:

With the circle, the need to submit before seeing the effect made me really think about how each part of the equation would affect the graph. This resulted in a more strategic first guess rather than a guess and check approach.

Exhibit C:

I could guess & check the parabola challenge. In the circle challenge I had to concentrate more about the center of the circle and the radius. Much more in fact.

Exhibit D:

I couldn’t use trial and error. I had to visualize and estimate and then make decisions. My brain was more satisfied after the circle.

Exhibit E:

I probably worked harder on [the circle] because my answer was not shown until I submitted my answer. It was more frustrating than the parabola problem – but I probably learned more.

This wasn’t unanimous, of course, but it was the prevailing sentiment. For most people, the feedback delay provoked thoughtfulness where the immediate feedback provoked trial-and-error.

We realize that the opposite of “immediate feedback” for many students is “feedback when my teacher returns my paper after a week.” Between those two options, we side with Silicon Valley’s preference for immediate feedback. But if computers can deliver feedback immediately, they can also deliver feedback almost immediately, after a short, productive delay. That’s the kind of feedback we design into our concept development activities.
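
The structural difference is small but consequential. As a toy sketch (not Desmos’s actual implementation, and the class and equation strings here are hypothetical): feedback is computed only on an explicit submit, so intermediate edits never reveal whether a guess is right.

```python
class DelayedFeedback:
    """Toy model: withhold the check until the learner explicitly submits."""

    def __init__(self, target):
        self.target = target
        self.draft = None

    def edit(self, guess):
        # An immediate-feedback design would check (and reveal) here,
        # on every keystroke, inviting trial and error.
        self.draft = guess

    def submit(self):
        # A delayed-feedback design checks only now, after the learner
        # has committed to an answer.
        return self.draft == self.target

fb = DelayedFeedback(target="(x-2)^2 + (y-1)^2 = 9")
fb.edit("(x-2)^2 + (y-1)^2 = 4")   # no feedback yet; the learner must reason
fb.edit("(x-2)^2 + (y-1)^2 = 9")
print(fb.submit())                 # True
```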

BTW. For a longer version of that activity, check out Building Conic Sections, created by Dylan Kane and edited with love by our Teaching Faculty.

A High School Math Teacher’s First Experience Teaching Elementary School

At a workshop in Alameda County last month, I made my standard request for classroom teachers to help me make good on my New Year’s resolution. I assumed all the teachers there taught middle- or high-school so I said yes to every teacher who invited me. Later, I’d find out that one of them taught fourth grade.

As a former high school math teacher, this was NIGHTMARE MATERIAL, Y’ALL.

I mean, what do fourth graders even look like? I’m tall, but do I need to worry about stepping on them? What do they know how to do? Do they speak in complete sentences at that age? Clearly, what I don’t know about little kids could fill libraries.

I survived class today. I used a Graham Fletcher 3-Act task because I’m familiar with that kind of curriculum and pedagogy. (Thanks for the concierge support, Graham.) A few observations about the experience, which, again, I survived:

Children are teenagers are adults. Don’t let me make too much of my one hour of primary education experience, but I was struck hard by the similarities between all the different ages I’ve taught. People of all ages like puzzles. They respond well to the techniques of storytelling. Unless they’re wildly misplaced, they come to your class with some informal understanding of your lesson. They appreciate it when you try to surface that understanding, revoice it, challenge it, and help them formalize it. I handled a nine year-old’s ideas about a jar of Skittles in exactly the same way as I handled a forty-nine year-old’s ideas about teaching middle schoolers.

Primary teachers have their pedagogy tight. Ben Spencer (my host teacher) and Sarah Kingston (an elementary math coach) were nice enough to debrief the lesson with me afterwards.

I asked them if I had left money on the table, if I had missed any opportunities to challenge and chase student thinking. They brought up an interesting debate from the end of class, a real Piagetian question about whether a different jar would change the number of Skittles. (It wouldn’t. The number of packages was fixed.) I had asked students to imagine another jar, but my hosts thought the debate demanded some manipulatives so we could test our conjectures. Nice!

Also, Spencer told me that when he asks students to talk with each other, he asks them to share out their partners’ thinking and not their own. That gives them an incentive to tune into what their partners are saying, rather than just waiting for their own turn to talk. Nice! As a secondary teacher, I felt like a champ if I asked students to talk at all. Spencer and his primary colleagues are onto some next-level conversational techniques.

Primary students have more stamina than I anticipated. No doubt much of this credit is due to the norms Mr. Spencer has set up around his “Problem Solving Fridays.” But I’ve frequently heard rules of thumb like “children can concentrate on one task for two to five minutes per year old.” These kids worked on one problem for the better part of an hour.

The pedagogy interests me more than the math.

This sentiment still holds for me after today. I just find algebra more interesting than two-digit multiplication. I’ll try to keep an open mind. Today was not an interesting day of math for me, though it was a very interesting day of learning how novices learn and talk about math.

I’m probably not wacky enough for this work. Mr. Spencer greeted his students by calling out “wopbabalubop!” to which they responded “balap bam boom!” Really fun, and I don’t think you can teach that kind of vibe.

Loads of algorithms, and none of them “standard.” Graham’s 3-Act modeling task asks students to multiply two-digit numbers. I saw an area model. I saw partial products. Students used those approaches flexibly and efficiently. They were able to locate each number in the world when asked. I didn’t see anyone carry a one. Everyone should settle down. This is great.

I expected the experience would either kill me or convince me I should have taught primary students. This one fell somewhere in the middle. I’m excited to return someday, and I was happy to witness the portability of big ideas about students, learning, and mathematics from adult education to high school to elementary school.

Featured Comment

Marilyn Burns:

I remember my first venture in elementary school after teaching ninth grade algebra and eighth grade math for four years. I was curious about younger students and my friend invited me into her third grade class. I can’t remember anything about the lesson I taught, but what I do remember is that I made a student cry. He had done something that I thought was inappropriate behavior and I must have responded pretty harshly. Hey, I was used to teaching older tough kids and I had never thought about modulating my response. It wasn’t my finest hour and I was devastated. My friend helped me through the experience and I even went back. After then I learned other ways to talk with younger students and became more and more fascinated about how they formed their conceptions . . . and misconceptions . . . about mathematical ideas. I’m hooked.