Category: tech contrarianism


1,000 Math Teachers Tell Me What They Think About Calculators in the Classroom

Yesterday, I asked teachers on Twitter about their classroom calculator policy and 978 people responded.

I wanted to know if they allow calculators a) during classwork, b) during tests, and also which kinds of calculators:

  • Hardware calculators (like those sold by Texas Instruments, Casio, HP, etc.).
  • Mobile phone calculators (like those you can download on your Android or iOS phone).

(Full disclosure: I work for a company that distributes a free four-function, scientific, and graphing calculator for mobile phones and other devices.)

I asked the question because hardware calculators don’t make a lot of financial sense to me.

Here are some statistics for high-end HP and Texas Instruments graphing calculators along with a low-end Android mobile phone. (Email readers may need to click through to see the statistics.)

                             cost ($)   storage (MB)   memory (MB)   screen size
TI Nspire CX                 129.99     100            64            320 x 240
HP Prime                     149.99     256            32            320 x 240
Moto G Unlocked Smartphone   179.99     32,000         2,000         1920 x 1080

You pay less than 2x more for the mobile phone and you get hardware that is between 30x and 300x more powerful than the hardware calculators. And the mobile phone sends text messages, takes photos, and accesses webpages. In many cases, the student already has a mobile phone. So why spend the money on a second device that is much less powerful?
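If you want to check those ratios yourself, here is a minimal sketch in Python using only the numbers from the table above:

```python
# A quick sanity check on the price and spec ratios claimed above,
# using the numbers from the table (all sizes in MB).
devices = {
    "TI Nspire CX": {"cost": 129.99, "storage": 100, "memory": 64},
    "HP Prime": {"cost": 149.99, "storage": 256, "memory": 32},
    "Moto G": {"cost": 179.99, "storage": 32000, "memory": 2000},
}

phone = devices["Moto G"]
for name in ("TI Nspire CX", "HP Prime"):
    calc = devices[name]
    print(
        f"{name}: phone costs {phone['cost'] / calc['cost']:.1f}x as much, "
        f"with {phone['storage'] / calc['storage']:.0f}x the storage "
        f"and {phone['memory'] / calc['memory']:.0f}x the memory"
    )
# TI Nspire CX: phone costs 1.4x as much, with 320x the storage and 31x the memory
# HP Prime: phone costs 1.2x as much, with 125x the storage and 62x the memory
```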

1,000 teachers gave me their answer.


The vast majority of respondents allow hardware calculator use in their classes. I suspect I’m oversampling for calculator-friendly teachers here, by virtue of drawing that sample from a digital medium like Twitter.

734 of those teachers allow a hardware graphing calculator but not a mobile phone on tests. 366 of those teachers offered reasons for that decision. They had my attention.

Here are their reasons, along with representative quotes, ranked from most common to least.

Test security. (173 votes.)

It’s too easy for students to share answers via text or picture.

Internet access capabilities and cellular capabilities that make it way too easy for the device to turn from an analysis/insight tool to the CheatEnable 3000 model.

School policy. (68 votes.)

School policy is that phones are in lockers.

It’s against school policy. They can use them at home and I don’t have a problem with it, but I’m not allowed to let them use mobile devices in class.

Distraction. (67 votes.)

Students waste time changing music while working problems, causing both mistakes due to lack of attention and inefficiency due to electronic distractions.

We believe the distraction factor is a negative impact on learning. (See Simon Sinek’s view of cell phones as an “addiction to distraction.”)

Test preparation. (54 votes.)

I am also preparing my students for an IB exam at the end of their senior year and there is a specific list of approved calculators. (Phones and computers are banned.)

Basically I am trying to get students comfortable with assessments using the hardware so they won’t freak out on our state test.

Access. (27 votes.)

Our bandwidth is sometimes not enough for my entire class (and others’ classes) to be online all at once.

I haven’t determined a good way so that all students have equal access.

Conclusion

These reasons all seem very rational to me. Still, it’s striking to me that “test security” dwarfs all others.

That’s where it becomes clear to me that the killer feature of hardware calculators is their lack of features. I wrote above that your mobile device “sends text messages, takes photos, and accesses webpages.” At home, those are features. At school, or at least on tests, they are liabilities. That’s a fact I need to think more about.

Featured Comments

Jennifer Potier:

I work in a BYOD school. What I have learned is that the best way to disengage students from electronic devices is to promote learning that involves student sharing of discussion, planning, thinking, and solving problems. When the students are put “centre stage,” the devices start becoming less interesting.

Chris Heddles:

The restriction on calculation aids and internet connections still stems from a serious cultural issue we have in mathematics teaching – the type of questions that we ask. While we continue to emphasise the importance of numerical calculations and algebraic manipulation in assessment, electronic aids to these skills will continue to be an issue.

Instead, we should shift the focus to understanding the situation presented, setting up the equations, and then making sense of the calculation results. With this shift, the calculations themselves are relatively unimportant, so it doesn’t really matter how the students process them. Digital aids can be freely used because they are of little use when addressing the key aspects of the assessment tasks.

In many ways our current mathematics assessment approach is equivalent to a senior secondary English essay that gave 80% of the grade for neat handwriting and correct spelling. If that were the case, then English teachers, too, would have to ban all electronic aids to minimise the risk of “cheating” by typing and using spell-checking software.

If we change what we value in assessment then we can open up better/cheaper electronic aids for students.

2017 Mar 24. Related to Chris’s comment above, I recently took some sample SAT math tests and was struck by how infrequently I needed a calculator. Not because I’m any kind of mental math genius. Simply because the questions largely concerned analysis and formulation over calculation and solution.

Featured Tweets

Problems with Personalized Learning

A reader pointed me to this interesting article in the current Educational Leadership on “personalized learning.” She said it raised an alarm for her that she couldn’t quite put into words and she asked if I heard that same alarm and, if so, what words I’d use to describe it.

I hear a few alarms, some louder and faster than others. Let me point them out in the piece.

Here we describe a student’s typical day at a personalized learning school. The setting is the Waukesha STEM Academy-Saratoga Campus in Waukesha, Wisconsin.

You could be forgiven for not knowing, based on that selection, that one of the authors is the principal of the Waukesha STEM Academy, and that his two co-authors have financial ties to the personalized learning industry: the Waukesha STEM Academy is a client of the other two authors [see this exchange. -dm]. What should be disclosed in the article’s first paragraph can only be inferred from the authors’ biographies in its footer. This minimal disclosure is consistent with what I perceive to be irresponsible self-promotion on the part of the personalized learning industry. (See also: “… this robot tutor can essentially read your mind.”)

(Full disclosure: I work in education technology.)

Then, in describing a student’s school experience before personalized learning, the authors write:

… [Cal’s] planner, which looked similar to those of the other 27 students in his class, told him everything he needed to know: Math: Page 122; solve problems 2–18 (evens). [..] Each week looked about the same.

If this is truly the case, if students didn’t interact with each other or their teacher at all, if they simply opened their books and completed a textbook assignment every day, every week, we really can’t do much worse. Most alternatives will look great. This isn’t a sober analysis of available alternatives. Again, this is marketing.

[Cal] began to understand why he sometimes misses some of the things that he hears in class and finds more comfort in module-based courses, where he can fast forward and rewind videos and read instructions at his own pace.

Fast-forwarding, rewinding, and pausing instructional videos are often cited as advantages of personalized learning, not because this is necessarily good instruction, but because it’s what the technology permits.

And this isn’t good instruction. It isn’t even good direct instruction. When someone is explaining something to you and you don’t understand them, you don’t ask that person to “repeat exactly what you just said only slower.” You might tell them what you understand of what they were saying. Then they might back up and take a different approach, using different examples, metaphors, or illustrations, ideally responding using your partial understanding as a resource.

I’m describing a very low bar for effective instruction. I’m describing techniques you likely employ in day-to-day conversation with friends and family without even thinking about them. I’m also describing a bar that 2017 personalized learning technology cannot clear.

His students don’t report to class to be presented with information. Instead, they’re empowered to use a variety of learning tools. Some students, like Cal, prefer step-by-step videos; others prefer songs and catchy rhymes to help them learn concepts. [..] He opens a series of videos and online tutorials, as well as tutorials prepared by his teacher.

In the first sentence, we’re told that students like Cal aren’t presented with information. Then, in the following sentences, we’re told all the different ways that those students are presented with information.

Whether you learn concepts from a step-by-step video, a rap, or a written tutorial, you are being presented with information. And a student’s first experience with new information shouldn’t be someone on a screen presenting it, no matter the style of presentation.

Because there is work students can do before that presentation to prepare themselves to learn and enjoy learning from it.

Because the video presenter treats students as though they have the same existing knowledge and prior conceptions about that information, even though those conceptions vary widely, even though some of them are surprisingly durable and require direct confrontation.

Because these video presentations communicate to students the message that math is something you can’t make sense of unless some adult explains it to you, that learning is something you do by yourself, and that your peers have nothing to offer your understanding of that new information.

I like a lot of the ethos around personalized learning – increasing student agency and metacognition, for example – but the loudest, fastest alarm in the article is this:

The medium is the message. Personalized learning is only as good as its technology, and in 2017 that technology isn’t good enough. Its gravity pulls towards videos of adults talking about math, followed by multiple choice exercises for practice, all of which is leavened by occasional projects. It doesn’t matter that students can choose the pace or presentation of that learning. Taking your pick of impoverished options still leaves you with an impoverished option.

2017 Mar 22. There are too many interesting comments to feature them individually. I’ll single out two of them directly, however:

  • Todd Gray, the Superintendent of the School District of Waukesha.
  • Anthony Rebora, the Editor-in-Chief of Educational Leadership.

[Desmos Design] Why We’re Suspicious of Immediate Feedback

One of our design principles at Desmos is to “delay feedback for reflection, especially during concept development activities.” This makes us weird, frankly, in Silicon Valley where no one ever got fired for promising “immediate feedback” in their math edtech.

We get it. Computers have an enormous advantage over humans in their ability to quickly give students feedback on certain kinds of work. But just because computers can deliver immediate feedback doesn’t mean they always should.

For example, Simmons and Cope (1993) found that students were more likely to use procedural strategies like trial-and-error in a condition of immediate feedback than a condition of delayed feedback.

I think I can illustrate that for you with this activity, which has two tasks. You get immediate feedback on one and delayed feedback on the other.
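The actual activity is a Desmos graphing task. As a rough illustration of the two feedback policies, here is a hypothetical console sketch in Python (not Desmos code; the target circle and error measure are made up for the sketch):

```python
import math

# The learner adjusts the center (h, k) and radius r of a circle
# (x - h)^2 + (y - k)^2 = r^2, trying to match a hidden target.
TARGET = (3.0, -2.0, 5.0)  # hidden (h, k, r); invented for this sketch

def error(guess):
    """Distance between a guess and the target in (h, k, r) space."""
    return math.dist(guess, TARGET)

def attempt(guesses, immediate):
    """Play a sequence of guesses under one feedback policy.

    immediate=True:  error shown after every adjustment, which
                     invites trial and error (the parabola task).
    immediate=False: error shown only after a deliberate submit,
                     which rewards planning (the circle task).
    """
    if immediate:
        for i, guess in enumerate(guesses, start=1):
            print(f"  try {i}: {guess} -> error {error(guess):.2f}")
    final = guesses[-1]
    print(f"  submitted {final} -> error {error(final):.2f}")

print("Immediate feedback:")
attempt([(0, 0, 1), (2, -1, 4), (3, -2, 5)], immediate=True)
print("Delayed feedback:")
attempt([(3, -2, 5)], immediate=False)
```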

I’ll ask you what I asked 500 Twitter users:

How was your brain working differently in the “Circle” challenge [delayed feedback] than the “Parabola” challenge [immediate feedback]?

Exhibit A:

The circle one was both more challenging and fun. I found myself squinting on the circle to visualize it in my head while with the parabola I mindlessly did trial and error.

Exhibit B:

With the circle, the need to submit before seeing the effect made me really think about how each part of the equation would affect the graph. This resulted in a more strategic first guess rather than a guess and check approach.

Exhibit C:

I could guess & check the parabola challenge. In the circle challenge I had to concentrate more about the center of the circle and the radius. Much more in fact.

Exhibit D:

I couldn’t use trial and error. I had to visualize and estimate and then make decisions. My brain was more satisfied after the circle.

Exhibit E:

I probably worked harder on [the circle] because my answer was not shown until I submitted my answer. It was more frustrating than the parabola problem – but I probably learned more.

This wasn’t unanimous, of course, but it was the prevailing sentiment. For most people, the feedback delay provoked thoughtfulness where the immediate feedback provoked trial-and-error.

We realize that the opposite of “immediate feedback” for many students is “feedback when my teacher returns my paper after a week.” Between those two options, we side with Silicon Valley’s preference for immediate feedback. But if computers can deliver feedback immediately, they can also deliver feedback almost immediately, after a short, productive delay. That’s the kind of feedback we design into our concept development activities.

BTW. For a longer version of that activity, check out Building Conic Sections, created by Dylan Kane and edited with love by our Teaching Faculty.

What’s Wrong with This Experiment?

If you’re the sort of person who helps students learn to design controlled experiments, you might offer them W. Stephen Wilson’s experiment in The Atlantic and ask for their critique.

First, Wilson’s hypothesis:

Wilson fears that students who depend on technology [calculators, specifically –dm] will fail to understand the importance of mathematical algorithms.

Next, Wilson’s experiment:

Wilson says he has some evidence for his claims. He gave his Calculus 3 college students a 10-question calculator-free arithmetic test (can you multiply 5.78 by 0.39 without pulling out your smartphone?) and divided them into two groups: those who scored an eight or above on the test and those who didn’t. At the end of the course, Wilson compared the two groups’ performance on the final exam. Most students who scored in the top 25th percentile on the final also received an eight or above on the arithmetic test. Students at the bottom 25th percentile were twice as likely to score less than eight points on the arithmetic test, demonstrating much weaker computation skills when compared to other quartiles.

I trust my readers will supply the answer key in the comments.

BTW. I’m not saying there isn’t evidence that calculator use will inhibit a student’s understanding of mathematical algorithms, or that no such evidence will ever be found. I’m just saying this study isn’t that evidence.

Featured Tweet

Got one!

Featured Comment

Scott Farrand:

The most clarifying thing that I can recall being told about testing in mathematics came from a friend in that business: you’ll find a positive correlation between student performance on almost any two math tests. So don’t get too excited when it happens, and beware of using evidence of correlation on two tests as evidence for much.
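Farrand’s warning is easy to reproduce in a toy simulation: if both tests are just noisy measures of one underlying ability, they correlate even though neither tells you anything causal about the other. A minimal sketch, with made-up numbers:

```python
import random
import statistics

# A toy demonstration of Scott Farrand's point (all numbers invented):
# give simulated students one underlying ability plus independent noise
# on each of two tests, and the tests correlate with no causal link.
random.seed(0)

ability = [random.gauss(0, 1) for _ in range(1000)]
arithmetic_quiz = [a + random.gauss(0, 1) for a in ability]  # noisy measure 1
final_exam = [a + random.gauss(0, 1) for a in ability]       # noisy measure 2

# statistics.correlation requires Python 3.10+
r = statistics.correlation(arithmetic_quiz, final_exam)
print(f"correlation between the two tests: {r:.2f}")  # about 0.5
```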

Collective Effervescence Is the Cost of Personalized Learning

Cross-posted from the Desmos blog. I’m happy enough with this post to re-broadcast it here. Also, the Desmos blog doesn’t have comments, which makes this a better forum for you to tell me if I’m wrong.


We’re proud to debut our free Classroom Conversation Toolset, which has been the labor of our last three months. You can pause your students’ work. You can anonymize your students’ names. You can restrict the pace of your students through the activity. We believe there are productive and counterproductive ways to use these tools, so let us explain why we built them.

First, the edtech community is extremely excited about personalized learning – students learning at their own pace, uninhibited by their teacher or classmates. Our Activity Builder shares some of that enthusiasm but not all. Until last week, students could click through an activity from the first screen to the last, inhibited by nothing and nobody.

But the cost of personalized learning is often a silent classroom. In the worst-case scenario, you’ll walk into a classroom and see students wearing headphones, plugged into computers, watching videos or clicking multiple choice questions with just enough interest to keep their eyes open. But even when the activities are more interesting and cognitively demanding than video-watching and multiple choice question-clicking, there is still an important cost. You lose collective effervescence.

Collective effervescence is a term that calls to mind the bubbles in fizzy liquid. It’s a term from Émile Durkheim used to describe a particular force that knits social groups together. Collective effervescence explains why you still attend church even though the sermons are online, why you still attend sporting events even though they’re broadcast in much higher quality with much more comfortable seats from your living room. Collective effervescence explains why we still go to movie theaters; laughing, crying, or screaming in a room full of people is more satisfying than laughing, crying, or screaming alone.

An illustrative anecdote. We were testing these features in classes last week. We watched a teacher – Lieva Whitbeck in San Francisco – elicit a manic cheer from a class of ninth-graders simply by revealing the graph of a line. She brought her class together and asked them to predict what they’d see when she turned on the graph. They buzzed for a moment together, predicted a line, and then she gave the crowd what they came for.

She brought them together. She brought back the kids who were a bit ahead and she brought forward the kids who were a bit behind. She de-personalized the learning so she could socialize it. Because arguments are best with other people. Because the negotiation of ideas is most effective when you’re negotiating with somebody. And because collective effervescence is impossible to experience alone.

So these tools could very easily have been called our Classroom Management Toolset. They are useful for managing a class, for pausing the work so you can issue a new prompt or so you can redirect your class. But we didn’t build them for those purposes. We built them to restore what we feel the personalized-learning moment has missed. We built them for conversation and collective effervescence.

Featured Tweets