Category: tech contrarianism


The #1 Most Requested Desmos Feature Right Now, and What We Could Do Instead

When schools started closing months ago, we heard two loud requests from teachers in our community. They wanted:

  1. Written feedback for students.
  2. Co-teacher access to student data.

Those sounded like unambiguously good ideas, whether schools were closed or not. Good pedagogy. Good technology. Good math. We made both.

Here is the new loudest request:

  1. Self-checking activities. Especially card sorts.

hey @Desmos – is there a simple way for students to see their accuracy for a matching graph/eqn card sort? thank you!

Is there a way to make a @Desmos card sort self checking? #MTBoS #iteachmath #remotelearning

@Desmos to help with virtual learning, is there a way to make it that students cannot advance to the next slide until their cardsort is completed correctly?

Let’s say you have students working on a card sort like this, matching graphs of web traffic pre- and post-coronavirus to the correct websites.

Linked card sort activity.

What kind of feedback would be most helpful for students here?

Feedback is supposed to change thinking. That’s its job. Ideally it develops student thinking, but some feedback diminishes it. For example, Kluger and DeNisi (1996) found that one-third of feedback interventions decreased performance.

Butler (1986) found that grades were less effective feedback than comments at developing both student thinking and intrinsic motivation. When the feedback came in the form of grades and comments, the results were the same as if the teacher had returned grades alone. Grades tend to catch and keep student attention.

So we could give students a button that tells them they’re right or wrong.

Resourceful teachers in our community have put together screens like this. Students press a button and see if their card sort is right or wrong.

Feedback that the student has less than half correct.
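For concreteness, here is a minimal sketch of what those self-check screens compute, written in Python rather than in Desmos's Computation Layer; the card names, group labels, and answer key below are hypothetical, not the actual activity's.

```python
# A rough sketch (not Desmos's Computation Layer) of what a self-check
# screen does: compare the student's groupings to an answer key and
# report only a count of correct matches.

ANSWER_KEY = {
    "Netflix": "Group A",
    "YouTube": "Group B",
    "Airline bookings": "Group C",
}

def check_sort(student_sort: dict) -> str:
    correct = sum(
        1 for card, group in student_sort.items()
        if ANSWER_KEY.get(card) == group
    )
    total = len(ANSWER_KEY)
    if correct == total:
        return "All matches correct!"
    return f"You have {correct} of {total} correct. Try again."

# A student with one card grouped correctly:
print(check_sort({
    "Netflix": "Group B",
    "YouTube": "Group B",
    "Airline bookings": "Group A",
}))
```

Notice what the return value withholds: which cards are wrong and why. That omission is the source of both concerns below.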

My concerns:

  1. If students find out that they’re right, will they simply stop thinking about the card sort, even if they could benefit from more thinking?
  2. If students find out that they’re wrong, do they have enough information related to the task to help them do more than guess and check their way to their next answer?

For example, in this video, you can see a student move between a card sort and the self-check screen three times in 11 seconds. Is the student having three separate mathematical realizations during that interval . . . or just guessing and checking?

On another card sort, students click the “Check Work” button up to 10 times.

https://www.desmos.com/calculator/axlhe3shwg

Instead we could tell students which card is the hardest for the class.

Our teacher dashboard will show teachers which card is hardest for students. I used the web traffic card sort last week when I taught Wendy Baty’s eighth grade class online. After a few minutes of early work, I told the students that “Netflix” had been the hardest card for them to correctly group and then invited them to think about their sort again.

I suspect that students gave the Netflix card some extra thought (e.g., “How should I think about the maximum y-value in these cards? Is Netflix more popular than YouTube or the other way around?”) even if they had matched the card correctly. I suspect this revelation helped every student develop their thinking more than if we simply told them their sort was right or wrong.
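Behind that dashboard summary is a simple aggregation. Here is a rough sketch of the idea in Python (the data structures and card names are hypothetical, not Desmos's implementation): tally how often each card was grouped incorrectly across the class, then surface the card with the most misses.

```python
from collections import Counter

# Hypothetical answer key and class data: each student's sort is a
# mapping from card name to the group they placed it in.
ANSWER_KEY = {"Netflix": "Group A", "YouTube": "Group A", "Zoom": "Group B"}

def hardest_card(class_sorts: list) -> str:
    """Return the card the most students grouped incorrectly."""
    misses = Counter()
    for sort in class_sorts:
        for card, group in sort.items():
            if ANSWER_KEY.get(card) != group:
                misses[card] += 1
    if not misses:
        return "Every card was sorted correctly by every student."
    card, count = misses.most_common(1)[0]
    return f"Hardest card for the class: {card} ({count} students missed it)"

print(hardest_card([
    {"Netflix": "Group B", "YouTube": "Group A", "Zoom": "Group B"},
    {"Netflix": "Group B", "YouTube": "Group A", "Zoom": "Group B"},
    {"Netflix": "Group A", "YouTube": "Group B", "Zoom": "Group B"},
]))
```

The result names a card rather than a student, which is what lets a teacher hand it back to the whole class as a prompt ("take another look at Netflix") rather than a verdict.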

We could also make it easier for students to see and comment on each other’s card sorts.

In this video, you can see Julie Reulbach and Christopher Danielson talking about their different sorts. I paired them up specifically because I knew their card sorts were different.

Christopher’s sort is wrong, and I suspect he benefited more from their conversation than he would from hearing a computer tell him he’s wrong.

Julie’s sort is right, and I suspect she benefited more from explaining and defending her sort than she would from hearing a computer tell her she’s right.

I suspect that conversations like theirs will also benefit students well beyond this particular card sort, helping them understand that “correctness” is something that’s determined and justified by people, not just answer keys, and that mathematical authority is endowed in students, not just in adults and computers.

Teachers could create reaction videos.

In this video, Johanna Langill doesn’t respond to every student’s idea individually. Instead, she looks for themes in student thinking, celebrates them, then connects and responds to those themes.

I suspect that students will learn more from Johanna’s holistic analysis of student work than they would from an individualized grade of “right” or “wrong.”

Our values are in conflict.

We want to build tools and curriculum for classes that actually exist, not for the classes of our imaginations or dreams. That’s why we field test our work relentlessly. It’s why we constantly shrink the amount of bandwidth our activities and tools require. It’s why we lead our field in accessibility.

We also want students to know that there are lots of interesting ways to be right in math class, and that wrong answers are useful for learning. That’s why we ask students to estimate, argue, notice, and wonder. It’s why we have built so many tools for facilitating conversations in math class. It’s also why we don’t generally give students immediate feedback that their answers are “right” or “wrong.” That kind of feedback often ends productive conversations before they begin.

But the classes that exist right now are hostile to the kinds of interactions we’d all like students to have with their teachers, with their classmates, and with math. Students are separated from one another by distance and time. Resources like attention, time, and technology are stretched. Mathematical conversations that were common in September are now impossible in May.

Our values are in conflict. It isn’t clear to me how we’ll resolve that conflict. Perhaps we’ll decide the best feedback we can offer students is a computer telling them they’re right or wrong, but I wanted to explore the alternatives first.

2020 May 25. The conversation continues at the Computation Layer Discourse Forum.

The 2010s of Math Edtech in Review

EdSurge invited me to review the last decade in math edtech.

Entrepreneurs had a mixed decade in K-16 math education. They accurately read the landscape in at least two ways: a) learning math is enormously challenging for most students, and b) computers are great at a lot of tasks. But they misunderstood why math is challenging to learn and put computers to work on the wrong task.

In a similar retrospective essay, Sal Khan wrote about the three assumptions he and his team got right at Khan Academy in the last decade. The first one was extremely surprising to me.

Teachers are the unwavering center of schooling and we should continue to learn from them every day.

Someone needs to hold my hand and help me understand how teachers are anywhere near the center of Khan Academy, a website that seems especially useful for people who do not have teachers.

Khan Academy tries to take from teachers the jobs of instruction (watch our videos) and assessment (complete our autograded items). It presumably leaves for teachers the job of monitoring and responding to assessment results but their dashboards run on a ten-minute delay, making that task really hard!

Teachers are very obviously peripheral, not central, to the work of Khan Academy and the same is true for much of math education technology in the 2010s. If entrepreneurs and founders are now alert to the unique value of teachers in a student’s math education, let’s hear them articulate that value and let’s see them re-design their tools to support it.

“If something cannot go on forever, it will stop.”

Economist Herb Stein’s quote ran through my head while I read The Hustle’s excellent analysis of the graphing calculator market. This cannot go on forever.

Every new school year, Twitter lights up with caregivers who can’t believe they have to buy their students a calculator that’s wildly underpowered and wildly overpriced relative to other consumer electronics.

tweet text: "Hello my 8th grade son is required to have a TI-84 for school but we just cannot afford one- do you have any programs you could recommend"

The Hustle describes Texas Instruments as having “a near-monopoly on graphing calculators for nearly three decades.” That means that some of the people who purchased TI calculators as students are now purchasing calculators for their own kids that look, feel, act, and (crucially) cost largely the same. Imagine they were purchasing their kid’s first car and the available cars all looked, felt, acted, and cost largely the same as their first car. This cannot go on forever.

As the chief academic officer at Desmos, a competitor of Texas Instruments calculators, I was already familiar with many of The Hustle’s findings. Even still, they illuminated two surprising elements of the Texas Instruments business model.

First, the profit margins.

One analyst placed the cost to produce a TI-84 Plus at around $15-20, meaning TI sells it for a profit margin of nearly 50% — far above the electronics industry’s average margin of 6.7%.

Second, the lobbying.

According to Open Secrets and ProPublica data, Texas Instruments paid lobbyists to hound the Department of Education every year from 2005 to 2009 — right around the time when mobile technology and apps were becoming more of a threat.

Obviously the profits and lobbying are interdependent. Rent-seeking occurs when companies invest profits not into product development but into manipulating regulatory environments to protect market share.

I’m not mad for the sake of Desmos here. What Texas Instruments is doing isn’t sustainable. Consumer tech is getting so good and cheap and our free alternative is getting used so widely that regulations and consumer demand are changing quickly.

Another source told The Hustle that graphing calculator sales have seen a 15% YoY decline in recent years — a trend that free alternatives like Desmos may be at least partially responsible for.

You’ll find our calculators embedded in over half of state-level end-of-course exams in the United States, along with the International Baccalaureate MYP exam, the digital SAT and the digital ACT.

I am mad for the sake of kids and families like this, though.

“It basically sucks,” says Marcus Grant, an 11th grader currently taking a pre-calculus course. “It was really expensive for my family. There are cheaper alternatives available, but my teacher makes [the TI calculator] mandatory and there’s no other option.”

Teachers: it was one thing to require plastic graphing calculators when better and cheaper alternatives weren’t available. But it should offend your conscience to see a private company suck 50% profit margins out of the pockets of struggling families for a product that is, by objective measurements, inferior to and more expensive than its competitors.

BTW. This is a Twitter-thread-turned-blog-post. If you want to know how teachers justified recommending plastic graphing calculators, you can read my mentions.

Big Online Courses Have a Problem. Here’s How We Tried to Fix It.

The Problem

Here is some personal prejudice: I don’t love online courses.

I love learning in community, even in online communities, but online courses rarely feel like community.

To be clear, by online courses I mean the kind that have been around almost since the start of the internet, the kind that were amplified into the “Future of Education™” in the form of MOOCs, and which continue today in a structure that would be easily recognized by someone defrosted after three decades in cold storage.

These courses are divided into modules. Each module has a resource like a video or a conversation prompt. Students are then told to respond to the resource or prompt in threaded comments. You’re often told to make sure you respond to a couple of other people’s responses. This is community in online courses.

The reality is that your comment quickly falls down a long list as other people comment, a problem that grows in proportion to the number of students in the course. The more people who enroll, the less attention your ideas receive and the less interested you become in contributing them: a reinforcing cycle that offers some insight into the question, “Why doesn’t anybody finish these online courses?”

I don’t love online courses, but maybe that’s just me. Two years ago, the ShadowCon organizers and I created four online courses to extend the community and ideas around four 10-minute talks from the NCTM annual conference. We hosted the courses using some of the most popular online course software.

The talks were really good. The assignments were really good. There’s always room for improvement but the facilitators would have had to quit their day jobs to increase the quality even 10%.

And still, retention was terrible. Only 3% of the participants who finished the first week’s assignment also finished the fourth week’s.

Low retention from Week 1 to Week 4 in the course.

The organizers and I had two hypotheses:

  • The size of the course enrollment inhibited community formation and consequently retention.
  • Teachers had to remember another login and website in order to participate in the course, creating friction that decreased retention.

Our Solution

For the following year’s online conference extensions, we wanted smaller groups and we wanted to go to the people, to whatever software they were already using, rather than make the people come to us.

So we used technology that’s even older than online course software, technology that is woven tightly into every teacher’s daily routine: email.

Teachers signed up for the courses. They signed up in affinity groups – coaches, K-5 teachers, or 6-12 teachers.

The assignments and resources they would have received in a forum posting, they received in an email CC’d to two or three other participants, as well as the instructor. They had their conversation in that small group rather than in a massive forum.

Of course this meant that participants wouldn’t see all of their classmates’ responses, including potentially helpful insights, the way they would have in a massive forum.

So the role of the instructors in this work wasn’t to respond to every email but rather to keep an eye out for interesting questions and helpful insights from participants. Then they’d preface the next email assignment with a digest of those questions and insights.
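For the mechanics, here is a minimal sketch of the grouping-and-sending step, assuming a plain list of sign-ups within one affinity group and a standard SMTP server; the addresses, group size, and server below are hypothetical, not the actual course setup.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical addresses; in practice these came from the sign-up form,
# already split into affinity groups (coaches, K-5, 6-12).
INSTRUCTOR = "instructor@example.com"
SIGNUPS = ["teacher1@example.com", "teacher2@example.com",
           "teacher3@example.com", "teacher4@example.com",
           "teacher5@example.com", "teacher6@example.com"]
GROUP_SIZE = 3  # two or three other participants plus the instructor

def chunk(addresses, size):
    """Split the sign-up list into small CC groups."""
    for i in range(0, len(addresses), size):
        yield addresses[i:i + size]

def send_assignment(subject, body, smtp_host="localhost"):
    """Send the same assignment to each small group on one shared thread."""
    with smtplib.SMTP(smtp_host) as server:
        for group in chunk(SIGNUPS, GROUP_SIZE):
            msg = EmailMessage()
            msg["From"] = INSTRUCTOR
            msg["To"] = ", ".join(group)
            msg["Cc"] = INSTRUCTOR  # instructor stays on every thread
            msg["Subject"] = subject
            msg.set_content(body)
            server.send_message(msg)
```

Reply-all on a thread like that keeps the conversation inside a group of three or four people, small enough that every response gets read, while the instructor can skim the threads for material to fold into the next digest.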

The Results

To be clear, the two trials featured different content, different instructors, different participants, and different grouping strategies. They took place in different years and different calendar months in those years. Both courses were free and about math, but there are plenty of variables that confound a direct comparison of the media.

So consider it merely interesting that average course retention was nearly five times higher when the medium was email rather than online course software.

Retention was nearly five times greater in the email course than in the LMS course.

It’s also just interesting, and still not dispositive, that the responses in the emails were 2x the length of the responses in the online course software.

Double the word count.

People wrote more and stuck around longer for email than for the online course software. That says nothing about the quality of their responses, just the quantity. It says nothing about the degree to which participants in either medium were building on each other’s ideas rather than simply speaking their own truth into the void.

But it does make me wonder, again, if large online courses are the right medium for creating an accessible community around important ideas in our field, or in any field.

What do you notice about this data? What does it make you wonder?

Featured Comments

Leigh Notaro:

By the way, the Global Math Department has a similar issue with sign-ups versus attendance. Our attendance rate is typically 5%-10% of those who sign up. Of course, we do have the videos and the transcript of the chat. So, we have made it easy for people to participate in their own time. Participating in PD by watching a video, though, is never the same thing as collaborating during a live event – virtually or face-to-face. It’s like learning in a flipped classroom. Sure, you can learn something, but you miss out on the richness of the learning that really can only happen in a face-to-face classroom of collaboration.

William Carey:

At our school now, when we try out new parent-teacher communication methods, we center them in e-mail, not our student information system. It’s more personal and more deeply woven into the teachers’ lives. It affords the opportunity for response and conversation in a way that a form-sent e-mail doesn’t.

Cathy Yenca:

At the risk of sounding cliché or boastful about reaching “that one student”, how does one represent a “data point” like this one within that tiny 3%? For me, it became 100% of the reason and reward for all of the work involved. I know, I know, I’m a sappy teacher :-)

Justin Reich is extremely thoughtful about MOOCs and online education and offered an excellent summary of some recent work.

2018 Oct 5. Definitely check out the perspective of Audrey, who was a participant in the email group and said she wouldn’t participate again.

2018 Oct 12. Rivka Kugelman had a much more positive experience in the email course than Audrey, one which seemed to hinge on her sense that her emails were actually getting read. Both she and Audrey speak to the challenge of cultivating community online.

Learning the Wrong Lessons from Video Games

[This is my contribution to The Virtual Conference on Mathematical Flavors, hosted by Sam Shah.]

In the early 20th century, Karl Groos claimed in The Play of Man that “the joy in being a cause” is fundamental to all forms of play. One hundred years later, Phil Daro would connect Groos’s theory of play to video gaming:

Every time the player acts, the game responds [and] tells the player your action causes the game action: you are the cause.

Most attempts to “gamify” math class learn the wrong lessons from video games. They import leaderboards, badges, customized avatars, timed competitions, points, and many other stylistic elements from video games. But gamified math software has struggled to import this substantive element:

Every time the player acts, the game responds.

When the math student acts, how does math class respond? And how is that response different in video games?

Watch how a video game responds to your decision to jump off a ledge.

Now watch how math practice software responds to your misinterpretation of “the quotient of 9 and c.”

The video game interprets your action in the world of the game. The math software evaluates your action for correctness. One results in the joy in being the cause, a fundamental feature of play according to Groos. The other results in something much less joyful.

To see the difference, imagine if the game evaluated your decision instead of interpreting it.
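To make the software side of that contrast concrete, here is a hedged sketch in Python with SymPy (the function names and the check are mine, not any particular product's): one function evaluates the student's expression against the intended answer, the other interprets it by showing what the expression actually does.

```python
import sympy as sp

c = sp.symbols("c")
CORRECT = sp.sympify("9/c")  # "the quotient of 9 and c"

def evaluate(student_input: str) -> str:
    """What most practice software does: reduce the response to right/wrong."""
    expr = sp.sympify(student_input)
    return "Correct!" if sp.simplify(expr - CORRECT) == 0 else "Incorrect."

def interpret(student_input: str) -> str:
    """Respond inside the student's world: show what their expression does."""
    expr = sp.sympify(student_input)
    rows = [f"  when c = {v}, your expression gives {expr.subs(c, v)}"
            for v in (1, 3, 9)]
    return f"You wrote {expr}.\n" + "\n".join(rows)

print(evaluate("c/9"))   # -> Incorrect.
print(interpret("c/9"))  # -> values the student can compare against 9/c
```

The second response gives the student something to reason about without pronouncing a verdict, which is closer to a game interpreting a jump than to a grader marking it.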

I doubt anyone would argue with the goals of making math class more joyful and playful, but those goals are more easily adapted to a poster or conference slidedeck than to the actual experience of math students and teachers.

So what does a math class look like that responds whenever a student acts mathematically, that interprets rather than evaluates mathematical thought, and that offers students joy in being the cause of something more than just evaluative feedback?

“Have students play mathematical or reasoning games,” is certainly a fair response, but bonus points if you have recommendations that apply to core academic content. I will offer a few examples and guidelines of my own in the comments later tomorrow.

Featured Comments

James Cleveland:

I feel like a lot of the best Desmos activities do that, because they can interpret (some of) what the learner inputs. When you do the pool border problem, it doesn’t tell you that your number of bricks is wrong – it just makes the bricks, and you can see if that is too many, too few, or just right.

In general, a reaction like “Well, let’s see what happens if that were true” seems like a good place to start.

Kevin Hall:

My favorite example of this is when Cannon Man’s body suddenly multiplies into two or three bodies if a student draws a graph that fails the vertical line test.

Sarah Caban:

I am so intrigued by the word interpret. “Interpret” is about translating, right? Sometimes when we try to interpret, we (unintentionally) make assumptions based on our own experiences. Recently, I have been pushing myself to linger in observing students as they work, postponing interpretations. I have even picked up a pencil and “tried on” their strategies, particularly ones that are seemingly not getting to a correct solution. I have consistently been joyfully surprised by the math my students were playing with. I’m wondering how this idea of “trying on” student thinking fits with technology. When/how does technology help us try on more student thinking?

Dan Finkel:

I think that many physical games give clear [evaluative] feedback as well, insofar as you test out a strategy, and see if you win or not. Adults can ruin these for children by saying, “are you sure that’s the right move?” rather than simply beating them so they can see what happens when they make that move. The trick there is that some games you improve at simply by losing (I’d put chess in this column, even though more focused study is essential to get really good), where others require more insight to see what you actually need to change.