
Computer Feedback That Helps Kids Learn About Math and About Themselves

Students are receiving more feedback from computers this year than ever before. What does that feedback look like, and what does it teach students about mathematics and about themselves as mathematicians?

Here is a question we might ask math students: what is this coordinate?

A target point at (4,5).

Let’s say a student types in (5, 4), a very thoughtful wrong answer. (“Wrong and brilliant,” one might say.) Here are several ways a computer might react to that wrong answer.

1. “You’re wrong.”

A red x appears next to the target point.

This is the most common way computers respond to a student’s idea. But (5, 4) receives the same feedback as answers like (1000, 1000) or “idk,” even though (5, 4) arguably involves a lot more thought from the student and a lot more of their sense of themselves as a mathematician.

This feedback says all of those ideas are the same kind of wrong.
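To make the problem concrete, here is a minimal sketch in Python (an illustration only, not the code behind any actual Desmos activity) of what purely evaluative feedback computes: every non-matching answer, thoughtful or not, collapses to the same response.

```python
# A minimal sketch of purely evaluative feedback (illustration only).
# Every answer that isn't exactly right receives the identical response.

TARGET = (4, 5)

def evaluative_feedback(answer):
    """Return the same red-x message for every wrong answer."""
    return "Correct!" if answer == TARGET else "✗ You're wrong."

# (5, 4), (1000, 1000), and "idk" all collapse to identical feedback,
# even though they reflect very different amounts of student thinking.
for answer in [(5, 4), (1000, 1000), "idk"]:
    print(answer, "->", evaluative_feedback(answer))
```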

2. “You’re wrong, but it’s okay.”

A red x and a reassuring message appear next to the target point.

The shortcoming of evaluative feedback (these binary judgments of “right” and “wrong”) isn’t just that it isn’t nice enough or that it neglects a student’s emotional state. It’s that it doesn’t attach enough meaning to the student’s thinking. The prime directive of feedback is, per Dylan Wiliam, to “cause more thinking.” Evaluative feedback fails that directive because it doesn’t attach sufficient meaning to a student’s thought to cause more thinking.

3. “You’re wrong, and here’s why.”

A red x and a message that the student might have switched the coordinates appears next to the target point.

It’s tempting to write down a list of all possible reasons a student might have given different wrong answers, and then respond to each one conditionally. For example, here we might program the computer to say, “Did you switch your coordinates?”
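Sketched in the same hypothetical Python terms as above (again, not the activity's actual code), that approach amounts to an enumerated list of anticipated mistakes, each mapped to a canned response; anything the author didn't anticipate falls through to a generic "wrong."

```python
# A sketch of "you're wrong, and here's why" feedback (illustration only).
# The author lists anticipated wrong answers and responds to each conditionally.

TARGET = (4, 5)

def diagnostic_feedback(answer):
    """Match the answer against a hand-written list of anticipated mistakes."""
    if answer == TARGET:
        return "Correct!"
    if answer == (TARGET[1], TARGET[0]):
        return "✗ Did you switch your coordinates?"
    return "✗ You're wrong."  # every mistake the author didn't anticipate

print(diagnostic_feedback((5, 4)))   # the anticipated coordinate swap
print(diagnostic_feedback("idk"))    # falls through to the generic message
```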

Certainly, this makes an attempt at attaching meaning to a student’s thinking that the other examples so far have not. But the meaning is often an expert’s meaning and attaches only loosely to the novice’s. The student may have to work as hard to understand the feedback (the word “coordinate” may be new, for example) as to use it.

4. “Let me see if I understand you here.”

No red x or message. The student's point moves out from the origin next to the target point.

Alternately, we can ask computers to clear their throats a bit and say, “Let me see if I understand you here. Is this what you meant?”

We make no assumption that the student understands what the problem is asking, or that we understand why the student gave their answer. We just attach as much meaning as we can to the student’s thinking in a world that’s familiar to them.

“How can I attach more meaning to a student’s thought?”

This animation, for example, attaches the fact that the relationship to the origin has horizontal and vertical components. We trust students to make sense of what they’re seeing. Then we give them an opportunity to use that new sense to try again.

A point moves out from the origin along the horizontal axis and then vertically up to the student's point.
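As a rough sketch of what the animation communicates (hypothetical Python again, not the activity's actual code): take whatever point the student typed and decompose its relationship to the origin into a horizontal leg and a vertical leg, with no judgment attached.

```python
# A sketch of "interpretive" feedback (illustration only): instead of judging
# the answer, decompose the student's point into horizontal and vertical
# legs from the origin, which is the path the animation traces.

def animation_path(student_point, steps_per_leg=5):
    """Return keyframes: along the x-axis first, then up/down to the point."""
    x, y = student_point
    horizontal = [(x * t / steps_per_leg, 0) for t in range(steps_per_leg + 1)]
    vertical = [(x, y * t / steps_per_leg) for t in range(1, steps_per_leg + 1)]
    return horizontal + vertical

# A student who typed (5, 4) sees their point travel 5 right, then 4 up,
# which they can compare with the target at (4, 5) and then try again.
for frame in animation_path((5, 4)):
    print(frame)
```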

This “interpretive” feedback is the kind we use most frequently in our Desmos curriculum, and it’s often easier to build than the evaluative feedback, which requires images, conditionality, and more programming.

Honestly, “programming” isn’t even the right word to describe what we’re doing here.

We’re building worlds. I’m not overstating the matter. Educators build worlds in the same way that game developers and storytellers build worlds.

That world here is called “the coordinate plane,” a world we built in a computer. But even more often, the world we build is a physical or a video classroom, and the question, “How can I attach more meaning to a student’s thought?” is a great question in each of those worlds. Whenever you receive a student’s thought and tell them what interests you about it, or what it makes you wonder, or you ask the class if anyone has any questions about that thought, or you connect it to another student’s thought, you are attaching meaning to that student’s thinking.

Every time you work to attach meaning to student thinking, you help students learn more math and you help them learn about themselves as mathematical thinkers. You help them understand, implicitly, that their thoughts are valuable. And if students become habituated to that feeling, they might just come to understand that they are valuable themselves, as students, as thinkers, and as people.

BTW. If you’d like to learn how to make this kind of feedback, check out this segment on last week’s #DesmosLive. It took four lines of programming using Computation Layer in Desmos Activity Builder.

BTW. I posted this in the form of a question on Twitter, where it started a lot of discussion. Two people made very popular suggestions for different ways to attach meaning to student thought here.

The #1 Most Requested Desmos Feature Right Now, and What We Could Do Instead

When schools started closing months ago, we heard two loud requests from teachers in our community. They wanted:

  1. Written feedback for students.
  2. Co-teacher access to student data.

Those sounded like unambiguously good ideas, whether schools were closed or not. Good pedagogy. Good technology. Good math. We made both.

Here is the new loudest request:

  1. Self-checking activities. Especially card sorts.

hey @Desmos – is there a simple way for students to see their accuracy for a matching graph/eqn card sort? thank you!

Is there a way to make a @Desmos card sort self checking? #MTBoS #iteachmath #remotelearning

@Desmos to help with virtual learning, is there a way to make it that students cannot advance to the next slide until their cardsort is completed correctly?

Let’s say you have students working on a card sort like this, matching graphs of web traffic pre- and post-coronavirus to the correct websites.

Linked card sort activity.

What kind of feedback would be most helpful for students here?

Feedback is supposed to change thinking. That’s its job. Ideally it develops student thinking, but some feedback diminishes it. For example, Kluger and DeNisi (1996) found that one-third of feedback interventions decreased performance.

Butler (1986) found that grades were less effective feedback than comments at developing both student thinking and intrinsic motivation. When the feedback came in the form of grades and comments, the results were the same as if the teacher had returned grades alone. Grades tend to catch and keep student attention.

So we could give students a button that tells them they’re right or wrong.

Resourceful teachers in our community have put together screens like this. Students press a button and see if their card sort is right or wrong.

Feedback that the student has less than half correct.
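Here is roughly the logic those teacher-built screens implement, sketched with a made-up card-sort data structure (not Desmos's actual format): compare each card's placement to an answer key and report the tally.

```python
# A sketch of a self-check button for a card sort (illustration only; the
# data structure is invented, not the Desmos card sort format).

ANSWER_KEY = {
    "graph_a": "Netflix",
    "graph_b": "YouTube",
    "graph_c": "Zoom",
    "graph_d": "LinkedIn",
}

def check_work(student_sort):
    """Count how many cards the student placed in the correct group."""
    correct = sum(1 for card, group in student_sort.items()
                  if ANSWER_KEY.get(card) == group)
    return f"You have {correct} of {len(ANSWER_KEY)} cards correct."

student_sort = {"graph_a": "YouTube", "graph_b": "Netflix",
                "graph_c": "Zoom", "graph_d": "LinkedIn"}
print(check_work(student_sort))  # "You have 2 of 4 cards correct."
```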

My concerns:

  1. If students find out that they’re right, will they simply stop thinking about the card sort, even if they could benefit from more thinking?
  2. If students find out that they’re wrong, do they have enough information related to the task to help them do more than guess and check their way to their next answer?

For example, in this video, you can see a student move between a card sort and the self-check screen three times in 11 seconds. Is the student having three separate mathematical realizations during that interval . . . or just guessing and checking?

On another card sort, students click the “Check Work” button up to 10 times.

https://www.desmos.com/calculator/axlhe3shwg

Instead we could tell students which card is the hardest for the class.

Our teacher dashboard will show teachers which card is hardest for students. I used the web traffic card sort last week when I taught Wendy Baty’s eighth grade class online. After a few minutes of early work, I told the students that “Netflix” had been the hardest card for them to correctly group and then invited them to think about their sort again.

I suspect that students gave the Netflix card some extra thought (e.g., “How should I think about the maximum y-value in these cards? Is Netflix more popular than YouTube or the other way around?”) even if they had matched the card correctly. I suspect this revelation helped every student develop their thinking more than if we simply told them their sort was right or wrong.
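A sketch of the aggregation behind that dashboard view, using the same made-up data structure as the earlier card-sort example: compute each card's miss rate across the class and surface the card missed most often.

```python
# A sketch of "which card was hardest for the class" (illustration only).
# Given every student's sort, compute each card's miss rate and report the worst.

ANSWER_KEY = {"graph_a": "Netflix", "graph_b": "YouTube",
              "graph_c": "Zoom", "graph_d": "LinkedIn"}

def hardest_card(class_sorts):
    """Return the card the largest share of students grouped incorrectly."""
    misses = {card: 0 for card in ANSWER_KEY}
    for sort in class_sorts:
        for card, correct_group in ANSWER_KEY.items():
            if sort.get(card) != correct_group:
                misses[card] += 1
    card = max(misses, key=misses.get)
    return card, misses[card] / len(class_sorts)

class_sorts = [
    {"graph_a": "YouTube", "graph_b": "Netflix", "graph_c": "Zoom", "graph_d": "LinkedIn"},
    {"graph_a": "Netflix", "graph_b": "YouTube", "graph_c": "LinkedIn", "graph_d": "Zoom"},
    {"graph_a": "YouTube", "graph_b": "Netflix", "graph_c": "Zoom", "graph_d": "LinkedIn"},
]
card, rate = hardest_card(class_sorts)
print(f"Hardest card: {card} (missed by {rate:.0%} of the class)")
```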

We could also make it easier for students to see and comment on each other’s card sorts.

In this video, you can see Julie Reulbach and Christopher Danielson talking about their different sorts. I paired them up specifically because I knew their card sorts were different.

Christopher’s sort is wrong, and I suspect he benefited more from their conversation than he would from hearing a computer tell him he’s wrong.

Julie’s sort is right, and I suspect she benefited more from explaining and defending her sort than she would from hearing a computer tell her she’s right.

I suspect that conversations like theirs will also benefit students well beyond this particular card sort, helping them understand that “correctness” is something that’s determined and justified by people, not just answer keys, and that mathematical authority is endowed in students, not just in adults and computers.

Teachers could create reaction videos.

In this video, Johanna Langill doesn’t respond to every student’s idea individually. Instead, she looks for themes in student thinking, celebrates them, then connects and responds to those themes.

I suspect that students will learn more from Johanna’s holistic analysis of student work than they would from an individualized grade of “right” or “wrong.”

Our values are in conflict.

We want to build tools and curriculum for classes that actually exist, not for the classes of our imaginations or dreams. That’s why we field test our work relentlessly. It’s why we constantly shrink the amount of bandwidth our activities and tools require. It’s why we lead our field in accessibility.

We also want students to know that there are lots of interesting ways to be right in math class, and that wrong answers are useful for learning. That’s why we ask students to estimate, argue, notice, and wonder. It’s why we have built so many tools for facilitating conversations in math class. It’s also why we don’t generally give students immediate feedback that their answers are “right” or “wrong.” That kind of feedback often ends productive conversations before they begin.

But the classes that exist right now are hostile to the kinds of interactions we’d all like students to have with their teachers, with their classmates, and with math. Students are separated from one another by distance and time. Resources like attention, time, and technology are stretched. Mathematical conversations that were common in September are now impossible in May.

Our values are in conflict. It isn’t clear to me how we’ll resolve that conflict. Perhaps we’ll decide the best feedback we can offer students is a computer telling them they’re right or wrong, but I wanted to explore the alternatives first.

2020 May 25. The conversation continues at the Computation Layer Discourse Forum.

The 2010s of Math Edtech in Review

EdSurge invited me to review the last decade in math edtech.

Entrepreneurs had a mixed decade in K-16 math education. They accurately read the landscape in at least two ways: a) learning math is enormously challenging for most students, and b) computers are great at a lot of tasks. But they misunderstood why math is challenging to learn and put computers to work on the wrong task.

In a similar retrospective essay, Sal Khan wrote about the three assumptions he and his team got right at Khan Academy in the last decade. The first one was extremely surprising to me.

Teachers are the unwavering center of schooling and we should continue to learn from them every day.

Someone needs to hold my hand and help me understand how teachers are anywhere near the center of Khan Academy, a website that seems especially useful for people who do not have teachers.

Khan Academy tries to take from teachers the jobs of instruction (watch our videos) and assessment (complete our autograded items). It presumably leaves for teachers the job of monitoring and responding to assessment results but their dashboards run on a ten-minute delay, making that task really hard!

Teachers are very obviously peripheral, not central, to the work of Khan Academy and the same is true for much of math education technology in the 2010s. If entrepreneurs and founders are now alert to the unique value of teachers in a student’s math education, let’s hear them articulate that value and let’s see them re-design their tools to support it.

“If something cannot go on forever, it will stop.”

Economist Herb Stein’s quote ran through my head while I read The Hustle’s excellent analysis of the graphing calculator market. This cannot go on forever.

Every new school year, Twitter lights up with caregivers who can’t believe they have to buy their students a calculator that’s wildly underpowered and wildly overpriced relative to other consumer electronics.

tweet text: "Hello my 8th grade son is required to have a TI-84 for school but we just cannot afford one- do you have any programs you could recommend"

The Hustle describes Texas Instruments as having “a near-monopoly on graphing calculators for nearly three decades.” That means that some of the students who purchased TI calculators as college students are now purchasing calculators for their own kids that look, feel, act and (crucially) cost largely the same. Imagine they were purchasing their kid’s first car and the available cars all looked, felt, acted, and cost largely the same as their first car. This cannot go on forever.

As the chief academic officer at Desmos, a competitor to Texas Instruments’ calculators, I was already familiar with many of The Hustle’s findings. Even still, they illuminated two surprising elements of the Texas Instruments business model.

First, the profit margins.

One analyst placed the cost to produce a TI-84 Plus at around $15-20, meaning TI sells it for a profit margin of nearly 50% – far above the electronics industry’s average margin of 6.7%.

Second, the lobbying.

According to Open Secrets and ProPublica data, Texas Instruments paid lobbyists to hound the Department of Education every year from 2005 to 2009 – right around the time when mobile technology and apps were becoming more of a threat.

Obviously the profits and lobbying are interdependent. Rent-seeking occurs when companies invest profits not into product development but into manipulating regulatory environments to protect market share.

I’m not mad for the sake of Desmos here. What Texas Instruments is doing isn’t sustainable. Consumer tech is getting so good and cheap and our free alternative is getting used so widely that regulations and consumer demand are changing quickly.

Another source told The Hustle that graphing calculator sales have seen a 15% YoY decline in recent years – a trend that free alternatives like Desmos may be at least partially responsible for.

You’ll find our calculators embedded in over half of state-level end-of-course exams in the United States, along with the International Baccalaureate MYP exam, the digital SAT and the digital ACT.

I am mad for the sake of kids and families like this, though.

“It basically sucks,” says Marcus Grant, an 11th grader currently taking a pre-calculus course. “It was really expensive for my family. There are cheaper alternatives available, but my teacher makes [the TI calculator] mandatory and there’s no other option.”

Teachers: it was one thing to require plastic graphing calculators when better and cheaper alternatives weren’t available. But it should offend your conscience to see a private company suck 50% profit margins out of the pockets of struggling families for a product that is, by objective measurements, inferior to and more expensive than its competitors.

BTW. This is a Twitter-thread-turned-blog-post. If you want to know how teachers justified recommending plastic graphing calculators, you can read my mentions.

Big Online Courses Have a Problem. Here’s How We Tried to Fix It.

The Problem

Here is some personal prejudice: I don’t love online courses.

I love learning in community, even in online communities, but online courses rarely feel like community.

To be clear, by online courses I mean the kind that have been around almost since the start of the internet, the kind that were amplified into the “Future of Education™” in the form of MOOCs, and which continue today in a structure that would be easily recognized by someone defrosted after three decades in cold storage.

These courses are divided into modules. Each module has a resource like a video or a conversation prompt. Students are then told to respond to the resource or prompt in threaded comments. You’re often told to make sure you respond to a couple of other people’s responses. This is community in online courses.

The reality is that your comment falls quickly down a long list as other people comment, a problem that grows in proportion to the number of students in the course. The more people who enroll, the less attention your ideas receive, and consequently the less interested you are in contributing them, a vicious cycle which offers some insight into the question, “Why doesn’t anybody finish these online courses?”

I don’t love online courses, but maybe that’s just me. Two years ago, the ShadowCon organizers and I created four online courses to extend the community and ideas around four 10-minute talks from the NCTM annual conference. We hosted the courses using some of the most popular online course software.

The talks were really good. The assignments were really good. There’s always room for improvement but the facilitators would have had to quit their day jobs to increase the quality even 10%.

And still, retention was terrible. Only 3% of the participants who finished the first week’s assignment also finished the fourth week’s.

Low retention from Week 1 to Week 4 in the course.

The organizers and I had two hypotheses:

  • The size of the course enrollment inhibited community formation and consequently retention.
  • Teachers had to remember another login and website in order to participate in the course, creating friction that decreased retention.

Our Solution

For the following year’s online conference extensions, we wanted smaller groups and we wanted to go to the people, to whatever software they were already using, rather than make the people come to us.

So we used technology that’s even older than online course software, technology that is woven tightly into every teacher’s daily routine: email.

Teachers signed up for the courses. They signed up in affinity groups — coaches, K-5 teachers, or 6-12 teachers.

The assignments and resources they would have received in a forum posting, they received in an email CC’d to two or three other participants, as well as the instructor. They had their conversation in that small group rather than in a massive forum.

Of course this meant that participants wouldn’t see all their classmates’ responses in the massive forum, including potentially helpful insights.

So the role of the instructors in this work wasn’t to respond to every email but rather to keep an eye out for interesting questions and helpful insights from participants. Then they’d preface the next email assignment with a digest of interesting responses from course participants.
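As a rough sketch of the mechanics (group size, addresses, and helper names here are hypothetical; the post doesn't describe the exact tooling): partition each affinity group into small CC groups, then address each assignment email to one group plus the instructor.

```python
# A sketch of the email-course mechanics (illustration only; names, group
# size, and addresses are invented). Participants are partitioned into small
# groups within their affinity group, and each assignment goes out as one
# email CC'd to the rest of the group and the instructor.

import random

def make_groups(participants, group_size=3, seed=0):
    """Shuffle a copy of the sign-up list and split it into small groups."""
    pool = participants[:]
    random.Random(seed).shuffle(pool)
    return [pool[i:i + group_size] for i in range(0, len(pool), group_size)]

def assignment_email(group, instructor, assignment):
    """Build the To/CC header and body for one small-group assignment email."""
    return {"to": group[0], "cc": group[1:] + [instructor], "body": assignment}

k5_teachers = [f"teacher{i}@example.com" for i in range(1, 8)]
for group in make_groups(k5_teachers):
    email = assignment_email(group, "instructor@example.com",
                             "Week 1: watch the talk and reply-all with your thoughts.")
    print(email["to"], "cc:", email["cc"])
```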

The Results

To be clear, the two trials featured different content, different instructors, different participants, and different grouping strategies. They took place in different years and different calendar months in those years. Both courses were free and about math, but there are plenty of variables that confound a direct comparison of the media.

So consider it merely interesting that average course retention was nearly five times higher when the medium was email rather than online course software.

Retention was nearly five times greater in the email course than in the LMS course.

It’s also just interesting, and still not dispositive, that the length of the responses in the emails was 2x the length of the responses in the online course software.

Double the word count.

People wrote more and stuck around longer for email than for the online course software. That says nothing about the quality of their responses, just the quantity. It says nothing about the degree to which participants in either medium were building on each other’s ideas rather than simply speaking their own truth into the void.

But it does make me wonder, again, if large online courses are the right medium for creating an accessible community around important ideas in our field, or in any field.

What do you notice about this data? What does it make you wonder?

Featured Comments

Leigh Notaro:

By the way, the Global Math Department has a similar issue with sign-ups versus attendance. Our attendance rate is typically 5%-10% of those who sign up. Of course, we do have the videos and the transcript of the chat. So, we have made it easy for people to participate in their own time. Participating in PD by watching a video though is never the same thing as collaborating during a live event – virtually or face-to-face. It’s like learning in a flipped classroom. Sure, you can learn something, but you miss out on the richness of the learning that really can only happen in a face-to-face classroom of collaboration.

William Carey:

At our school now, when we try out new parent-teacher communication methods, we center them in e-mail, not our student information system. It’s more personal and more deeply woven into the teachers’ lives. It affords the opportunity for response and conversation in a way that a form-sent e-mail doesn’t.

Cathy Yenca:

At the risk of sounding cliché or boastful about reaching “that one student”, how does one represent a “data point” like this one within that tiny 3%? For me, it became 100% of the reason and reward for all of the work involved. I know, I know, I’m a sappy teacher :-)

Justin Reich is extremely thoughtful about MOOCs and online education and offered an excellent summary of some recent work.

2018 Oct 5. Definitely check out the perspective of Audrey, who was a participant in the email group and said she wouldn’t participate again.

2018 Oct 12. Rivka Kugelman had a much more positive experience in the email course than Audrey, one which seemed to hinge on her sense that her emails were actually getting read. Both she and Audrey speak to the challenge of cultivating community online.