When schools started closing months ago, we heard two loud requests from teachers in our community.
Those sounded like unambiguously good ideas, whether schools were closed or not. Good pedagogy. Good technology. Good math. We made both.
Here is the new loudest request:
- Self-checking activities. Especially card sorts.
- hey @Desmos – is there a simple way for students to see their accuracy for a matching graph/eqn card sort? thank you!
- Is there a way to make a @Desmos card sort self checking? #MTBoS #iteachmath #remotelearning
- @Desmos to help with virtual learning, is there a way to make it that students cannot advance to the next slide until their cardsort is completed correctly?
Let’s say you have students working on a card sort like this, matching graphs of web traffic pre- and post-coronavirus to the correct websites.
What kind of feedback would be most helpful for students here?
Feedback is supposed to change thinking. That’s its job. Ideally it develops student thinking, but some feedback diminishes it. For example, Kluger and DeNisi (1996) found that one-third of feedback interventions decreased performance.
Butler (1986) found that comments were more effective feedback than grades at developing both student thinking and intrinsic motivation. When feedback came in the form of grades and comments together, the results were the same as if the teacher had returned grades alone. Grades tend to catch and keep student attention.
So we could give students a button that tells them they’re right or wrong.
Resourceful teachers in our community have put together screens like this. Students press a button and see if their card sort is right or wrong.
- If students find out that they’re right, will they simply stop thinking about the card sort, even if they could benefit from more thinking?
- If students find out that they’re wrong, do they have enough information related to the task to help them do more than guess and check their way to their next answer?
For example, in this video, you can see a student move between a card sort and the self-check screen three times in 11 seconds. Is the student having three separate mathematical realizations during that interval . . . or just guessing and checking?
On another card sort, students click the “Check Work” button up to 10 times.
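Under the hood, a "Check Work" button like this reduces to a single comparison against an answer key: the sort is "right" only if every card sits in its keyed group, and the student learns nothing else. A minimal sketch (the cards, groups, and function name here are invented for illustration, not Desmos's implementation):

```python
# Hypothetical answer key: each card mapped to its correct group.
ANSWER_KEY = {
    "netflix": "streaming",
    "youtube": "streaming",
    "zoom": "conferencing",
}

def check_sort(student_sort: dict) -> bool:
    """Return True only when every card is grouped exactly as in the key.

    Note how little this tells the student: one bit, with no hint about
    which card is misplaced or why.
    """
    return all(
        student_sort.get(card) == group
        for card, group in ANSWER_KEY.items()
    )

print(check_sort({"netflix": "streaming", "youtube": "streaming",
                  "zoom": "conferencing"}))  # True
print(check_sort({"netflix": "conferencing", "youtube": "streaming",
                  "zoom": "conferencing"}))  # False
```

That one-bit output is exactly why guess-and-check loops like the ones in the videos above are possible: flipping cards at random will eventually flip the bit.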
Instead we could tell students which card is the hardest for the class.
Our teacher dashboard will show teachers which card is hardest for students. I used the web traffic card sort last week when I taught Wendy Baty's eighth-grade class online. After a few minutes of early work, I told the students that "Netflix" had been the hardest card for them to correctly group and then invited them to think about their sort again.
I suspect that students gave the Netflix card some extra thought (e.g., “How should I think about the maximum y-value in these cards? Is Netflix more popular than YouTube or the other way around?”) even if they had matched the card correctly. I suspect this revelation helped every student develop their thinking more than if we simply told them their sort was right or wrong.
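The "hardest card" computation above is a simple aggregation: across every student's sort, count how often each card lands in the wrong group, and report the most-missed card. A sketch under invented names (this is not Desmos's dashboard code; the key, groups, and function are hypothetical):

```python
from collections import Counter

# Hypothetical answer key: each website card mapped to its correct graph.
ANSWER_KEY = {"netflix": "graph_a", "youtube": "graph_b", "zoom": "graph_c"}

def hardest_card(student_sorts: list) -> str:
    """Return the card misplaced by the most students.

    Ties and the all-correct case are ignored here for brevity.
    """
    misses = Counter()
    for sort in student_sorts:
        for card, correct_group in ANSWER_KEY.items():
            if sort.get(card) != correct_group:
                misses[card] += 1
    card, _count = misses.most_common(1)[0]
    return card

class_sorts = [
    {"netflix": "graph_b", "youtube": "graph_a", "zoom": "graph_c"},
    {"netflix": "graph_b", "youtube": "graph_b", "zoom": "graph_c"},
    {"netflix": "graph_a", "youtube": "graph_b", "zoom": "graph_c"},
]
print(hardest_card(class_sorts))  # netflix
```

Unlike a right/wrong bit, this summary gives every student something to reconsider, whether or not their own sort was correct.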
We could also make it easier for students to see and comment on each other’s card sorts.
In this video, you can see Julie Reulbach and Christopher Danielson talking about their different sorts. I paired them up specifically because I knew their card sorts were different.
Christopher’s sort is wrong, and I suspect he benefited more from their conversation than he would from hearing a computer tell him he’s wrong.
Julie’s sort is right, and I suspect she benefited more from explaining and defending her sort than she would from hearing a computer tell her she’s right.
I suspect that conversations like theirs will also benefit students well beyond this particular card sort, helping them understand that "correctness" is determined and justified by people, not just answer keys, and that mathematical authority is vested in students, not just in adults and computers.
Teachers could create reaction videos.
In this video, Johanna Langill doesn’t respond to every student’s idea individually. Instead, she looks for themes in student thinking, celebrates them, then connects and responds to those themes.
I suspect that students will learn more from Johanna's holistic analysis of student work than they would from an individualized grade of "right" or "wrong."
Our values are in conflict.
We want to build tools and curriculum for classes that actually exist, not for the classes of our imaginations or dreams. That’s why we field test our work relentlessly. It’s why we constantly shrink the amount of bandwidth our activities and tools require. It’s why we lead our field in accessibility.
We also want students to know that there are lots of interesting ways to be right in math class, and that wrong answers are useful for learning. That’s why we ask students to estimate, argue, notice, and wonder. It’s why we have built so many tools for facilitating conversations in math class. It’s also why we don’t generally give students immediate feedback that their answers are “right” or “wrong.” That kind of feedback often ends productive conversations before they begin.
But the classes that exist right now are hostile to the kinds of interactions we’d all like students to have with their teachers, with their classmates, and with math. Students are separated from one another by distance and time. Resources like attention, time, and technology are stretched. Mathematical conversations that were common in September are now impossible in May.
Our values are in conflict. It isn’t clear to me how we’ll resolve that conflict. Perhaps we’ll decide the best feedback we can offer students is a computer telling them they’re right or wrong, but I wanted to explore the alternatives first.
2020 May 25. The conversation continues at the Computation Layer Discourse Forum.