The #1 Most Requested Desmos Feature Right Now, and What We Could Do Instead

When schools started closing months ago, we heard two loud requests from teachers in our community. They wanted:

  1. Written feedback for students.
  2. Co-teacher access to student data.

Those sounded like unambiguously good ideas, whether schools were closed or not. Good pedagogy. Good technology. Good math. We made both.

Here is the new loudest request:

  1. Self-checking activities. Especially card sorts.

hey @Desmos – is there a simple way for students to see their accuracy for a matching graph/eqn card sort? thank you!

Is there a way to make a @Desmos card sort self checking? #MTBoS #iteachmath #remotelearning

@Desmos to help with virtual learning, is there a way to make it that students cannot advance to the next slide until their cardsort is completed correctly?

Let’s say you have students working on a card sort like this, matching graphs of web traffic pre- and post-coronavirus to the correct websites.

Linked card sort activity.

What kind of feedback would be most helpful for students here?

Feedback is supposed to change thinking. That’s its job. Ideally it develops student thinking, but some feedback diminishes it. For example, Kluger and DeNisi (1996) found that one-third of feedback interventions decreased performance.

Butler (1986) found that grades were less effective feedback than comments at developing both student thinking and intrinsic motivation. When the feedback came in the form of grades and comments, the results were the same as if the teacher had returned grades alone. Grades tend to catch and keep student attention.

So we could give students a button that tells them they’re right or wrong.

Resourceful teachers in our community have put together screens like this. Students press a button and see if their card sort is right or wrong.

Feedback that the student has less than half correct.

My concerns:

  1. If students find out that they’re right, will they simply stop thinking about the card sort, even if they could benefit from more thinking?
  2. If students find out that they’re wrong, do they have enough information related to the task to help them do more than guess and check their way to their next answer?

For example, in this video, you can see a student move between a card sort and the self-check screen three times in 11 seconds. Is the student having three separate mathematical realizations during that interval . . . or just guessing and checking?

On another card sort, students click the “Check Work” button up to 10 times.

https://www.desmos.com/calculator/axlhe3shwg

Instead we could tell students which card is the hardest for the class.

Our teacher dashboard will show teachers which card is hardest for students. I used the web traffic card sort last week when I taught Wendy Baty’s eighth grade class online. After a few minutes of early work, I told the students that “Netflix” had been the hardest card for them to correctly group and then invited them to think about their sort again.
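
Desmos hasn't published how the dashboard identifies the hardest card, but the underlying aggregation is easy to picture. Here is a hypothetical sketch in Python — the function name and data are invented for illustration, not taken from any Desmos code:

```python
from collections import Counter

def hardest_card(sorts, answer_key):
    """Return the card students most often placed in the wrong group.

    sorts: list of dicts mapping card name -> group the student chose
    answer_key: dict mapping card name -> correct group
    """
    misses = Counter()
    for sort in sorts:
        for card, correct_group in answer_key.items():
            if sort.get(card) != correct_group:
                misses[card] += 1
    if not misses:
        return None  # every student sorted every card correctly
    return misses.most_common(1)[0][0]

# Hypothetical data: three students matching traffic graphs to sites.
key = {"Netflix": "A", "Zoom": "B", "ESPN": "C"}
students = [
    {"Netflix": "B", "Zoom": "B", "ESPN": "C"},  # Netflix misplaced
    {"Netflix": "A", "Zoom": "B", "ESPN": "C"},  # all correct
    {"Netflix": "C", "Zoom": "B", "ESPN": "C"},  # Netflix misplaced
]
print(hardest_card(students, key))  # → Netflix
```

Whatever the real implementation, the point is that a per-card error count gives the teacher one conversation-starting fact without revealing any individual student's score.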

I suspect that students gave the Netflix card some extra thought (e.g., “How should I think about the maximum y-value in these cards? Is Netflix more popular than YouTube or the other way around?”) even if they had matched the card correctly. I suspect this revelation helped every student develop their thinking more than if we simply told them their sort was right or wrong.

We could also make it easier for students to see and comment on each other’s card sorts.

In this video, you can see Julie Reulbach and Christopher Danielson talking about their different sorts. I paired them up specifically because I knew their card sorts were different.

Christopher’s sort is wrong, and I suspect he benefited more from their conversation than he would from hearing a computer tell him he’s wrong.

Julie’s sort is right, and I suspect she benefited more from explaining and defending her sort than she would from hearing a computer tell her she’s right.

I suspect that conversations like theirs will also benefit students well beyond this particular card sort, helping them understand that “correctness” is something that’s determined and justified by people, not just answer keys, and that mathematical authority is endowed in students, not just in adults and computers.

Teachers could create reaction videos.

In this video, Johanna Langill doesn’t respond to every student’s idea individually. Instead, she looks for themes in student thinking, celebrates them, then connects and responds to those themes.

I suspect that students will learn more from Johanna’s holistic analysis of student work than they would from an individualized grade of “right” or “wrong.”

Our values are in conflict.

We want to build tools and curriculum for classes that actually exist, not for the classes of our imaginations or dreams. That’s why we field test our work relentlessly. It’s why we constantly shrink the amount of bandwidth our activities and tools require. It’s why we lead our field in accessibility.

We also want students to know that there are lots of interesting ways to be right in math class, and that wrong answers are useful for learning. That’s why we ask students to estimate, argue, notice, and wonder. It’s why we have built so many tools for facilitating conversations in math class. It’s also why we don’t generally give students immediate feedback that their answers are “right” or “wrong.” That kind of feedback often ends productive conversations before they begin.

But the classes that exist right now are hostile to the kinds of interactions we’d all like students to have with their teachers, with their classmates, and with math. Students are separated from one another by distance and time. Resources like attention, time, and technology are stretched. Mathematical conversations that were common in September are now impossible in May.

Our values are in conflict. It isn’t clear to me how we’ll resolve that conflict. Perhaps we’ll decide the best feedback we can offer students is a computer telling them they’re right or wrong, but I wanted to explore the alternatives first.

2020 May 25. The conversation continues at the Computation Layer Discourse Forum.

About 
I'm Dan and this is my blog. I'm a former high school math teacher and current head of teaching at Desmos. He / him. More here.

30 Comments

  1. Reply

    I agree. Right/wrong feedback just seems like the quick solution to a more complex problem. Is real-time feedback a possibility? Like when a little chat window pops up on some websites. I find myself wanting to give students real-time feedback as they’re working — ask them questions, highlight parts of the problem, etc. Things that would be far more valuable than right/wrong.

  2. Reply

    I have used Desmos activities for virtually every synchronous class meeting I’ve had since the shutdown. It’s the only way I know to get even an idea of what’s happening at the other end of the fiber optic cable. But the truth is I make a lot of mistakes when I make those activity builders, and I don’t usually have time to do the kind of vetting you describe. So another reason not to build in this kind of feedback is that, at least coming from me, that feedback is likely to be wrong. Instead I make the activities so that I can see on the dashboard which screens students are struggling with. Then I know where we should be spending the precious little time we have “together,” and where to look so that I can give students the kind of feedback that might prompt some of that mathematical reasoning.

    Instead I vote for two-way feedback pls.

  3. Rebecca Ambrose

    May 21, 2020 - 12:04 pm -
    Reply

    Thanks for this!!! Today we’re talking about feedback in the Inquiry for Teaching class so the timing couldn’t be better & the use of evidence throughout exemplifies the approach that we are hoping our student teachers will adopt for their conversations with colleagues. I also love the sort task itself and am eager to take a closer look. In appreciation!!!

  4. Reply

    Yes! Being told you’re right or wrong without further feedback is fairly meaningless.

    I love Johanna Langill’s idea of presenting the flow of class responses. It is like the debrief/discuss portion of a lesson after an explore phase. And though collaborating during the explore phase is really difficult right now, I would imagine that a debrief like this where the main themes of students’ thoughts are discussed is still impactful.

    • It wasn’t my idea; it was literally what Dan had done in a webinar earlier. I think that was the best reaction video that I had done, and it was because it had been modeled for me and this task was uniquely suited to students creating LOTS of different correct and interesting responses. Maybe because the point of this activity is INTERPRETING graphs. It’s harder to do something like this for tasks like “Build a Bigger Field,” where the thing that kids struggle with is writing an equation. And ultimately I was still talking to students; I missed the conversational piece.

      I’d love to think about what are the types of things you could feature in a reaction video, or what are some general principles for creating them. A framework would be nice.

  5. Reply

    The most important thing to consider here is context. Your values do not exist in a vacuum but are based on a lot of assumptions. In the context of a regular classroom setting, I can see how feedback could diminish learning. However, in the context of remote learning where you don’t have a traditional classroom structure of student exploration followed by a debrief/discussion of some sort, it seems highly likely to me that the risk of leaving students confused or feeling like they haven’t learned anything far outweighs the risk of diminishing learning when usually I feel the opposite. It would be amazing if students actually followed up on each other’s card sorts or watched the teacher’s reaction videos, but I believe the immediate feedback would be more valuable for the time being.

    As a self-motivated online learner myself, I always appreciate some way of checking my answer. Otherwise I feel uncertain that I’ve learned anything. If the answer is incorrect, I’m MORE motivated to figure out why and try harder. I believe many students are like this as well, especially those that are actively pursuing remote learning.

    Perhaps to avoid the click-happiness, students could have a final submission button that asks “How do you think you did?” (100% / 75% / 50% or less) and then a reflection question: if they got them all correct, “What is one key takeaway from this lesson?”; otherwise, “Which do you think you had wrong and why? Now go back and fix it.”

    • I think Kate is getting at something important here. Things are more different than ever now that we have our students at a distance. What worked well before (teacher feedback in person) is not working at all for me with asynchronous instruction (which is seemingly the most equitable form for my students; I understand that sync is working great for others!). Is it possible that teachers are requesting more comprehensive feedback tools now because their instruction is failing in new ways? I’m totally with the idea that Desmos doesn’t want their platform to be a bad imitation of a teacher by giving immediate feedback, which enables all kinds of bad habits in students (guess and check can be a bad strategy if no reflection is done afterwards). But I feel the need for a way for Desmos to give automatic feedback in activities that meets me where I am (nearly drowning!). Bryn had some interesting suggestions (mentioned in the CL forum), like showing students their Teacher Dashboard status. That might work well on something like a card sort.
      I don’t use CL as much as many, but I’ve been using it more and more because of how it *isn’t* a cookie-cutter tool for providing rightness/wrongness on randomly assigned problems. I love how the Desmos CL platform enables all kinds of tools that let students build conceptual knowledge. So I don’t want it to become a cookie-cutter tool either!

  6. Reply

    You say,
    “After a few minutes of early work, I told the students that “Netflix” had been the hardest card for them to correctly group and then invited them to think about their sort again.”

    Is this based on your analysis of their actions right then, or is it a statement you could say to every set of students (within reason)?

    Interesting that many of these aim to have the students look or think again, and perhaps the best ways to do that do not include any personal feedback. Or, the personal feedback is incidental to providing a new lens for all students on the work.

    • I can only say, empirically, that there were lots of revisions to those cards after I told them Netflix was the hardest one and let them work again. I would love to know more about the different thinking provoked by different kinds of feedback.

  7. Jeff Holcomb

    May 21, 2020 - 5:28 pm -
    Reply

    Great post, lots to ponder. Some thoughts:
    I’m thinking there is some sort of decision making flow chart in hiding here to help make decisions about what type of feedback to give. Somewhere in this would be things like the feedback students get when the hook is not in the right place (Picture Perfect).
    There is a progression here too: early on in an activity, clicking over and over is OK. But at some point I want them to take a breath and think, “There has got to be a better way.”
    And “perfect is the enemy of good”. In this environment, I think “right/wrong” might be better than nothing. I guess “when” is the whole point.
    Featured Comment

    I knew teaching was complex, but this situation has really made me more gobsmacked at what we do when in the room.
  8. Reply

    Kids react so differently to being scored right/wrong. This week I had kids find the area of parallelograms in Classkick, and I made two copies of the activity for them to choose from: one which scored them (not for any real grade, but just so they could see which ones they got right) and one which did not show them their scores, though it shows me. I can comment on either one, and they can draw on the figures or ask questions. (The concept lesson was last week, and this week’s practice problems were preceded by a video by me summarizing last week’s work.)

    So far 10 chose non-scored and 17 chose scored. The scored answers are significantly better, at a glance (and not correlated with how well kids “normally” do). Presumably if they got them wrong they stopped and figured out the issue, and guess and check would not have been effective to get the numerical answer. But I think a kid who chose “not scored” might not even have engaged with scored, and I can still leave them feedback, it’s just with a delay.

    So, two suggestions to consider: give kids a choice about seeing scores, and try to score things that are easy to grade but hard to guess and check (like problems requiring a numerical answer).

  9. Reply

    Stand your ground Dan. When I saw the title of your post, I thought for sure the most requested feature was going to be self-checking lessons. I see the requests on Twitter often. I read all your posts. I know why you guys have refrained from building a platform that “grades” student work. Ugggh, I don’t even like typing it. Stay away, Desmos! We don’t need another platform that grades work. Stay aligned with your values. The AB platform has actually given me many opportunities to chat with teachers about why knowing if a student has the right answer isn’t the most valuable thing for the teacher.

  10. Reply

    Thanks for the great timing on this. I am about to do some PD with teachers on card sorts. When teachers in my PD setting ask for card sorts to be self-checking, I get it. At least in a virtual asynchronous environment, it makes sense. But when we do a card sort with our students without technology, I might say things like “think about what ( versus [ means for describing domain” or “describe what led to this set being grouped together” with an individual group. Then, when they fix it, I would want to know what they were thinking that led them to that change. Just checking to see how many are right and getting a “Too bad, so sad. You have 6 correct out of 10.” reminds me of the Price is Right Race Game (https://www.youtube.com/watch?v=VCTOpdlZJ8U) and the frustration that comes with blindly switching cards around without deeply thinking. When students (and teachers) only focus on right answers, they miss opportunities for growth and connection. When constructing a bridge, engineers need the right answer for the mass that can be supported by the bridge, and right answers really do matter in that situation.
    Featured Comment

    However, the bridges in my classroom aren’t built with right answers alone. They are built by using opportunities for growth, connection, and classroom collaboration.
  11. Reply

    Students don’t watch feedback videos.

    Sometimes I introduce them in short pieces within the Desmos activity, but then they have to be made before I’ve seen students’ answers (because I can’t edit the activity once the code is done and shared with students who enter later).

    Besides, I can’t tell whether students watch the complete video within Desmos (YouTube, at least, has statistics about how long a video was played).

  12. Reply

    The most interesting thing here, to me, is the vision of how Desmos can best help students after COVID is over. I’m intrigued by your belief that students (generally?) will learn more from holistic analysis after the fact, compared to feedback during the activity. This is definitely true sometimes, and (in my classroom) also not true sometimes. Can we figure out the difference? I wonder if a key factor is whether kids can sense during the activity that they’re getting something wrong, even if there’s no “wrong answer” feedback. In the Netflix/ESPN discussion above, neither kid has a voice in their head saying, “I’m terrible at this, and I just want help,” so delaying discussion till later is great. Same with Turtle Crossing — not just because it implicitly encourages experimentation instead of getting the right answer the first time, but also because it’s whimsical and fun, which takes some attention away from the “Am I sucking at this?” voice that some kids hear more loudly than others.

    But when an activity is more like Marbleslide Lines — where kids can definitely see if they’re failing, and a struggling kid’s process of experimentation can begin to feel more like a grind than a delightful puzzle — then maybe it wouldn’t be so bad to offer kids support?

    I’m thinking of a Screen 1 that kids can go back to if they get stuck on a later screen. Say they’re on Screen 9, and they need to get a line to carry marbles from (3,5) to (10,1). They could go back to Screen 1, where they could type in the coordinates of the point where they need their line to start, the coordinates where they want it to end, and the forms of lines they’re familiar with (slope-intercept, point-slope, etc). Then they could get some guidance on how to make the line they need. A teacher could tell from the existing Desmos dashboard if a particular student’s active screen kept going back to Screen 1, and could go to that student’s desk.

    Too helpful?

    Final note: I’m so grateful to Desmos for working with teachers on the front lines to help us get through this. No complaints at all about the features you’re offering or not offering. Just wondering if we can puzzle out how best to leverage Desmos when regular classes resume.

    • Thanks for your thoughts here, Kevin. I don’t know if the feedback you’re proposing is too helpful. It does seem pretty tiring, though.

      I’m not particularly confident that what I’d offer on Screen 1 would adequately celebrate and respond to the student’s current ideas on Screen 9. I’m not sure it’d be more effective than a five-minute whole-class summary of current thinking I’d seen around the room / dashboard. I’m just not sure the energy spent constructing the intricate intervention is well spent.

  13. Brenda Puett

    May 23, 2020 - 8:01 am -
    Reply

    I think that the teacher should have the option to turn auto-grading on or off, and here is why. When I am in the classroom and can have rich discussions about a card sort, offer prompts, etc., that is of course best for student learning. During this pandemic, though, my students prefer to work at, say, 3 AM, while I am asleep. They close the activity and consider it finished. The next day, I add comments, hints, praise, but the students never go back to review my comments or revise their answers. I have even tried emailing feedback when they don’t read feedback in Desmos, to no avail.

    • Important feedback here, Brenda. Thanks. Certainly, asynchronous learning makes several of the options above challenging-to-impossible. Certainly, we aren’t operating under ideal circumstances for student learning at all.

  14. Chester Draws

    May 23, 2020 - 7:39 pm -
    Reply

    I’m not big on card sorts, though I do use them from time to time.

    They seem to me to be good for revision of concepts already taught. So, for example, I set them for firming up knowledge of geometry terms (alternate, corresponding etc). In those cases the answers are right or wrong and there is no value judgement involved. It’s a way to make drill less drill like.

    They can be a good way to set part of the class a self-marking exercise for a topic they are already comfortable with while the teacher does something else with the rest. But that requires that they can get most of it right, and generally be able to spot their errors when they don’t.

    If the teaching requires serious feedback, rather than a quick question to clarify something — i.e. actual teaching — then cards seem inappropriate. No amount of adding feedback will fix that.

    Note, I am not American, but I cannot do the example cards you have given. I use similar graphs to discuss what students can see and talk about, but I would never set something so open as a right/wrong task in the first place.

  15. Reply

    I think there is no clash in values — we are all on the same page normally. But these are not normal times. The examples you gave will not work for all teachers or all setups. First, it assumes synchronous work or that all students are working on the same activity. I rarely have that.

    I have two math lessons a week with 35 eleven-year-olds. We are continuing with our regular content. As such, I am using Desmos not only for fun activities but also sometimes for practice work, since students can use math type there (something they can’t do in Google Classroom). The students do not have any textbooks at home with them, so I am creating all their materials at various levels (and it ranges from a student who cannot express 7/10 + 3/100 as a decimal number to a student asking if arcsin and cosecant are the same; that is a literal example from the same class. It’s hard.)

    It is not possible for me to give all students feedback in real time, though I am always available on Google Chats and will answer questions if they ask. But I am typically reviewing all of the students’ work a few days after the official lesson day. What is really devastating is a student who has spent 45 minutes on an activity, having had a massive misconception, with nothing understood in those 45 minutes. That student will only get the feedback that it’s not going well days later.

    When I was a starting teacher, I never told the kids that the answers were in the back of the book, because I worried they might just copy the answers. I CRINGE when I think back to that teacher. Now I teach all my students where to find the answers and to check their answers early and often to catch any mistakes and fix them before they become ingrained. Now I’m back in the zone where the students don’t know if they’re right or wrong and need to wait (days) for answers from me.

    I think in that world, which is far from the world we want to be in, wanting an easy way for students to see if they’re right or wrong is pretty understandable. I have a lot to think about how I can structure things differently for my students going forward, but wanted to also give another perspective.

  16. Reply

    I’m sorry for not engaging more in the comments above. I offered scattered thoughts on twitter, I want to see if I can organize them here.

    Some feedback makes kids give up, stop thinking, or feel bad. In my view, this is almost always feedback that doesn’t help learning. And this is why so much auto-grading is bad feedback — it doesn’t help learning. (In general, motivation is tangled up with success in ways it’s difficult to separate.)

    Auto-graded work sometimes makes kids feel bad. This is when the auto-grading doesn’t lead to learning, or makes it seem like learning will be impossible. It’s not exactly mysterious why this is. Most of the time when a computer tells you that something is wrong, that’s it. So you’re wrong. What are you supposed to do with that information, as a learner? If you knew how to do it right, you’d have done it right.

    What would the ideal learner do when they get the “wrong answer” info? In some cases, they’d take a close look at their steps and try to suss out the error, essentially discovering the correct way to solve the problem on their own. But in a lot of cases, kids get a question wrong because they don’t know how to do it, or they fundamentally misunderstand the problem. An ideal learner in that case would seek out the information they’re missing, from a text, a video, a friend or a teacher.

    Auto-grading in my experience works best when it makes those ideal behaviors easier. I sometimes play around on the Art of Problem Solving’s Alcumus site, just for fun. It automatically tells me if I’ve said the right answer or not (though it gives me two chances and it lets me give up if I want). Then, there’s always a worked-out solution provided. It’s right there, waiting for me to read it. And then, it gives me a chance to rate the quality of the explanation (which I find empowering in some cases).

    The first incorrect notice gives me a chance to discover my own mistake and learn something from it. The second incorrect notice gives me a chance to study an example. And then I have a chance to practice similar problems (because the computer will continue to provide them). It feels very oriented towards growth. I can’t solve every problem on that site right now, but I’m confident that with enough time I could.

    Deltamath does this nicely as well, though of course not every student reads every explanation or watches every video. It works best in a classroom, where students can ask each other or me if they get something wrong — again, it’s using auto-grading in a context that makes it even easier to act as an ideal student would.

    I’d also like to suggest that there isn’t a meaningful difference between auto-grading and a lot of the “insta” feedback that kids get in current Desmos activities. If a kid understands what a graph means, then they understand that their answer didn’t produce the correct graph. If they don’t understand why, you’ll see those same giving-up behaviors that auto-grading can produce — or they’ll do guess and check with the graph until they get a correct answer, which in some cases is not a bad idea — get the answer, and then try to figure out why it’s correct. In either event, Desmos currently employs a great deal of de facto auto-grading in their activities.

    One way Desmos could help is by making it easy for teachers to connect students to learning. You might make it easy for teachers to attach examples or explanations to a wrong answer. You might make it easy for students to ask the teacher a question via a textbox if they get an answer wrong and they can’t figure out the problem. You might enable teachers to include a brief explanation with the wrong answer, and then let kids rate the quality of that explanation. (Really, check out Alcumus.)

    There are smart ways to do auto-grading, I think. The smartest way, though, is to make sure it’s happening in the context of a lot of interaction between students and a teacher.

  17. Reply

    I also agree with giving the teachers the option to turn feedback on or off. The only reason I would support automatic feedback is the current remote learning environment. Otherwise, I agree with the stance of quality feedback over “right or wrong try again.”

  18. Joanne Robert

    May 27, 2020 - 12:36 am -
    Reply

    Oh please, continue to explore alternatives! I remember handing a student a TI-83, 84, 84+, or Nspire and saying “explore.” Hints and questions here and there if they got stuck, but eventually some started programming and some created their own shortcuts; the feedback came from their ability to use the tool for what they needed. Much like video games that don’t come with instructions. Please do not turn Desmos into another IXL or Khan Academy so teachers can assign a “grade.” Its uniqueness is what draws students into another way of thinking.

  19. Leeanne Branham

    August 9, 2020 - 6:34 am -
    Reply

    Somehow I missed this in May. So glad I came back to it now. Great thoughts as we begin building relationships and classroom community with a new set of students. Thanks as always for helping us to examine our practice.

