[Desmos Design] Why We’re Suspicious of Immediate Feedback

One of our design principles at Desmos is to “delay feedback for reflection, especially during concept development activities.” This makes us weird, frankly, in Silicon Valley where no one ever got fired for promising “immediate feedback” in their math edtech.

We get it. Computers have an enormous advantage over humans in their ability to quickly give students feedback on certain kinds of work. But just because computers can deliver immediate feedback doesn’t mean they always should.

For example, Simmons and Cope (1993) found that students were more likely to use procedural strategies like trial-and-error in a condition of immediate feedback than a condition of delayed feedback.

I think I can illustrate that for you with this activity, which has two tasks. You get immediate feedback on one and delayed feedback on the other.

I’ll ask you what I asked 500 Twitter users:

How was your brain working differently in the “Circle” challenge [delayed feedback] than the “Parabola” challenge [immediate feedback]?

Exhibit A:

    The circle one was both more challenging and fun. I found myself squinting at the circle to visualize it in my head, while with the parabola I mindlessly did trial and error.

Exhibit B:

    With the circle, the need to submit before seeing the effect made me really think about how each part of the equation would affect the graph. This resulted in a more strategic first guess rather than a guess-and-check approach.

Exhibit C:

    I could guess & check the parabola challenge. In the circle challenge I had to concentrate more on the center of the circle and the radius. Much more, in fact.

Exhibit D:

I couldn’t use trial and error. I had to visualize and estimate and then make decisions. My brain was more satisfied after the circle.

Exhibit E:

I probably worked harder on [the circle] because my answer was not shown until I submitted my answer. It was more frustrating than the parabola problem – but I probably learned more.

This wasn’t unanimous, of course, but it was the prevailing sentiment. For most people, the feedback delay provoked thoughtfulness where the immediate feedback provoked trial-and-error.

We realize that the opposite of “immediate feedback” for many students is “feedback when my teacher returns my paper after a week.” Between those two options, we side with Silicon Valley’s preference for immediate feedback. But if computers can deliver feedback immediately, they can also deliver feedback almost immediately, after a short, productive delay. That’s the kind of feedback we design into our concept development activities.

BTW. For a longer version of that activity, check out Building Conic Sections, created by Dylan Kane and edited with love by our Teaching Faculty.

I'm Dan and this is my blog. I'm a former high school math teacher and current head of teaching at Desmos. He / him. More here.


  1. helpful!

    It’s interesting that the type of feedback here is not a grade or a teacher comment or even an explicit separation of right vs. wrong. I feel a lot of research focuses on all those other aspects of feedback and not on what Shute calls “implicit feedback,” such as feedback from a manipulative, a simulation, or the environment. I get implicit feedback about gravity when I drop something, for example.

    The instant one feels like implicit feedback. The equation is tracking what you type and showing the consequence in real time (within the bounds of legal expressions).

    The delayed question distances you from the consequence. Is this still implicit feedback? Is it more evaluative (discriminating right from wrong)? The result doesn’t say “wrong” though, it shows you the result of your equation. But the equation is no longer “living” in the same way. I remember the old QBASIC Gorillas game where you had to throw a banana at the other gorilla by giving an angle and velocity. Perhaps the Gorillas and this delayed circle are delayed-implicit.

    It seems a danger of immediate-implicit feedback is that it makes hill-climbing too attractive. Hill climbing is a computer science metaphor https://en.wikipedia.org/wiki/Hill_climbing roughly meaning you start with some guess then try to improve that guess incrementally. The “top” of one hill may not be the best hill to be on top of, but the larger danger is that the strategy to hill climb may not be particularly attached to the underlying conceptual structures. It appears this is what you are avoiding: the student who knows they can improve the parabola by altering numbers but has no guarantee of deeper thought about those numbers.
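    Hill climbing is easy to sketch in a few lines. The toy below is my own illustration (not anything from the activity): it fits the coefficient of y = ax² to a target point purely by nudge-and-keep-whatever-is-better, exactly the kind of strategy that can succeed with no model of *why* a given nudge helps:

```python
# A minimal hill-climbing sketch (hypothetical example, not Desmos code):
# adjust one parabola coefficient step by step, keeping any change that
# reduces the error -- pure guess-and-improve.

def error(a, x, y):
    """Squared distance between the target point (x, y) and the curve y = a*x^2."""
    return (a * x**2 - y) ** 2

def hill_climb(x, y, a=0.0, step=0.5, iterations=100):
    for _ in range(iterations):
        # Try a small nudge in each direction; keep whichever guess is best.
        candidates = [a - step, a, a + step]
        a = min(candidates, key=lambda c: error(c, x, y))
        step *= 0.9  # shrink the step so the guesses converge
    return a

# Fit a parabola through the point (2, 8); the "right" answer is a = 2,
# and the climber finds it without ever reasoning about the equation.
print(round(hill_climb(2, 8), 2))
```

Note that the climber reaches the answer while remaining completely detached from the conceptual structure, which is exactly the danger described above.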

    Does delayed-implicit feedback make you more likely to see deeper structures? It seems we have anecdotal agreement that yes it does. The need to strategize may unlock human heuristics that might escape the negatives of hill-climbing.

    I wonder what the immediate/delayed distinction means for strict evaluation of correct vs. incorrect. (Such as on Khan Academy). You are told immediately if your answer is wrong but nothing about “how” your answer is wrong. Does delaying the feedback still offer a potential positive effect? It seems there is quite a bit more research here. [May-Li will be writing up a blog post about this soon] Shute 2008 does a review of a lot of questions along these lines https://www.ets.org/Media/Research/pdf/RR-07-11.pdf but doesn’t dig into the implicit at all.

    It seems that Desmos is exploring new distinctions within “implicit” and providing some hybrid with other types of feedback studied in other mediums… and I love it :)

  2. Are we sure this is an “either-or” situation?

    Our silicon solution offers two modes of checking, one in which the student’s work is checked after each step, another in which it is not checked until they submit an answer. And we let them choose, because the first is probably best during concept formation (so they do not spin their wheels working a five-step problem after unwittingly erring on the first step) and the latter is best when self-assessing to see if one is ready for an exam, where indeed the feedback may not come for a week. And at any rate, students do need to become independent of the quick correction or they’ll never really form the concepts. That is why in our “levelling-up” mode we do not allow second chances after mistakes.

    When I was a tutor I started at an even more extreme level of immediacy: I would stop them as they wrote down a step as soon as an error was evident. That is a tough call for software to make, and I myself like blazing thru a problem and then checking my work, so that might be a bridge too far for silicon.


    The bottom line is that different timings of feedback do work differently, but different stages of readiness might demand exactly that.
  3. Marbleslides subtly employs both methods of feedback. When an adjustment is made to an equation, the picture updates immediately, but the student has to click a button to release the marbles and see if they achieved the goal.

    I actually went a step farther and added parameters and sliders to the equations. This might make it more guess-and-checky, but I liked how the animation created a link between increasing/decreasing the parameters and transforming the graph.

    In this case, I was introducing these equations and graphs to the students, so my main objective was to give them an experience of the “what”: what happens if I change this part of the equation? After that, we had some more discussion about the “why.”


    Like Kenneth wrote, I wonder if different feedback delays are more useful for different aspects/phases of learning: immediate feedback to gather data and form hypotheses, slightly delayed feedback for hypothesis checking and “why” exploration?
    • I actually went a step farther and added parameters and sliders to the equations. This might make it more guess-and-checky, but I liked how the animation created a link between increasing/decreasing the parameters and transforming the graph.

      FWIW, I’m more suspicious of sliders than of direct manipulation of the parameters in a function, at least as they relate to early concept development. I’m worried that while sliders are extremely useful for experts who know how the parameters affect a graph, they’re too mesmerizing to help novices form their early concepts. I’m worried novices don’t associate graph changes with changes to equation parameters; rather, they associate those graph changes with movements of their hands.

      Just worries for now, but there’s a reason with Marbleslides why we didn’t add sliders and why we make sure to ask some questions about static scenarios early in the activity.

  4. I don’t teach higher math and the circle formula is something that I’ve seen before but never really learned or understood. Even parabolas require me to dig back to my memories of high school. So, for me, the guess-and-check on the parabolas was helpful in that it confirmed my memories of how things worked (I was pretty happy when deleting that negative sign gave me the response I expected!). If the next problem had been another parabola, I would probably have been ready to do it without the immediate feedback. Since the next challenge was a circle, I ended up doing guess-and-check anyway, because I could figure out how to place the center pretty easily, but the relation between the radius and the “=x” part of the equation is kind of hazy. I still haven’t gotten it nailed down.
    I guess my hypothesis is similar to Kenneth’s in that the appropriate degree of immediacy depends in part on the student’s competence. If I had only gotten one chance to submit the circle, I would have been frustrated and I wouldn’t have learned anything. I feel that the immediate feedback activity is better for learning or discovering rules, but it would need a follow-up activity. Maybe a card-match?


    Or, it would be cool to see a delayed-feedback activity that keeps track of your attempts. So, each time I hit “submit,” the circle is a different color and the corresponding equation is still visible. The first time or two, the goal could just be to get the circle right eventually, but then you could introduce a challenge: do it in as few tries as you can.

    And then, the last screen: can you do it in one try?
    I think the difference is that what I described is a teaching progression, whereas what you have right now feels like an assessment screen. There’s definitely merit to making students think carefully and commit to their choices, but I think that it needs to come after the ground’s been laid.

    • Or, it would be cool to see a delayed-feedback activity that keeps track of your attempts.

      This just came up on Twitter also. Love the suggestion. I’m very curious how it would affect how students think about the challenge.

  5. This is the problem I have with Marbleslides! I love it but find students doing it with a minimal amount of thinking and not remembering what they learned. I have to tell my students that the goal is to do it in ONE TRY. I have also tried having them write it down before they enter it in the computer and that the majority of their time should be spent thinking and writing, not typing. This is hard when they know they can get that immediate feedback.

    Is there any way to have them do Attempt 1, Attempt 2, Attempt 3, etc. without instantaneous feedback?
    Then we could compare not only which students got each one correct, but how many tries it took them.
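    For what it’s worth, the submit-and-count design these commenters describe is simple to sketch. The names and structure below are my own invention, not a Desmos feature:

```python
# A toy sketch (assumed design, not actual Desmos code) of the attempt log
# suggested above: record each submitted equation, reveal feedback only on
# submit, and report how many tries the student needed.

class AttemptLog:
    def __init__(self, target):
        self.target = target      # the (h, k, r) of the hidden circle
        self.attempts = []        # every submission, in order

    def submit(self, h, k, r):
        correct = (h, k, r) == self.target
        self.attempts.append(((h, k, r), correct))
        return correct            # feedback arrives only after committing

    @property
    def tries(self):
        return len(self.attempts)

log = AttemptLog(target=(1, 2, 3))
log.submit(0, 0, 3)               # first guess: wrong
log.submit(1, 2, 3)               # second guess: right
print(log.tries)                  # → 2
```

    A teacher view could then compare not just correctness but the ordered list of attempts, which is exactly the conferencing data described above.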

  6. I checked out the whole set of activities on graphing Conic Sections. They are wonderfully designed! The problems are scaffolded from easier to harder. The ability to get feedback by submitting a wrong answer is extremely valuable. Also, for the “find the next pattern” problems in the parabolas and ellipses, I found it helpful to be able to write the equation of the shapes that were already there and have my answer overlay the original colored graph.

    In my opinion, if a student can successfully work their way through this set of problems, they have not only shown a deep understanding of conic section equations, but have also exhibited a good grasp of the 8 Practice Standards in CCSSM. Keep up the great work! The Desmos site is a true gift to math education!

  7. yowza!

    Steven Spielberg made a parallel comment 15 years ago about editing video using digital editing software vs. the act of cutting and splicing actual film reels. In his opinion at the time of the comment, the time, effort, and commitment required to make a video cut from one shot to another by cutting and taping film forced the filmmaker to carefully consider the shots they would make and the cuts they would require. He felt that using this process made a better filmmaker out of someone and led to a higher quality final product. I don’t have enough experience to agree or disagree, but thinking through this idea in other fields might shed some light on the topic.
    • Super interesting analogy, Harry. Any way you can recall the source of that quote? I couldn’t dig it up after a few minutes of searching.

    • I looked as well and couldn’t find it. It was from an interview with him I saw on TV 15 years ago as part of a documentary or special. I can’t remember the exact context, though. It always stuck with me for some reason and I recall it now and again under different circumstances.

  8. Interesting post, and fun activity. With the parabola exercise I just crashed around until I got it. With the circle I even got out pencil and paper and started entering numbers in my formula to see what would happen with the different points.


    One feature I would enjoy would be counting the number of attempts I make, so that I can try to improve my score. That would work better with the delayed entry, since you get a chance to think before entering.
  9. I am old enough to remember when programming was a *costly* exercise, as when I started (very young) to program for money, the client’s TSO (Time Sharing Option) cost to access the remote computer was $1/minute of connect time. (We called it the “Terribly Slow Option”). Plus you were billed for CPU cycles! Yes, I know. This is difficult to imagine now. I feel like a time traveller.

    But here is the thing: We *wrote* out program code on paper, and checked it by thinking about it carefully. Then, we typed it in, and compiled and ran it on *small* test cases. Then we ran the code on big test cases. (I learned the real truth of the Central Limit Theorem from this very early work, and that made me a tiny bit of money later on…) The database was *huge* (all the people who had ever collected Unemployment Insurance in a major Western nation – each record was multi-dimensional, and there were millions of records in the full base). (We had high inflation, and high unemployment. It was called “stagflation”, and it was ugly.) A multi-million record database is nothing now. I know of guys who capture close to half a terabyte every day. But in these ancient times, it cost real money to run a simulation against the full data set. And it was an awesome intelligence tool, as we used the data to source a simulator to check the expected costs of legislative changes. It was “big data” heuristics, circa the 1970s, and it worked.

    Key point is, our programming style was *really* different than now. Really different. My programs were typically free of all but minor syntax errors. By thinking about the problem carefully, designing the process, and then keying it ourselves (not using keytypists), we got good, quick, reliable results – using a big remote mainframe none of us ever even saw.

    Immediate feedback is not always good. Quick response time is really good, while doing research and actually working with a computer. But having zero-cost computing and always getting immediate feedback as one works on something, runs the real risk of making us all lazy and a bit sloppy. We bash away at stuff in trial-and-error fashion, while using grunt-and-point interfaces to execute our compiles. It is better to pause and think first. And whether you use JCL, Makefiles, Gradle or (please give me strength…) BAZEL, you still have to decide WTF you want to do, and build a *proper* documented process for actually doing it.

    Immediate feedback is almost always sub-optimal to thinking. But sometimes, say for instance when your boat is being shot at, or your aircraft is in an inverted spin, you may not have the luxury of reflection and analysis. But when that luxury is available, careful reflection is a valuable good, and should be consumed. Take time, and remember Thomas J. Watson’s one-word advice to his people: “Think.”

    PS: I really like the comment by Harry O’Malley, about Steven Spielberg’s views on editing the raw film by hand, instead of using digital. I feel the same way about vacuum tube electronic circuits. Messing around with semi-conductor LSI chips is fun, and the voltages are safe for children. But build a regenerative radio receiver using one single hot triode tube, running in space-charge mode, where the B+ line is only 35 volts, and wind your own coils like kids did in the 1920’s, and you *really* get a feel for how a photon field scatters, and how a tuned circuit incorporating feedback can pull a wee wisp of a signal out of thin air from a hundred miles away. If Thomas Watson said “Think”, I would also add my one word of advice: “Build”. Don’t just simulate it. Build it for real. To build, is to experience the polar-opposite of immediate feedback. Lots and lots and lots of work, with no immediate results at all, until completion. And even then, you may have a failure! This teaches you how to do things. Build something, and touch it with your hands while you make it (but not, of course, if it is highly radioactive, or running on high-voltage!) ;D

  10. perplexing!

    I oftentimes have found myself simply perplexed by how quickly my students shout out thoughtless responses when I pose a question in my Algebra classes. Usually their quick responses are rather illogical. After reflecting on your post, I realized that perhaps I’m at fault for conditioning my students to answer questions in this trial and error fashion! So often I provide immediate feedback when they respond, and I am beginning to believe this may be the root of the problem. I am inspired to think of effective ways to delay my feedback in order to foster a more thoughtful discussion, even under a time crunch. Thanks for the inspiration!

    As a side note, your blog is the first professional one I have decided to follow. I stumbled upon it by searching for math blogs, and I have really enjoyed learning from you and the other readers. Also, I loved the Desmos parabola & circle activity and checked out the Desmos teacher page. I never even knew something like this existed, and I am excited to find ways to incorporate Desmos into my classes. What an absolutely awesome resource!

    • Thanks for the note, Pauline. I was thinking a bit about your “quick responses” comment and recalling a teacher professional development session I led online this last week.

      I asked attending teachers to type their responses into a Google Doc that was open to everybody. Responses flooded in. People were typing fast. Some admitted later that they were too shy to respond, and others said they wanted to get their responses in quickly, before they read other people’s responses.

      It seemed like a poor medium for helping people think.

      So later in the session I asked a question but turned off editing so people could think and sketch ideas privately first. Then I turned on editing. Several people afterwards expressed a strong preference for the latter model.

      I wonder if that model has any analog to students in classrooms.

  11. like this idea

    A strategy I have used in the classroom which might transfer well to this situation would be tracking the number of trials in the first problem. It still provides immediate feedback in the form of graphing their equation and showing how the equation change creates graphical change, but knowing that the number of attempts is being tracked may encourage more thoughtful guess-and-check. Having the data of a list of submitted equations, in order, would also be a good opportunity for teacher conferencing with the student about their thought process for each step.
    • Yes, counting a behavior changes it. But then are we cramping those with an exploratory learning style? I myself will try twenty things on a software problem rather than open a reference, because references are generally pretty bad and I kind of have an idea what to try, so why not just give it a go? In a learning situation, if I see my guesswork being counted, my learning style is being taxed. I am reminded of a competitor, Cognitive Tutor Algebra, reporting that once kids learned they were being penalized for asking for hints, they stopped using that feature and asked the teacher (who just told them what to do!). Either way, the benefit of an automated tutor saving the teacher from being the first line of support was lost. Funny how hard it is to engineer around crafty students. :)

  12. What if it displayed the previous formula and chart along with the current one, so the impact of the change is visible instead of having to remember what the previous result and formula were? This would make it easier, with less System 2 thinking required, to gain experience and build an intuitive mental model. That way it is not simply a series of guesses, because after a couple of tries my working memory is exhausted.

    • What if it displayed the previous formula and chart along with the current one, so the impact of the change made is visible instead of having to remember what the previous result and formula was?

      This makes a lot of sense. I don’t love that clicking into the equation field makes the previous graph disappear.

    • I used it a few ways, but I can’t take full credit – a special ed co-teacher introduced the idea. The first way was on formative quizzes or tests, we’d write 4-3-2-1 next to a question. A student could try something and ask us to look at it. For a little bit of advice or confirmation I would cross off the 4. If they came back, the 3, etc. For the quiz, we made the question worth that many points, and you could get as many points as were not crossed off. I then also had the data that even if the student got a question correct, I knew exactly how much help/advice we had given.

      I then extended this into other practice work that were graded only as “complete/incomplete.” If I came to the student to talk about their work, nothing happened. If the student came up to me or another adult (instead of a peer, etc) to ask “Did I get this right?” I would check for them and discuss, but mark the question similarly. There was no grade consequence, but knowing that I was tracking it, they were motivated to make more independent attempts before asking for help. (Again, I stress that I was always circulating and conferencing to address misconceptions, but this solved a lot of the helpless “I can’t do it” procrastination.)

      While the problem this solved for me is different than the guess-and-check problem on the Desmos activity, I think the strategy would have similar effects. Students like to get something right, and they like to get it right in fewer tries as well.

  13. I really like these activities, and would love to use them in my own classroom, but I would much rather give students several variations of the same activity with new points to separate and have them also decide *if* there is a circle to separate them. Unfortunately, the tools to recreate this don’t seem to be available.

    • Thanks for your thoughts here, Jason. Do you mean the same kinds of fields of red and blue dots and a multiple choice response for “Yes, a circle separates them.” or “No.”? I think we can make that happen pretty easily.

    • Recently, my class and I have been working with Desmos, and we started including a teamwork and competitive element. I am hoping to have a series of fields with red and blue dots for them to separate and, if they can’t, to include the fewest points of one color or the other. I find that my students have responded well to lessons like this for reinforcing these skills, but sometimes they want to do it 3-4 times before they feel comfortable. With Desmos, each page seems to go really quickly, so it seems like it wouldn’t be bad to give them a variety.

    • To help me calibrate my understanding, how close is this activity to what you’re looking for with regards to scoring, competition, fields of red and blue points, etc.?

    • I like the progression of the activity you linked. What I’ve been developing is for a section on conics, after deriving the formula for a circle. I was hoping to use something similar to the delayed feedback example, hopefully tracking each submission as the students try to correct errors, and with some added complexities as the task continues. For instance, starting with 2-3 fields of red/blue dots as in your activity for conics, then one or two with r^2 < (x-h)^2 + (y-k)^2 < (r+1)^2 to give a thickness that they need to accommodate for, and eventually a sort of battleship game where the students can make guesses and narrow in on where the “ship” is.
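      The separating-circle check at the heart of that activity is compact enough to sketch. This is a hypothetical illustration (my own names, not the activity’s code): a candidate circle separates the dots when every blue point satisfies (x-h)^2 + (y-k)^2 < r^2 and no red point does:

```python
# A small sketch (assumed design, not the Desmos activity's code) of checking
# whether a candidate circle centered at (h, k) with radius r separates two
# point sets: every "blue" point inside, every "red" point outside.

def inside(point, h, k, r):
    """True if the point lies strictly inside the circle (x-h)^2 + (y-k)^2 = r^2."""
    x, y = point
    return (x - h) ** 2 + (y - k) ** 2 < r ** 2

def separates(blue, red, h, k, r):
    """True if the circle encloses all blue points and excludes all red ones."""
    return all(inside(p, h, k, r) for p in blue) and \
           not any(inside(p, h, k, r) for p in red)

blue = [(0, 0), (1, 1), (-1, 0.5)]
red = [(3, 3), (-4, 0), (0, 5)]
print(separates(blue, red, 0, 0, 2))   # radius-2 circle around the origin
```

      The same predicate, applied once per submission, would support the delayed-feedback, attempt-tracking variant described above.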