Dan Meyer’s Dissertation


Functionary: Learning to Communicate Mathematically in Online Environments

Bloggy Abstract

I took a collection of recommendations from researchers in the fields of online education and mathematics education and asked our friends at Desmos to tie them all together in a digital middle-school math lesson. These recommendations had never been synthesized before. We piloted and iterated that lesson for a year. I then tested that Desmos lesson against a typical online math lesson (lecture-based instruction followed by recall exercises) in a pretest-posttest design. Students in both conditions learned. Students in the Desmos condition learned more. (Read the technical abstract.)

Mixed Media

You’re welcome to watch this 90-second summary, watch my defense, read it if you have a few minutes, or eventually use it with your students.

Process Notes

True story: I wrote it with you, the reader of math blogs, in mind.

That is to say, it’s awfully tempting in grad school to lard up your writing with jargon as some kind of shield against criticism. (If your critics can’t understand your writing, they probably can’t criticize it and if you’re lucky they’ll think that’s their fault.) Instead I tried to write as conversationally as possible with as much precision and clarity as I could manage. This didn’t always work. Occasionally, my advisers would chide me for being “too chatty.” That was helpful. Then I stocked my committee with four of my favorite writers from Stanford’s Graduate School of Education and let the chips fall.

Everything from my methods section and beyond gets fairly technical, but if you’re looking for a review of online education and the language of mathematics, I think the early chapters offer a readable summary of important research.

I'm Dan and this is my blog. I'm a former high school math teacher and current head of teaching at Desmos. More here.


  1. Great video. Thanks.
How long did it take you to make/edit/refine? and don’t count the years of prep and speaking engagements, etc. ;^)

  2. Ray O'Brien

    June 3, 2015 - 3:52 pm -

    Wow! Who puts their dissertation defense online? Dan Meyer. I look forward to perusing all of this. Thanks so much for sharing.

  3. Though I have gone through the full thesis, I have two questions that are posed most easily from the start of the abstract. (I have not watched the defense; my apologies if these questions are answered there.)

    Two sentences, on traditional vs. Functionary interventions:

    “The “traditional” intervention had students perform autograded recall-based work common to current online education platforms, and experience didactic instruction. The “Functionary” intervention, meanwhile, had students perform communicative work, taking turns drawing and describing a graph with an online partner, and experience instruction in response to their need.”

    Two sentences, on findings:

    “An analysis of variance determined that students perceived the Functionary intervention to be significantly more social than the traditional intervention. In the aggregate, both the traditional and Functionary interventions learned significant amounts, with neither learning significantly more than the other.”

    Two questions:

    1. Given the differences described in your descriptions of the interventions, isn’t it clear from the outset that Functionary would be perceived as more social than trad’l?

    2. With no significant difference in learning outcomes as measured, in aggregate, between trad’l and Functionary, is there a strong reason to favor one over the other?

    (In my own reading, I found the lack of difference between trad’l and Functionary the most surprising result of all.)

    Thanks, and congratulations!

  4. Congrats on the defense, degree, and a very cool math lesson!

    But Khan Academy and two textbooks as “typical of on-line”? I think even Khan would admit their practice section needs work (because I saw them say as much after the SRI debacle).

    The best part of your paper was noting that a hundred developers could take research results and come up with a hundred learning software designs.

    Glad to hear you are now open to differentiating between on-line math products instead of sweeping aside the entire genre en masse.

    Especially now that you are one of us. Welcome aboard! I hope you continue refining the design of Functionary to resolve the waiting problem. In exercises I have led with this one-way communication problem the “sender” either drew a diagram of simple shapes or arranged tangram pieces one at a time. Results were shared only at the end to emphasize the disaster, but methinks the math instruction goal would be well-served by incremental exchange after each instruction.

    Interesting to hear “clicking multiple choice” was as engaging as Functionary. Perhaps that characterization does not do justice to the overall experience.

    Congrats again.

  5. Congratulations Dan and thank you for sharing your work with the rest of us. Your work and generosity are much appreciated!

  6. I’m not sure if I’m extremely proud or slightly embarrassed to say this, but upon watching your 90-sec summary I said aloud (to no one in particular), “Wow. He’s so badass.”

    I’m so lucky to work with you and learn from you. Great job, my man. You should be very proud.

  7. Fantastic and congratulations from Sweden!
Watched the dissertation defense first and was blown away. Then watched the 90-sec summary, and I was (and still am) surprised by how much groundbreaking stuff you present in that limited amount of time.

    I have said it before and I am saying it again…you HAVE to come over here and spread your knowledge :)

8. First of all, thanks for sharing your thoughts. Second, I think that “essentially” (put as many quotes around that as you like) you discovered that:
    Task + Presentation of the tasks = Aim

In your dissertation, the aim is to learn the jargon. So you created a task format for plotting and describing functions: exchange work with a peer, add some instruction here and there, and then share results.

The traditional tasks have one aim: memorize things through repetitive exercises.

  9. Great! Congratulations!
Now I hope you’re able to start blogging again and supplying us with all those great ideas.

  10. “In the aggregate, both the traditional and Functionary interventions learned significant amounts, with neither learning significantly more than the other.”

    What are the implications of this? Is this disappointing to you? I’m sure the term *significantly more* has some technical parameters. i.e. on a short time period, if Group A learns 2% more than Group B, those gains could accumulate massively given enough time, while not qualifying as “significant gains” in the short term. Is that what’s going on? (Also, an increase in engagement that is sustained over years could produce even greater long-term learning gains.)

    Dan, what do you think?

  11. I am not concerned about Functionary’s demonstrated lack of efficacy; the paper clearly documents a big problem with the particular implementation tested: wait time. What if that problem were to be fixed? To me, the premise of the thesis still has not been tested, just as conventional on-line tools have not yet shown what blended learning can do (because all but the newest, well, could be better.)

    Here is how Functionary might be re-engineered:

    Functionary is actually anti-social. It pits sender against receiver, or at least lets senders shirk responsibility for bad descriptions by blaming a solitary receiver.

To fix this, go truly social: “crowd respond” descriptions. Kids sit there pulling descriptions from a common queue and take exactly one shot at recreating each one. They move faster because there is no opportunity to query the sender. After they respond, they see the intended graph, their score, and how they did compared to others. This is vital.

In duplicate bridge, it no longer matters what cards you are dealt. What matters is how well you play them, because your scoring is against how well other teams play the exact same hands. With Functionary, it does not matter if I get dealt a bad description; what matters is how well I can figure it out.

    This crowd-ranking changes over time as more kids get to the same description. We’ll need a heads-up dashboard so they can see their performance changing over time as they move on to other descriptions (or send their own). This incentivizes them to do their damndest to figure out what the sender meant, important I think to make this work.

    The wait time problem is gone. Kids are either steadily responding or sending. (We’ll need to meter things so the sending and responding keep everyone busy, but that sounds trivial.)

    Receivers — after getting scored and seeing the intended graph — can respond with suitable whining about the sender, but this is one-way and anonymous: again, we want social without personal. Perhaps senders can like/unlike these responses and we rank receivers on that as well. Social!

    Senders are scored based on their receivers’ scores, their incentive to do well.

    Everyone can see the best-performing descriptions in a side-panel and click on them to see the graph, the description, a scatter plot of the scores and the actual response graphs. These will evolve to be full of precise math language, pulling everyone along by example.

    A help panel offers conventional reference material on the math register. Senders use it or not at their own risk, but maybe when they look at successful descriptions and see the vocabulary in use they can click on the terms and jump to the reference section. Receivers have the same reference material at hand (but mebbe they have to actually look things up).

One nice touch: score a student only on their last five descriptions/responses or something.

    One devious touch: after a suitable delay, feed descriptions back to their senders for response. Mwuahahahahaaa….

  12. Thanks for the kind words, everybody.

    MQ asks a question (also asked by others) that I addressed in the study proper, but I may as well take it on here:

    With no significant difference in learning outcomes as measured, in aggregate, between trad’l and Functionary, is there a strong reason to favor one over the other?

    One, the fact that the two treatments had similar aggregate outcomes in spite of the Functionary group’s significantly quicker instructional time speaks in favor of Functionary. This is the “you get the same value for less cost” argument.

    Two, the aggregated assessment is less illustrative than the disaggregated assessment. The aggregate mixes together a number of different constructs around precision – from recall to transfer to description to graphing. When I found a floor effect on any one of them (or several, as it turned out) the aggregated difference grew less significant.

    But one of those constructs did see significant differences in favor of Functionary. Students increased in their ability to use a correct coordinate from pre to post in the Functionary condition. Not so in the other conditions.

    That’s an important finding, one which the aggregate conceals. This is the “you get better value for less cost” argument.


    How long did it take you to make/edit/refine? and don’t count the years of prep and speaking engagements, etc. ;^)

    Let’s call it 15 hours.

13. Hi, Dan. Teaching and family obligations have kept me from your discussions for a while. I just want to say that I will read most of your dissertation sometime, so thanks for giving it that conversational tone.

    I have a comment related to your Twitter thread this week about cognitive load, but I don’t want to hijack your thread by posting it here. Is there a place you’d like me to post it?

  14. Dan,

    Congrats on the dissertation! Incredibly well deserved. You continue to offer the world your gift!

What I find fascinating- Dan correct me here- is that the functionary group outperformed on item 2 “describing to a partner” the precise location of the coordinate? I’d want that group giving me the directions to my next whereabouts or communicating, perhaps, the proper dosage of medicine to my kid.

  15. Scott:

    What I find fascinating- Dan correct me here- is that the functionary group outperformed on item 2 “describing to a partner” the precise location of the coordinate?

    Hi Scott, thanks for the note. The Functionary group outperformed at levels that were painfully close to significance (p = .0542, IIRC) on #2 but still statistically insignificant.

16. One of the nifty things about the focus on MOOC-type issues is that retention is an easy statistic to keep track of that works as a proxy for motivation.

    In other words, it would be possible to do large, rigorous studies of motivation.

I can’t think of any prior study I’ve seen that actually tracks motivation. It always seemed to be considered a “soft” attribute not worthy of consideration, and besides, it is hard to quantify.

  17. Congratulations Dan. As always, clear and engaging. The hallmarks of good teaching. Angie and I are so proud of you. I’d like to play with this with my 5th graders next school year.

18. I really like how well you sell the problem to us. You have such a gift for making a problem so clear as to be tangible and conveyable.

    Definitely liked how you broadened “correct” and “conventional” to include “precise” as well. Then how you infer building upon correct and precise to include conventional.

BTW, how did you figure out that L3 was battleship notation? Did the student explain herself or did you deduce it yourself? When I saw it, I was thinking that maybe it was making an L shape that started at the origin, went down to (0, -6) and ended at (3, -6). That is so cool.

    I am happy for you and your accomplishments and I can’t wait to see how you implement changes based on your research’s findings.

1. The next study should look at an assessment after the students have a long break. Which intervention, traditional or Functionary, has the most retention? Having a control group might be difficult since the instruction is required for all students, so maybe instead frame the experiment more directly as “does Functionary increase retention over time away from mathematics instruction?”

2. You mention that students were frustrated with their partners even though their own descriptions were imprecise, and then you mention that the time spent waiting to do something is a limitation of the Functionary program. I feel that both problems could be helped by giving both students their own graphs at the beginning. Working on their own description gives them something to do while their partner writes. For many students, getting back a description similar to their own and seeing how hard it is to work with an imprecise description could help them recognize the problem with their own.

    The benefit of seeing where their partner struggles with drawing the graph could be duplicated by recording the drawing attempt and playing it back.

    So you’d have the following steps:
    1. describe this graph
    2. here’s your partner’s description of their graph, draw it
    3. here’s how your partner understood your description
    4. see if you can give your partner a better description (and so on)

    The key being that the student would work with their partner’s description before seeing how their partner works with theirs.

    I don’t think this would replace the point intervention you developed because I don’t think it would work for students who have wildly different initial description styles, but it could relieve some frustration in advance of the intervention. And being less frustrated could make waiting easier particularly if I’m right that this format would reduce the waiting in general.

  20. Thanks for your thoughts here, Liz. In an earlier prototype, I actually tried out the workflow you suggest. It turns out there was still a great deal of variance in the wait time in between step 2 (draw your partner’s description) and step 3 (wait for your partner to describe your own). It’s likely that moving away from a strict partner-partner model will relieve some of the waiting.

  21. Let me guess, the descriptions are longer to write than trying to draw from the description? Maybe some variation where the student has the option to get random descriptions from a stored collection of them to try drawing while they wait?