About 
I'm Dan and this is my blog. I'm a former high school math teacher and current head of teaching at Desmos. He/him.

12 Comments

  1. I have no intention of defending Teach to One, but since when have school administrator decisions had anything to do with whether a program was pedagogically successful? I’d have to see the pedagogical results to determine whether they were mixed.

  2. There are many reasons why a school makes a decision like dropping a program. There could be budget issues, new policies in place, new leadership, politics, logistics, class size, etc. Although it may be that the program didn’t work, it may also be that the schools dropped the program for any number of reasons.

  3. I think those measuring sticks are because traditional education is often viewed as a failed (or, at best, stalled) experiment, while ed tech stuff is seen as an emerging technology that is well-positioned to improve.

    People aren’t looking for ed tech to be successful. They just want it to be promising.

  4. I don’t think you’re necessarily wrong with your broader point, but I’m pretty sure the quote you pulled includes only the negative half of the “mixed results.” I think the 2/3 of schools dropping the program is obviously negative, but the next couple paragraphs about the improved test scores in the school that kept the program are considered positive. Hence “mixed results” overall.

    Again, I don’t think you’re wrong overall about different standards, I just wanted to point out that the quote you pulled might be a bit misleading, as far as the argument Emma Brown was making.

  5. Sorry Dan. I have to agree that’s a little bit of an unfair/unscientific cherry-pick of a comment. The 3-act system doesn’t work for me every time either. I don’t necessarily use that to say it’s a failed system. I don’t think blended learning is a failed system either just because it’s failing right now.

    Are all of these technology ideas really immature right now? Yes. But I see that it’s quickly maturing and becoming more cognitively sophisticated.

  6. These aren’t results. Since when was three a statistically significant sample? We need evidence-based policy making, not policy-based evidence making…

  7. @Dan, you wrote: “In what universe are those results “mixed”?”

    I am going to take issue with this one, although in so doing I am not advocating the approach being discussed.

    Suppose someday there is a systemwide effort to implement DanMeyersApproachtoMathInstruction. Suppose further that “after the first year, 2/3 of schools dropped it.” Conclusion A: Dan Meyer sucks. Conclusion B: The schools sucked at implementing DM because it was radically different and they only tried it for one year.

    Just saying.

  8. The next few sentences in the original round things out somewhat.

    “But Boody, where test scores hadn’t budged the first year, kept at it. And the second year yielded gains. The proportion of students proficient in math on state tests grew nearly five percentage points – faster than the city average and faster than New York schools with comparable demographics.”

    I don’t want to defend it as a sole pedagogy – not that I know what the curriculum is exactly – but they might have something right. Their instincts about what they think they’re doing sound at least partly valid (customized pathways, etc.). It might be part of what is needed, another way of ‘making over’ math class with the new possibilities of tech – not that 1:1 sounds like a full solution.

  9. Jeremy Bell asks whether all of these technology ideas are really immature right now, answers yes, and sees them quickly maturing and becoming more cognitively sophisticated. I’ve been involved in math education for only a few years, but I’ve been involved with cognitive science for decades, and I was a software architect and engineer for many years. What I see is basically the same thing that’s been going on since the 1960s: outrageous claims for progress in artificial intelligence, supported by evidence that’s superficially impressive but proves very little. That’s not to say there hasn’t been real progress; we might have a useful Teach to One in only 10 or 15 years.

  10. I dunno, I’ve got to go with Dan on this one. When you have a two-thirds dropout rate (especially after the costs associated with getting this set up), it says it’s not working.

    It may not be working for a lot of different reasons, but…it’s not working.

    The behavior/noise problem seems the clearest issue to me. Imagine a workspace of cubicles with that many people in it; the only workplace like that is one that uses a script. That is, the person working there is making calls, reading from a script, not thinking intently.

  11. Jen, I don’t think people are necessarily advocating for this “one-to-one” model. We are just commenting that it seemed unfair that Dan used this example to make a point that “computer education” is inferior to the method he prefers.

    I think everybody besides admins hates Khan Academy videos (and its variations), but I still think Khan has proven itself to be a voice at the table at least. I agree that one-to-one is at least 10 years off and actually may never be useful in primary education, but I think blended models are definitely a worthy and unstoppable force that is quickly coming.

    Check out Paul Anderson on YouTube. He has come up with an incredible blended model that has student centered inquiry AND some good old-fashioned rigor.

  12. See, I just saw him pointing out that many other techniques probably wouldn’t be given a fairly glowing newspaper write-up under the same circumstances.

    Keeping track of who has paid for what, and what that money seems to buy from the press in terms of scrutiny or the lack thereof, *should* be a question educators are asking as outside money pours in trying to get a handle on the pile of public money.

    It’s not about this model specifically, but about the way the same result would be treated if it didn’t have big money and the allure of being new and tech-y about it.