Category: tech contrarianism


Why Secondary Teachers Don’t Want a GitHub for Lesson Plans

Chris Lusto calls for a GitHub for lesson plans:

To say that the community repository model has done wonders for open source software is a massive understatement. To what extent that success translates to curriculum I’m obviously unsure, but I have randomly-ordered reasons to suspect it’s appreciable.

I attended EdFoo earlier this year, an education conference at Google’s campus attended by lots of technologists. Speakers posed problems about education in their sessions and the solutions were often techno-utopian, or techno-optimistic at the very least.

One speaker wondered why teachers spend massive amounts of time creating lesson plans that don’t differ all that much from plans developed by another teacher several states away or several doors down the hall. Why don’t they just build it once, share it, and let the community modify it? Why isn’t there a GitHub for lesson plans?

I’m not here to say that’s a bad idea in theory, just to say that the idea very clearly hasn’t caught on in practice.

Exhibit A: BetterLesson, which pivoted from its original community lesson repository model to a lesson repository stocked by master teachers and now to professional development. (Its lesson repository is currently a blink-and-you’ll-miss-it link in the footer of its homepage.) The idea has failed to catch on with secondary educators to such a degree that it’s worth asking them why they don’t seem to want it.

Practicing secondary teachers were notably absent from our room at EdFoo, so I went on Twitter to ask a few thousand of them, “Why don’t you use lesson download sites?” (I asked the same question two years ago as well.) Here are helpful responses from actual, really real current and former secondary teachers:

Nancy Mangum:

Using someone else’s lesson plan is like wearing a friend’s underwear. It may do the job but ultimately doesn’t fit quite right.

Jonathan Claydon:

Their wheels aren’t the right size for my car.

Justin Reich:

Linux works because code compiles. Syllabi don’t compile. If I add a block/lesson, I never know who it helps.

Bob Lochel:

I don’t require a script, just decent ideas now and then.

Grace Chen:

I’m not sure they solve for the problems they think they’re trying to solve. It takes time to read / internalize / modify others’ plans.

David Wees:

It’s challenging to sequence, connect, plan, and enact someone else’s lesson.

Mark Pettyjohn:

The plan itself is the least important element. The planning is what’s critical.

2016 Jun 11. Dwight Eisenhower:

In preparing for battle I have always found that plans are useless, but planning is indispensable.

In sum: “Small differences between lesson plans are enormously important, enormously time-consuming to account for and fix, and whatever I already have is probably good enough.” It turns out that even if two lesson plans don’t differ all that much, they already differ too much.

Any lesson sharing site will have to account for that belief before it can offer teachers even a fraction of GitHub’s value to programmers.

2016 Jun 8. Check out Bob Lochel’s tweet above and Julie Reulbach’s tweet below. Both express a particular sentiment that the nuts and bolts of a lesson plan are less important than the chassis. (I don’t know a thing about cars.)

I was chatting with EdSurge’s Betsy Corcoran about that idea at EdFoo and she likened it to “the head” in jazz music. (I don’t know a thing about jazz music.) The head contains crucial information about a piece of music – the key, the tempo, the chord changes. Jazz musicians memorize the head but they’ll build and develop the performance off of it. The same head may result in several different performances. What I want – along with Bob and Julie and many others – is a jazz musician’s fake book – a repository of creative premises I can easily riff off of.

(Of course, it’s worth noting here that many people believe that teachers should be less like jazz musicians and more like player pianos.)

Featured Tweets

Featured Comments

Chris Lusto:

There seems to be a general distrust of “other people’s lessons.” Which I get. But nothing about this model would change the extent to which you do or do not teach other people’s lessons, or the fidelity with which you do it. Again, the whole thing that got me thinking in this vein was the problem of managing, in some kind of coherent way, all the changes that teachers already make as a matter of course. If you’re starting with an existing curriculum, then you’re using other people’s stuff to some extent. And once you alter that extent, it might be nice to track it, for all sorts of reasons. Maybe classroom teachers don’t find that interesting, but somebody in the chain between publisher and implementer certainly does. Not totally sure who the best target audience might be.

Jo:

As an elementary math coach I don’t want a repository of lesson plans either, but my teachers long for one. However, when given pre-written lesson plans they’re not happy with them – for all the reasons listed above.
The hardest thing about elementary math is that most elementary teachers go into teaching because they love reading and they want to share that. They rarely feel that way about math. So, they want a guided lesson that will teach the requisite skills. Unfortunately, it doesn’t work for them any better than it works for secondary teachers.

Even in elementary it’s the process of planning that’s important. My brain needs to go through the work of planning – what leads to what, what is going to confuse the kids, what mistakes they’re likely to make, what false paths they’re likely to follow. The only way to deeply understand the material and how to present it is to plan it. The only way to truly understand the standard is to wrestle with what it really means.

Planning is the work; teaching is just the performance.

Ethan Weker:

I get a lot out of reading other lesson plans/approaches to teaching/ideas, and steal activities fairly regularly, but my actual lesson plans aren’t copies of others’. It’s more like they’re inspired by what other people do. This is where the artistry of teaching comes in.

Brandon Dorman:

I get it – we don’t want a repository of lessons, but what happens once those lessons get downloaded and re-worked? Right now there isn’t a way to see derivatives of those lessons, which could be very important.

Stephanie:

Brandon, I love that idea. Recipe websites do this – showing what can be substituted for what, and how cooks with different ingredients and different tools in different places can adapt the same recipe. These are good parallels for the teaching world.
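Brandon’s wish – seeing the derivatives of a lesson as teachers download and rework it – maps onto the fork model from version control. Here’s a minimal sketch of what such a record might look like, if anyone wanted to build it; every class, field, and function name in it is hypothetical:

```python
# A hypothetical fork-tracking record for lessons, borrowing git's
# fork-and-modify model. No lesson-sharing site exposes this today.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LessonVersion:
    lesson_id: str
    author: str
    parent_id: Optional[str]           # None for an original lesson
    change_note: str                   # why the teacher reworked it
    children: List[str] = field(default_factory=list)

def fork(original: LessonVersion, author: str, change_note: str,
         registry: dict) -> LessonVersion:
    """Record a teacher's modified copy as a derivative of the original,
    so anyone between publisher and implementer can see how it drifts."""
    new_id = f"{original.lesson_id}-fork-{len(original.children) + 1}"
    child = LessonVersion(new_id, author, original.lesson_id, change_note)
    original.children.append(new_id)
    registry[new_id] = child
    return child
```

Whether classroom teachers would ever fill in those change notes is, per the comments above, the real open question.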

2016 Jun 13. Mike Caulfield offers an illustration of the value of planning relative to plans.

Moving the Goalposts on Personalized Learning

Mike Caulfield:

But the biggest advantage of a tutor is not that they personalize the task, it’s that they personalize the explanation. They look into the eyes of the other person and try to understand what material the student has locked in their head that could be leveraged into new understandings. When they see a spark of insight, they head further down that path. When they don’t, they try new routes.

EdSurge misreads Mike pretty drastically, I think:

What if technology can offer explanations based on a student’s experience or interest, such as indie rock music?

Mike is summarizing what great face-to-face tutors do. They figure out what the student already knows, then throw hooks into that knowledge using metaphors and analogies and questions. That’s a personalized tutor.

But in 2016 computers are completely inept at that kind of personalization. Worse than your average high school junior tutoring on the side for gas money. Way worse than your average high school teacher. I don’t think this is a controversial observation. In a follow-up post, Michael Feldstein writes, “For now and the foreseeable future, no robot tutor in the sky is going to be able to take Mike’s place in those conversations.”

So it’s interesting to see how quickly EdSurge pivots to a different definition of personalization, one that’s much more accommodating of the limits of computers. EdSurge’s version of personalization asks the student to choose her favorite noun (e.g. “indie rock music”) and watch as the computer incorporates that noun into the same explanation every other student receives. Find and replace. In 2016 computers are great at find and replace.
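To see how little work that kind of personalization does, here’s roughly what it amounts to in code – one fixed explanation with a substitutable noun. The template text below is invented for illustration:

```python
from string import Template

# A hypothetical explanation template. Every student receives the same
# explanation; only the favorite noun changes.
EXPLANATION = Template(
    "Imagine sorting your $interest collection into equal groups. "
    "Dividing 24 items into 4 groups leaves 6 items per group."
)

def personalize(favorite_noun: str) -> str:
    """EdSurge-style 'personalization': find and replace."""
    return EXPLANATION.substitute(interest=favorite_noun)

print(personalize("indie rock music"))
print(personalize("baseball cards"))  # same explanation, new noun
```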

This is just a PSA to say: technofriendlies, I see you moving the goalposts! At the very least, let’s keep them at “high school junior-level tutor.”

BTW. I don’t think find-and-replacing “indie rock music” will improve what a student knows, but maybe it will affect her interest in knowing it. I’ve hassled edtech over that premise before. In my head, I always call that find-and-replacing approach the “poochification” of education, but I never know if that reference will land for anybody who isn’t inside my head.

The Future Of Handwriting Recognition & Adaptive Feedback In Math Education

In math education, the fields of handwriting recognition and adaptive feedback are stuck. Maybe they’re stuck because the technological problems they’re trying to solve are really, really hard. Or maybe they’re stuck because they need some crank with a blog to offer a positive vision for their future.

I can’t help with the technology. I can offer my favorite version of that future, though. Here is a picture of the present and the future of handwriting recognition and adaptive feedback, along with some explanation.

In the future, the computer will recognize my handwriting.

[Image: the computer repeatedly misreading my handwritten “24”]

Here I am trying hopelessly to get the computer to understand that I’m trying to write 24. This is low-hanging fruit. No one needs me to tell them that a system that recognizes my handwriting more often is better than a system that doesn’t.

But I don’t worry about a piece of paper recognizing my handwriting. If I’m worried about the computer recognizing my handwriting, that worry goes in the cost column.

In the future, I won’t have to learn to speak computer while I’m learning to speak math.

[Image: the computer recognizing my digits but not the way I’ve arranged them]

In this instance, I’m learning to express myself mathematically – hard enough for a novice! – but I also have to learn to express myself in ways that the computer will understand. Even when the computer recognizes my numbers and letters, it doesn’t recognize the way I have arranged them.

Any middle school math teacher would recognize my syntax here. I’ll wager most would sob gratefully for my aligned operations. (Or that I bothered to show operations at all.) If the computer is confused by that syntax, that confusion goes in the cost column.

In the future, I’ll have the space to finish a complete mathematical thought.

[Image: running out of room on the tablet mid-solution]

Here I am trying to finish a mathematical thought. I’m successful, but only barely. That same mathematical thought requires only a fraction of the space on a piece of paper that it requires on a tablet, where I always feel like I’m trying to write with a bratwurst. That difference in space goes in the cost column.

That’s a lot in the cost column, but lots of people eagerly accept those costs in other fields. Computer programmers, for example, learn to speak unnatural languages in unusual writing environments. They do that because the costs are dwarfed by the benefits.

What is the benefit here?

Proponents of these handwriting recognition systems often claim their benefit is feedback – the two-sigma improvement of a one-on-one human tutor at a fraction of the cost. But let’s look at the feedback they offer us and, just as we did for handwriting recognition, write a to-do list for the future.

In the future, I’ll have the time to finish a complete mathematical thought.

If you watch the video, you’ll notice the computer interrupts my thought process incessantly. If I pause to consider the expression I’m writing for more than a couple of seconds, the computer tries to convert it into mathematical notation. If it misconverts my handwriting, my mathematical train of thought derails and I’m thinking about notation instead.

Then I have to check every mathematical thought before I can write the next one. The computer tells me if that step is mathematically correct or not.

It offers too much feedback too quickly. A competent human tutor doesn’t do this. That tutor will interject if the student is catastrophically stuck or if the student is moving quickly on a long path in the wrong direction. Otherwise, the tutor will let the student work. Even if the student has made an error. That’s because a) the tutor gains more insight into the nature of the error as it propagates through the problem, and b) the student may realize the error on her own, which is great for her sense of agency and metacognition.

No one ever got fired in edtech for promising immediate feedback, but in the future we’ll promise timely feedback instead.
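If you wanted to encode that tutor’s restraint in software, it might look something like this sketch. The StudentState fields and the thresholds are assumptions for illustration, not anyone’s shipping product:

```python
from dataclasses import dataclass

@dataclass
class StudentState:
    seconds_idle: float       # time since the student last wrote anything
    uncorrected_errors: int   # errors still propagating through the work
    steps_since_error: int    # steps taken since the first uncorrected error

def should_interject(state: StudentState) -> bool:
    """Interject only if the student is catastrophically stuck or is
    moving quickly down a long wrong path. Otherwise stay quiet: the
    propagating error may teach the tutor something, or the student
    may catch it herself."""
    catastrophically_stuck = state.seconds_idle > 120
    long_wrong_path = (state.uncorrected_errors > 0
                       and state.steps_since_error >= 3)
    return catastrophically_stuck or long_wrong_path
```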

In the future, computers will give me useful feedback on my work.

I have made a very common error in my application of the distributive property here.

[Image: my solution, which applies the distributive property and contains a common error]

A competent human tutor would correct the error after the student finished her work, let her revise that work, and then help her learn the more efficient method of dividing by four first.
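The worked example itself isn’t reproduced here, so take a hypothetical equation of the same shape to see the common error and the more efficient route side by side:

```latex
\begin{align*}
4(x + 3) &= 28 \\
4x + 3 &= 28 && \text{common error: the 3 never gets multiplied by 4} \\[1ex]
4x + 12 &= 28 && \text{correct distribution} \\
4x &= 16 \\
x &= 4 \\[1ex]
x + 3 &= 7 && \text{more efficient: divide both sides by 4 first} \\
x &= 4
\end{align*}
```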

But the computer was never programmed to anticipate that anyone would use the distributive property, so its feedback only confuses me. It tells me, “Start over and go down an entirely different route.”

The computer’s feedback logic is brittle and inflexible, which teaches me the untruth that math is brittle and inflexible.

In the future, computers will do all of this for math that matters.

I’ve tried to demonstrate that we’re a long way from the computer tutors our students need, even when they’re solving equations, a highly structured skill that should be very friendly to computer tutoring. Some of the most interesting problems in K-12 mathematics are far less structured. Computers will need to help our students there also, just as their human tutors already do.

We want to believe our handwriting recognition and adaptive feedback systems result in something close to a competent human tutor. But competent tutors place little extraneous burden on a student’s mathematical thinking. They’re patient, insightful, and their help is timely. Next to a competent human tutor, our current computer tutors seem stuttering, imposing, and a little confused. But that’s the present, and the future is bright.

Need A Job?

I work for Desmos where we’re solving some of the biggest problems in math edtech. Teachers and students love us and we’re hiring. Come work with us!

Tracy Zager Offers You And Your Fact Fluency Game Some Advice

Thoughtful elementary math educator Tracy Zager offers app developers some best practices for their fact fluency games:

I’ve been looking around since, and the big money math fact app world is enough to send me into despair. It’s almost all awful. As I looked at them, I noticed I use three baseline criteria, and I’m unwilling to compromise on any of them.

She later awards special merits to DreamBox Learning and Bunny Times.

What Students Do (And Don’t Do) In Khan Academy, Ctd.

My analysis of Khan Academy’s eighth-grade curriculum was viewed ~20,000 times over the last ten days. Several math practice web sites have asked me to perform a similar analysis on their own products. All of this gives me hope that my doctoral work may be interesting to people outside my small crowd at Stanford.

Two follow-up notes, including the simplest way Khan Academy can improve itself:

One. Several Khan Academy employees have commented on the analysis, both here and at Hacker News.

Justin Helps, a content specialist, confirmed one of my hypotheses about Khan Academy:

One contributor to the prevalence of numerical and multiple choice responses on KA is that those were the tools readily available to us when we began writing content. Our set of tools continues to grow, but it takes time for our relatively small content team to rewrite item sets to utilize those new tools.

But as another commenter pointed out, if the Smarter Balanced Assessment Consortium can make interesting computerized items, what’s stopping Khan Academy? Which team is the bottleneck: the software developers or the content specialists? (They’re hiring!)

Two. In my mind, Khan Academy could do one simple thing to improve itself several times over:

Ask questions that computers don’t grade.

A computer graded my responses to every single question in eighth grade.

That means I was never asked, “Why?” or “How do you know?” Those are seriously important questions but computers can’t grade them and Khan Academy didn’t ask them.

At one point, I was even asked how m and b (of y = mx + b fame) affected the slope and y-intercept of a graph. It’s a fine question, but there was no place for an answer because how would the computer know if I was right?

So if a Khan Academy student is linked to a coach, make a space for an answer. Send the student’s answer to the coach. Let the coach grade or ignore it. Don’t try to do any fancy natural language processing. Just send the response along. Let the human offer feedback where computers can’t. In fact, allow all the proficiency ratings to be overridden by human coaches.
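In code, the feature is almost embarrassingly small – no grading logic, no natural language processing, just a forwarding step and a human override. A minimal sketch, with every name in it hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FreeResponse:
    student_id: str
    exercise_id: str
    prompt: str                          # e.g. "Why?" or "How do you know?"
    answer_text: str
    coach_rating: Optional[str] = None   # set by a human, never by software

def submit_response(response: FreeResponse, coach_inbox: list) -> None:
    """Just send the student's words to a human who can read them."""
    coach_inbox.append(response)

def override_proficiency(ratings: dict, exercise_id: str, rating: str) -> None:
    """Let the coach's judgment override the computer's rating."""
    ratings[exercise_id] = rating
```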

Khan Academy does loads of A/B testing, right? So A/B test this. See if teachers appreciate the clearer picture of what their students know or if they prefer the easier computerized assessment. I can see it going either way, though my own preference is clear.
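Even the experiment is cheap to run. A deterministic bucket assignment – hash each coach into a treatment or control arm – would do it. Again, a sketch under assumed names, not Khan Academy’s actual infrastructure:

```python
import hashlib

def experiment_arm(coach_id: str,
                   experiment: str = "coach-graded-responses") -> str:
    """Assign half of coached classrooms to receive routed free-response
    questions, half to the fully computer-graded status quo."""
    digest = hashlib.sha256(f"{experiment}:{coach_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"
```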