[LOA] Hypothesis #3: Test Your Abstractions

#3: Let your students test their abstractions.

The Google team has one hell of an abstraction on their hands. They’ve distilled the complicated process of driving a car and its infinite judgment calls, muscle twitches, and cursing into a finite set of variables. That set of variables is so finite, in fact, that they say a computer can compute it in real time and drive a car by itself.

That just seems implausible. At various points, I’ll wager, the Google team didn’t think their abstraction was plausible either.

I’ll put any sum of money on this: the team wanted to know if their abstraction was any good. They’d thrown away so much data for the sake of a manageable abstraction. Did they throw away too much? Could they have thrown away more? Is the abstraction just right? They could have turned to existing theory and models in artificial intelligence and said, “Well, the literature says the model should work.” But no one would have walked away from that conversation satisfied.

People like to test their abstractions. They want to see the driverless car drive.


The thing is, you and I are in on the joke. A lot of our abstractions are flimsy. If the basketball falls off the plane defined by the player and the hoop, the model falls apart. We abstract runners into particles moving at constant speed – no acceleration or deceleration. Try that abstraction with Playing Catch-Up. It falls apart. But it falls apart interestingly, and we win twice over. First, our students work on the abstractions we need them to work on. Second, we get a discussion about the limitations of those abstractions as a bonus. How much error should we tolerate? Are there ways we could improve the abstraction?
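To make the constant-speed abstraction concrete, here is a minimal sketch of the catch-up model it implies. The function name and the numbers are my own, chosen only for illustration – they aren’t from the task itself:

```python
def catch_up_time(head_start, chaser_speed, leader_speed):
    """Time for the chaser to close a head start, assuming both
    runners move at constant speed (no acceleration or deceleration).
    Speeds in m/s, head start in meters; returns seconds."""
    if chaser_speed <= leader_speed:
        return None  # under this abstraction, the chaser never catches up
    return head_start / (chaser_speed - leader_speed)

# Hypothetical numbers: leader has a 10 m head start,
# chaser runs 8 m/s, leader runs 6 m/s.
print(catch_up_time(10, 8, 6))  # 5.0
```

The model answers cleanly – which is exactly the point. Real runners accelerate from a standstill, so the abstraction’s answer and the stopwatch’s answer disagree, and that gap is what students get to discuss.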

So for all these reasons and because there’s very little downside, give students the opportunity to test out and refine their abstractions.



  1. At this point you have reached a very specific abstraction – a *model*. I prefer to gradually switch to calling it that.

    We are no longer just talking about which extraneous details can be dropped in describing the attributes of a given situation, i.e., “Which are the efficient abstractions?”

    We are now talking about making a prediction about the *behavior* of a system. Instead of building a full- or reduced-scale model, we replace parts of the system with equations (abstractions, yes) that describe the behavior of specific components. If we get this right, our model has predictive ability, a necessary feature for a model to be valid.

    Models-with-predictive-ability is such a powerful concept that some models acquire the status of a theory in physics (e.g., the Standard Model).

    This aspect, the predictive ability of models, is sufficiently key that I feel our students ought to become familiar with that term – model – used in its appropriate context.

    So I vote for “Test your model”

  2. I have a challenge for the team: when testing measurable phenomena (shooting baskets, counting Facebook users), it’s easy to see what testing the abstraction means. But some of the abstractions we hope students learn and use in math class refer only internally, to other mathematical objects.

    So… I wonder what some good examples of those abstractions are, and what it means when you test them? Is defining a quadrilateral with 4 congruent sides as a special type of quadrilateral an abstraction? Clearly testing that abstraction wouldn’t just be looking up the definition of rhombus in the back of the book… What makes that definition robust, useful, etc.?