Danielson, doing his best Howard Beale:

THE STEPS WIN, PEOPLE! The steps trump thinking. The steps trump number sense. The steps triumph over all.

Let me add to the conversation the category of “steps that are correct but useless.” These are great. They come from a conversation I had, like, fifteen minutes ago with a teacher named Leah Temes here at NWMC 2014.

Leah teaches Algebra II. We were talking about solving systems of equations. It’s really easy to teach the solution to a system like this as a series of correct, useful steps:

2x + 3y = 10

5x – 3y = 4

- Add the second equation to the first one.
- Solve for x.
- Substitute x in either equation to solve for y.
- Check that pair in the other equation for full credit.
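
Those steps can be sketched in code. A minimal sketch (mine, not from the original post) that carries out the elimination exactly, using Leah's system:

```python
from fractions import Fraction

def solve_by_elimination(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination.

    Scales the second equation so the y-terms cancel, adds the two
    equations to solve for x, then back-substitutes for y.
    """
    k = Fraction(-b1, b2)      # scale factor for equation 2
    a2, c2 = a2 * k, c2 * k    # now b2*k == -b1, so adding eliminates y
    x = (Fraction(c1) + c2) / (a1 + a2)
    y = (Fraction(c1) - a1 * x) / b1
    return x, y

# 2x + 3y = 10 and 5x - 3y = 4: adding gives 7x = 14, so x = 2, then y = 2.
x, y = solve_by_elimination(2, 3, 10, 5, -3, 4)
```

Note how the scale factor `k` is what makes the "add the equations" step both legal *and* useful: any multiple of the second equation keeps the system correct, but only this one eliminates y.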

Leah said she was tired of seeing her students mimic those correct steps without understanding why they worked. So instead of showing her students steps that were useful and correct, she asked them if she was allowed to add the following two equations:

2x + 3y = 10

5 = 5

To get:

2x + 3y + 5 = 10 + 5

Is everything still *correct*? Yes.

Was that *useful*? No.

This experience awakened her students to a category of steps in addition to the *correct and useful ones* they’re supposed to memorize and the *incorrect and useless ones* they’re supposed to avoid – *correct and useless steps*.

Alerting your students to that category of steps may make math seem less intimidating and more interesting. Math isn’t any longer a matter of staying on the right side of a line between the incorrect and correct steps. There’s another region out there, one that’s a bit less *tame*, a place for explorers, a place where the worst thing that can happen is you did something right but it just wasn’t useful. That category of steps also requires *justification* – “how do you *know* this is correct?” – which can help bend the student away from memorization and back towards understanding.

**BTW**. All of this implies a *fourth* category of steps – incorrect but useful. Can anybody give an example?

**Featured Comments**

I do a similar thing when solving equations in one variable by asking students if I can add 1,000,000, let’s say, to each side of an equation… or if I can subtract 27 from both sides… or divide both sides by 200… etc. etc. We talk about what is “legal” (have we followed the rules of algebra and the concept of “balance” and equivalence?) and what is “helpful” (have we done something “legal” that helps us isolate the variable so we can solve this thing?). Exaggerated examples like adding 1,000,000 to both sides seem to make an impression on kids.

I have long been a fan of deliberately sabotaging a solution to something that I might be doing on the board so that somewhere down the road things become obviously wrong. This is so students can start to develop strategies for what to do when this happens.

Many will tell you that it’s important for students to make mistakes (in fact, that they learn the most when they do). But that sometimes runs counter to what they see in class. That is, a teacher demonstrating flawless execution of mathematics. Even some of our best students often won’t even attempt a problem unless they are sure they will get it correct. If they are ever going to become comfortable with making mistakes as part of the normal process then we have to include managing those mistakes as part of our day to day in class.

[It’s] incorrect but useful to estimate things like area problems, in order to find out a ballpark figure and check if you’ve done the math right.

In the September 2014 edition of *Mathematics Teacher*, reader Thomas Bannon reports that his research group has found that the applications of algebra haven’t changed much throughout history.

310:

Demochares has lived a fourth of his life as a boy; a fifth as a youth; a third as a man; and has spent 13 years in his dotage; how old is he?

1896:

A man bought a horse and carriage for $500, paying three times as much for the carriage as for the horse. How much did each cost?

1910:

The Panama Canal will be 46 miles long. Of this distance the lower land parts on the Atlantic and Pacific sides will together be 9 times the length of the Culebra Cut, or hill part. How many miles long will the Culebra Cut be? Prove answer.

2013:

Shandra’s age is four more than three times Sherita’s age. Write an equation for Shandra’s age. Solve if Sherita is 3 years old.

I’m grateful for Bannon’s research but his conclusion is, in my opinion, overly sunny:

Looking through these century-old mathematics books can be a lot of fun. Challenging students to find and solve what they consider the most interesting problem can be a great contest or project.

My alternate reading here is that the primary application of school algebra throughout history has been to solve contrived questions. Instead of challenging students to answer the most interesting of those contrived questions, we should ask questions that aren’t contrived and that actually do justice to the power of algebra. Or skip the whole algebra thing altogether.

**Different**

If you told me there existed a book of arithmetic problems that *didn’t include* any numbers, I’d wonder which progressive post-CCSS author wrote it. Imagine my surprise to find *Problems Without Figures*, a book of 360 such problems, published in 1909.

For example, imagine the interesting possible responses to #39:

What would be a convenient way to find the combined weight of what you eat and drink at a meal?

That’s great question development. Now here’s an alternative where we rush students along to the answer:

Sam weighs 185.3 pounds after lunch. He weighed 184.2 before lunch. What was the weight of his lunch?

So much less interesting! As the author explains in the powerful foreword:

Adding, subtracting, multiplying and dividing do not train the power to reason, but deciding in a given set of conditions which of these operations to use and why, is the feature of arithmetic which requires reasoning.

Add the numbers back into the problem later. *Two minutes* later, I don’t care. But subtracting them for just two minutes allows for that many more interesting answers to that many more interesting questions.

[via @lucyefreitas]

*This is a series about “developing the question” in math class.*

**New Blog Subscriptions**

- Tracy Zager has been one of my favorite math voices on Twitter this school year and she’s now blogging. She’s also recently announced a fight with breast cancer and has requested that we “Please help me remember that I have thinking and ideas to share, and am involved in a world bigger than this right now.”
- Annie Fetter’s work at the Math Forum has always been impressive and it’s a total oversight I hadn’t realized she writes a blog until now.
- Tim McCaffrey and I share a lot of the same enthusiasms. He helps districts run lesson studies around three-act tasks and just started blogging about it.
- Matt Bury had positively invaluable commentary during last month’s adaptive learning discussions.
- Dan Burf, a/k/a Quadrant Dan, is a new teacher who has been using my old, old lessons, which is kind of fun to watch.
- Amy Roediger, whose writing on Classkick was extremely useful.
- Julie Wright is full of promise.
- Just Mathness is full of promise.

**New Twitter Follows**

- Patrick Honner: “I’m sure Benny would do quite well at this.”
- Bob Lochel, who is a regular in our Great Classroom Action features.
- Kelly Stidham, who lit up my blog this month with a comment about online professional development.

**Multimedia Math**

I make an open offer to my workshop participants to help them with their video editing. A couple of newcomers to multimedia modeling came up with these two tasks:

- Candy & Chips, for systems of equations.
- Apples for All, for unit fractions.

**Great Tweets**

Proofs are social documents not compiled code.

**Press Clippings**

- The Ontario Ministry of Education filmed an interview series with me and other math education-types in Toronto.
- An interview with a teen writer from The Santa Fe New Mexican.
- An interview with AFEMO, a Francophone group of math educators.

Anderson took a task from the Shell Centre and delayed all the calculation questions, making room for a lot of informal dialog first.

Patterson took a *Discovering Geometry* task and removed the part where the textbook specified that the solution space ran from zero to eight.

“It turns out that by shortening the question,” Joel Patterson said, “I opened the question up, and the kids surprised themselves and me!”

I believe EDC calls these “tail-less problems.” I call it being less helpful.

**BTW**. These are *great* task designers here. I spent the coldest winter of my life at the Shell Centre because I wanted to learn their craft. *Discovering Geometry* was written by friend-of-the-blog Michael Serra. This only demonstrates how unforgiving the print medium is to interesting math tasks, like asking Picasso to paint with a toilet plunger. You have to add everything at once.

**Computer & Mouse v. Paper & Pencil**

Jebara:

Just like learning Math requires persistence and struggle, so too is learning a new interface.

I think Mathspace has made a poor business decision to blame their user (the daughter of an earlier commenter) for misunderstanding their user interface. Business isn’t my business, though. I’ll note instead that adaptive math software here again requires students to learn a new *language* (computers) before they find out if they’re able to speak the language they’re trying to learn (math).

For example, here is a tutorial screen from software developed by Kenneth Tilton, a frequent commenter here who has requested feedback on his designs:

Writing that same expression with paper and pencil instead is more intuitive by an order of magnitude. Paper and pencil is an interface that is omnipresent and easily learned, one that costs a bare fraction of the computer that Mathspace’s interface requires, one that never needs to be plugged into a wall.

None of this means we should reject adaptive math software, especially not Mathspace, the interface of which *allows* handwriting. But these user interface issues pile high in the “cost” column, which means the software cannot skimp on the benefits.

**Misunderstanding the Status Quo**

Jebara:

Does a teacher have time to sit side by side with 30 students in a classroom for every math question they attempt?

[..]

But teachers can’t watch while every student completes 10,000 lines of Math on their way to failing Algebra.

[..]

I talk to teachers every single day and they are crying out for [instant feedback software].

Existing classroom practice has its own cost and benefit columns and Jebara makes the case that classroom costs are exorbitant.

Without adaptive feedback software, to hear Jebara tell it, students are wandering in the dark from problem to problem, completely uncertain if they’re doing anything right. Teachers are beleaguered and unsure how they’ll manage to review *every* student’s work on *every* assigned problem. Thirty different students will reveal thirty unique misconceptions for each one of thirty problems. That’s 27,000 unique responses teachers have to make in a 45 minute period. That’s ten responses per second! No wonder all these teachers are crying.

This is all Dickens-level bleak and misunderstands, I believe, the possible sources of feedback in a classroom.

There is the textbook’s answer key, of course. Some teachers make regular practice of posting all the answers in advance of an exercise set, also, so students have a sense that they’re heading in the right direction and focus on process not product.

Commenter Matt Bury also notes that a student’s classmates are a useful source of feedback. Since I recommended Classkick last week, several readers have tried it out in their classes. Amy Roediger writes about the feature that allows students to help other students:

… the best part was how my students embraced collaborating with each other. As the problems got progressively more challenging, they became more and more willing to pitch in and help each other.

All of these forms of feedback exist within their own webs of costs and benefits too, but the idea that without adaptive math software the teacher is the only source of feedback just isn’t accurate.

**Immediate v. Delayed Feedback**

Most companies in this space make the same set of assumptions:

- *Any* feedback is better than *no* feedback.
- *Immediate* feedback is better than *delayed* feedback.

Tilton has written here, “Feedback a day later is not feedback. Feedback is immediate.”

In fact, Kluger & DeNisi found in their meta-analysis of feedback interventions that feedback *reduced* performance in more than one third of studies. What evidence do we have that adaptive math software vendors offer students the *right* kind of feedback?

The immediate kind of feedback isn’t without complication either. With immediate feedback, we may find students trying answer after answer, looking for the red x to change to a green check mark, learning little more than systematic guessing.

Immediate feedback risks underdeveloping a student’s *own* answer-checking capabilities also. If I get 37 as my answer to 14 + 22, immediate feedback doesn’t give me any time to reflect on my knowledge that the sum of two even numbers is always even and make the correction myself. Along those lines, Cope and Simmons found that *restricting* feedback in a Logo-style environment led to better discussions and higher-level problem-solving strategies.

**What Computers Do To Interesting Exercises**

Jebara:

Can you imagine a teacher trying to provide feedback on 30 hand-drawn probability trees on their iPad in Classkick?

[..]

Can you imagine a teacher trying to provide feedback on 30 responses for a Geometric reasoning problem letting students know where they haven’t shown enough of a proof?

I can’t imagine it, but not because that’s too much grading. I can’t imagine assigning those problems because I don’t think they’re worth a class’ limited time and I don’t think they do justice to the interesting concepts they represent.

Bluntly, they’re boring. They’re boring, but that isn’t because the team at Mathspace is unimaginative or hates fun or anything. They’re boring because a) computers have a difficult time assessing interesting problems, and b) interesting problems are expensive to create.

Please don’t think I mean “interesting” week-long project-based units or something. (The costs there are enormous also.) I mean interesting *exercises*:

Pick any candy that has multiple colors. Now pick two candies from its bag. Create a probability tree for the candies you see in front of you. Now trade your tree with five students. Guess what candy their tree represents and then compute their probabilities.

The students are working five exercises there. But you won’t find that exercise or exercises like it on Mathspace or any other adaptive math platform for a very long time because a) they’re very hard to assess algorithmically and b) they’re more expensive to create than the kind of problem Jebara has shown us above.
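
To see what each student's tree encodes, here is roughly the computation behind a two-draw probability tree. The bag composition below is assumed for illustration (the exercise says to use whatever bag of candy is actually in front of you); the branch probabilities follow the without-replacement structure the exercise asks for:

```python
from fractions import Fraction
from itertools import permutations

# Hypothetical bag: counts per color.
bag = {"red": 5, "green": 3, "yellow": 2}
total = sum(bag.values())

# Probability tree for drawing two candies without replacement:
# P(first, second) = P(first) * P(second | first).
tree = {}
for first, second in permutations(bag, 2):
    tree[(first, second)] = Fraction(bag[first], total) * Fraction(bag[second], total - 1)
for color, count in bag.items():
    if count >= 2:  # same color twice requires at least two in the bag
        tree[(color, color)] = Fraction(count, total) * Fraction(count - 1, total - 1)
```

One self-check students can run on their own trees: the probabilities on the leaves of a complete tree always sum to 1.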

I’m thinking Classkick’s student-sharing feature could be very helpful here, though.

**Summary**

Jebara:

So why don’t we try and automate the parts that can be automated and build great tools like Classkick to deal with the parts that can’t be automated?

My answer is pretty boring:

**Because the costs outweigh the benefits.**

In 2014, the benefits of that automation (students can find out instantly if they’re right or wrong) are dwarfed by the costs (see above).

That said, I can envision a future in which I use Mathspace, or some other adaptive math software. Better technology will resolve some of the problems I have outlined here. Judicious teacher use will resolve others. Math practice is important.

My concerns are with the 2014 *implementations* of the idea of adaptive math software and not with the idea itself. So I’m glad that Jebara and his team are tinkering at the edges of what’s possible with those ideas and willing, also, to debate them with this community of math educators.

**Featured Comment**

Mercy – *all* of them. Just read the thread if you want to be smarter.

I’d like to write a column re: how sports could be an effective tool to teach probability/fractions/ even behavioral economics to kids. Wonder if you have thoughts here….

My response, which will hopefully serve to illustrate my last post:

I tend to side with Daniel Willingham, a cognitive psychologist who wrote in his book *Why Students Don’t Like School*, “Trying to make the material relevant to students’ interests doesn’t work.” That’s because, with math, there are contexts like sports or shopping but then there’s the work students do in those contexts. The boredom of the work often overwhelms the interest of the context.

To give you an example, I could have my students take the NBA’s efficiency formula and calculate it for their five favorite players. But calculating – putting numbers into a formula and then working out the arithmetic – is boring work. Important but boring. The interesting work is in coming up with the formula, in asking ourselves, “If you had to take all the available stats out there, what would your formula use? Points? Steals? Turnovers? Playing time? Shoe size? How will you assemble those in a formula?” Realizing you need to subtract turnovers from points instead of adding them is the interesting work. Actually doing the subtraction isn’t all that interesting.

So using sports as a context for math could surely increase student interest in math but only if the work they’re doing in that context is interesting also.
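
To make the contrast concrete, here is what the calculating half looks like once a class has settled on a formula. The stat lines and the simplified efficiency-style formula below are made up for illustration (the real NBA formula weighs more stats); building the formula is the interesting part, and what follows is the mechanical part:

```python
# Made-up stat lines -- the interesting classroom work is arguing about
# which stats belong in the formula and with what signs, not the arithmetic.
players = {
    "Player A": {"points": 25, "rebounds": 8, "assists": 6, "turnovers": 4},
    "Player B": {"points": 18, "rebounds": 12, "assists": 9, "turnovers": 2},
}

def efficiency(stats):
    """One simplified efficiency-style formula: productive stats count
    for the player, turnovers count against them."""
    return stats["points"] + stats["rebounds"] + stats["assists"] - stats["turnovers"]

# Rank players from most to least efficient under this formula.
ranking = sorted(players, key=lambda name: efficiency(players[name]), reverse=True)
```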

**Featured Email**

Marcia Weinhold:

After my AP stats exam, I had my students come up with their own project to program into their TI-83 calculators. The only one I remember is the student who did what you suggest — some kind of sports formula for ranking. I remember it because he was so into it, and his classmates got into it, too, but I hardly knew what they were talking about.

He had good enough explanations for everything he put into the formula, and he ranked some well known players by his formula and everyone agreed with it. But it was *building* the formula that hooked him, and then he had his calculator crank out the numbers.

I’ve heard dozens of variations on that recommendation in my task design workshops. I heard it at Twitter Math Camp this summer. That statement measures tasks along one axis only: the realness of the world of the problem.

But teachers report time and again that these tasks don’t measurably move the needle on student engagement in challenging mathematics. They’re real world, so students are disarmed of their usual question, “When will I ever use this?” But the questions are still boring.

That’s because there is a second axis we focus on less. That axis looks at *work*. It looks at what students *do*.

That work can be real or fake also. The fake work is narrowly focused on precise, abstract, formal calculation. It’s necessary but it interests students less. It interests the world less also. Real work – interesting work, the sort of work students might like to do later in life – involves problem formulation and question development.

That plane looks like this:

We overrate student interest in doing fake work in the real world. We underrate student interest in doing real work in the fake world. There is so much gold in that top-left quadrant. There is much less gold than we think in the bottom-right.

**BTW**. I really dislike the term “real,” which is subjective beyond all belief. (eg. What’s “real” to a thirty-year-old white male math teacher and what’s real to his students often don’t correlate at all.) Feel free to swap in “concrete” and “abstract” in place of “real” and “fake.”

**Related**. Culture Beats Curriculum.

*This is a series about “developing the question” in math class.*

**Featured Tweet**

@ddmeyer @mpershan my kids working with http://t.co/ig98dLeSGS are doing real work, fake world, and loving it.

— Pamela Rawson (@rawsonmath) September 17, 2014

**Featured Comment**

I would add that tasks in the bottom-right quadrant, those designed with a “SIMS world” premise, provide less transfer to the abstract than teachers hope during the lesson design process. This becomes counter-productive when a seemingly “progressive” lesson doesn’t produce the intended result on tests, then we go back not only to square 1, but square -5.

I love this distinction between real world and real work, but I wonder about methods for incorporating feedback into real work problems. In my experience, students continue to look at most problems as “fake” so long as they depend on the teacher (or an answer key or even other students) to let them know which answers are better than others. We like to use tasks such as “Write algebraic functions for the percent intensity of red and green light, r=f(t) and g=f(t), to make the on-screen color box change smoothly from black to bright yellow in 10 seconds.” Adding the direct, immediate feedback of watching the colors change makes the task much more real and motivating.

My daughter just tried the sine rule on a question and was asked to give the answer to one decimal place. She wrote down the correct answer and it was marked wrong. But it is correct!!! No feedback given just – it’s wrong. She is now distraught by this that all her friends and teacher will think she is stupid. I don’t understand! It’s not clear at all how to write down the answer – does it have to be over at least two lines? My daughter gets the sine rule but is very upset by this software.

My skin crawls – seriously. Math involves enough *intrinsic* difficulty and struggle. We don’t need our software tying extraneous weight around our students’ ankles.

Enter Classkick. Even though I’m somewhat curmudgeonly about this space, I think Classkick has loads of promise and it charms the hell out of me.

Five reasons why:

- **Teachers provide the feedback. Classkick makes it faster.** This is a really ideal division of labor. In the quote above we see the computer fall apart over an assessment a novice teacher could make. With Classkick, the computer organizes student work and puts it in front of teachers in a way that makes smart teacher feedback faster.
- Consequently, **students can do more interesting work.** When computers have to assess the math, the math is often trivialized. Rich questions involving written justifications turn into simpler questions involving multiple choice responses. Because the teacher is providing feedback in Classkick, students aren’t limited to the kind of work that is easiest for a computer to assess. (Why the demo video shows students completing multiple choice questions, then, is befuddling.)
- **Written feedback templates**. Butler is often cited for her finding that certain kinds of written feedback are superior to numerical feedback. While many feedback platforms only offer numerical feedback, with Classkick, teachers can give students freeform written feedback and can also set up written feedback templates for the remarks that show up most often.
- **Peer feedback**. I’m very curious to see how much use this feature gets in a classroom but I like the principle a lot. Students can ask questions and request help from their peers.
- **A simple assignment workflow for iPads**. I’m pretty okay with these computery things and yet I often get dizzy hearing people describe all the work and wires it takes to get an assignment to and from a student on an iPad. Dropbox folders and WebDAV and etc. If nothing else, Classkick seems to have a super smooth workflow that requires a single login.

Issues?

Handwriting math on a tablet is a chore. An iPad screen stretches 45 square inches. Go ahead and write all the math you can on an iPad screen – equations, diagrams, etc – then take 45 square inches of paper and do the same thing. Then compare the difference. This problem isn’t exclusive to Classkick.

Classkick doesn’t specify a business model though they, like everybody, think being free is awesome. In 2014, I hope we’re all a little more skeptical of “free” than we were before all our favorite services folded for lack of revenue.

This isn’t “instant student feedback” like their website claims. This is feedback from humans and humans don’t do “instant.” I’m great with that! Timeliness is only one important characteristic of feedback. The quality of that feedback is another far more important characteristic.

In a field crowded with programs that offer mediocre feedback instantaneously, I’m happy to see Classkick chart a course towards offering good feedback just a little faster.

**2014 Sep 17**. Solid reservations from Scott Farrar and some useful classroom testimony from Adrian Pumphrey.

**2014 Sep 21**. Jonathan Newman praises the student sharing feature.

**2014 Sep 21**. More positive classroom testimony, this entry from Amy Roediger.

**2014 Sep 22**. Mo Jebara, the founder of Mathspace, has responded to my initial note with a long comment arguing for the adaptive math software in the classroom. I have responded back.

Then the blogosphere’s intrepid Clayton Edwards extracted an answer from the manufacturers of the duck, which gave us all some resolution. For every lot of 300 ducks, the Virginia Candle Company includes one $50, one $20, one $10, one $5, and the rest are all $1. That’s an expected value of $1.27, netting them a neat $9.72 profit per duck on average.
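
The expected-value arithmetic checks out, and it also implies the sale price:

```python
# One lot of 300 ducks: one $50, one $20, one $10, one $5, and 296 $1 bills.
payouts = [50, 20, 10, 5] + [1] * 296
expected_value = sum(payouts) / len(payouts)  # (85 + 296) / 300 = 1.27

# A $9.72 expected profit per duck implies the ducks sell for $10.99.
price = expected_value + 9.72
```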

That’s a pretty favorable distribution:

They’re only able to get away with that distribution because competition in the animal-shaped cash-containing soap marketplace is pretty thin.

So after developing the question and answering the question, we then *extended* the question. I had every group decide on a) an animal, b) a distribution of cash, c) a price, and put all that on the front wall of the classroom – our marketplace. They submitted all of that information into a Google form also, along with their rationale for their distribution.

Then I told everybody they could buy any three animals they wanted. Or they could buy the same animal three times. (They couldn’t buy their own animals, though.) They wrote their names on each sheet to signal their purchase. Then they added that information to another Google form.

Given enough time, customers could presumably calculate the expected values of every product in the marketplace and make really informed decisions. But I only allowed a few minutes for the purchasing phase. This forced everyone to judge the distribution against price on the level of intuition only.

During the production and marketing phase, people were practicing with a purpose. Groups tweaked their probability distributions and recalculated expected value over and over again. The creativity of some groups blew my hair back. This one sticks out:

Look at the price! Look at the distribution! You’ll walk away a winner over half the time, a fact that their marketing department makes sure you don’t miss. And yet their expected profit is *positive*. Over time, they’ll bleed you dry. Sneaky Panda!

I took both spreadsheets and carved them up. Here is a graph of the number of customers a store had against how much they marked up their animal.

Look at that downward trend! Even though customers didn’t have enough time to calculate markup exactly, their intuition guided them fairly well. Question here: which point would you most like to be? (Realization here: a store’s profit is the area of the rectangle formed around the diagonal that runs from the origin to the store’s point. Sick.)

So in the *mathematical* world, because all the businesses had given themselves positive expected profit, the *customers* could all expect negative profit. The best purchase was no purchase. Javier won by losing the least. He was down only $1.17 all told.

But in the real world, chance plays its hand also. I asked Twitter to help me rig up a simulator (thanks, Ben Hicks) and we found the *actual* profit. Deborah walked away with $8.52 because she hit an outside chance just right.
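
The actual simulator isn't shown here, but the idea is easy to sketch. The store below is hypothetical, in the Sneaky Panda spirit: buyers beat the price more than half the time, yet the store's expected profit per sale is positive:

```python
import random

random.seed(42)  # reproducible runs

# Hypothetical store, Sneaky-Panda style: the $2 animal pays out $3
# sixty percent of the time and $0 otherwise. Buyers "win" 60% of
# their purchases, but the store's expected profit per sale is
# 2.00 - (0.6 * 3) = $0.20.
price = 2.00
payouts = [3, 3, 3, 3, 3, 3, 0, 0, 0, 0]

def average_buyer_profit(trials):
    """Simulate many purchases; return the buyer's mean profit per purchase."""
    return sum(random.choice(payouts) - price for _ in range(trials)) / trials

# A handful of purchases is ruled by chance; over many, the store
# bleeds each buyer about $0.20 per animal.
long_run = average_buyer_profit(100_000)
```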

Profit Penguin was the winning store for both expected and actual profit.

Keep the concept simple and make winning $10s and $20s fairly regular to entice buyers. All bills – coins are for babies!

So there.

We’ve talked already about *developing the question* and *answering the question*. Daniel Willingham writes that we spend too little time on the former and too much time rushing to the latter. I illustrated those two phases previously. We could reasonably call this post: extending the question.

To extend a question, I find it generally helpful to a) flip a question around, swapping the knowns and unknowns, and b) ask students to *create* a question. I just hadn’t expected the *combination* of the two approaches to bear so much fruit.

I’ve probably left a lot of territory unexplored here. If you teach stats, you should double-team this one with the economics teacher and let me know how it goes.

*This is a series about “developing the question” in math class.*

Math students : Answer-getting :: Math teachers : Resource-finding.

— Dan Meyer (@ddmeyer) September 3, 2014

Math students : "What's the formula for __ ?" :: Math teachers : "Who's got a good lesson for __ ?"

— Dan Meyer (@ddmeyer) September 3, 2014

Math students : Understanding math :: Math teachers : Understanding what makes a good lesson good.

— Dan Meyer (@ddmeyer) September 3, 2014

“Answer-getting” sounds pejorative but it doesn’t have to be. Math is full of interesting answers to get. But what Phil Daro and others have criticized is our fixation on getting answers at the expense of understanding math. Ideally those answers (right or wrong) are means to the ends of understanding math, not the ends themselves.

In the same way, “resource-finding” isn’t necessarily pejorative. Classes need resources and we shouldn’t waste time recreating good ones. But a quick scan of a teacher’s Twitter timeline reveals lots of talk about resources that worked well for students and much less discussion overall about *what it means for a resource to “work well.”*

My preference here may just mean grad school has finally sunk its teeth into me but I’d rather fail trying to answer the question, “What makes a good resource good?” than succeed cribbing someone else’s good resource without understanding why it’s good.

**Related**

- I felt the same way about sessions at Twitter Math Camp.
- Kurt Lewin: “There is nothing so practical as a good theory.”
- Without agreeing or disagreeing with these specific bullet points, everyone should have a bulleted list like this.

**Featured Comment**

Mr K:

This resonates strongly.

I shared a lesson with fellow teachers, and realized I had no good way to communicate what actually made the lesson powerful, and how charging in with the usual assumptions of being the explainer in chief could totally ruin it.

Really worthwhile comments from Grace Chen, Bowen Kerins, and Fawn Nguyen also.

Really, we need to literally go back to questions such as ‘Why am I teaching this?’ ‘Where does this fit into the student’s learning journey?’ and ‘How am I going to structure the learning so that the student wants to learn this?’ before we even think about where resources fit into our lesson. This takes a lot of time to think about and process. Time and space many teachers just don’t have.

Early on I would edit resources and end up reducing cognitive demand in the interest of making things clearer for students. Now I edit resources to remove material and increase cognitive demand. Or even more often, I’m taking bits and pieces because I have a learning goal, learning process goal and study skills goal that I have to meet with one lesson.

Great lessons in the context of learning around mindset and methods are the instruments we use to “do” our work. But the reflection and coaching conversations where we “learn” about our work are critical as well. Without them, we use scalpels like hammers.

But this work is much harder, much more personal, much more in the moment of the classroom. Can we harness the power of tech to share this work as well as we have to share the tools?

**2014 Sep 8**. Elissa Miller takes a swing at “what makes a good lesson good?” Whether or not I agree with her list is besides my point. My point is that her list is better than dozens of good resources. With a good list, she’ll find them eventually and she’ll have better odds of dodging lousy ones.