Assessment Part Deux Redux

The blogosphere’s been buzzin’ about assessment. (Not the NCLB kind.)

First, Marie, Rich, and Jackie have been asking some sharp questions on math assessment over in an earlier post.

Second, the Teacher Leaders Network blog is picking through the question, “How do you handle a student with an A on tests and an F on homework?”

My answer there, without even a little equivocation, is to pass her and then figure out why your homework is so totally inessential to class success. If you’re gutsy, you give her an A, but regardless you evaluate what it means to pass a student. Does it mean she did her homework, attended, participated in class discussion, raised her hand x times, wasn’t a discipline issue, brought baked goods on her assigned day, etc., etc.? (I’m getting increasingly petty here.) Basically, which of those behaviors is worth sandbagging for a semester a kid who knows the material, who can compute fractions, write persuasive essays, and identify continents?

Third, Todd wrote an extraordinary post a while back called “The Shrinking Educational Middle Class” which I’ve been meaning to pick up.

Todd sez, back in the day, you’d have histograms like this, with a bell-shaped distribution of grades (the graphics are his):

But that nowadays, the middle class is shrinking: the good grades get better, the bad grades get worse.

He’s right on; it’s a phenomenon that seems particularly exaggerated in low-performing populations. I’m going to proceed totally anecdotally here.

The teachers I worked with for my first two years of Title I were convinced you should do everything in your power to pass a kid’s first semester. One teacher went so far as to give an incomplete grade to every student who failed and then remediate with them after-school until he felt they had earned a C-.

They were positively rabid on the matter because, from their experience, a kid who failed first semester no longer bought into her future success in the class. In any case, the correlation of first-semester failure to second-semester failure seems extremely high, though administrators, superintendents, and anyone else with a more global perspective are welcome to set me straight.

However, good times ahead. See, I’ve been running this scheme for a few years that disrupts that system, that throws up just enough flak to pull some students out. It’s far from a guarantee but check out what it does to this combined histogram of my lowest-performing, most-math-unfriendly Algebra classes.

By Todd’s reckoning, the middle typically flabs out towards the extremes, depending on a given student’s resistance to the cycle of failure. The middle here tends towards greater achievement instead. This is par for all my math classes.

It requires of me a particular understanding that when a student who knows very little takes some creaky, tentative steps towards knowing something — whether that’s by studying more, attending class more often, or coming in for tutoring — I have to reward as many of those steps as possible as soon as possible.

I can’t say, “Great. It was good you came in. You’re gonna do really well on next week’s test.”

I can’t say, “Great. I’m glad you’re doing more homework. Keep that up and your grade’s really gonna climb.”

I can’t say, “Great. You’re doing the right thing coming to class more often. Your grade’s only going to go up because of it.”

Rather, I can say those things, but it won’t conjure up the positive risk-reward ratio my student needs to maintain her momentum, to keep from falling back into the cycle of failure.

So I disaggregate my assignments and tests. I break up these chunky gradebook entries, stuff labeled “Unit 6 Test,” into individual concepts and skills and then I let my students remediate those. I allow re-takes on these tiny concepts and I re-grade them immediately, dropping new scores into my gradebook immediately and reporting the grade increase immediately. My students’ gratification needs to be immediate.
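
In gradebook terms, that just means the unit of record is the concept, not the test, and a re-take can only raise the recorded score. Here’s a rough sketch of that bookkeeping in Python; the student name, the four-point scale, and the exact concept labels are stand-ins for illustration, not a dump of my actual gradebook.

```python
# Rough sketch of a concept-keyed gradebook (illustrative, not a real
# spreadsheet): every entry is a single concept, and a re-take can only
# raise the recorded score.

gradebook = {}  # student -> {concept: best score so far}

def record_score(student, concept, score):
    """Record an assessment result, keeping the best score per concept."""
    scores = gradebook.setdefault(student, {})
    scores[concept] = max(score, scores.get(concept, 0))

# A first attempt, then a re-take on the same concept a few days later:
record_score("Alicia", "#25: Logical Arguments", 2)
record_score("Alicia", "#25: Logical Arguments", 4)  # re-take overwrites the 2
record_score("Alicia", "#26: Logical Statements", 3)

print(gradebook["Alicia"])
# {'#25: Logical Arguments': 4, '#26: Logical Statements': 3}
```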

Micromanaging my assessments and assignments is hard. It’s certainly more work than pulling a test out of the teacher’s edition every few weeks. But I get kids coming into my class every day before school, at lunch, kids who know what they need to learn and who are willing to learn it because they know their efforts will make a material difference. I’m doing my best to keep the middle class in the game around here and it only happens because I’ve made the cycle of risk-reward more appealing than that of failure-failure.

[Update: check out the comprehensive resource.]


32 Comments

  1. okay – I’m playing devil’s advocate here for once and it may be I just don’t understand the process but as the person who does electronic gradebook support I have to ask – how do you fit this on paper? Is each test a collection of grades for each concept group?
    I love what you are doing and how you’re doing it – it just falls apart for me when you get to the paperwork!

  2. Still making me think here. Making all this work for the elementary world. Breaking it into little pieces and teaching and testing and grading on those bits and pieces. Thinking, thinking, thinking.

  3. This is my third year teaching. In my first year the district had implemented standards-based grading. So instead of giving a math grade I give a grade for specific concepts. The first year everyone fought it. Second year everyone complained about it. This year some are starting to say “This is easier than just giving a basic grade.” It runs in cycles I guess.

    We don’t give A, B, C, D & F grades. We give E – exceeds the standard, M – meets the standard, A – approaches the standard, and F – falls below the standard. It makes more sense to me to use a system like this. I can tell (like you said) which students need more help in a certain area. There is no reason for me to go over something again and again that only a few students need help with.

    In response to Dee, I would check out – http://www.microsoft.com/education/ManagingGrades.mspx

    I used an Excel spreadsheet before all of our grading was available online.

  4. “whether that’s by studying more, attending class more often, or coming in for tutoring” Please, oh please, don’t tell me that you base any of your grades on attendance.

    It’s more than just immediate reward for work (as you suggest in the second paragraph after your histogram). Even though it’s through different means, the momentum you talk about exists in my classroom, too. So does the immediate reward. But my grades are what they are.

    I subscribe to the concept of your first year colleagues: failure in first semester doesn’t tend to produce success second semester. If that student can go on to earn a C or better second semester, why shouldn’t the first semester grade change to at least a D? Yet I’ve just been told that the grade contracts I sign with my students will not be allowed next year. My principal is violently against such things.

    So do you see the inverted bell curve anywhere in your grades? You mention that I’m right on, but it seems like your grades don’t reflect that phenomenon. What gives?

  5. And I’ve got to remember to write a post about my Excel gradebook. Excel is clunky by nature, but there are ways to make things much easier that Microsoft and other tutorials don’t embrace.

  6. “My students’ gratification needs to be immediate.” Small point, but maybe feedback (which leads to gratification) might be more accurate. Prompt feedback means students know immediately how well they are doing.

    BTW, what you are doing and writing about here fits right into an (old) book I’m reading, “Preparing Instructional Objectives”.
    http://www.amazon.com/Preparing-Instructional-Objectives-Development-Instruction/dp/1879618036/ref=pd_bbs_sr_1/002-5552215-4229620?ie=UTF8&s=books&qid=1179898716&sr=8-1

    I wish I had been forced to read this 20 years ago.

  7. Dee, I pass out a piece of paper every week that assesses our most recent six concepts. My students complete it. I grade it. I go to my computer where instead of “Unit 6 Test” I have “#25: Logical Arguments,” “#26: Logical Statements,” “#27: Venn Diagrams,” etc. Then I punch in the individual scores making sure to overwrite poor scores with better ones. The next week the oldest concept drops off and a new one (or two) enters. It’s all explained in better detail here.
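
    Something like this, if it helps to see the rolling window spelled out; the six-concept cutoff is just what I happen to use, and any concept numbers outside #25 through #27 below are placeholders:

    ```python
    from collections import deque

    # Sketch of the rolling weekly quiz window: it holds the six most recent
    # concepts, and appending a new one pushes the oldest off the quiz.
    quiz_window = deque(maxlen=6)

    for concept in ["#22", "#23", "#24",          # placeholders for earlier concepts
                    "#25: Logical Arguments",
                    "#26: Logical Statements",
                    "#27: Venn Diagrams"]:
        quiz_window.append(concept)

    # Next week a new concept enters and the oldest ("#22") drops off:
    quiz_window.append("#28: (next concept)")
    print(list(quiz_window))
    # ['#23', '#24', '#25: Logical Arguments', '#26: Logical Statements',
    #  '#27: Venn Diagrams', '#28: (next concept)']
    ```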

    J.D., that’s wild that the district mandated something like this. We both use a four-tiered system. Can I assume that if a student scores an M where she had an F, you drop the F and give her the M? Is there a point where the student no longer has to take a given concept?

    Todd, my point with all this is that the educational status quo consistently yields an inverted bell curve. The kids who get A’s keep getting A’s. Students who saw early failures and blew it early on have a difficult time extracting themselves from that hole, particularly when the teacher can only promise a material advantage for (e.g.) coming to class, doing more homework, studying more, etc.

    I don’t see the inverted bell curve in any of my classes because I hit them hard with the idea that early failures can be undone (as they should be!) and when they make any small step towards comprehension, I can show them a proportionate and immediate grade increase.

    Marco, feedback is great but, from my experience, feedback never got a student her cell phone back from the locked chest under her parent’s bed nor did it ever get anyone’s curfew re-extended to 10:00PM.

    Thanks for the reading recommendations. It’s weird to think that wisdom and knowledge are stored on these — what do you call them? books? — well, these Notblog things anyway. I’ll toss this one onto the list where Alfie Kohn’s already been chillin’ for some time now. *sigh* Someday.

  8. Dan,

    You are right-on target!

    If you taught for me, you BET I’d back you up on that A…one time. But then I’d want to see what it is about your homework that causes the student to fail it. (Gee, any guesses anyone? Ha! Yawn!!…B…O…R…I…N…G…)

    That’s why I don’t believe in homework in the traditional sense. But that’s a fun post a-waiting to be a-written.

    Dan, if you haven’t read There Are No Shortcuts by Rafe Esquith, you have missed a GREAT book by your clone. (I thought so much of the book, I bought copies for the entire faculty.)

    Greg

  9. Dan,

    I’m intrigued by this – intrigued enough to almost want to try it myself, despite not having the time for the complete overhaul that would be required here. I’ve got a few questions:

    With shorter teach-practice-assess cycles, how do you know that kids are really learning a concept, not just shoving it into short-term memory for long enough to pass your mini-assessment?

    Do your students take a final exam? If so, I’m curious how they do in comparison with their results on your assessments over the course of the year?

  10. Greg, I’ll toss Rafe’s book onto my pile of Books To Read (Someday). I’ve heard of him through his classroom capitalism idea, paying rent, working jobs, etc. Sounded great then, sounds great now. *sigh* I’m pretty sure I’d be illiterate by now if it weren’t for all these bite-sized blogs and online journals (Slate, Salon, etc.) keeping me and words at least fair-weather friends.

    Sara, it’s always encouraging and depressing at the same time when people seize on the most obvious glitch in this system. Glad you people are evaluating this thing through critical eyes.

    So I’ll say here that once a student completes a concept twice and I tell her she doesn’t have to pass it again, that she can work on other concepts, there is a tendency to file that knowledge away somewhere impermanent.

    But in nearly every case, when I toss an old problem on the board and a student says, I don’t remember that, it takes the absolute minimum of prodding for her to generate full recall.

    I do a final exam. It isn’t worth a great deal. My students tend to perform exactly along the average they maintain in previous concept quizzes. I rarely see the point of the final when, after all the year’s assessment, I have a sense of my students’ comprehension that’s almost clear as crystal.

    After all this, I’ve gotta point out that every system is going to have faults. If you can’t be persuaded on its merits, at least consider the individual downsides of each approach, traditional and this one. For me, there wasn’t any question that a change was necessary.

  11. Dan, I can go back into our online report card system and change grades from any quarter, standard or concept. So yes, if a student shows me in the 3rd quarter they mastered something from the first, I can change it. The report card that goes in their file is the very last one. I moved one of my 6th graders up into 7th grade math this year, so I guess that they can also not have to worry about learning some of the concepts (he scored a 98% on the district benchmark, so I figured I wasn’t challenging him anymore). The problem with our grading system is that someone that gets a 68% on things will receive the same “grade” as someone that scores a 78%. I have had parents tell me that they hate it because their child can work their ass off and get the same grade as someone that didn’t work as hard. I don’t necessarily agree with that, but I see where someone that didn’t keep good records could end up giving out grades with that much of a range.

    We took our final math benchmark today, so next week will be me breaking it down to add into the grade book throughout the year.

    I agree with Greg about “There are No Shortcuts.” I’ve read it about 3 times. It really doesn’t take that long and kept me interested throughout it. I revisit parts every couple of months as well. I’d move it up on the “to get to” reading list. He just came out with another book recently. If I had a less busy summer I’d go pick it up, but I don’t want to take away from any of the stuff that I have to do over the next 8 weeks.

  12. Again, I don’t think the explanation for this is as simple as you suggest. I try to do things differently just about every year. Last year, I didn’t take anywhere near the amount of late work I’m taking this year (actually, my final pass rates were better than I suspect they’ll be this year, but that’s a chicken that hasn’t hatched). Accepting late work is how their grade shows an immediate change — at least a change in the percent if not a change in the letter.

    What I’m saying is I give my failing kids the chance to change things about every bit as much as you have described on this blog (though apparently for the last year) and I’m still weighed down with an inverse bell curve. So it’s not just the matter of making up for past failures. There’s something else going on. We can’t pass the buck onto the status quo for this, as much as I’d like to.

    Oh — basing a student’s grade on attendance is against Ed Code, which states something like a student’s grade needs to be based on knowledge of course content. That’s why I brought that up. We can’t grade students based on attendance.

  13. Hi Dan,

    The problem you’ve identified, re: long-term student retention of material vs. short-term mastery, is something I’ve struggled with. My solution is to create two levels of achievement: 1) Master and 2) Ph.D. Mastery is achieved and recorded on those short-term objective-based assessments (say, complex sentences or characterization or topic sentence production), but to achieve Ph.D. status, I need to see organic utilization of these concepts and skills in unprompted settings.

    For example, I explicitly teach how to write complete sentences that are free of run-ons. This takes approximately nine years. I give assessments where students communicate key concepts, correct errors, and so forth. But this is limiting. To Ph.D the skill, I monitor all subsequent student writing; when I’ve seen enough that is free of run-ons, they earn Ph.D status. The same is true for verb tense, paragraph structure, vocabulary, literary devices, and everything else we do.

    I feel that this would be even easier to monitor in your field, as it tends to build even more structurally and contingently (sp?).

    As for grades, it’s a bizarre thing at my school, where we have ability grouping across the board. A first quarter “A” in my 7th grade class means you’re performing at a 3rd grade level. Three rooms down the hall, that “A” means you’re performing at a 9th grade ability level. The grades measure the extent to which a student meets expectations, but nothing in terms of nearness to grade-level standards.

  14. Interesting idea that could be shifted to any set of objectives. Now, if we were to set this into a system where each student’s objectives were identified and students were shown what they need to demonstrate, given instruction, and then allowed to demonstrate/show understanding in a way that was interesting to them, we might eliminate the homework problem and have schools where students were much more engaged. Of course, this would require that students were able to do cross-curricular work and teachers were less “class oriented” and more student oriented. I think that what you are doing moves away from our traditional idea of classwork and testing. Keep it going.

  15. Todd, my instinct here is to classify our two systems of remediation (yours with late work and mine with re-assessment) as “different.” Kind of a selfish instinct, at that, but I can’t help it when I’m seeing such radical bell-curve-reshaping results. New faces have been coming in for tutoring and re-assessment lately, faces which’ve been on the bummer side of that bell curve for a while, faces which have kept coming in as they realized how simple it is to learn something and how fairly this system rewards them for it.

    Just quickly for the record, I don’t grade on attendance. I’m not sure where I hinted that I did (that paragraph where I called attendance-based grading “petty”?) but I don’t. I’d go 100% assessment if I thought my class would buy in.

    TMAO, maybe you’re wrong about math. The general impression is that math all builds, one concept on top of another, but that isn’t exactly true. After completing a topic, we’ll jog to entirely distant territory for several weeks. It isn’t a matter of better curriculum mapping. It’s just kind of Geometry’s way.

    I like the game you’re playing (particularly the honorary doctorate — bet they dig that). By comparison, I’m watering down the value of the diploma. Can’t figure out how to strengthen it.  I maintain a strong aversion to assessment-as-usual, even though I’m not sure what the best alternative is.

    Kelly, you’re posting interesting extensions both here and at Christian’s. I allow the students to demonstrate math competency in several ways and I’m open to new ones. A coupla times I’ve given a kid her mastery stamp simply for tutoring another student on to success. I should pursue that customization a little more.

  16. Good distinction between late work and re-assessment. That gives me an idea for my failing seniors. Still, the late work I collect is a re-assessment of their skill at that moment (most of that late work is part of their writing grade). That’s why I’m seeing the big inverse bell curve: students who keep their English skills sharp by finishing assignments do well on assessments (again, mostly writing, but occasional tests, too); the others don’t improve their skills and don’t do well on assessments.

    And it was the second paragraph after your histogram where you wrote “whether that’s by studying more, attending class more often, or coming in for tutoring” — that’s where you hinted that you grade attendance. Got it, though.

  17. This is a situation that I’m dealing with – my son has As on homework and Fs on quizzes and tests. He’s putting in the effort at home – I see it, but faring poorly with demonstrating mastery on tests/quizzes.

    How would you handle that as a teacher? as a parent? Does that reflect on the quality of teaching (especially since he tells me that many students are doing poorly on quizzes and tests)? At what point do teachers reflect back on the quality of their instruction?

  18. Got it, Todd. My point with that list of risks students take towards greater understanding — attending more, seeking more help, completing more homework — is that each of them defers any serious material reward until later. Typically. Contingent on a teacher’s grading scale, of course.

    Karen, I’m loath to leap on one of our own, but it sounds like you’ve got this one figured out. Lots of hard-working students doing poorly, homework bearing little connection to assessment. New teacher?

    From my perspective, I hope a parent would call me first and ask me to explain the discrepancy between effort and grade. If I gave you the run-around or didn’t seem like I knew what I was doing, I’d understand if you dropped a brief, Concerned Parent call to the office. Tough spot, though.

  19. Dan, A few more questions (thanks for answering so completely last time!):

    Any chance you’d be willing to post (or email) a list of your assessed concepts for algebra? With 13 chapters that I teach and 8 – 10 sections per chapter in my text, having that many assessments seems ridiculous, but…

    How long IS your teach-practice-assess cycle? Do you give warning about an assessment, have one every week, or just throw a short assessment into a class on another topic?

    How often have you realized, hey, that question didn’t assess what I wanted it to assess? How many re-tries does a student get? Do you have multiple versions of an assessment, or just make up new/similar problems as needed?

    I’m still seriously considering a complete overhaul of at least ONE of my 5 preps for next year… a student should not be able to fail algebra 1 – there needs to be a better way, one that keeps my kids in the game and learning from failures, rather than giving up because of them.

  20. Sara, if you’re game I’d love to spend more focused time getting you up and running. If these comment questions alone are doing it for you, then great.

    The only annoying part about the system is that you’re now creating your own tests instead of using the textbook’s. This invariably makes you a more reflective teacher, however, and anyway, once you’ve got a template, you can kind of keep rocking it forever and ever.

    I’m away from my work materials but I’ll post that concept list soon. There are about forty for the year and keeping that manageable becomes an issue of crafting a full test question without over-stocking it. You don’t want to waste your time and theirs assessing their ability to solve 2x = 6. (Though a textbook would.) You want to assess their ability to solve 5x – 7 = 14, which encompasses the shallower question, but you don’t want to assess their ability to solve 5.1x – 7.3 = 14.7. You want your assessments to be so clear that scanning a student’s concept scores gives a crystal clear picture of her understanding. (Something that a student’s score on “Unit 6 Test” does not.) If she gets the decimalized problem wrong, for example, it wouldn’t be clear from her concept score if you needed to remediate two-step equations or decimals.
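
    For what it’s worth, here’s the kind of scan I mean, sketched out; the concept names, scores, and mastery cutoff are invented for the example, with the two-step-equation skill split apart from the decimal skill:

    ```python
    # Sketch: reading one student's per-concept scores to decide what to
    # remediate. A single "Unit 6 Test: 68%" entry can't make this call;
    # separate concept entries can.
    MASTERY = 3  # invented cutoff on a 0-4 scale

    concept_scores = {
        "Two-Step Equations": 4,
        "Decimal Operations": 1,
        "Graphing Lines": 2,
    }

    needs_retake = [c for c, s in concept_scores.items() if s < MASTERY]
    print(needs_retake)  # ['Decimal Operations', 'Graphing Lines']
    ```

    She clearly owns two-step equations; it’s the decimals (and the graphing) that need the re-take.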

    Continuing: I do one test a week. A one- or two-sided test with three questions a side. Even if we haven’t covered anything I want to assess, I’ll still give them a shot to demonstrate competency on old stuff, provided we’ve spent some time on openers or with review games covering that stuff. If it’s been full charge ahead, there’s no reason for me to expect a different score.

    A student can take as many shots as she wants to demonstrate competency though I only allow one attempt at an individual concept per day. For these impromptu re-takes (really, the lifeblood of the system) I just pull out a blank half-sheet of paper and scribble a question down. Again, way more work than the textbook assessments, but damn if you aren’t real familiar with your content area and expectations after a year of this process.

  21. Dan,

    how wide is the ability level in your classes? I ask, because I am fairly certain that the bimodal curve was a feature of dumping ground classes, no matter how experienced the teacher, at my first, sub-mediocre high school.

    I spent five years learning that I had everyone from students almost at grade level down to students whose ability to do mathematics was maybe at a second or third grade level (and the spread in reading levels was more extreme).

    Now, there was some ability grouping in the school, but it just took the at-grade level and above kids (relatively small number) out of the pool.

    Also, each class was linked to a state test in June, so there was little flexibility in what was covered.

    My last year I had a class that was really grouped – by low reading level (bottom 10%?), but really grouped. I soon discovered that there was no top half to the curve; they could not assimilate enough fast enough. Now, what I am about to describe is not really right, but it’s what happened. I decided that no one cared enough about these kids to pay attention to what was going on. And I decided that I could keep most of these kids off the bottom of the chart by teaching them less. I taught what I thought they could assimilate, and it worked. The class more or less stayed with me the full year. Perhaps they learned two-thirds of what they should have, and few passed the state assessment – but few would have passed it anyhow, and all were in a much better position to do it on their second try. (this was also probably the class in my entire career that made me feel most appreciated.)

    So, right or wrong, I couldn’t have pulled this off if the ability range in the room was very wide.

    And I think the relation to your description is this: that bottom hump often comes from kids feeling overwhelmed and just quitting. If you can find a way (and it sounds like you have) to keep them from quitting, you have a good shot at avoiding the lower hump. What I did was different, but to the same end.

  22. Oops, I forgot my follow-up.

    By testing in little pieces, one skill at a time, are they ever asked to put skills together, to use more than one at a time?

  23. Jonathan’s last comment brings to light some of my thoughts regarding this topic.

    Dan, when assessing one concept at a time, how do you assess your students’ ability to synthesize the concepts? Ability to problem solve? Ability to communicate mathematical thinking? In short, when do the higher order skills come into play?

    Also, I’ve found that students often can do well when presented with one concept at a time. Yet when assessed on multiple ideas/topics at once, they don’t know what to do. I often hear — “I didn’t know what to do, because I didn’t know which type of problem it was”. Arghhhh! Basically it amounts to, “tell me what to do and I’ll do it.” I then question — have they actually learned anything, if they cannot determine a strategy with which to attack the problem?

  24. I just realized that my last comment did not come off as I intended. I DID NOT mean to demean your efforts at reforming assessment (and practice). I agree a “unit 6” test tells me very little about what a student actually knows. I like the idea of assessing one concept at a time – that tells me (and them) where the gaps are.

    I’m just struggling with how to get my students to start thinking for themselves and how to assess this process. Then how to balance this with computational fluency.

    So, I want them to understand the mathematics, be able to apply it to new situations, and be able to effectively communicate this thinking to others. Um, am I reaching too high?

    Keep up the great posts – please understand that my comments are really just the questions I ask myself. However, I’m sick of my own answers and need input from others. Thanks.

  25. I need to tag onto Jackie’s second comment. What you propose is novel and interesting and clearly has value. In testing it for weaknesses I do not mean to imply otherwise.

    I am also curious about why what you are doing appears effective. In wondering about several possible reasons, I don’t mean to dismiss the explanation you have offered.

  26. Jonathan: “And I think the relation to your description is this: that bottom hump often comes from kids feeling overwhelmed and just quitting. If you can find a way (and it sounds like you have) to keep them from quitting, you have a good shot at avoiding the lower hump. What I did was different, but to the same end.”

    My inverted bell curve hypothesis doesn’t have a whole lot of scientific ground to stand on, I’ll admit. It’s a pseudo-science rationale for the strategy you’re referencing. In a way I don’t have the energy to prove, the way I assess is keeping more of these bottom decile kids from quitting.

    I mean, I’ve got a few kids rocking forty percents with a week and a half until finals who are convinced they’re going to remediate and pass. Thing is, I’m pretty sure they can.