This is a math-related post. I’ve tried to keep it as broad-minded as possible because, as much as I believe I’ve found the best way to assess mathematics, I haven’t the foggiest how to translate it to other disciplines. And I need help.
(Prerequisite: It’s essential to assess math by concepts and skills rather than by chapters, for reasons I outlined here, but specifically in this case because assessing by concepts means I can remediate like a pro. A student comes in with a low overall grade in hand and I know exactly which of our (currently) 21 concepts are bringing her down. We tutor, we reassess, grades go up, comprehension goes up, everyone’s happy.)
Ranking a close second in importance to concept-based assessment is the selection of good concepts. Here’s where I almost went wrong this last week.
We’ve been assessing Cones (#19) for a couple weeks now. It’s a straightforward concept. All you need to find the surface area of a cone is the slant height (19 inches in the picture) and the radius of the circular base (7 inches).
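For what it's worth, the arithmetic is small enough to sketch in a few lines of Python; the numbers below are the ones from the picture, and the function name is mine, not anything students see.

```python
import math

def cone_surface_area(radius, slant_height):
    """Total surface area of a cone: circular base plus lateral surface."""
    base = math.pi * radius ** 2               # area of the circular base, pi*r^2
    lateral = math.pi * radius * slant_height  # lateral surface, pi*r*l
    return base + lateral

# The cone from the picture: radius 7 in, slant height 19 in.
area = cone_surface_area(7, 19)  # 182*pi, roughly 571.77 square inches
```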
So here’s the temptation. We’ve now covered Pythagorean Theorem, which means I could conceivably give them the height of the cone (which is just a distractor, typically) and through our new theorem, find the slant height, which is essential. We’d be tying new knowledge to old, adapting new skills to serve old ones, and making a richer, more complex assessment.
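A sketch of what that richer two-step item would compute, with a hypothetical 7-24-25 right triangle standing in for the numbers; this is an illustration, not an actual assessment item of mine.

```python
import math

def slant_height(radius, height):
    """The slant height is the hypotenuse of the radius-height right triangle."""
    return math.sqrt(radius ** 2 + height ** 2)

def cone_surface_area(radius, slant):
    """Total surface area: pi*r^2 for the base plus pi*r*l for the lateral surface."""
    return math.pi * radius * (radius + slant)

# Hypothetical item: the student is given radius 7 in and height 24 in,
# and must first recover the slant height (25 in) via the Pythagorean Theorem.
l = slant_height(7, 24)         # 25.0
area = cone_surface_area(7, l)  # 224*pi, roughly 703.72 square inches
```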
This is a problem.
Because we’d be tying new knowledge to old, adapting new skills to serve old ones, and making a richer, more complex assessment.
If a student fails an assessment that’s been stitched together Frankenstein-style from several disparate concepts, what do I know about her failing grade? Did she fail because she doesn’t understand the Pythagorean Theorem or because she doesn’t understand Cones?
The triumph isn’t obvious here, particularly, I imagine, to a lot of math teachers, the kind who are thrilled by these rich, multilayered problems. I’m no different and I will be tossing the new format onto openers and classwork. My triumph was to resist assessing it, to instead give the Pythagorean Theorem its own concept (#20).
Under the alternative I would’ve given up one of my best weapons to combat ignorance simply to showcase something clever and challenging. It would’ve felt satisfying to some extent, but I’m too deep into this system to ignore how far it would’ve set me back as an educator.
[Update: check out the comprehensive resource.]
SteveMarch 12, 2007 - 7:10 pm -
I think the main value in those types of complicated problems is the assurance that the student understands things when she or he answers them correctly. The risk is not knowing which part they missed. Perhaps you can assess first on isolated concepts, and then have an “integration” concept that involves piecing together other concepts?
mrcMarch 12, 2007 - 8:57 pm -
I think as long as you’re giving them a shot at the “rich” problems in other places, it’s fine to assess in a very controlled and directed way. However, eventually your students will need to be able to test successfully on more complex problems too. (Like, say, those on STAR or SAT tests.) I think the way you’re choosing to be disciplined and targeted in your assessments absolutely makes sense in the context of your overall standards-based grading. It feels almost scientific in diagnosing weaknesses!
RichMarch 13, 2007 - 10:58 am -
Not sure that I’m adding some brilliant insight to your posting (or the two previous commenters), but I definitely do recognize the care that must go into creating a good problem for assessment, and how we can unwittingly create complicated problems that prevent us, the teachers, from truly seeing where there are needs and where there are strengths. For example, my sixth graders just finished multiplying fractions — but even before we quizzed on that, I was realizing that their greater challenge lay not in the multiplying of fractions, but in simplifying the results.
If our assessments really ARE designed to help inform us and our students about where they are strong and where they need some more work, then we must create appropriate problems to “diagnose” that (to use mrc’s word). But I do also agree that we can’t dismiss multi-layered problems altogether, since rarely in the *real world* do we come across naked, isolated mathematical problems. Steve makes an interesting point about assessing first on the specific concepts, and later assessing on integrated concepts.
danMarch 13, 2007 - 11:17 am -
Yeah, I dig Steve’s idea of a recurring integrated concept — say, every fourth one. If I didn’t feel like we were getting enough complexity in our diet, I’d implement it pronto.
And it really does feel like a diagnosis, per both Rich and Mr. C. Like a 40-point vehicle inspection, which is a metaphor intentionally chosen to provoke the cries of, “Kids aren’t cars!”
Which is true. And fair, even. But their comprehension of Algebra 1 is like a well-running car, easily divided into closed systems, each of which can be damaged and repaired independently. Not terribly romantic, I realize, but it is awfully empowering for me.
Marco PoloMarch 14, 2007 - 4:44 am -
Just wanna say I enjoyed reading your “How Math Must Assess” manifesto, even tho I teach EFL which is much, MUCH, harder to break down into unit skills. Also, there’s the added variable of performance because learners may know something but be inhibited by performing it in front of their peers (yet if they don’t perform it, how do you know they know it?), and there’s also the testing of passive versus active knowledge, all elements which are probably foreign to math teaching. Still, I think your post did its job, i.e. getting me to think again about assessment.
PS You might get something out of Ed Nuhfer’s articles. Here’s a sample. He’s prompted me to re-think my teaching philosophy, not just in the area of assessment. He’s university-based, but a science teacher, so you may glean something.
danMarch 14, 2007 - 6:56 am -
Thanks for the link. Guy clearly knows his stuff.
It’s kind of disappointing that this competency-based system doesn’t seem eager to leap over content boundaries. For good reasons, as you point out. My largest concern is that a student who’s been blowing it for a semester has the opportunity and incentive to remediate her own knowledge. Any system that has that built in is fine by me.
PS. How the hell did you finally get past the velvet rope? With a link, no less. Nice.
eMarch 19, 2007 - 10:53 am -
You say: “Did she fail because she doesn’t understand the Pythagorean Theorem or because she doesn’t understand Cones?”
I am curious as to what exactly she is supposed to understand about cones. Your problem asked her to find surface area of the given cone. To solve it, she needs to know the formula for surface area. To be able to evaluate it, she needs to know what in that formula corresponds to quantities given in the figure. Which part of this problem then relates to her understanding of the cone? In other words, I would like to know what is being assessed by this problem.
danMarch 19, 2007 - 11:55 am -
“What is being assessed by this problem?” is always a good question to ask.
This being the first cones assessment, I am assessing their ability to recall the correct formula for surface area of a cone, evaluate based on information given, and correctly interpret the answer (i.e. in³, in², in, etc.). Not terribly demanding, but the way I structure the system, a correct answer on this problem is worth a B- overall.
Subsequent assessments give the surface area of the cone and ask them to solve for either the slant height or radius.
How would you change the assessment, e?
eMarch 19, 2007 - 5:11 pm -
I was actually thinking about what it is that we deem important for our students to know, not about changing your assessment. Personally, I don’t find value in their knowing the formula for the surface area of a cone by heart. In other words, this is not something I would want to assess (I don’t know, however, whether the standards require it). For example, did they see the derivation of the formula? If so, was it insightful? Did it bring anything new to their understanding of cones? Or to their understanding of area? Or to their understanding of …?
In contrast to the cone example, I can see how and why we would want them to know the surface area of, say, a cube. Not by heart, mind you, but by knowing what surface area is, they’d be able to find the formula and hopefully evaluate it. I am not sure we can say the same for the cone (at least not without a significant amount of work).
So, to humor me, let’s take the knowing (memorization) out of this assessment. That leaves plugging in the quantities and interpreting the answer. I don’t think either of these tasks is trivial. Or should I say, the demand of each task depends on where these students are in their mathematical education. Evaluating functions and making sense of the answer you got is, I believe, an important part of what students should learn. However, correct evaluation of a function is certainly less impressive when a calculus student does it than when a middle school student does it. That type of consideration determines whether it’s a B- or an A or a C answer, and this is something you’d know better than I do.
I would also like to agree with Rich that although “simple” problems easily indicate where exactly students may have troubles, rarely do the real problems come isolated in a single concept format. Besides, one of the things we need to make sure is that our students can make connections between various ideas. I know that we can say that we do it in class, but the problem in that is that we can’t truly know how each of the students made those connections.
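For the curious, the derivation e alludes to is short: unrolled flat, the lateral surface of a cone is a circular sector whose radius is the slant height l and whose arc length equals the base circumference 2πr, so

```latex
A_{\text{lateral}} = \frac{1}{2} \cdot (\text{arc length}) \cdot (\text{sector radius})
                   = \frac{1}{2} \cdot 2\pi r \cdot l
                   = \pi r l,
\qquad
A_{\text{total}} = \pi r^2 + \pi r l = \pi r (r + l).
```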
danMarch 19, 2007 - 5:47 pm -
Why? What can I determine from a summative assessment that I can’t from openers / homework / classwork / conversations?
eMarch 19, 2007 - 6:17 pm -
This was the only thing that bothered you? Good. First, I thought you hardly ever gave homework, and I don’t know whether you look through the ones you do give. But I would venture that homework would be the next best way to determine how well they make connections by themselves. As for the other forms (openers / classwork / conversations), I personally find it hard to assess ALL of my students that way. Why? Some students don’t speak as much as others; some are shy. We may think that all of our students participate equally in classwork because we call on all of them, but that usually isn’t the case either (try to pay attention, or have somebody else visit and write down how many times each student has spoken during one, two, or however many classes). Let’s say you make a point of asking a question of each student (can you call on each in one period?). Some will be able to give you an answer immediately; some won’t. If they don’t know the answer immediately, how much time do you have to figure out what exactly gives them problems? And even when you’ve gone over something with the whole class, does it mean they all got it?
I’m just saying it’s not all black and white. Not all your problems should be simple, and not all your problems should be complex. Balance. I’m for balance.
danMarch 19, 2007 - 7:15 pm -
I think we agree that summative assessments offer a more accurate look at how our students solve problems — incl. the multi-layered complex ones. But if I chose to toss them in, I would lose my ability to target areas of weakness as well as I can now. That’s not a trade-off I’m willing to make, especially when I’ve found assessing those problems pretty easy in long classwork assignments. Maybe it’s important to note that I’m only dealing with classes sized in the mid- to high-30s.
eMarch 20, 2007 - 3:37 am -
Mid to high 30s are huge classes in my mind. Mine were in the mid 20s and I still had to struggle to make some people talk. Maybe I suck. I wasn’t saying you should remove the problems you have now, but you may choose to add the other ones in. If I am not mistaken, that wouldn’t damage your ability to “target areas of weakness,” but would allow for assessing other things as well.
PeterFebruary 24, 2008 - 6:16 am -
I’m Belgian, and although I already know my maths, I’m still learning how to teach it properly. In Belgium, teachers are taught to identify and classify their goals before starting to design lessons or evaluations. Your evaluation scheme fits this perfectly and I’m going to give it a chance in my own classroom. It’ll probably need some adjusting, and since some of my goals are “integration” goals, as Steve called them (I teach math to math majors in high school), I’ll have to put in some goals in that direction as well.
Thanks for sharing your thoughts.
TouzelApril 28, 2009 - 1:57 pm -
Dan, your blog has helped my teaching more in the last 3 years than all the professional development done at my schools combined. However, I disagree with the procedural nature of assessment items like these. To be honest, the cone example you gave seems more like an “exercise” than a “problem.”
It seems like you value–above all else–the ability to look at an assessment score and know exactly what that number means. I like that and would like to find a way to do that. In doing so, however, it seems that you break everything down into such compartmentalized, small individual skills, that students are never assessed on their ability to solve rich, layered, challenging mathematical problems that require critical-thinking skills and/or require them to communicate their thinking.
Perhaps I’m assuming too much. Perhaps you have changed in the 2 years since you initially posted this entry. Are your assessment items still this procedural? Also, do you give Problems of the Week or portfolios or projects or any alternate ways of measuring what kids know?
[Thanks for your blogging. As I said earlier, reading it (and the comments) really improves my teaching abilities.]
Dan MeyerApril 29, 2009 - 8:15 am -
A primary function of my assessments is to sort students who should advance to harder study from those who shouldn’t. If a student is failing my class then there is a greater than 95% chance she will fail the next one. If a student is passing my class then there is a greater than 95% chance she is prepared for the next one.
For that reason, my assessments are heavily procedural. I litter my curriculum with applications, extensions, project-based work, sure, but I won’t force a student who can find the roots of a quadratic trinomial but who couldn’t do that in the context of a poster project on projectile motion to repeat Algebra 1.
Yours isn’t an uncommon concern with this kind of assessment. If it doesn’t work for you, there’s nothing cooler than splitting from your mentors and saying, “This is how I would do it.” Set up a blog, do your own thing, and tell us about it.
ClintCApril 29, 2009 - 10:39 pm -
In structuring assessment, I try to balance procedural exercises with richer problems that attempt to assess how well students are able to decide what tools to use. I believe both types of skills are important in determining success at the next level.
This year in my PreCalculus class, I have many students who are excellent at solving exercises such as “find the roots of the following quadratic trinomial,” where students are asked to apply various skills in isolation. (My quizzes, which are worth 25% of the course grade, are made up mostly of these skill-based questions.) However, those same students genuinely struggle with word/story problems where they must decide what information is relevant and which skills to apply. (Much like the excellent situations and problems you give us to consider in your blog.) While I try as much as possible to include these in classwork, I also try to show that these problems, and the problem-solving and questioning skills developed by working on them, are important and worth measuring by including them on more formal assessments.
I try to make it so that a student earning an “A” in my class has demonstrated mastery of basic skills and the ability to know in what context those skills should be applied. Many times students ask why they should bother taking advanced math when they might never factor a polynomial or solve a quadratic equation outside of math class. For many students, the critical-thinking and communication skills they build by figuring out how to apply the procedures they’ve learned are the main skills they’ll draw on when it comes time to communicate detailed, technical information in the future.