
Archive for the 'assessment' Category

Daniel Schneider, in a must-read piece:

I believe that standards-based grading, at its fundamental level, is only changing your gradebook so you grade individual standards. However, this change forces you to face realities about a traditional classroom that you can’t ignore and that you are forced to react to.

If this piece were only about the implementation of standards-based grading, it'd be indispensable. If you're thinking about making a constructive change to how you grade and treat your students, you should read Schneider's how-to guide.

But it's also about changes Schneider made from year one to year two in that implementation, which makes it rarer and more valuable among all the SBG literature you can find.

He also diagnoses how this one change to assessment then rolls along and affects every other aspect of his classroom. Curriculum, homework, relationships, the definition of math itself — nothing is spared. Assessment is only the first domino.

It's the best examination of the classroom as a thriving, codependent ecosystem I've read in a long while.

Tom Hoffman's perspective on Rhode Island's summative graduation exam is worth your time:

Another question I thought was typical showed two spinners that would give you random numbers from 1 to 4. It wanted to know the probability that the sum of the two would be a prime number. I drew a complete blank, until I realized I could easily write out all 16 combinations and just circle the ones that resulted in a prime number. That more clearly took mathematical reasoning, problem solving and content knowledge.

I like the question, and I like the direction it should push math curriculum. But I'm also aware that even if kids have been taught probability, if they haven't been taught it in a way that encourages flexible and resourceful problem solving — rather than pulling numbers out of stereotypical word problems and following procedures — they will be completely screwed.
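Hoffman's write-them-all-out strategy is quick to verify, if you'd rather let a computer do the circling. A minimal sketch in Python:

```python
from itertools import product

# All 16 outcomes of two spinners, each landing on 1 through 4.
outcomes = list(product(range(1, 5), repeat=2))

# The possible sums run from 2 to 8, so these are the primes in reach.
primes = {2, 3, 5, 7}

# "Circle" the outcomes whose sum is prime.
favorable = [pair for pair in outcomes if sum(pair) in primes]

print(len(favorable), "of", len(outcomes))  # 9 of 16
```

So the answer is 9/16, which you can confirm by hand: one way to make 2, two ways to make 3, four ways to make 5, and two ways to make 7.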

I'm glad parents, policymakers, and stakeholders are taking these exams (or shortened versions of them) and reflecting on their results. But again we should be careful not to write expansive prescriptions for what we teach kids based on the test results of grownups. The proposition, "A middle-aged bureaucrat hasn't used algebraic expressions in three decades and turned out fine, therefore we shouldn't teach algebraic expressions to fourteen-year-olds" has yet to be nailed down for me.

The Smarter Balanced Assessment Consortium:

Five swimmers compete in the 50-meter race. The finish time for each swimmer is shown in the video. Explain how the results of the race would change if the race used a clock that rounded to the nearest tenth.
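The interesting math in that item is how a coarser clock collapses close finishes into ties. The actual finish times are shown in the item's video; the times below are hypothetical, invented only to illustrate the mechanism:

```python
# Hypothetical finish times in seconds (the real item shows them in a video).
times = {"A": 30.12, "B": 30.08, "C": 30.06, "D": 31.21, "E": 29.94}

# Order of finish by the exact clock.
exact_order = sorted(times, key=times.get)

# A clock that only reads tenths can't tell nearby swimmers apart.
tenths = {name: round(t, 1) for name, t in times.items()}
tied = sorted(name for name, t in tenths.items() if t == 30.1)

print(exact_order)  # ['E', 'C', 'B', 'A', 'D']
print(tied)         # ['A', 'B', 'C'] -- a three-way tie for second place
```

A clean, distinct ranking under the hundredths clock becomes a three-way tie under the tenths clock, which is exactly the kind of result a student's written explanation should surface.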

You should take a tour through the Smarter Balanced Assessment Consortium's released items, form an opinion about them, and share it here. California is a member state of SBAC, one of two consortia charged with assessing the Common Core State Standards, so I'm comparing these against our current assessments. Without getting into how these assessments should be used (e.g., for merit pay, teacher evaluation, etc.), they compare extremely favorably to California's current assessment portfolio. If assessment drives instruction, these assessments should drive California's math instruction in a positive direction.

The assessment item above uses an animation to drive down its word count and language demand. It's followed by an expansive text field where students are asked to explain their reasoning. That stands up very well next to California's comparable grade five assessment [pdf]:

  • Elsewhere, we find number sense prized alongside calculation (here also), which is a step in a very positive direction. (i.e., our students should know that $14.37 split between three people is between $4 and $5, but it's a waste of our time to teach that division longhand.)
  • I've been assuming the assessment consortia would run roughshod over the CCSS modeling practice but on the very limited evidence of the sample items, we're in good shape.
  • The assessments do a lot of interesting and useful things with technology. (Reducing word count, at the very least.) I only found one instance where the technology seemed to get in the way of a student's expression of her mathematical understanding.

I can't really make an apples-to-apples comparison between these items and California's current assessments because California currently has nothing like this. No constructed responses. No free responses. No explanation. It's like comparing apples to an apple-flavored Home Run pie.

Featured Comment:

Candice Frontiera:

Next thing to explore: Technology Enhanced Item Supporting Materials [zip]. [The "Movie Files" folder is extremely interesting. -dm]

FeedThresh

Shawn Cornally coins the term FeedThresh (short for "Feedback Threshold") and gives it a definition that feels exactly right:

  1. The student knows that first attempts are rarely perfect, and often require serious revising.
  2. The student wants expert feedback on work that is established and based on research and the literature.
  3. The student knows that his learning is not tied to class time or any other arbitrary unit of time or space.

Assessment is too complicated for any of us to do any more than say, "We're trying to optimize for a certain set of values," and then make those values explicit. Standards-based grading involves some compromise, but I don't know of another assessment strategy that optimizes the values that Shawn's made so explicit here.

I met Greg Schwanbeck at Apple Distinguished Educator sleepaway camp last month. He teaches science. I teach math. We set those differences aside and found a connection. I use multimedia in my curriculum. Greg uses video for assessment in a way I found compelling.

Let's say he wants to assess the impulse-momentum theorem, which is the theorem that explains why boxers roll with punches rather than against them: if you double the duration of the impact, you halve its force. (Cut me some slack here, science-buds. Everybody knows I have no idea what I'm talking about.)

He gave me permission to share with you three versions of the same assessment of a student's understanding of the impulse-momentum theorem. Let me invite you to assess the assessments in the comments. List some advantages and disadvantages. Ask yourself, "What is each option really assessing?" Greg will be along shortly to offer his own commentary and to assess your assessment of the assessments.

Option 1

An 80 kg stuntman jumps off of a platform high in the air and lands on an airbag. The stuntman hits the airbag with an initial velocity of 45 m/s downward. 0.1 s elapses between the moment the stuntman first touches the airbag and the moment the airbag completely deflates and he comes to rest. Assume that the maximum force that the stuntman can experience and survive is 39200 N. Does the stuntman survive the fall?
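For the record, and with my physics disclaimer above still in force, here's a back-of-the-envelope check of Option 1, assuming the average force is constant over the impact:

```python
# Option 1, worked via the impulse-momentum theorem: F * dt = m * dv.
m = 80         # stuntman's mass, kg
dv = 45        # change in speed, m/s (45 m/s downward, brought to rest)
dt = 0.1       # duration of the impact, s
F_max = 39200  # maximum survivable force, N

# Average force the airbag must exert on the stuntman.
F = m * dv / dt

print(F)          # 36000.0
print(F < F_max)  # True -- he survives
```

36,000 N comes in under the 39,200 N ceiling, so the numbers say he walks away. Notice how much of Option 1 is plug-and-chug once you recall the formula, which is exactly the contrast Greg is after with Options 2 and 3.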

Option 2

A stuntman jumps off of the top of a crane extended high up in the air. Below him is an airbag, a large inflatable cushion that has a thickness of 3 meters. When the stuntman comes into contact with the airbag, the impact deflates the airbag over a period of time, compressing the airbag from 3 meters thick to 0 meters thick while slowing him down to a stop. Explain, making reference to the impulse-momentum theorem, why the stuntman is able to survive.

Option 3

Explain, making reference to the impulse-momentum theorem, why the stuntman is able to survive the jump.
