Search Results for "assessment"

Total 126 Posts

Desmos Now Embedded in Year-End Assessments Across the United States

Amy X. Wang, writing in Quartz:

Enter Desmos, a San Francisco-based company that offers a free online version of TI’s graphing calculator. Users across 146 countries, most of them teachers or students, are currently logging 300,000 hours a day on the platform—and today, Desmos announced a major partnership with testing consortium Smarter Balanced, which administers academic exams in 15 US states. Beginning this spring, students in those areas will use the online tool in math classrooms and on statewide performance assessments.

When students take their year-end assessment in 15 states, they’ll see the same free calculator they’ve been using at school and at home the rest of the year. That assessment will more closely reflect what they know, rather than what they were able to express through unfamiliar or costly technology.

USA Today has the reaction quote from Texas Instruments president Peter Balyta:

Peter Balyta, president of TI Education Technology and a former math teacher, defended the purchases, saying a TI calculator “is a one-time investment in a student’s future, taking students through math and science classes from middle school through high school and into college and career.” He said TI’s technology is evolving, but that models like the TI-84 Plus come with “only the features that students need in the classroom, without the many distractions that come with smartphones, tablets and internet access.”

This is interesting. In a world where more and more assessments are delivered digitally (and pre-loaded with digital calculators), the sales pitch for hardware calculators is their lack of features, rather than their abundance.

There is clearly a market today for a calculator that lacks internet access. Around 20% of teachers in my survey said they wouldn’t let students use mobile devices on exams for reasons of “test security,” and another 10% cited “distraction.”

Open, interesting questions:

  • Are those figures trending upwards or downwards?
  • Will schools and parents continue to pay Texas Instruments an estimated 50% profit margin for more test security and fewer distractions?
  • How do math coaches and instructional technologists help teachers harness the advantages of the internet while also managing concerns about security and distraction?

2017 May 12. Peter Balyta makes a longer case for hardware calculators, one which won’t surprise anyone who has followed this discussion. He mentions (1) lower cost, (2) fewer distractions, (3) greater test security, (4) more features, and (5) availability on tests.

Assessment Is The First Domino

Daniel Schneider, in a must-read piece:

I believe that standards-based grading, at its fundamental level, is only changing your gradebook so you grade individual standards. However, this change forces you to face realities about a traditional classroom that you can’t ignore and that you are forced to react to.

If this piece were only about the implementation of standards-based grading, it’d be indispensable. If you’re thinking about making a constructive change to how you grade and treat your students, you should read Schneider’s how-to guide.

But it’s also about the changes Schneider made from year one to year two of that implementation, which makes it rarer and more valuable than most of the SBG literature you can find.

He also diagnoses how this one change to assessment rolls along and affects every other aspect of his classroom. Curriculum, homework, relationships, the definition of math itself: nothing is spared. Assessment is only the first domino.

It’s the best examination of the classroom as a thriving, codependent ecosystem I’ve read in a long while.

Video-Based Assessment In Science

I met Greg Schwanbeck at Apple Distinguished Educator sleepaway camp last month. He teaches science. I teach math. We set those differences aside and found a connection. I use multimedia in my curriculum. Greg uses video for assessment in a way I found compelling.

Let’s say he wants to assess the impulse-momentum theorem, the theorem that explains why boxers roll with punches rather than against them: if you double the duration of the impact, you halve its average force. (Cut me some slack here, science-buds. Everybody knows I have no idea what I’m talking about.)
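For anyone who wants that claim in symbols, the theorem says the impulse equals the change in momentum (my notation here, not Greg’s):

    \[
    F_{\text{avg}}\,\Delta t = \Delta p = m\,\Delta v
    \qquad\Longrightarrow\qquad
    F_{\text{avg}} = \frac{m\,\Delta v}{\Delta t}
    \]

Hold the change in momentum fixed and the average force runs inversely with the duration of the impact: double Δt, halve F.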

He gave me permission to share with you three versions of the same assessment of a student’s understanding of the impulse-momentum theorem. Let me invite you to assess the assessments in the comments. List some advantages and disadvantages. Ask yourself, “What is each option really assessing?” Greg will be along shortly to offer his own commentary and to assess your assessment of the assessments.

Option 1

An 80 kg stuntman jumps off of a platform high in the air and lands on an airbag. The stuntman hits the airbag with an initial velocity of 45 m/s downward. 0.1 s elapses between the moment the stuntman first touches the airbag and the moment the airbag completely deflates and he comes to rest. Assume that the maximum force that the stuntman can experience and survive is 39200 N. Does the stuntman survive the fall?

Option 2

A stuntman jumps off of the top of a crane extended high up in the air. Below him is an airbag, a large inflatable cushion that has a thickness of 3 meters. When the stuntman comes into contact with the airbag, the impact deflates the airbag over a period of time, compressing the airbag from 3 meters thick to 0 meters thick while slowing him down to a stop. Explain, making reference to the impulse-momentum theorem, why the stuntman is able to survive.

Option 3

Explain, making reference to the impulse-momentum theorem, why the stuntman is able to survive the jump.
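For the record, here’s a quick check of Option 1’s numbers, under the simplifying assumption that the airbag decelerates the stuntman uniformly, so the average force is the one that matters:

    \[
    F_{\text{avg}} = \frac{m\,\Delta v}{\Delta t}
    = \frac{(80\ \text{kg})(45\ \text{m/s})}{0.1\ \text{s}}
    = 36{,}000\ \text{N} < 39{,}200\ \text{N}
    \]

Under those assumptions, he survives, with about 8% of the survivable force to spare.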

[PS] Assessment

It’s bad enough when you’re trying to gin up interest in math by way of pseudocontext. It’s worse when you’re trying to assess math by way of pseudocontext. If the student isn’t interested in math by now, what do you think an assessment is going to do?

If your students miss these problems, how certain are you they really misunderstood the mathematics? How certain are you they weren’t distracted by the problem design?

I Do Not Get Assessment At All Sometimes

I let Chuck, Shelley, and Robert skip the final exam. We logged fifteen concepts in the first semester of Algebra 1 and those students studied them, practiced them, and demonstrated mastery on all of them. Take a break, kids.

But what if I had given them all fifteen of those concepts again? How accurate is my ranking, not just of those three kids but of all of my kids? I have ranked everyone on a four-point scale on each of those concepts. Will a student ranked at 2 (“major conceptual errors”) again score a 2?

In lieu of a 50-question Scantron final, I re-assessed every student on every concept, entered the current ranking into Excel alongside the student’s old ranking, and took the difference.
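If you’d rather script that tally than build it by hand in Excel, here’s a minimal sketch in Python. The file name and column names are my inventions for illustration; the substance is just the old ranking minus the new one, counted up:

    import csv
    from collections import Counter

    # One row per student-concept pair: the old ranking and the
    # re-assessed ranking, both on the same four-point scale.
    # "rankings.csv", "old", and "new" are illustrative names.
    differences = Counter()
    total = 0

    with open("rankings.csv", newline="") as f:
        for row in csv.DictReader(f):
            differences[int(row["old"]) - int(row["new"])] += 1
            total += 1

    # A difference of 0 means the old ranking still matched the
    # student's current knowledge; +3 means it was three levels too high.
    for diff in sorted(differences):
        count = differences[diff]
        print(f"old - new = {diff:+d}: {count} ({count / total:.0%})")

The zero row of that printout is the accuracy figure below.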

Should’ve left well enough alone, right?

How Accurate Were The Old Rankings?

  • Okay, so big sigh of relief that, in 313 instances, my old ranking was an accurate assessment of a student’s current knowledge. Could’ve been worse.
  • Could’ve been a lot better: 313 accurate rankings out of roughly 666 student-concept pairs is only 47% accuracy. And in 43 instances, my old ranking was three levels too high. That would be putting a student at a 3 (“minor mechanical errors”) and watching the student stare totally blankly at the question on the final forty-three times.

What Does Mastery Mean?

If I have a student ranked at mastery, would she master the same concept on the final exam?

  • This isn’t awful. This isn’t great. I don’t know at what point I should be unhappy.

Enduring Questions

  • What do we mean when we say “mastery”? Does that mean a student will score perfectly on the same concept every time? Should I be unhappy that the correct/incorrect balance wasn’t 100/0?
  • What do we mean when we say “retention”? This is a common question about my assessment strategies. “Don’t kids forget?” Obviously, I can now answer that question: “Yes, sometimes.”
  • What do we mean when we say “grades”? I don’t know what kind of results here would prompt me to pack up the shop and dole out monthly, summative unit exams (“Chapter 6 Test”) with the rest of my department. The fact is that this kind of precision analysis isn’t even possible under a unit-exam model, which puts other teachers in an enviable position: the question “do these assessment scores represent my students’ current knowledge?” cannot be answered, so it goes unasked. The answer, I’m afraid, is that their assessment scores underestimate student knowledge, since Chapter 7 clarified many of Chapter 6’s concepts but those teachers have no mechanism for class-wide re-assessment. So they lower assessment’s grade weight beneath that of homework instead, and inflate their grades with a few extra-credit assignments. Look, I’m open to absolutely anything. I just want my grades to mean something. And I need to respect what few guiding principles for assessment make sense to me.