Month: September 2013

Explore The Math/Twitter Blogosphere

File this as Reason #437 I’m proud to be a part of this enormous professional community.

Tina Cardone, Julie Reulbach, Justin Lanier, and Sam Shah have decades of blogging and tweeting experience between the four of them and they’d like to put those decades to work on your behalf. If you’ve enjoyed sitting on the sidelines of the math ed blog scene until now but would like to get in the game, they’re offering their coaching.

Sign up at Exploring the Math Twitter/Blogosphere for “eight weeks of fun missions and prompts.” I’ll be subscribing to every blog that participates. Can’t wait to see some new faces and new insights in my RSS feed.

Teacher Data Dashboards Are Hard, Pt. 2

[See part one.]

Can you help me shuffle my thoughts on teacher data dashboards?

The Current State of Teacher Data Dashboards

Generalizing from my own experience and from my reading, teacher data dashboards seem to suffer in three ways:

  • They confuse easy data with good data. It’s easy to record and report the amount of time a student had a particular webpage open, for instance, but that number isn’t indicative of all that much.
  • They aren’t pedagogically useful. They’ll tell you that a student got a question wrong or that the student spent seven minutes per problem but they won’t tell you why or what to do next beyond “Tell the student to rewind the lecture video and really watch it this time.”
  • They’re overwhelming. If you’ve never managed a classroom with more than 30 students, if you’re a newly-minted-MBA-turned-edtech-startup-CEO for instance, you might have the wrong idea about teachers and the demands on their time and attention. Teaching a classroom full of students isn’t like sitting in front of a Bloomberg terminal with a latte. The same volume of statistics, histograms, and line graphs that might thrill a financial analyst with few other demands on her attention might overwhelm a teacher who’s trying to ensure her students aren’t setting their desks on fire.

If you have examples of dashboards that contradict me here, I’d love to see screenshots.

We Tried To Build A Better Data Dashboard

With the teacher dashboard on our pennies lesson, the Desmos team and I tried to fix those three problems.

[Screenshot: the Desmos teacher dashboard for the pennies lesson]

We attempted to first do no harm.

We probably left some good data on the table, but at no point did we say, “Your student knows how to model with quadratic equations.” That kind of knowledge is really difficult to autograde. We weren’t going to risk assigning a false positive or a false negative to a student, so we left that assessment to the teacher.

We tailored the dashboard to the lesson.

We created filters that will be mostly useless for any other lesson we might design later.

[Screenshot: the dashboard's lesson-specific filters]

We filtered students in ways we thought would lead to rich teacher-student interactions. For example:

  • If a student changed her pennies model (say, from linear to quadratic or vice versa), we thought that was worth mentioning to a teacher.
  • We made it easy to find out which students filled up large circles with pennies and which students found some cheap and easy data by filling up a small circle.
  • We made it easy to find out which students had the closest initial guesses.

These filters don’t design themselves. They require an understanding of pedagogy and a willingness to commit developer-hours to material that won’t scale or see significant reuse outside of one lesson. That commitment is really, really uncommon for edtech startups. It’s one reason why the math edublogosphere gets so swoony about Desmos.
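To make that concrete, here is a rough sketch of what filters like these might look like under the hood. Every detail in it (the data fields, the names, the five-inch threshold) is invented for illustration; it isn't Desmos's actual code.

```python
# Hypothetical sketch of lesson-specific dashboard filters.
# The StudentWork fields, names, and thresholds are invented for
# illustration; they are not Desmos's actual data model or code.

from dataclasses import dataclass
from typing import List

@dataclass
class StudentWork:
    name: str
    first_model: str        # e.g. "linear" or "quadratic"
    final_model: str
    circle_diameter: float  # diameter (inches) of the circle filled with pennies
    guess_error: float      # |initial guess - actual penny count|

def changed_model(work: StudentWork) -> bool:
    """Flag students who switched models (say, linear to quadratic)."""
    return work.first_model != work.final_model

def filled_large_circle(work: StudentWork, threshold: float = 5.0) -> bool:
    """Flag students who collected the harder, wider-ranging data."""
    return work.circle_diameter >= threshold

def closest_guessers(roster: List[StudentWork], n: int = 3) -> List[StudentWork]:
    """Return the n students whose initial guesses were closest."""
    return sorted(roster, key=lambda w: w.guess_error)[:n]
```

The point is less the code than the commitment: each of those predicates only makes sense for the pennies lesson and would have to be rethought for the next one.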

[Screenshot]

Contrast that with filters from Khan Academy, which read, “Struggling,” “Needs Practice,” “Practiced,” “Level One,” “Level Two,” and “Mastered.” Broadly applicable, but generic.

We suggested teacher action.

For each of those filters, we gave teachers a brief suggestion for action. For students who changed models, we suggested teachers ask:

Why did you change your model? Why are you happy with your final choice instead of your first choice?

For students who filled up large circles, we suggested teachers say something like:

A lot of you filled small circles with pennies but these students filled large circles with pennies. That’s harder and it’s super useful to have a wide range of data when we go to fit our model.

For students who filled up small circles, we suggested teachers say something like:

Big data help us come up with a model, but so do small data. A zero-inch circle is really easy to draw and fill with pennies, so don’t forget to collect it.

Even with this kind of concise, focused development, one teacher, Mike Bosma, still found our dashboard difficult to use in class:

While the students were working, I was mostly circulating around the classroom helping with technology issues (frozen browsers) and clarifying what needed to be done (my students did not read directions very well). I was hoping to check the dashboard as students went so I could help those who were struggling. The data from the dashboard were more helpful to me after the period. As I stated above, I was very busy during the period managing the technology and keeping students on track, so I was not able to engage with what they were doing most of the time.

So we’d like to hear from you. Have you used the pennies task in class? Have you used the dashboard? What works? What doesn’t? What would make a dashboard useful — actually usable — for you?

Featured Comments

Tom Woodward, arguing that these platforms are tougher to customize than the usual paper-and-pencil lesson plan:

The other piece I worry about is the relatively unattainable nature of some of the skills needed for building interesting/useful digital content for most teachers. I really want to provision content for teachers and then be able to give them access to changing/building their own content. While many are happy consuming what’s given, there are people who will want to make it their own or it will spark new ideas. I hate the idea that the next step would be out of reach of most of that subset.

And there’s Eric Scholz looking for exactly that kind of customization:

I would add a “bank” of variables at the top of the page that teachers could choose from when building their lesson plan the night before. This would allow for a variety of objectives for the lesson.

Bob Lochel, being helpful:

While many adaptive systems propose to help students along the way, they are often misinterpreted as summative assessments through their similarities to traditional grading terms and mechanisms.

Tom Woodward, also being helpful:

There could/should be some value to a dashboard that guides formative synchronous action but it’d have to be really low on cognitive demand.

Teacher Data Dashboards Are Hard, Pt. 1

Posted without comment. (Comments tomorrow.)

A study published earlier this year on teacher data dashboards, summarized by Matthew Di Carlo:

Teachers in these meetings were quite candid in expressing their opinions about and experiences with Dashboard. One factor that arose with relative frequency was an expressed concern that the Benchmark tests lacked some validity because they often tested material the teachers had yet to cover in class. A second factor that was supported across the focus group discussions was a perceived lack of instructional time to act on information a teacher might gain from Dashboard data. In particular, teachers expressed frustration with the lack of time to re-teach topics and concepts to students that had been identified on Dashboard as in need of re-teaching. A third concern was a lack of training in how to use Dashboard effectively and efficiently. A fourth common barrier to Dashboard use cited by teachers was a lack of time for Dashboard-related data analysis.

Khan Academy intern Josh Netterfield, in June 2013, on Khan Academy’s coach reports:

Currently over 70,000 teachers actively use KA in their classrooms, but few actually use coach reports. Already we’ve seen how the right kind of insights can transform classrooms, but some of the data has historically been quite difficult to navigate.

Stanford d.school’s 2011 analysis of Khan Academy [pdf]:

Generally speaking, the student data available on the Khan dashboard was impressive, but it also was challenging at times for the teacher to figure out how best to synthesize and use all the data — a key feature needed if teachers are to maximize the potential of blended learning.

Screenshots from a video of Khan Academy’s recent redesign of their coach reports:

[Screenshot: Khan Academy coach reports]

2013 Sep 12. Part two.

Dead On

Karen Head, on her “First-Year Composition 2.0” MOOC:

Too often we found our pedagogical choices hindered by the course-delivery platform we were required to use, when we felt that the platform should serve the pedagogical requirements. Too many decisions about platform functionality seem to be arbitrary, or made by people who may be excellent programmers but, I suspect, have never been teachers.

Related: What Silicon Valley Gets Wrong About Math Education Again And Again

[via Jonathan Rees]

2013 Sep 18. Karen Head comments:

Just to remind everyone of the context of my statement. We asked that certain parameters in the coding be changed (like the one governing how much we could penalize students for not doing an assignment) and were given the answer that the penalty number was “hard coded” into the program. The tech support person couldn’t understand why it was a big deal to us. To be fair, I couldn’t be made to understand why it was a big deal to change the parameter from a fixed number of 20 to a range of 0-100, but I seem to remember from my basic undergrad programming class that it isn’t a big deal to do this. Of course, in the end, I’m just an English teacher. :-)
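For what it's worth, here is a rough illustration of the distinction she's drawing between a hard-coded penalty and a configurable one. The names and numbers are invented; this is not the course platform's actual code.

```python
# Invented illustration of a hard-coded versus configurable penalty;
# not the actual course platform's code.

# Hard-coded: the 20-point penalty can only be changed by editing
# and redeploying the program.
def grade_hardcoded(score: float, submitted: bool) -> float:
    return score if submitted else max(score - 20, 0)

# Configurable: the instructor can set any penalty from 0 to 100.
def grade_with_penalty(score: float, submitted: bool, penalty: float = 20) -> float:
    if not 0 <= penalty <= 100:
        raise ValueError("penalty must be between 0 and 100")
    return score if submitted else max(score - penalty, 0)
```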