
David Cox:

I'm noticing that more kids are gaining confidence in looking for patterns, forming hypotheses and then seeing if they can make the hypothesis fail. The phrase that seems to be gaining ground when it comes to hypothesis testing is "wreck it" – as in, "Oh, you think you have a rule? See if you can wreck it."

There are two things I love about this:

  1. The phrase "see if you can wreck it," and the toddler-knocking-down-a-tower-of-blocks spirit of destruction it conveys.
  2. The fact that you are supposed to wreck your own conjecture. Your conjecture isn't something you're supposed to protect from your peers and your teacher as though it were an extension of your ego. It's supposed to get wrecked. That's okay! In fact, you're supposed to wreck it.

BTW. When David Cox finds a free moment to blog, he makes it count. Now he's linked up this spherical Voronoi diagram that shows every airport in the world and, around each one, the region of points closer to it than to any other airport. "Instead of having to teach things like perpendicular bisectors and systems of equations," he says, "I just wish we could do things like this."

Of course you need perpendicular bisectors to make a Voronoi diagram, so David's in luck.
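If you want to see that connection concretely, here's a minimal sketch of my own (not David's, and planar rather than spherical, since the sphere takes more work) using scipy.spatial.Voronoi. The "airports" are invented points, and the loop checks the perpendicular-bisector property directly: every finite vertex of a ridge is equidistant from the two points that ridge separates.

```python
# A minimal sketch, assuming scipy is available. The points below are
# invented "airports" on a flat plane; the linked diagram is spherical.
import numpy as np
from scipy.spatial import Voronoi

points = np.array([[0, 0], [4, 0], [2, 3], [5, 4], [1, 5], [6, 1]], dtype=float)
vor = Voronoi(points)

# Each ridge of the diagram separates two neighboring points, and every
# finite vertex on it is equidistant from both -- that is, the ridge lies
# on the perpendicular bisector of the segment joining the two points.
for (p, q), ridge in zip(vor.ridge_points, vor.ridge_vertices):
    for v in ridge:
        if v == -1:  # -1 marks a ridge extending to infinity
            continue
        d_p = np.linalg.norm(vor.vertices[v] - points[p])
        d_q = np.linalg.norm(vor.vertices[v] - points[q])
        assert np.isclose(d_p, d_q)  # the bisector property

print("Every finite ridge vertex is equidistant from its two airports.")
```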

NCTM 2014 Schedule

This is your official dy/dan conference planner® for next week's conventions.

My Sessions

I'll be doing three lecture-y things, then a panel with the #netkidz, then happy hour with our hosts, Mathalicious and Desmos.

Planning

The conference program is enormous. After making an initial list of every session I wanted to attend, I had three sessions listed for every hour of every day. Here's how I decided where I'm going:

First, search for all the reliable people I've already seen or read.

That list includes:

Ani, Ball, Bass, Boaler, Callahan, Coffey, Danielson, Daro, Dougherty, Douglas, Garneau, Khalsa, Leinwand, Luberoff, McCallum, Mills, Milou, Murray, Olson, Pickford, Serra, Shih, Silbey, Wray, anyone from EDC, anyone from Math Forum, anyone from Conceptua Math, anyone from Key Curriculum Press, anyone from the #netkidz strand.

Then, admit your biases.

This year I'm partial to sessions on a) the transition from arithmetic to algebra, b) modeling with math, c) technoskepticism, d) technology.

In general, I shy away from sessions on dead technologies and session titles with exclamation points. (Though exceptions have to be made sometimes!)

Use Google.

So I'm still looking at lots of session conflicts. There's nothing quite as fun as discovering a new voice with new ideas at NCTM, so I'll head online and scan blogs, professional websites, and Twitter feeds. Occasionally, I'll find the presenter's slides online, which helps me make an informed decision.

How do you map out and prepare for an event as huge (in every dimension) as NCTM?

A Few Recommendations

I figure if you're reading this you're already going to Ignite, the keynotes, and the same #netkidz sessions I am. So here are some sessions I'm looking forward to attending that you may have missed. (Some of these are for ASSM and NCSM.)

Jere Confrey + Amplify

Jere Confrey has been working on Amplify's tablet for the last four years as their chief math officer. She isn't a technologist by training but obviously understands math and math education, so I've been very curious to see what she's been up to. She's obliging my curiosity with three sessions at NCSM, all concerning digital curriculum.

  • Monday. 9:30AM. Using Digital Environments to Foster Student Discourse.
  • Tuesday. 11:15AM. Using Complex Problems, Rich Media, and Rubrics to Develop the Standards for Mathematical Practice.
  • Wednesday. 2:30PM. Jazz Fusion: Uniting Curriculum, Pedagogy, Assessment, and Teacher Support in a Tablet-Based Environment.

Treisman's Back

  • Monday. 12:15PM. Navigating the Waters of Change and the Role of our Professional Organizations.

After his exceptional address last year, I don't even check Uri Treisman's titles or descriptions anymore.

Equity Strand

Treisman isn't speaking at NCTM but we get Gutierrez and Gutstein in his stead.

Technology + Technoskepticism

I don't know Kevin Lawrence, but it takes some nerve to throw down the gauntlet at graphing calculators, so I'll hear him out. David Masunaga is just endlessly fun, which would be enough, but I'm especially interested in his provocation here. Former blogger Avery Pickford has a background in computer science, so you know his technoskepticism comes from an informed position. Scott Steketee and his co-speaker Daniel Scher both blog for Key Curriculum Press at Sine of the Times, and their recent postings have been outstanding.

Judging Books By Their Covers

These were my favorite titles:

  1. Thursday. 2:00PM. The Mathematics of Casino Management. Micah Stohlmann.
  2. Friday. 11:30AM. Avoid Teaching Rules That Expire! Sarah Bush.
  3. Friday. 2:00PM. The Great Nutella Heist. Bonnie Spence.

Just Kill Me Now #1

Just Kill Me Now #2

What have I missed?

Also: be sure to say hello if we see each other.

And: I can't recommend happy hour enough. It was one of my favorite sessions at Denver last year. Let's make some memories.

PS: I may recap some sessions over at MathRecap. Toss your email address in the little slot if you'd like to receive those via email.

I spent some time recently with the Leadership, Curriculum and Instruction department of Oakland Unified School District and I think they're doing some of the most thoughtful work around. They nurture their talent, celebrate successes, promote good ideas from within, and sustain what seems (to this outsider) to be a very healthy professional community.

Their Instructional Toolkit for Mathematics [pdf] deserves your attention. It describes their defining "strategies and experiences," including:

  • Number talks
  • 3-reads
  • Participation quizzes

I particularly like their "Evidence-Gathering Card," which grounds a lot of abstract ideas (like "A growth mindset matters") in "student vital actions."


SRI's report on Khan Academy usage, released earlier this month, has the potential to make us all a lot wiser. They studied Khan Academy use at nine sites over two years, recording field notes, survey results, usage logs, and achievement measures, all well-specified in an 82-page implementation report and summarized in a shorter briefing. Their report has sharpened some of my concerns about the use of Khan Academy in math classrooms while blunting others.

First, there is irony to be found in SRI's reporting of usage rather than efficacy. The Gates Foundation underwrote the SRI report, and while Gates endorses value-added models of quality for teachers, it doesn't extend the same scrutiny toward its portfolio company here. After reading SRI's report, though, I'm convinced this exploratory study was the right study to run. SRI found enormous variation in Khan Academy use across the nine sites. We gain a great deal of insight through their study of that variation and we'd be much poorer had they chosen to study one model exclusively.

SRI found some results that are favorable to the work of Khan Academy. Other results are unfavorable and other results seem to contradict each other. You can find many of the favorable results summarized at Khan Academy's blog. I intend to summarize, instead, the concerns and questions the SRI report raises.

It isn't clear which students benefit from Khan Academy.

Over the two years of the study, 74% of teachers (63 teachers in SY 2011-12 and 60 teachers in SY 2012-13) said Khan Academy was "very effective" at meeting the learning needs of "students whose academic work is ahead of most students their age." Meanwhile, only 25% of teachers gave Khan Academy the same rating for students who are behind most students their age.

One teacher reports that "the same students who struggled in her classroom before the introduction of Khan Academy also struggled to make progress in Khan Academy." She adds that those students "were less engaged and less productive with their time on Khan Academy [than their peers]."

Participating teachers don't seem to have a great deal of hope that Khan Academy can close an achievement gap directly, though they seem to think it enhances the learning opportunities of advanced learners.

But that hypothesis is contradicted by the surveys from Site 1, a site which SRI states "had some of the highest test scores in the state [of California], even when compared with other advantaged districts." In question after question regarding Khan Academy's impact on student learning, Site 1 teachers issued a lower rating than the other less-advantaged sites in the study. For example, 21% of Site 1 teachers reported that Khan Academy had "no impact" on "students' learning and understanding of the material." None of the teachers from the less-advantaged sites shared that rating.

SRI writes: “Whatever the reason, teachers in sites other than Site 1 clearly found greater value in their use of Khan Academy to support their overall instruction.” SRI is strangely incurious about that reason. Until further revelation there, we should file this report alongside notices of Udacity's struggles in serving the needs of lower-achieving students in their pilot course with San Jose State University in 2013. Their struggles likely relate.

Khan Academy use is negatively associated with math interest.

Let me quickly clarify that a) Khan Academy use was positively associated with anxiety reduction, self-concept, and self-efficacy, b) all of these non-achievement measures are measures of correlation, not causation, and c) the negative association with interest isn't statistically significant.

But I'm calling out this statistically-insignificant, non-causal negative association between Khan Academy and interest in math because that measure matters enormously to me (as someone who has a lot of interest in math) and its direction downward should concern us all. It's very possible to get very good at something while simultaneously wishing to have nothing to do with that thing ever again. We need to protect against that possibility.

Teachers don't use the videos.

While Khan Academy's videos get lots of views outside of formal school environments, "more than half the teachers in SY 2011-12 and nearly three-quarters in SY 2012-13 reported on the survey that they rarely or never used Khan Academy videos to support their instruction."

One teacher explains: "Kids like to get the interaction with me. Sal is great at explaining things, but you can’t stop and ask questions, which is something these kids thrive on."

Khan Academy seems to understand this and has recently tried to shift focus from its videos to its exercises. In a recent interview with EdSurge, Sal Khan explains this shift as a return to roots. "The original platform was a focus on interactive exercises," he says, "and the videos were a complement to that."

Elizabeth Slavitt, Khan Academy's math content lead, shifts focus in a similar direction. "For us, our goal isn’t necessarily that Khan introduces new concepts to students. We want to give practice."

Khan Academy is shifting its goal posts here, but we should all welcome that shift. In his TED talk, in his 60 Minutes interview, and in my own experiences working with their implementation team, Khan Academy's expressed intent was for students to learn new concepts by watching the video lectures first. Only 10% of the teachers in SY 2012-13 said that "Khan Academy played a role in introducing new concepts." Khan Academy seems to have received this signal and has aligned its rhetoric to reflect reality.

The exercises are Khan Academy's core classroom feature, but teachers don't check to see how well students perform them.

73% of teachers in SY 2012-13 said "Khan Academy played its greatest role by providing students with practice opportunities." Over both years of the study, SRI found that 85% of all the time students spent on Khan Academy was spent on exercises.

Given this endorsement of exercises, SRI's strangest finding is that 59% of SY 2012-13 teachers checked Khan Academy reports on those exercises "once a month or less or not at all." If teachers find the exercises valuable but don't check to see how well students are performing them, what's their value? Students have a word for work their teachers assign and don't check. Are Khan Academy's exercises more than busywork?

SRI quotes one teacher who says the exercises are valuable as a self-assessment tool for students. Another teacher cites the immediate feedback students receive from the exercises as the "most important benefit of using Khan Academy." But at Site 2, SRI found "the teachers did not use the Khan Academy reports to monitor progress," electing instead to use their own assessments of student achievement.

SRI's report is remarkably incurious about this difference between the value teachers perceive of a) the exercises and b) the reports on the exercises, leaving me to speculate:

Students are working on individualized material, exercises that aren't above their level of expertise. They find out immediately how well they're doing, so they get stuck less often on those exercises. That makes classroom management less challenging for teachers. That's valuable. But in the same way that teachers prefer their own lectures to Khan's videos, they prefer their own assessments to Khan's reports.

One hypothesis here is that teachers are simply clinging to their tenured positions, refusing to give way to the obvious superiority of computers. My alternative hypothesis is that teachers simply know better, that computers aren't a natural medium for lots of math, that teacher lectures and assessments have lots of advantages over Khan Academy's lectures and assessments. In particular, handwritten student work reveals much about student learning that Khan Academy's structured inputs and colored boxes conceal.

My hypothesis that teachers don't trust Khan Academy's assessment of student mastery is, of course, extremely easy to test. Just ask all the participating teachers something like, "When Khan Academy indicates a student has attained mastery on a given concept, how does your assessment of the student's mastery typically compare?"

Which it turns out SRI already did.


Unfortunately, SRI didn't report those results. At the time of this posting, SRI hasn't responded to my request for comment.

Conclusion

It isn't surprising to me that teachers would prefer their own lectures to Khan Academy's. Their lectures can be more conversational, more timely, and better tailored to their students' specific questions. I'm happy those videos exist for the sake of students who lack access to capable math teachers but that doesn't describe the majority of students in formal school environments.

I'm relieved, then, to read Elizabeth Slavitt's claim that Khan Academy doesn't intend any longer for its video lectures to introduce new concepts to students. Slavitt's statement dials down my anxiety about Khan Academy considerably.

SRI minimizes Khan Academy's maximal claims to a "world-class education," but Khan Academy clearly has a lot of potential as self-paced math practice software. It's troubling that so many teachers don't bother to check that software's results, but Khan Academy is well-resourced and lately they've expanded their pool of collaborators to include more math teachers, along with the Illustrative Mathematics team. Some of the resulting Common Core exercises are quite effective and I expect more fruits from that partnership in the future.

But math practice software is a crowded field and, for totally subjective reasons, not one that interests me all that much. I wish Khan Academy well but going forward I suspect I'll have as much to say about them as I do about Cognitive Tutor, TenMarks, ALEKS, ST Math, and others, which is to say, not all that much.

BTW. Read other smart takes on SRI's report:

tl;dr – "Real world" is tougher to measure than "interest" and less important overall. So rather than asking which of these three different versions of a word problem is more "real world" I asked a couple hundred people which is more interesting. Only the geometry treatment was significantly better than a coin flip at generating questions.

————————–

Here are some closing words about "real world" math, mostly distilled from your comments on the last post. As with previous investigations, I am indebted to the folks who stop by this blog to comment and make me smarter.

Real-World Math Is Hard To Define

What other conclusion can we draw from the dozen-or-so definitions of "real-world math" I found here and on Twitter?

  • It depends on whether we're talking about the world of procedures, concepts, or applications. [Karim Ani]
  • A problem simulating a project / job / task being performed by someone performing their normal job duties, such as the example of the contractor building a pool to meet municipal code. [Ben Rimes]
  • A problem involving objects or tasks that would be considered an experience students are likely to have in the "real world." [Ben Rimes]
  • If it came from a math teacher it’s basically not real-world, unless it was a math teacher doing something in the outside world leading to an interesting problem in mathematics. [Bowen Kerins]
  • An observation / question that would be interesting to humans outside a math class. [Kate Nowak]
  • Something is “real” to a student if it’s concrete, attainable, comprehensible. [Michael Pershan]
  • Something is “real” to a student if it has a non-mathematical purpose. [Michael Pershan]

Real-World Math Doesn't Guarantee Interest


David Taub argued the whole question confused "interest" with "real world." M Ruppel listed other criteria for judging the value of a question, one of which was "kids want to solve it."

Their arguments may seem obvious to you. They aren't obvious to the three people who emailed me here, or to these presenters at the California STEM Symposium, or to Conrad Wolfram, or to the New York Times editorial board.

As Liz said, "Real world is just a means to an end. The goal is interest." We should reject simple explanations of student interest.

So Which Version Is More Interesting?

Asking which of these three versions (candy, geometry, text) of the same math problem is more "real-world" was a pointless task, since basically everyone has a different definition of "real-world" and it doesn't guarantee interest anyway.

So let's ask about interest instead.

I used Mechanical Turk (and Evan Weinberg's invaluable Internet skills) to show a random version of the problem to 99 people. I asked them if they had a question or not.

Of the three treatments, only the geometry treatment was statistically better than a coin flip at generating questions. (Here's the experiment and the data.)
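If "statistically better than a coin flip" sounds abstract, here's a hedged sketch of the kind of test involved — my own illustration, not the study's actual analysis, and the counts are invented (the real numbers live in the linked data). Under the null hypothesis, each respondent has a question with probability 0.5, and a one-sided binomial test asks whether a treatment beats that.

```python
# A sketch with invented counts: (respondents who had a question, respondents shown it).
from scipy.stats import binomtest

treatments = {
    "candy":    (20, 33),  # hypothetical
    "geometry": (25, 33),  # hypothetical
    "text":     (17, 33),  # hypothetical
}

for name, (had_question, shown) in treatments.items():
    # Null hypothesis: a respondent has a question with probability 0.5,
    # i.e. the treatment is no better than a coin flip at generating questions.
    result = binomtest(had_question, shown, p=0.5, alternative="greater")
    print(f"{name}: {had_question}/{shown} had a question, one-sided p = {result.pvalue:.3f}")
```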

Then I showed another 80 people the same three treatments and asked how interested they were in the equal area question as measured on a Likert scale from -3 to 3, including 0. (This measured interest another way. Perhaps a question didn't occur to you impulsively but once you heard it you were interested in it.)

Here again only the geometry treatment had an interest rating that was significantly different from "neutral." (Experiment and data.)
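The second experiment invites the analogous check: is the mean Likert rating different from the neutral midpoint of 0? Here's a similar sketch with invented ratings — a one-sample t-test is one reasonable choice for this kind of -3 to 3 scale, though it isn't necessarily the test that was used.

```python
# A sketch with invented Likert ratings (-3 to 3) for one treatment.
import numpy as np
from scipy.stats import ttest_1samp

geometry_ratings = np.array([1, 2, 0, 1, 3, -1, 2, 1, 0, 2, 1, 1])  # hypothetical

# Null hypothesis: the mean rating equals 0 ("neutral").
t_stat, p_value = ttest_1samp(geometry_ratings, popmean=0)
print(f"mean = {geometry_ratings.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```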

Why the geometry treatment? I don't know. It's more abstract than the candy treatment, which features objects from outside the math classroom. 88% of the people I surveyed in the first experiment answered "within the last year" to the question "When did you last use math to solve a problem in life, work, or school?" That's a math-friendly crowd. It's possible that a class of elementary schoolers would find the candy treatment more interesting and that a coffee klatch of research mathematicians would tend towards the text treatment.

I don't know. I'm just speculating here that real world is a pretty porous category. And for the sake of interesting your students in mathematics, it's more important to know their world.

2014 Mar 26. Fawn Nguyen asked her eighth grade geometry students which version they preferred.
