Category: tech contrarianism

Moving the Goalposts on Personalized Learning

Mike Caulfield:

But the biggest advantage of a tutor is not that they personalize the task, it’s that they personalize the explanation. They look into the eyes of the other person and try to understand what material the student has locked in their head that could be leveraged into new understandings. When they see a spark of insight, they head further down that path. When they don’t, they try new routes.

EdSurge misreads Mike pretty drastically, I think:

What if technology can offer explanations based on a student’s experience or interest, such as indie rock music?

Mike is summarizing what great face-to-face tutors do. They figure out what the student already knows, then throw hooks into that knowledge using metaphors and analogies and questions. That’s a personalized tutor.

But in 2016 computers are completely inept at that kind of personalization. Worse than your average high school junior tutoring on the side for gas money. Way worse than your average high school teacher. I don’t think this is a controversial observation. In a follow-up post, Michael Feldstein writes, “For now and the foreseeable future, no robot tutor in the sky is going to be able to take Mike’s place in those conversations.”

So it’s interesting to see how quickly EdSurge pivots to a different definition of personalization, one that’s much more accommodating of the limits of computers. EdSurge’s version of personalization asks the student to choose her favorite noun (e.g., “indie rock music”) and watch as the computer incorporates that noun into the same explanation every other student receives. Find and replace. In 2016 computers are great at find and replace.
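
For the record, here’s about all the code that kind of “personalization” requires. This is my own minimal sketch, not anyone’s actual system, and the template is invented:

```python
# A minimal sketch of find-and-replace "personalization": every student
# receives the identical canned explanation with their favorite noun
# swapped in. The template is invented for illustration.
TEMPLATE = ("Imagine splitting {interest} albums evenly among 4 friends. "
            "Dividing the total by 4 is the same move as solving 4x = 24.")

def personalize(interest: str) -> str:
    """Substitute the student's chosen interest into a fixed explanation."""
    return TEMPLATE.format(interest=interest)

print(personalize("indie rock"))  # same explanation
print(personalize("dinosaurs"))   # same explanation, different noun
```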

This is just a PSA to say: technofriendlies, I see you moving the goalposts! At the very least, let’s keep them at “high school junior-level tutor.”

BTW. I don’t think find-and-replacing “indie rock music” will improve what a student knows, but maybe it will affect her interest in knowing it. I’ve hassled edtech over that premise before. In my head, I always call that find-and-replacing approach the “poochification” of education, but I never know if that reference will land for anybody who isn’t inside my head.

The Future Of Handwriting Recognition & Adaptive Feedback In Math Education

In math education, the fields of handwriting recognition and adaptive feedback are stuck. Maybe they’re stuck because the technological problems they’re trying to solve are really, really hard. Or maybe they’re stuck because they need some crank with a blog to offer a positive vision for their future.

I can’t help with the technology. I can offer my favorite version of that future, though. Here is a picture of the present and the future of handwriting recognition and adaptive feedback, along with some explanation.

In the future, the computer will recognize my handwriting.

[Image: the computer failing to recognize my handwritten “24”]

Here I am trying hopelessly to get the computer to understand that I’m trying to write 24. This is low-hanging fruit. No one needs me to tell them that a system that recognizes my handwriting more often is better than a system that doesn’t.

But I don’t worry about a piece of paper recognizing my handwriting. If I’m worried about the computer recognizing my handwriting, that worry goes in the cost column.

In the future, I won’t have to learn to speak computer while I’m learning to speak math.

[Image: correctly recognized digits arranged in a layout the computer can’t parse]

In this instance, I’m learning to express myself mathematically – hard enough for a novice! – but I also have to learn to express myself in ways that the computer will understand. Even when the computer recognizes my numbers and letters, it doesn’t recognize the way I have arranged them.

Any middle school math teacher would recognize my syntax here. I’ll wager most would sob gratefully for my aligned operations. (Or that I bothered to show operations at all.) If the computer is confused by that syntax, that confusion goes in the cost column.

In the future, I’ll have the space to finish a complete mathematical thought.

[Image: running out of room on the tablet mid-equation]

Here I am trying to finish a mathematical thought. I’m successful, but only barely. That same mathematical thought requires only a fraction of the space on a piece of paper that it requires on a tablet, where I always feel like I’m trying to write with a bratwurst. That difference in space goes in the cost column.

That’s a lot in the cost column, but lots of people eagerly accept those costs in other fields. Computer programmers, for example, eagerly learn to speak unnatural languages in unusual writing environments. They do that because the costs are dwarfed by the benefits.

What is the benefit here?

Proponents of these handwriting recognition systems often claim their benefit is feedback – the two-sigma improvement of a one-on-one human tutor at a fraction of the cost. But let’s look at the feedback they offer us and, just as we did for handwriting recognition, write a to-do list for the future.

In the future, I’ll have the time to finish a complete mathematical thought.

If you watch the video, you’ll notice the computer interrupts my thought process incessantly. If I pause to consider the expression I’m writing for more than a couple of seconds, the computer tries to convert it into mathematical notation. If it misconverts my handwriting, my mathematical train of thought derails and I’m thinking about notation instead.

Then I have to check every mathematical thought before I can write the next one. The computer tells me if that step is mathematically correct or not.

It offers too much feedback too quickly. A competent human tutor doesn’t do this. That tutor will interject if the student is catastrophically stuck or if the student is moving quickly on a long path in the wrong direction. Otherwise, the tutor will let the student work. Even if the student has made an error. That’s because a) the tutor gains more insight into the nature of the error as it propagates through the problem, and b) the student may realize the error on her own, which is great for her sense of agency and metacognition.

No one ever got fired in edtech for promising immediate feedback, but in the future we’ll promise timely feedback instead.
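
If I had to sketch what “timely” might mean in code, it would look something like the policy below. The signals and thresholds are entirely invented for illustration; a real tutor reads far richer cues:

```python
from dataclasses import dataclass

@dataclass
class WorkState:
    seconds_since_last_stroke: float  # how long the student has been stalled
    consecutive_wrong_steps: int      # errors compounding down one path

def should_interject(state: WorkState,
                     stuck_threshold: float = 120.0,
                     error_run_threshold: int = 3) -> bool:
    """Interject only when the student is catastrophically stuck or is
    moving quickly down a wrong path; otherwise stay quiet and let the
    error propagate. Thresholds here are illustrative guesses."""
    if state.seconds_since_last_stroke > stuck_threshold:
        return True  # catastrophically stuck
    if state.consecutive_wrong_steps >= error_run_threshold:
        return True  # a long run in the wrong direction
    return False     # let the student work, even through an error
```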

In the future, computers will give me useful feedback on my work.

I have made a very common error in my application of the distributive property here.

[Image: my solution, including a common distributive property error]

A competent human tutor would correct the error after the student finished her work, let her revise that work, and then help her learn the more efficient method of dividing by four first.

But the computer was never programmed to anticipate that anyone would use the distributive property, so its feedback only confuses me. It tells me, “Start over and go down an entirely different route.”

The computer’s feedback logic is brittle and inflexible, which teaches me the untruth that math is brittle and inflexible.
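
One plausible route past that brittleness: instead of matching my work against a single scripted solution path, check whether each equation I write preserves the solution set of the one before it. A sketch using sympy, with an invented equation standing in for the one in my screenshot:

```python
from sympy import Eq, solveset, symbols, sympify

x = symbols("x")

def same_solutions(step_a: str, step_b: str) -> bool:
    """Two equation strings are interchangeable steps if they have the
    same solution set, whatever route produced the rewrite."""
    def solutions(step: str):
        lhs, rhs = step.split("=")
        return solveset(Eq(sympify(lhs), sympify(rhs)), x)
    return solutions(step_a) == solutions(step_b)

# Dividing by four first and distributing first both check out;
# the classic distribution error does not.
print(same_solutions("4*(x + 2) = 24", "x + 2 = 6"))     # True: divide by 4
print(same_solutions("4*(x + 2) = 24", "4*x + 8 = 24"))  # True: distribute
print(same_solutions("4*(x + 2) = 24", "4*x + 2 = 24"))  # False: the error
```

A checker like this wouldn’t care which route I chose. It would only care whether my step was mathematically sound.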

In the future, computers will do all of this for math that matters.

I’ve tried to demonstrate that we’re a long way from the computer tutors our students need, even when they’re solving equations, a highly structured skill that should be very friendly to computer tutoring. Some of the most interesting problems in K-12 mathematics are far less structured. Computers will need to help our students there also, just as their human tutors already do.

We want to believe our handwriting recognition and adaptive feedback systems result in something close to a competent human tutor. But competent tutors place little extraneous burden on a student’s mathematical thinking. They’re patient, insightful, and their help is timely. Next to a competent human tutor, our current computer tutors seem stuttering, imposing, and a little confused. But that’s the present, and the future is bright.

Need A Job?

I work for Desmos, where we’re solving some of the biggest problems in math edtech. Teachers and students love us and we’re hiring. Come work with us!

Tracy Zager Offers You And Your Fact Fluency Game Some Advice

Thoughtful elementary math educator Tracy Zager offers app developers some best practices for their fact fluency games:

I’ve been looking around since, and the big money math fact app world is enough to send me into despair. It’s almost all awful. As I looked at them, I noticed I use three baseline criteria, and I’m unwilling to compromise on any of them.

She later awards special merits to DreamBox Learning and Bunny Times.

What Students Do (And Don’t Do) In Khan Academy, Ctd.

My analysis of Khan Academy’s eighth-grade curriculum was viewed ~20,000 times over the last ten days. Several math practice web sites have asked me to perform a similar analysis on their own products. All of this gives me hope that my doctoral work may be interesting to people outside my small crowd at Stanford.

Two follow-up notes, including the simplest way Khan Academy can improve itself:

One. Several Khan Academy employees have commented on the analysis, both here and at Hacker News.

Justin Helps, a content specialist, confirmed one of my hypotheses about Khan Academy:

One contributor to the prevalence of numerical and multiple choice responses on KA is that those were the tools readily available to us when we began writing content. Our set of tools continues to grow, but it takes time for our relatively small content team to rewrite item sets to utilize those new tools.

But as another commenter pointed out, if the Smarter Balanced Assessment Consortium can make interesting computerized items, what’s stopping Khan Academy? Which team is the bottleneck: the software developers or the content specialists? (They’re hiring!)

Two. In my mind, Khan Academy could do one simple thing to improve itself several times over:

Ask questions that computers don’t grade.

A computer graded my responses to every single question in eighth grade.

That means I was never asked, “Why?” or “How do you know?” Those are seriously important questions, but computers can’t grade them and Khan Academy didn’t ask them.

At one point, I was even asked how m and b (of y = mx + b fame) affected the slope and y-intercept of a graph. It’s a fine question, but there was no place for an answer because how would the computer know if I was right?

So if a Khan Academy student is linked to a coach, make a space for an answer. Send the student’s answer to the coach. Let the coach grade or ignore it. Don’t try to do any fancy natural language processing. Just send the response along. Let the human offer feedback where computers can’t. In fact, allow all the proficiency ratings to be overridden by human coaches.
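
To be clear about how little machinery I’m asking for, here’s a sketch. Every name in it is hypothetical; I have no knowledge of Khan Academy’s internals:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FreeResponse:
    student: str
    prompt: str                         # e.g. "How do you know?"
    answer: str
    coach_rating: Optional[str] = None  # graded only if a human chooses to

@dataclass
class CoachInbox:
    """No natural language processing, no autograding: the student's
    words are simply forwarded to a human coach."""
    pending: list = field(default_factory=list)

    def submit(self, response: FreeResponse) -> None:
        self.pending.append(response)   # just send the response along

    def grade(self, response: FreeResponse, rating: str) -> None:
        response.coach_rating = rating  # a human rating, able to override
                                        # any machine proficiency score
```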

Khan Academy does loads of A/B testing, right? So A/B test this. See if teachers appreciate the clearer picture of what their students know or if they prefer the easier computerized assessment. I can see it going either way, though my own preference is clear.
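
One standard way to run that test is deterministic hash bucketing, so the same teacher always lands in the same condition. A sketch, with invented arm names:

```python
import hashlib

def assign_arm(teacher_id: str) -> str:
    """Hash the teacher's id into one of two stable experiment arms."""
    bucket = int(hashlib.sha256(teacher_id.encode()).hexdigest(), 16) % 2
    return "free-response-to-coach" if bucket == 0 else "computer-graded-only"

print(assign_arm("teacher-4821"))  # same id, same arm, every time
```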

What Students Do (And Don’t Do) In Khan Academy

tl;dr — Khan Academy claims alignment with the Common Core State Standards (CCSS) but an analysis of their eighth-grade year indicates that alignment is loose. 40% of Khan Academy exercises assessed the acts of calculating and solving, whereas the Smarter Balanced Assessment Consortium’s assessment of the CCSS emphasized those acts in only 25% of its released items. 74% of Khan Academy’s exercises resulted in the production of either a number or a multiple-choice response, whereas those outputs accounted for only 25% of the SBAC assessment.

Introduction

My dissertation will examine the opportunities students have to learn math online. In order to say something about the current state of the art, I decided to complete Khan Academy’s eighth-grade year and ask myself two specific questions about every exercise:

  • What am I asked to do? What are my verbs? Am I asked to solve, evaluate, calculate, analyze, or something else?
  • What do I produce? What is the end result of my work? Is my work summarized by a number, a multiple-choice response, a graph that I create, or something else?
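
For concreteness, here’s a minimal sketch of how codes like these might be recorded and tallied. The rows are invented examples, not my actual data:

```python
from collections import Counter

# Each exercise gets a verb code (what I'm asked to do) and a production
# code (what my work results in). These rows are invented examples.
coded = [
    {"exercise": "Two-step equations",        "verb": "solve",     "production": "number"},
    {"exercise": "Interpreting scatterplots", "verb": "analyze",   "production": "multiple choice"},
    {"exercise": "Graphing lines",            "verb": "construct", "production": "graph"},
]

verbs = Counter(row["verb"] for row in coded)
productions = Counter(row["production"] for row in coded)

for code, count in verbs.most_common():
    print(f"{code}: {100 * count / len(coded):.0f}%")
```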

I examined Khan Academy for several reasons. First, because they’re well-capitalized and they employ some of the best computer engineers in the world. They have the human resources to create some novel opportunities for students to learn math online. If they struggle, it is likely that other companies with equal or lesser human resources struggle also. I also examined Khan Academy because their exercise sets are publicly available online, without a login. This will energize our discussion here and make it easier for you to spot-check my analysis.

My data collection took me three days and spanned 88 practice sets. You’re welcome to examine my data and critique my coding. In general, Khan Academy practice sets ask that you complete a certain number of exercises in a row before you’re allowed to move on. (Five, in most cases.) These exercises are randomly selected from a pool of item types. Different item types ask for different student work. Some item types ask for multiple kinds of student work. All of this is to say, you might conduct this exact same analysis and walk away with slightly different findings. I’ll present only the findings that I suspect will generalize.

After completing my analysis of Khan Academy’s exercises, I performed the same analysis on a set of 24 released questions from the Smarter Balanced Assessment Consortium’s test that will be administered this school year in 17 states.

Findings & Discussion

Khan Academy’s Verbs

[Chart: Khan Academy’s verbs]

The largest casualty is argumentation. Out of the 402 exercises I completed, I could code only three of their prompts as “argue.” (You can find all of them in “Pythagorean Theorem Proofs.”) This is far out of alignment with the Common Core State Standards, which have prioritized constructing and critiquing arguments as one of their eight practice standards that cross all of K-12 mathematics.

Notably, 40% of Khan Academy’s eighth-grade exercises ask students to “calculate” or “solve.” These are important mathematical actions, certainly. But as with “argumentation,” I’ll demonstrate later that this emphasis is out of alignment with current national expectations for student math learning.

The most technologically advanced items were the 20% of Khan Academy’s exercises that asked students to “construct” an object. In these items, students were asked to create lines, tables, scatterplots, polygons, angles, and other mathematical structures using novel digital tools. Subjectively, these items were a welcome reprieve from the frequent calculating and solving, nearly all of which I performed with either my computer’s calculator or with Wolfram Alpha. (Also subjective: my favorite exercise asked me to construct a line.) These items also appeared frequently in the Geometry strand where students were asked to transform polygons.

I was interested to find that the most common student action in Khan Academy’s eighth-grade year is “analyze.” Several examples follow.

[Image: sample “analyze” exercises]

Khan Academy’s Productions

These questions of analysis are welcome but the end result of analysis can take many forms. If you think about instances in your life when you were asked to analyze, you might recall reports you’ve written or verbal summaries you’ve delivered. In Khan Academy, 92% of the analysis questions ended in a multiple-choice response. These multiple-choice items took different forms. In some cases, you could make only one choice. In others, you could make multiple choices. Regardless, we should ask ourselves if such structured responses are the most appropriate assessment of a student’s power of analysis.

Broadening our focus from the “analysis” items to the entire set of exercises reveals that 74% of the work students do in the eighth grade of Khan Academy results in either a number or a multiple-choice response. No other pair of outcomes comes close.

[Chart: Khan Academy’s productions]

Perhaps the biggest loss here is the fact that I constructed an equation exactly three times throughout my eighth-grade year in Khan Academy. Here is one:

[Image: an exercise asking me to construct an equation]

This is troubling. In the sixth grade, students studying the Common Core State Standards make the transition from “Number and Operations” to “Expressions and Equations.” By ninth grade, the CCSS will ask those students to use equations in earnest, particularly in the Algebra, Functions, and Modeling domains. Students need preparation solving equations, of course, but if they haven’t spent ample time constructing equations also, those advanced domains will be inaccessible.

Smarter Balanced Verbs

The Smarter Balanced released items ask students to “calculate” and “solve” comparatively less often (those are the least common verbs, in fact) and to “construct,” “analyze,” and “argue” comparatively more often.

[Chart: Smarter Balanced verbs]

This lack of alignment is troubling. If one of Khan Academy’s goals is to prepare students for success in Common Core mathematics, they’re emphasizing the wrong set of skills.

Smarter Balanced Productions

Multiple-choice responses are also common in the Smarter Balanced assessment but the distribution of item types is broader. Students are asked to produce lots of different mathematical outputs including number lines, non-linear function graphs, probability spinners, corrections of student work, and other productions students won’t have seen in their work in Khan Academy.

[Chart: Smarter Balanced productions]

SBAC also allows for the production of free-response text, while Khan Academy doesn’t. When SBAC asks students to “argue,” in a majority of cases, students express their answer by just writing an argument.

[Image: a Smarter Balanced “argue” item with a free-response box]

This is quite unlike Khan Academy’s three “argue” prompts, which produced either a) a multiple-choice response or b) the re-arrangement of the statements and reasons in a pre-filled two-column proof.

Limitations & Future Directions & Conclusion

This brief analysis has revealed that Khan Academy students are doing two primary kinds of work (analyzing and calculating) and expressing that work in two primary ways (as multiple-choice responses and as numbers). Meanwhile, the SBAC assessment of the CCSS emphasizes a different set of work and asks for more diverse expression of that work.

This is an important finding, if somewhat blunt. A much more comprehensive item analysis would be necessary to determine the nuanced and important differences between two problems that this analysis codes identically. Two separate “solving” problems that result in “a number,” for example, might be of very different value to a student depending on the equations being solved and whether or not a context was involved. This analysis is blind to those differences.

We should wonder why Khan Academy emphasizes this particular work. I have no inside knowledge of Khan Academy’s operations or vision. It’s possible this kind of work is a perfect realization of their vision for math education. Perhaps they are doing exactly what they set out to do.

I find it more likely that Khan Academy’s exercise set draws an accurate map of the strengths and weaknesses of education technology in 2014. Khan Academy asks students to solve and calculate so frequently, not because those are the mathematical actions mathematicians and math teachers value most, but because those problems are easy to assign with a computer in 2014. Khan Academy asks students to submit their work as a number or a multiple-choice response, not because those are the mathematical outputs mathematicians and math teachers value most, but because numbers and multiple-choice responses are easy for computers to grade in 2014.

This makes the limitations of Khan Academy’s exercises understandable but not excusable. Khan Academy is falling short of the goal of preparing students for success on assessments of the CCSS, but that’s setting the bar low. There are arguably other, more important goals than success on a standardized test. We’d like students to enjoy math class, to become flexible thinkers and capable future workers, to develop healthy conceptions of themselves as learners, and to look ahead to their next year of math class with something other than dread. Will instruction composed principally of selecting multiple-choice responses and filling numbers into blanks achieve those goals? If your answer is no, as mine is, and if that narrative sounds exceedingly grim to you too, then it is up to you and me to pose a compelling counter-narrative for online math education, and then re-pose it over and over again.