Tag: khanacademy

What Students Do (And Don’t Do) In Khan Academy

tl;dr — Khan Academy claims alignment with the Common Core State Standards (CCSS), but an analysis of their eighth-grade year indicates that alignment is loose. 40% of Khan Academy’s exercises assessed the acts of calculating and solving, whereas the Smarter Balanced Assessment Consortium’s assessment of the CCSS emphasized those acts in only 25% of its released items. 74% of Khan Academy’s exercises resulted in the production of either a number or a multiple-choice response, whereas those outputs accounted for only 25% of the SBAC assessment.

Introduction

My dissertation will examine the opportunities students have to learn math online. In order to say something about the current state of the art, I decided to complete Khan Academy’s eighth grade year and ask myself two specific questions about every exercise:

  • What am I asked to do? What are my verbs? Am I asked to solve, evaluate, calculate, analyze, or something else?
  • What do I produce? What is the end result of my work? Is my work summarized by a number, a multiple-choice response, a graph that I create, or something else?

I examined Khan Academy for several reasons. First, they’re well-capitalized and they employ some of the best computer engineers in the world. They have the human resources to create some novel opportunities for students to learn math online. If they struggle, it is likely that other companies with equal or lesser human resources struggle also. I also examined Khan Academy because their exercise sets are publicly available online, without a login. This will energize our discussion here and make it easier for you to spot-check my analysis.

My data collection took me three days and spanned 88 practice sets. You’re welcome to examine my data and critique my coding. In general, Khan Academy practice sets ask that you complete a certain number of exercises in a row before you’re allowed to move on. (Five, in most cases.) These exercises are randomly selected from a pool of item types. Different item types ask for different student work. Some item types ask for multiple kinds of student work. All of this is to say, you might conduct this exact same analysis and walk away with slightly different findings. I’ll present only the findings that I suspect will generalize.
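
For what it’s worth, here is a minimal sketch of the kind of tally that turns those codes into the percentages reported below. The “verb” and “production” fields, and the sample data, are my own bookkeeping for illustration, not anything drawn from Khan Academy.

// A minimal sketch of the tally behind the percentages below. The "verb" and
// "production" fields are my own coding scheme, and the data are illustrative.
function tally( exercises, field ) {
    var counts = {};
    exercises.forEach( function( exercise ) {
        var code = exercise[ field ];
        counts[ code ] = ( counts[ code ] || 0 ) + 1;
    });
    var percentages = {};
    Object.keys( counts ).forEach( function( code ) {
        percentages[ code ] = Math.round( 100 * counts[ code ] / exercises.length );
    });
    return percentages;
}

var coded = [
    { verb: "solve", production: "a number" },
    { verb: "analyze", production: "a multiple-choice response" },
    { verb: "construct", production: "a line" },
    { verb: "calculate", production: "a number" }
];

console.log( tally( coded, "verb" ) );        // { solve: 25, analyze: 25, construct: 25, calculate: 25 }
console.log( tally( coded, "production" ) );  // { "a number": 50, "a multiple-choice response": 25, "a line": 25 }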

After completing my analysis of Khan Academy’s exercises, I performed the same analysis on a set of 24 released questions from the Smarter Balanced Assessment Consortium’s test that will be administered this school year in 17 states.

Findings & Discussion

Khan Academy’s Verbs

[Image: 141202_7lo]

The largest casualty is argumentation. Out of the 402 exercises I completed, I could code only three of their prompts as “argue.” (You can find all of them in “Pythagorean Theorem Proofs.”) This is far out of alignment with the Common Core State Standards, which prioritize constructing and critiquing arguments as one of the eight practice standards that cross all of K-12 mathematics.

[Image: 141202_1lo]

Notably, 40% of Khan Academy’s eighth-grade exercises ask students to “calculate” or “solve.” These are important mathematical actions, certainly. But as with “argumentation,” I’ll demonstrate later that this emphasis is out of alignment with current national expectations for student math learning.

The most technologically advanced items were the 20% of Khan Academy’s exercises that asked students to “construct” an object. In these items, students were asked to create lines, tables, scatterplots, polygons, angles, and other mathematical structures using novel digital tools. Subjectively, these items were a welcome reprieve from the frequent calculating and solving, nearly all of which I performed with either my computer’s calculator or with Wolfram Alpha. (Also subjective: my favorite exercise asked me to construct a line.) These items also appeared frequently in the Geometry strand where students were asked to transform polygons.

[Image: 141202_2lo]

I was interested to find that the most common student action in Khan Academy’s eighth-grade year is “analyze.” Several examples follow.

[Image: 141202_5lo]

Khan Academy’s Productions

These questions of analysis are welcome but the end result of analysis can take many forms. If you think about instances in your life when you were asked to analyze, you might recall reports you’ve written or verbal summaries you’ve delivered. In Khan Academy, 92% of the analysis questions ended in a multiple-choice response. These multiple-choice items took different forms. In some cases, you could make only one choice. In others, you could make multiple choices. Regardless, we should ask ourselves if such structured responses are the most appropriate assessment of a student’s power of analysis.

Broadening our focus from the “analysis” items to the entire set of exercises reveals that 74% of the work students do in the eighth grade of Khan Academy results in either a number or a multiple-choice response. No other pair of outcomes comes close.

[Image: 141202_8lo]

Perhaps the biggest loss here is that I constructed an equation exactly three times throughout my eighth-grade year in Khan Academy. Here is one:

[Image: 141202_6lo]

This is troubling. In the sixth grade, students studying the Common Core State Standards make the transition from “Number and Operations” to “Expressions and Equations.” By ninth grade, the CCSS will ask those students to use equations in earnest, particularly in the Algebra, Functions, and Modeling domains. Students need preparation in solving equations, of course, but if they haven’t also spent ample time constructing equations, those advanced domains will be inaccessible.

Smarter Balanced Verbs

The Smarter Balanced released items include comparatively fewer “calculate” and “solve” prompts (they’re the least common verbs, in fact) and comparatively more “construct,” “analyze,” and “argue” prompts.

[Image: 141202_9lo]

This lack of alignment is troubling. If one of Khan Academy’s goals is to prepare students for success in Common Core mathematics, they’re emphasizing the wrong set of skills.

Smarter Balanced Productions

Multiple-choice responses are also common in the Smarter Balanced assessment but the distribution of item types is broader. Students are asked to produce lots of different mathematical outputs including number lines, non-linear function graphs, probability spinners, corrections of student work, and other productions students won’t have seen in their work in Khan Academy.

[Image: 141202_10lo]

SBAC also allows for the production of free-response text while Khan Academy doesn’t. When SBAC asks students to “argue,” in the majority of cases students express their answer simply by writing an argument.

[Image: 141202_11lo]

This is quite unlike Khan Academy’s three “argue” prompts, which produced either a) a multiple-choice response or b) the re-arrangement of statements and reasons in a pre-filled two-column proof.

Limitations & Future Directions & Conclusion

This brief analysis has revealed that Khan Academy students are doing two primary kinds of work (analysis and calculating) and they’re expressing that work in two primary ways (as multiple-choice responses and as numbers). Meanwhile, the SBAC assessment of the CCSS emphasizes a different set of work and asks for more diverse expression of that work.

This is an important finding, if somewhat blunt. A much more comprehensive item analysis would be necessary to determine the nuanced and important differences between two problems that this analysis codes identically. Two separate “solving” problems that result in “a number,” for example, might be of very different value to a student depending on the equations being solved and whether or not a context was involved. This analysis is blind to those differences.

We should wonder why Khan Academy emphasizes this particular work. I have no inside knowledge of Khan Academy’s operations or vision. It’s possible this kind of work is a perfect realization of their vision for math education. Perhaps they are doing exactly what they set out to do.

I find it more likely that Khan Academy’s exercise set draws an accurate map of the strengths and weaknesses of education technology in 2014. Khan Academy asks students to solve and calculate so frequently, not because those are the mathematical actions mathematicians and math teachers value most, but because those problems are easy to assign with a computer in 2014. Khan Academy asks students to submit their work as a number or a multiple-choice response, not because those are the mathematical outputs mathematicians and math teachers value most, but because numbers and multiple-choice responses are easy for computers to grade in 2014.

This makes the limitations of Khan Academy’s exercises understandable but not excusable. Khan Academy is falling short of the goal of preparing students for success on assessments of the CCSS, but that’s setting the bar low. There are arguably other, more important goals than success on a standardized test. We’d like students to enjoy math class, to become flexible thinkers and capable future workers, to develop healthy conceptions of themselves as learners, and to look ahead to their next year of math class with something other than dread. Will instruction composed principally of selecting from multiple-choice responses and filling numbers into blanks achieve those goals? If your answer is no, as mine is, and if that narrative sounds exceedingly grim to you also, then it is up to you and me to pose a compelling counter-narrative for online math education, and then to re-pose it over and over again.

What We Can Learn About Learning From Khan Academy’s Source Code

[Image: 121212_5]

It’s great, first of all, that Khan Academy has all their student exercise code on GitHub for everybody to see. I don’t know of any other adaptive system that does that. I figured there had to be a better way to reward them for that transparency than the criticism and judgment I’m about to post here, so I made them a badge also.

[Image: 121212_4]

Their code illustrates the different ways good math teachers and good programmers try to figure out what students know.

Take proportions, for instance. Here is the code that runs beneath Khan Academy’s proportions assessment.

In each of the dozen files I’ve reviewed, Khan Academy first generates some random numbers that meet certain criteria. In the proportions assessment, they call for three random unique integers between 5 and 12. No decimals. No negatives. No zeroes.

var numbers = randRangeUnique( 5, 12, 3 );

Then they use those numbers to generate exercises. With proportions, they insert an “x” randomly into that list of numbers. The final order of that list determines the proportional relationship that students will have to solve.

numbers.splice( randRange( 0, 3 ), 0, "x" );
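
Putting those two lines together, here is roughly how an item comes out. randRangeUnique and randRange are utilities from the khan-exercises framework; the stand-ins below, and the way I read the finished list off as a proportion, are my reconstruction for illustration rather than Khan Academy’s actual template code.

// Stand-ins for the khan-exercises utilities so this sketch runs on its own.
// (Assumption: these approximate the originals; both treat max as inclusive.)
function randRange( min, max ) {
    return min + Math.floor( Math.random() * ( max - min + 1 ) );
}
function randRangeUnique( min, max, count ) {
    var pool = [], picked = [], n;
    for ( n = min; n <= max; n++ ) {
        pool.push( n );
    }
    while ( picked.length < count ) {
        picked.push( pool.splice( randRange( 0, pool.length - 1 ), 1 )[ 0 ] );
    }
    return picked;
}

// The two lines from the exercise source:
var numbers = randRangeUnique( 5, 12, 3 );      // e.g. [ 7, 11, 5 ]
numbers.splice( randRange( 0, 3 ), 0, "x" );    // e.g. [ 7, "x", 11, 5 ]

// Read the four entries off as two equal ratios:
//   numbers[0] / numbers[1] = numbers[2] / numbers[3]
// so the student sees something like 7/x = 11/5 and solves for x.
console.log( numbers[ 0 ] + "/" + numbers[ 1 ] + " = " + numbers[ 2 ] + "/" + numbers[ 3 ] );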

But good teachers are more than random number generators. They create exercise sets that increase in difficulty and that ask students to demonstrate mastery in different contexts, all because proportions are conceptually difficult but procedurally simple. It’s extremely easy for students to get by on an instrumental understanding of proportions alone. (e.g., “All you hafta do is multiply the two numbers that are across from each other and divide by the number across from the x.” Boom. It’s badge time.) It’s especially easy when the only thing that changes about the problem is the random numbers.
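
To make the contrast concrete, here is a hypothetical sketch (nothing like it appears in Khan Academy’s repository) of a proportions generator that escalates difficulty and varies the context instead of redrawing the same kind of small integers over and over.

// Hypothetical only: a proportions generator that steps up difficulty and
// varies the context, rather than reshuffling the same small integers.
function randInt( min, max ) {
    return min + Math.floor( Math.random() * ( max - min + 1 ) );
}

function proportionProblem( level ) {
    if ( level === 1 ) {
        // Bare whole numbers, like the current exercise.
        return "Solve for x: " + randInt( 5, 12 ) + "/x = " + randInt( 5, 12 ) + "/" + randInt( 5, 12 );
    }
    if ( level === 2 ) {
        // Decimals, so the "multiply across and divide" shortcut takes more care.
        return "Solve for x: " + ( randInt( 51, 95 ) / 10 ) + "/x = " + randInt( 5, 12 ) + "/" + ( randInt( 51, 95 ) / 10 );
    }
    // A context the student has to translate into a proportion herself.
    return randInt( 2, 5 ) + " batches of a recipe use " + randInt( 3, 9 ) +
           " cups of flour. How many cups of flour do " + randInt( 6, 20 ) + " batches use?";
}

console.log( proportionProblem( 1 ) );
console.log( proportionProblem( 3 ) );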

But forget good teachers for a minute. Let’s look at the bar set by various standard-setting organizations. Here is what you have to do to demonstrate mastery of proportions on a) Khan Academy, b) the California Standards Test, c) the Smarter Balanced Assessment.

Khan Academy

You’ll do a handful of problems just like this, with different random numbers in different places.

[Image: 121212_2]

California Standards Test

[Image: 121212_1]

Smarter Balanced Assessment

[Image: 121212_3]

The difficulty and value of the assessments clearly increase from Khan Academy to the CST and then to Smarter Balanced. (I’m hesitant to guess how well a student’s score on the Smarter Balanced Assessment will correlate with all her practice on Khan Academy.)

Here we find a difference between good math teachers and good programmers. The good math teacher can create good math assessments. The good programmer can make things scale globally across the Internet. The two of them seem like a match made in math heaven. Just get them in a room together, right? But the very technology that lets Khan Academy assess hundreds of concepts at global scale – random number generators, string splices, and algorithmically generated hints – has downgraded, perhaps unavoidably, what it means to know math.

2012 Dec 13. Peter Collingridge points out in the comments that Khan Academy has a proportions assessment comparable to the California Standards Test. If they have anything similar to the Smarter Balanced Assessment, please let me know.

I’ll Be On Al Jazeera’s The Stream With Sal Khan Tomorrow

I’ll be on Al Jazeera’s The Stream with Sal Khan tomorrow 10/2 at 3:30PM EDT as part of a segment on Khan Academy. You can watch live from their website if that’s what you’re into. I’ll update this post with the segment afterwards if that’s possible.

2012 Oct 3. Here’s a link to the entire broadcast. They give me two questions – one about the best use for those lecture videos in the classroom and the other comparing the Khan Academy model to math instruction in high-performing countries.

At first, Khan poses his lectures as a “first pass” or a “first scaffold” at new material. This is less effective and less engaging than a lecture posed in response to a precursor activity that sets students up to need that lecture and understand its context.

I pressed that angle in my second question and Khan then took a fairly agnostic approach to the instructional sequence. Basically, “do whatever works.”

Personalization is the point, and Khan Academy has certainly figured out how to personalize lecture delivery. But personalizing the precursor activity that sets students up to need those lectures is much, much harder. I didn’t get the sense from our exchange that that kind of personalization is anywhere on Khan Academy’s horizon.

#MTT2K Contest Winners Announced

The judges’ prize goes to Michael Pershan’s What if Khan Academy was Made in Japan?, followed by Kate Nowak’s critique of Khan Academy’s lecture on the coordinate plane, and then to Susan Jones’ faithful homage to MST3K’s talking robots. Dr. Tae’s sharp critique of Khan Academy’s enthusiasm for gamification won the People’s Choice Award.

Each one is worth your while but special merits, again, to Pershan’s video which is optimistic, constructive, and exhaustively researched. He edits himself extremely well throughout the video, maintaining this unflagging narration that’s almost Ze Frankian. 13 minutes pass by in an instant. You should watch it, then subscribe to his blog, then follow him on Twitter, then visit him at his home.

Co-sponsor Justin Reich has his announcement over at Ed Tech Researcher. I echo his summary judgment:

Of course, the real winners of the competition are everyone who looked critically at Khan Academy (and looked critically at its critics) and developed a more nuanced view. If after reading some of the conversation generated about Khan Academy this summer, you have a stronger position that Khan Academy is [completely awesome/situationally useful/seriously problematic] then I’m pleased to have played a tiny role in nudging the conversation.