Why Learning Analytics Aren’t Like Netflix Recommendations

Bill Jerome, in an excellent post aimed at people who perceive an obvious connection between learning analytics and Netflix recommendations:

The more a user stays engaged with [Netflix and Amazon], the more profit they generate. The comparisons to those kinds of analytics pretty much end there. Unfortunately for those looking for the easy path, our outcomes are complex and the inputs aren’t actually that obvious either.

Then later:

Now what happens if we tell a student they aren’t achieving learning outcomes when in fact we are wrong about that? The potential for demotivating the student comes at a high cost. This could happen with errors in reporting the other way, as well. If learning analytics inform a student they are succeeding but in fact they are not prepared for their next exam or job, the disservice is just as bad. Getting learning analytics wrong on the learning dimension is a recipe for disaster and must be done carefully and with understanding.

As far as I’m concerned, between this post and Michael Feldstein’s earlier “A Taxonomy of Adaptive Analytic Strategies,” the e-Literate blog has cornered the market on nuance and insight in the learning analytics discussion.

BTW. Probably related: What We Can Learn About Learning From Khan Academy’s Source Code.

I'm Dan and this is my blog. I'm a former high school math teacher and current head of teaching at Desmos. He / him. More here.


  1. Just thinking about how this connects to what is happening in classrooms with teachers. We have a rather large percentage of students who go to college unprepared for the courses they are about to take. (I know I need some data here; I don’t have time to look it up right now.) Yet they leave high school with a diploma and the belief that they are ready for college. I’m not saying a computer is going to do a better job than a teacher, but I have seen students coast through high school classes without really learning what they need to know to be successful later. I guess the real question for me is not how we can measure skills, which is generally the focus of teachers as well as computers, but how we can effectively measure critical thinking and problem solving. That is a much more abstract idea than simply saying a student can factor a quadratic.

  2. Well, I mostly agree with Todd. Maybe if our teaching challenged students to know what they know, as opposed to relying on us (or a Khan Academy drill), then we might start getting somewhere. If my sole charge is to get students ready for some other exam or course, let’s go out back and shoot me right now.

  3. We are working with some cognitive scientists (Phil Kellman and Christine Massey) who have spent the last 20 years researching how to systematize both perceptual learning and the learning of facts in different domains. They make extensive use of learning analytics in their systems.

    I am not an expert in their field, but it seems to me that they have created some unique algorithms to aid both perception and memory, each of which can be helpful in learning math.

    I’d love to discuss this further if anyone wants to contact me through my blog: http://academicbiz.typepad.com.