Let me highlight another conversation from the comments, this time between Kevin Hall, Don Byrd, and myself, on the merits of direct instruction, worked examples, inquiry learning, and some blend of the three.

Some biography: Kevin Hall is a teacher as well as a student of cognitive psychology research. His questions and criticisms around here tend to tug me in a useful direction, away from the motivational factors that usually obsess me and closer towards cognitive concerns. The fact that both he and Don Byrd have some experience in the classroom keeps them from the worst excesses of cognitive science, which is to see cognition as completely divorced from motivation and the classroom as different only by degrees from a research laboratory.

Kevin Hall:

While people tend to debate which is better, inquiry learning or direct instruction, the research says sometimes it’s one and sometimes the other. A recent meta-study found that inquiry is on average better, but only when “enhanced” to provide students with assistance [1]. Worked examples can actually be one such form of assistance (e.g., showing examples and prompting students for explanations of why each step was taken).

One difficulty with just discussing this topic is that people tend to disagree about what constitutes inquiry-based learning. I heard David Klahr, a main researcher in this field, speak at a conference once, and he said lots of people considered his “direct instruction” conditions to be inquiry. He wished he had just labelled his conditions as Conditions 1, 2, and 3, because it would have avoided lots of controversy.

Here’s where Cognitive Load Theory comes in: effectiveness with inquiry (minimal guidance) depends on the net impact of at least 3 competing factors: (a) motivation, (b) the generation effect, and (c) working memory limitations. Regarding (a), Dan often makes the good point that if teachers use worked examples in a boring way, learning will be poor even if students’ cognitive needs are being met very well.

The generation effect says that you remember better the facts, names, rules, etc., that you are asked to come up with on your own. It can be very difficult to control for this effect in a study, mainly because it’s always possible that if you let students come up with their own explanations in one group while providing explanations to a control group, the groups will be exposed to different explanations, and then you’re testing the quality of the explanations and not the generation effect itself. However, a pretty brilliant (in my opinion) study controlled for this and verified the effect [2]. We need more studies to confirm. Here is a really pertinent paragraph from the second page of the paper: “Because examples are often addressed in Cognitive Load Theory (Paas, Renkl, & Sweller, 2003), it is worth a moment to discuss the theory’s predictions. The theory defines three types of cognitive load: intrinsic cognitive load is due to the content itself; extraneous cognitive load is due to the instruction and harms learning; germane cognitive load is due to the instruction and helps learning. Renkl and Atkinson (2003) note that self-explaining increases measurable cognitive load and also increases learning, so it must be a source of germane cognitive load. This is consistent with both of our hypotheses. The Coverage hypothesis suggests that the students are attending to more content, and this extra content increases both load and learning. The Generation hypothesis suggests that load and learning are higher when generating content than when comprehending it. In short, Cognitive Load Theory is consistent with both hypotheses and does not help us discriminate between them.”

Factor (c) is working memory load. The main idea is found in this quote from the Sweller paper Dan linked to above, Why Minimal Guidance During Instruction Does Not Work [3]: “Inquiry-based instruction requires the learner to search a problem space for problem-relevant information. All problem-based searching makes heavy demands on working memory. Furthermore, that working memory load does not contribute to the accumulation of knowledge in long-term memory because while working memory is being used to search for problem solutions, it is not available and cannot be used to learn.” The key here is that when your working memory is being used to figure something out, it’s not actually being used to learn it. Even after figuring it out, the student may not be quite sure what they figured out and may not be able to repeat it.

Does this mean asking students to figure stuff out for themselves is a bad idea? No. But it does mean you have to pay attention to working memory limitations by giving students lots of drill practice applying a concept right after they discover it. If you don’t give the drill practice after inquiry, students do worse than if you just provided direct instruction. If you do provide the drill practice, they do better than with direct instruction. This is not a firmly-established result in the literature, but it’s what the data seems to show right now. I’ve linked below to a classroom study [4] and a really rigorously-controlled lab study [5] showing this. They’re both pretty fascinating reads… though the “methods” section of [5] can be a little tedious, the first and last parts are pretty cool. The title of [5] sums it up: “Practice Enables Successful Learning Under Minimal Guidance.” The draft version of that paper was actually subtitled “Drill and kill makes discovery learning a success”!

As I mentioned in the other thread Dan linked to, worked examples have been shown in year-long classroom studies to speed up student learning dramatically. See the section called “Recent Research on Worked Examples in Tutored Problem Solving” in [6]. This result is not provisional, but is one of the best-established results in the learning sciences.

So, in summary, the answer to whether to use inquiry learning is not “yes” or “no”, and people shouldn’t divide into camps based on ideology. The still-unanswered question is when to be “less helpful,” as Dan’s motto says, and when to be more helpful.

One of the best researchers in the area is Ken Koedinger, who calls this the Assistance Dilemma and discusses it in this article [7]. His synthesis of his and others’ work on the question seems to say that more complex concepts benefit from inquiry-type methods, but simple rules and skills are better learned from direct instruction [8]. See especially the chart on p. 780 of [8]. There may also be an expertise reversal effect in which support that benefits novice learners of a skill actually ends up being detrimental for students with greater proficiency in that skill.

Okay, before I go, one caveat: I’m just a math teacher in Northern Virginia, so while I follow this literature avidly, I’m not as expert as an actual scientist in this field. Perhaps we could invite some real experts to chime in?

Dan Meyer:

Thanks a mil, Kevin. While we’re digesting this, if you get a free second, I’d appreciate hearing how your understanding of this CLT research informs your teaching.

Kevin Hall:

The short version is that CLT research has made me faster in teaching skills, because cognitive principles like worked examples, spacing, and the testing effect do work. For a summary of the principles, see this link.

But it’s also made me persistent in trying 3-Acts and other creative methods, because it gives me more levers to adjust if students seem engaged but the learning doesn’t seem to “stick”.

Here’s a depressing example from my own classroom:

Two years ago I was videotaping my lessons for my masters thesis on Accountable Talk, a discourse technique. I needed to kick off the topic of inverse functions, and I thought I had a good plan. I wrote down the formula A = s^2 for the area of a square and asked students what the “inverse” of that might mean (just intuitively, before we had actually defined what an inverse function is). Student opinions converged on S = SqRt(A). I had a few students summarize and paraphrase, making sure they specifically hit on the concept of switching input and output, and everyone seemed to be on board. We even did an analogous problem on whiteboards, which most students got correct. Then I switched the representations and drew the point (2, 4) on a coordinate plane. I said, “This is a function. What would its inverse be?” I expected it to be easy, but it was surprisingly difficult. Most students thought it would be (-2, -4) or (2, -4), because inverse meant ‘opposite’. Eventually a student, James (not his real name), explained that it would be (4, 2) because that represents switching inputs and outputs. Eventually everyone agreed. Multiple students paraphrased and summarized, and I thought things were good.

Class ended, but I felt good. The next class, I put up a similar problem to restart the conversation. If a function is given by the point (3, 7), what’s the inverse of that function? Dead silence for a while. Then one student (the top student in the class) piped up: “I don’t remember the answer, but I remember that this is where James ‘schooled’ us last class.” Watching the video of that as I wrote up my thesis was pretty tough.

But at least I had something to fall back on. I decided it was a case of too much cognitive load: they were processing the first discussion as we were having it, but they didn’t have the additional working memory needed to consolidate it. If I had attended to cognitive needs better, the question about (2, 4) would have been easier; I should NOT have switched representations from equations to points until it seemed like the switch would be a piece of cake.
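The point-swap rule at the center of this anecdote is mechanical enough to write down. Here is a minimal sketch contrasting the correct “switch inputs and outputs” rule with the “opposite” misconception the students held; the function names are mine, purely for illustration:

```python
# Inverse of a function given as a set of (input, output) points:
# swap each pair. The names here are illustrative only.

def inverse(points):
    """Correct rule: the inverse swaps input and output in each pair."""
    return {(y, x) for (x, y) in points}

def opposite(points):
    """The misconception: 'inverse' means negating the coordinates."""
    return {(-x, -y) for (x, y) in points}

f = {(2, 4)}
print(inverse(f))   # {(4, 2)} -- what James explained
print(opposite(f))  # {(-2, -4)} -- what most students guessed
```

Stated this way, the rule is a one-liner; the hard part, as the anecdote shows, is getting it to stick.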

I also think knowing the CLT research has made me realize how much more work I need to do to spiral in my classroom.

Then in another thread on adaptive math programs:

Kevin Hall:

My intention was to respond to your critique that a computer can’t figure out what mistake you’re making, because it only checks your final answer. Programs with inner-loop adaptivity do, in fact, check each step of your work. Before too long, I think they might even be better than a teacher at helping individual students identify their mistakes and correct them, because as a teacher I can’t even sit with each student for 5 min per day.
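The step-checking Kevin describes is the tractable part of this: compare a student’s answer (or step) against a small library of known rules, correct and buggy, and report which rule reproduces it. This is a toy sketch, not any real tutoring system’s API, and the rule names are invented:

```python
# Match a student's fraction-addition answer against a library of
# candidate rules: the correct one plus known buggy rules.
# All rule names are invented for illustration.

from fractions import Fraction

RULES = {
    "correct": lambda a, b: a + b,
    "add tops and bottoms": lambda a, b: Fraction(
        a.numerator + b.numerator, a.denominator + b.denominator),
    "add tops, keep first bottom": lambda a, b: Fraction(
        a.numerator + b.numerator, a.denominator),
}

def diagnose(a, b, student_answer):
    """Return the names of rules consistent with the student's answer."""
    return [name for name, rule in RULES.items()
            if rule(a, b) == student_answer]

# 1/2 + 1/3 answered as 2/5 matches the add-everything misconception:
print(diagnose(Fraction(1, 2), Fraction(1, 3), Fraction(2, 5)))
# ['add tops and bottoms']
```

Note that this only ever recognizes rules someone thought to put in the library, which is exactly the limitation debated below.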

Don Byrd:

I have only a modest amount of experience as a math teacher; I lasted less than two years (less than one year, if you exclude student teaching) before scurrying back to academic informatics/software research. But I scurried back with a deep interest in math education, and my academic work has always been close to the boundary between engineering and cognitive science. Anyway, I think Kevin H. is way too optimistic about the promise of computer-based individualized instruction. He says, “It seems to me that if IBM can make Watson win Jeopardy, then effective personalization is also possible.” Possible, yes, but as Dan says, the computer “struggles to capture conceptual nuance.” Success at Jeopardy simply requires coming up with a series of facts; that’s highly data-based and procedural. The distance from winning Jeopardy to “capturing conceptual nuance” is much, much greater than the distance from adding 2 and 2 to winning Jeopardy.

Kevin also says that “before too long, [programs with inner-loop adaptivity] might even be better than a teacher at helping individual students identify their mistakes and correct them, because as a teacher I can’t even sit with each student for 5 min per day.” I’d say it’s likely programs might be better than teachers at that “before too long” only if you think of “identifying a mistake” as telling Joanie that in _this_ step, she didn’t convert a decimal to a fraction correctly. It’ll be a very long time before a computer will be able to say why she made that mistake, and thereby help her correct her thinking.
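The gap Don points to can be made concrete. A tutor can raise or lower its belief that a programmed-in misconception explains a student’s answers (the checkable part) without ever knowing why the student holds it. Here is a toy Bayesian sketch in the spirit of knowledge tracing; every number and name below is invented for illustration:

```python
# One Bayes step on P(student holds the "add tops, add bottoms" rule),
# given whether the latest answer matches that buggy rule.
# All probabilities here are made up for illustration.

from fractions import Fraction

def buggy_add(a, b):
    """The misconception: 1/2 + 1/3 -> (1+1)/(2+3) = 2/5."""
    return Fraction(a.numerator + b.numerator, a.denominator + b.denominator)

def update(prior, matches_bug, p_match_if_bug=0.9, p_match_if_ok=0.05):
    """Posterior P(bug) after seeing whether the answer matched the buggy rule."""
    if matches_bug:
        num = prior * p_match_if_bug
        den = num + (1 - prior) * p_match_if_ok
    else:
        num = prior * (1 - p_match_if_bug)
        den = num + (1 - prior) * (1 - p_match_if_ok)
    return num / den

p = 0.1  # prior belief in the misconception
answer = Fraction(2, 5)  # student's answer to 1/2 + 1/3
matches = (answer == buggy_add(Fraction(1, 2), Fraction(1, 3)))
p = update(p, matches)
print(round(p, 2))  # 0.67 -- one matching answer sharply raises the estimate
```

Diagnosis in this sense is still bounded by the rule library: the update can only shift belief among misconceptions someone has already programmed in, which is Don’s point.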

2013 Aug 14. Christian Bokhove passes along an interesting link summarizing criticisms of CLT.

29 Responses to “[Mailbag] Direct Instruction V. Inquiry Learning, Round Eleventy Million”

  1. on 14 Aug 2013 at 8:25 am Ryan Muller

    If you’re learning a mechanical skill and the computer has identified the incorrect step, there is not really a “why” it’s wrong other than “it’s not the right step”–which the computer knows. I think Don is way too pessimistic about the promise of children learning a skill with precise correctness feedback.

    If there is a consistent bias in the student’s thinking, I’d hypothesize the computer is also better than the teacher at identifying what that bias is, given enough data. See, for example, Nathan-04.pdf, where the teachers got it wrong about which types of math problems were hardest for students.

    Kevin’s example of
    inverse of (2,4) -> (-2,-4), (-2,4), or (2,-4)
    maybe wouldn’t be a result from the most simplistic machine learning algorithm bash, but it’s pretty well in the realm of mechanical rules and computer-representable knowledge.

    That’s just to respond to doing skill-building on a computer, not motivating a lesson or explaining “why do we subtract from both sides” in a way that is deeply satisfying at a human level. (That’s another discussion perhaps.)

  2. on 14 Aug 2013 at 10:14 am Christian

    I agree with Kevin that there are many interesting articles and principles from CLT (and related) research, including worked examples, impasses (VanLehn), fading of feedback (Renkl) and Koedinger’s work. It’s also interesting to read about the criticisms; it keeps us sharp. See the link above for an overview of some points.

  3. on 14 Aug 2013 at 11:52 am Jason Dyer

    Regarding Kevin’s (2,4) problem:

    I think the student confusion has less to do with cognitive load issues than with raw linkage of content. The conceptual gap seems to be that with coordinate points (by tradition) the x-value is considered the input and the y-value is considered the output. Therefore it is possible to make a point or set of points and declare it a function. However, the student mental model would default to geometrical points, which doesn’t match what students are used to with functions (which are often depicted as linkages).

    Another way to think of this issue is that a geometrical coordinate point is relatively low on the abstraction level, whereas functions are an abstraction of an abstraction of an abstraction. Even without CLT issues the leap would be a pretty far one without a conceptual step in between.

  4. on 14 Aug 2013 at 2:33 pm Pam Harris

    Regarding Kevin’s idea and result of asking students to come up with what the inverse of A=s^2 might be… To me, this is less an issue about cognitive load and more an issue of what to get students to inquire about. Asking students what a term (inverse function) might mean can draw on students’ prior knowledge and thus produce the ‘opposite’ effect that Kevin saw. Then one student guesses correctly what Kevin had in mind, and the rest of the students remember this “schooling”. The name “inverse” is social knowledge, a convention, something someone once decided. Students have to guess this kind of stuff because it’s not something you can figure out. The concept of an inverse function: now that’s something worth inquiring about. My question to Kevin is: how could you present a problem that would perplex students about some inverse function(s), and then, after they have used your accountable talk to flesh out the concept, tag it with the term “inverse function”? Now follow that with your examples (like ordered pairs) in a short burst of more directed, focused instruction. I call these more directed, focused instruction bits “problem strings”: series of problems that construct relationships in the learner’s head that make strategies to solve them natural extensions of the relationships.
    I guess the part I’m commenting on is that when I discuss inquiry versus direct instruction, I find it helpful to parse out which parts can be constructed and which parts must be memorized.

  5. on 14 Aug 2013 at 3:11 pm blaw0013

    Direct Instruction, Discovery Learning, and most interpretations of Inquiry Teaching are all unethical and doomed to failure because their basic goal is to get another person to think/know/act like oneself.

    The epistemological break from Behaviorism invited learning theorists to consider the mind by creating models for how it seems to work. However, the educationalist mindset mires in the stimulus-response metaphor for trying to determine (fix, name, define) the act of teaching–as if there were a causal relationship.

    Piaget, Maturana, Varela, Bateson, Ackermann, von Glasersfeld, Laroschelle, von Foerster, Kamii, Steffe, Fosnot, and possibly even Vygotsky (among many others) created ways to study student learning, some of whose methods were very particular to what they referred to as mathematical learning.

    Most significantly, they stopped looking for their own mathematical ways of knowing, or some a priori Mathematics, to appear in the students. Instead, they allowed students to express their own mathematics.

    A post-behaviorist learning theory redefines teaching to no longer look for particular behaviors, actions indicative of knowing (as oneself does). Instead, it takes behaviors as feedback against the viability of a model for the learner as mathematical knower. That model, an idea (knowledge) of the observer/listener/researcher/teacher suggests to this observer a “zone of potential construction” (Steffe) and from one’s own ways of knowing helps to hypothesize actions; actions that may be post hoc defined as teaching.

    This teacher, one who pursues student(s’) ideas, I claim will ensure mathematical learning and do so in an ethical manner, respecting the ideas and mind of the learner.

    Behaviorist concerns such as motivation, transfer, and even working memory have little use to a teacher that embraces a Post-Behaviorist learning theory.

  6. on 14 Aug 2013 at 6:56 pm James Cibulka

    Like Grant Wiggins keeps alluding to, it’s like soccer. We do have to run drills (direct instruction of skills), we do have to do scrimmages (3-act authentic, novel-context tasks), and we do have to play the game for real (assessments).
    But most students walk onto the field with no game plan!!! Hence we often see failure. Would a pro team go into a game situation without an idea of what they were going to do to try and win? Heck no!
    This is where modeling instruction comes in. It IS the game plan. By simplifying a multitude of ideas into a few elegant models of understanding, students can attack a problem like an expert. Analogically, soccer teams run only a few base formations to score (or prevent it!), despite the fact that every opponent is different. Teaching via models allows our students to at least have a game plan to attempt problem solving.
    I’ll still teach via DI some skills. I’ll give open ended tasks. And the students will have a game plan.
    Thanks for all you do. This was interesting!

  7. on 14 Aug 2013 at 7:17 pm Kevin Hall

    Thanks, everyone, for reading and commenting.

    Ryan, if I understand you correctly, you’re saying that it’s not too subtle for a computer that’s following each step of your work to identify where you went wrong. But since cognitive tutors not only check correctness of your step, but also track the rule you most likely applied to get that step, wouldn’t you say they can actually identify your misconception? For example, if a student does 1/2 + 1/3 and gets 2/5, a cognitive tutor will not just mark it wrong. It’ll increase its estimate of the probability that the student believes the rule is to add the numerators and add the denominators, won’t it? The disappointment I’ve had with cognitive tutors in the past is that, aside from a hint message targeted to that misconception, they don’t take much tutorial action to help students see where they’re going wrong. In other words, they’re better at diagnosing than at remediating. Don, any thoughts on this?

    Christian, thanks for that link. I read it with interest. The research I was citing above was largely empirical, so even if CLT falls out of favor as an explanatory theory, would you say the practical implications of what I wrote above for inquiry vs. direct instruction would still hold? I’m thinking in terms of practical advice you’d give to teachers who are feeling buffeted by conflicting reports about using inquiry or direct instruction.

    Jason, I think you make a good point about why the jump was difficult for students. What intrigued/disappointed me most, however, was not that they found the jump difficult, but that after eventually making the leap in a good discussion, they forgot it by the next class. I had been operating on the assumption that if you get students to paraphrase or challenge each other’s thoughts and gradually build toward the correct concept, the depth of that experience would increase retention. But honestly, I would have gotten better retention if I just used drill-and-kill. Of course, perhaps they didn’t really understand it at the end of Day 1. But based on the discussion, I thought they did.

    Pam, I think your comment about parsing the constructable and non-constructable ideas is very valuable. In this case, I actually didn’t resolve the dilemma for the class. They disagreed about what the “inverse” of the point (2, 4) was, and even when the correct student explained it, I didn’t tell the class he was right. I waited until other students had become convinced and had convinced others. By the time I stepped in, I’d say about 80% of the class was on board with the correct answer. So they did “construct” it on their own, but that knowledge didn’t stick somehow. Of course, maybe that’s due to shoddy teaching somewhere. If I were already the teacher I’d like to be, I wouldn’t be on Dan’s blog so much shamelessly stealing everyone’s ideas.

    blaw0013, I have to confess that I don’t really understand your comment. But that’s literature I’m not very familiar with.

  8. on 14 Aug 2013 at 10:49 pm John Lloyd

    Most helpful. I’ve obviously got to rethink some of my teaching.

  9. on 15 Aug 2013 at 1:22 am Ryan Muller

    “For example, if a student does 1/2 + 1/3 and gets 2/5, a cognitive tutor will not just mark it wrong. It’ll increase its estimate of the probability that the student believes the rule is to add the numerators and add the denominators, won’t it?”

    Most definitely, this could be a misconception that is programmed in and easily checked since it’s a rule, incorrect or not. I recall seeing programs that do this though I can’t name one offhand.

    The more interesting question to me is whether computers can discover misconceptions that weren’t programmed in–perhaps ones that teachers haven’t recognized–and then whether the discovery can be used to improve development of the skill. My guess is yes on both counts.

  10. on 15 Aug 2013 at 4:28 am Kevin Hall

    Ryan, have you read Matsuda’s work with SimStudent, such as this?

    (Here is a longer list of SimStudent publications.)

    You seem conversant in the literature, so you may already know about this.

  11. on 15 Aug 2013 at 10:21 am Christian

    @Kevin: yes, that’s what I meant when I said there are many interesting articles and principles. When I was still teaching at secondary level (I moved on to other pastures a year ago), I tried to use elements from ‘both sides’ on a practical level. The reason why I posted the link, and read almost everything, is that I don’t believe in a black-and-white world where one Theory of Everything explains everything, on either a theoretical or a practical level. But I also don’t believe in an ‘anything goes’ attitude. There is worthwhile research in many areas that can inform us.

    Jumping into the dialogue with Ryan, and a bit wary of being the guy who always cites stuff related to his own work, I agree that there are interesting developments with feedback and misconceptions. Apart from Cognitive Tutor, there is ActiveMath, for example, and IDEAS feedback connected to (there it is) applets from the FI ;-) What, in my opinion, is particularly interesting is designing tools by using student and teacher knowledge of these misconceptions. Although AI has come a long way, I’m not really sure whether all those millions spent on AI and machine learning algorithms could not be better invested in teacher and student interviews and analyses of student work for misconceptions (actually, many projects already have these lists), with 95% coverage of what students (and teachers) do wrong. In any case, this topic needs more thought than ‘just deploy some ML algorithms’ or ‘it will never happen’. Just thinking aloud here. :-)

  12. on 15 Aug 2013 at 11:26 am Howard Phillips

    Somewhere in the talk is “the function given by the point (2,4)”
    and a discussion about its inverse. With no specification of range or domain, how can anybody proceed? The cognitive load from this sort of stuff is too much. Am I the only one who is not surprised by the resulting confusion in the students’ overloaded brains?

  13. […] Dan Meyer’s blog has a very nuanced discussion, which makes clear that the dichotomy […]

  14. on 15 Aug 2013 at 6:22 pm Kevin Hall

    @Christian, that looks quite interesting. Personally, I like it when commenters say what projects they’re working on; it gives the rest of us a sense of who you are and what you bring to the discussion. When I have time, I’ll create an account and play around with the IDEAS exercises collection.

    @James, the point here is a little different than how I think you took it. Clearly, drill is not a good way to teach modeling. But what is the best way to teach core concepts (not modeling, but things like the rules for integer operations)? This is where the debate between “let them discover it” and “just show them” gets tough to navigate. An example would be Dan’s last Monday Makeover. He came down on the side of having students discover how to find the least common multiple, using the cool simulator to assist with the discovery.

  15. on 15 Aug 2013 at 7:25 pm blaw0013

    @Kevin & @dan

    I attended a presentation by Dr. John Pelesko, Mathematics department chair at U. Delaware and an applied mathematician. His talk was about mathematical modeling. Briefly, he delineated two types of modeling (I forget the names he used). The first is, in essence, fitting a function to data, to interpolate or extrapolate. The second is to take an ill-defined situation, name assumptions, identify core mathematical structures, and build forth from there [sorry, both definitions are lacking].

    What was most clear in his message was that he, as a mathematician, whether in an applied or pure field, works in ways that, while sometimes goal-directed, can lead almost anywhere. And it is his preference to teach in a similar manner (even in his calculus classes, with “so much to cover”).

    Teaching in this way is NOT about beginning with “what math is to be learned”, but instead with either, “what problem is to be solved” or “what are we curious about.”

    And here is where I try to redefine teaching, and argue why Direct Instruction vs. Discovery Learning or Inquiry Teaching is really a misguided debate. None of them respects the learners’ curiosities, competencies, or constructions of mathematical knowledge/ways of knowing. All have a particular way of knowing for the child to mimic already in mind.

    For anyone interested, check out for modeling competitions and further info.

  16. on 16 Aug 2013 at 2:49 am Chris Shore

    I will never forget William Schmidt of the TIMSS studies telling me that in America we focus too much on methodology, while in the high-performing countries they focus on the mathematics in which they engage the kids.

  17. […] was floored when Pam started talking, and I realized I had read her comments on Dan Meyer’s blog earlier this week…!  Ah, hi!  That was your Comment-#4-voice I was reading?  Pleasure to […]

  18. on 18 Aug 2013 at 6:41 am Monty

    Concerning factor (b), given by Kevin Hall, the generation effect (you remember better the facts, names, rules, etc. that you are asked to come up with on your own): I get the sense that experts are still studying something (granted, possibly for the sake of documentation) which has long since been well established. For example, this has been the method used by Jewish rabbis for centuries (or millennia). Their students learn far better, and achieve optimal cognizance, when they apply the problem to themselves personally.
    Look at it this way: Someone wants to know the answer to a question. He asks the teacher, “What is the answer to my question?” The teacher can answer that question, but the answer will be only a one-dimensional fact for the inquirer and will soon fade into obscurity. Instead, the teacher asks a question – or better yet – a series of questions, which, in the process of answering, the inquirer supplies his own answer to his own original question. Now he “knows” the answer because it means something to him personally.
    Take this example, which was an incident that actually happened: A group of American tourists went to Israel and were browsing some shops.
    A female tourist entered a shop that sold photographs that were taken by the proprietor. The lady was very impressed by the photographs. She told the shop owner that she was looking for souvenirs to take back to her children and was interested in his photographs.
    She asked the photographer, “Of all of these pictures, which one is your favorite?”
    The photographer was a rabbi. He asked the lady, “You are thinking of giving my pictures to your children?”
    “Yes,” the lady answered.
    “How many children do you have?” the rabbi asked.
    “Three,” she answered.
    “Tell me,” said the rabbi. “Which one is your favorite?”
    Instantly the lady had the answer to her own question and it was meaningful to her.
    This is a helpful method to know about and it is interesting indeed when we read in Luke 2:46 that Jesus was in the temple courts asking the teachers questions.

  19. on 20 Aug 2013 at 10:45 am vlorbik

    this is theology.

    try something.
    if it works, do more of it.
    if it fails, do less. lather,
    rinse, repeat.

    in other words, do what
    everybody actually *trying*
    to do this impossible task
    actually *does*.

    talking about it? it’s like
    “explaining” poetry:
    “you want me to say it
    again, longer and worse?”

    going off somewhere
    to sober up. yashka
    spoke with authority.

  20. on 04 Mar 2014 at 6:29 pm Don Byrd

    This is a response mostly to Kevin (comment #7) and Ryan (#9). I somehow overlooked this interesting conversation of over six months ago! I don’t know if anyone is still paying attention, but I can’t resist commenting on a very important aspect of it.

    It should be obvious that you can’t “discover misconceptions…perhaps ones that teachers haven’t recognized” unless you understand many, if not most, of the relevant concepts in the student’s head. Unfortunately, and this may not be obvious at all, current AI programs understand no concepts at all in any reasonable sense of the word “concept”! Then how can IBM’s Watson win at Jeopardy? Because a cleverly-written program can sidestep many of the issues involved and appear to understand far more than it does. Sure, it “knows” tons of facts, and it can parse English statements of the kind involved to decide what facts apply; but it understands just about nothing.

    The recent book by the well-known cognitive scientist Douglas Hofstadter and Emmanuel Sander, _Surfaces and Essences_, gives a lot of background on the incredible subtlety of concepts. I may as well reveal that I’m a disciple of Doug’s; he was my thesis advisor, and we’ve remained in close touch. I don’t think it’s going too far to say that his view is that most AI work is a sham — perhaps well-intentioned, but a sham nonetheless, in that the AI programs (IBM’s Watson might be a good example) are cheating, very much in the tradition of Joseph Weizenbaum’s famous “psychotherapist” program of the 1960s, ELIZA. After conversing with ELIZA (via typing), many people insisted it really understood them. But typing “My mother gave birth to me prematurely” to it tends to elicit the response “Who else in your family gave birth to you prematurely?”, suggesting what it’s really doing :-) .

    Does this mean that cognitive tutors can’t identify misconceptions? No, but only if they’ve been programmed in, and therefore only if they’ve been recognized by someone.

    To paraphrase something I wrote elsewhere apropos of a completely different discipline, people — even first graders — bring an enormous amount of context to bear on anything they do; they can’t help it, and they’re rarely even aware of it. But computers can’t bring _any_ context to bear on _anything_ unless they’re told to. Well, you can’t really understand much of anything without a lot of context.

    I’m not claiming at all that computers are inherently incapable of having concepts or of understanding anything! — just that we have a long, long way to go, and I doubt if most AI work is even going in the right direction.

    One question for Kevin. I haven’t read Matsuda’s work with SimStudent; should I?

  21. on 05 Mar 2014 at 1:05 pm Kevin Hall

    @Don, glad you found this comment thread interesting. Here’s the most relevant article by Matsuda: Automatic Student Model Discovery.

    I’m no expert in AI, but for context, the AI programs I was talking about are built on the ACT-R framework of cognition, which is not just throwing “big data” at problems but instead explicitly models cognition. AI built on ACT-R really does try to build a model of what the human is thinking. Here is an example of how that’s been found to align with fMRI images of what people are actually thinking: ACT-R and fMRI.

    The authors used to have a really cool video online which showed the computer using fMRI to predict the equation-solving steps a student was taking, versus the steps the student was actually taking, and they lined up exactly, except for when the computer knew the student was about to take a step a fraction of a second BEFORE he/she actually did. They seem to have taken that down–I can’t find it anymore.

  22. on 09 Mar 2014 at 9:52 am Don Byrd

    Thanks for the links, Kevin.

    I took a quick look at the ACT-R and fMRI paper. It’s interesting and impressive research. And the video you describe sounds very impressive, though of course the fact their program did so well _once_ doesn’t mean that much. More to the point, I don’t see how this work suggests they’re anywhere near being able to identify misconceptions that haven’t been programmed in. I’m not a classroom teacher anymore, but I’m still tutoring, and still being blindsided by students misunderstanding something that never occurred to me at all. I’m sure that becomes less and less common as you get more experienced, but there are so many ways to misunderstand!

  23. on 10 Mar 2014 at 9:25 am Kevin Hall

    Don, based on the Matsuda papers, here’s how the cognitive modeling currently works. You can decide for yourself whether this counts as “discovering” a misconception that hasn’t been programmed in:

    Take a problem, such as an equation to solve: 2x+4=10. Matsuda’s program has a parser with which to read the problem (the program identifies features such as an x, a coefficient of x, a constant that’s being added, and a right-hand side).

    First let a bunch of students work through the problem, some correctly, some incorrectly. Then activate the learning component of Matsuda’s program, which allows it to learn from examples. Play back one student’s work at a time, flagging each step as correct or incorrect. For example, if a student trying to solve 2x+4=10 started by writing x+4=5, you would mark it incorrect.

    At this point, Matsuda’s program would try to generate a rule that would have gotten the student from the start state to their first line. Here the rule would be something like “In Ax + B = C, replace Ax with x, and divide C by A.” Since you’ve marked this step incorrect, the computer would learn that that’s not a good rule to follow. But it would have created the rule in its memory, so if it saw other students doing the same thing on other problems, it would recognize the mistake. If you’ve programmed a feedback message or little video tutorial about that misconception, it can be shown to any future students. Each rule (called a production rule in ACT-R) is coded in Jess.

    This does require human tagging of each step of a large sample of student work, but it doesn’t require a human to code the production rule in Jess. It also allows the computer to follow multiple correct solution paths. Any correct or incorrect step taken by a student, if it has been successfully generalized into a production rule, can be used in feedback with future students.
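    The loop Kevin describes — flag a student step, generalize it into a rule, then match that rule against future students’ work — can be sketched in a few lines. This is a hypothetical illustration only: the function names and the (A, B, C) encoding of “Ax + B = C” are my own, not Matsuda’s actual representation, and a real system induces its rules rather than enumerating them by hand.

    ```python
    # Hypothetical sketch of the flag-and-generalize loop; names and the
    # (A, B, C) encoding of "Ax + B = C" are illustrative, not Matsuda's API.

    def divide_only_extremes(a, b, c):
        """Buggy rule generalized from 2x + 4 = 10 -> x + 4 = 5:
        replace Ax with x and divide C by A, leaving B untouched."""
        return (1, b, c / a)

    def divide_every_term(a, b, c):
        """Correct rule: divide every term, including B, by A."""
        return (1, b / a, c / a)

    CANDIDATE_RULES = (divide_only_extremes, divide_every_term)

    # rule name -> "correct"/"incorrect", filled in from human-flagged steps
    known_rules = {}

    def learn_from_step(problem, student_step, flag):
        """Find a candidate rule whose output matches the student's flagged
        step, and store it with the human-supplied correctness flag."""
        for rule in CANDIDATE_RULES:
            if rule(*problem) == student_step:
                known_rules[rule.__name__] = flag
                return rule.__name__
        return None  # no candidate rule generalizes this step

    def recognize_step(problem, student_step):
        """Check a future student's step against the stored rules."""
        for rule in CANDIDATE_RULES:
            if rule(*problem) == student_step:
                return known_rules.get(rule.__name__)
        return None

    # Training: a human flags x + 4 = 5 (from 2x + 4 = 10) as incorrect.
    learn_from_step((2, 4, 10), (1, 4, 5.0), "incorrect")

    # A future student makes the same move on 3x + 6 = 12, writing x + 6 = 4:
    # the stored rule fires, so the tutor can show the prepared feedback.
    print(recognize_step((3, 6, 12), (1, 6, 4.0)))  # -> incorrect
    ```

    The key property this illustrates is the one Kevin emphasizes: once the bad step has been generalized into a rule, the same rule matches the analogous mistake on a _different_ equation, which is what lets the tutor reuse the feedback.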

    Matsuda’s paper linked in comment #21 describes correct and incorrect parsers in more detail–sometimes the incorrect parsers are what students are using.

    Dan, sorry if my replies are too long. I can take the conversation over to my blog if you’d prefer.

  24. on 14 Mar 2014 at 7:41 am Don Byrd

    Interesting. So, for each step in solving a given problem, a human must say “these are correct” and “these are wrong”; then the program tries to infer good and bad rules that apply in that situation. Yes?

    You say “Any correct or incorrect step taken by a student, if it has been successfully generalized into a production rule, can be used in feedback with future students.” Can the program use it properly in _different problems_ where in fact the same rule applies? If the answer is no, I’d say it’s definitely not discovering a misconception in a reasonable sense. If yes, it might count as discovering a misconception — though even then, what it actually discovered would be a special case of what we would see as the real misconception. I don’t see how any “production rule” could really capture the essence of a misconception!

    Still, this does sound like it could be a useful tool, especially with a very large class.

  25. on 14 Mar 2014 at 6:30 pm Kevin Hall

    Definitely–the ability to apply it to a different problem is what distinguishes a cognitive tutor from an example-tracing tutor. (See this description of tutor types.)

  26. on 14 Mar 2014 at 7:57 pm blaw0013

    Yashka @vlorbik has given a quite rational response, beginning to end ;-)

    Dan @ddmeyer recognizes that he himself obsesses with motivation. Ideas about motivation, especially those of intrinsic and extrinsic motivation, live in the world of behaviorist learning theory that western culture knows so well we have a hard time knowing/thinking outside of it (like fish & water).

    The present constructivist theory of knowing and learning, superseding behaviorism, really messes up the idea of motivation. First, it creates a changed definition for learning. It is not a definition that relies on a “what” to be learned (the “what” that overwhelms us as math teachers), but instead focuses on hypothetical models for knowing and defines learning as changes to those modeled knowing structures. So “what” is to be learned is recognized as an idea of the teacher, and something they want to “see” replicated in the learner. Now motivation has become more of a problem OF the teacher, not a lacking in the learner.

    I continue this rumination at and won’t fill Dan’s space with my meandering thoughts.

    Last comment: I was intrigued to return here by @Don Byrd’s comments, especially that “Does this mean that cognitive tutors can’t identify misconceptions? No, but only if they’ve been programmed in, and therefore only if they’ve been recognized by someone.” What struck me was the reminder that an observer must exist in any statement that is made.

  27. […] My fellow cohorts read my struggles with thinking about misconceptions and helped me consider new perspectives here and here. After a conversation with a classmate about my desire to learn Excel, I was thrilled to find a few posts that provided direct support. Also, the amazing resources provided by our UW tech class lead me to numerous blogs where teachers are thinking deeply about math and technology and ways of learning. […]

  28. on 10 Apr 2014 at 10:10 am Don Byrd

    Kevin, in belated response to your last comment (#25), I’m ready to be more negative about “cognitive tutors” :-) . The example you gave was “if a student’s trying to solve 2x+4=10 started by writing x+4=5, you would mark it incorrect… At this point, Matsuda’s program would try to generate a rule… something like “In Ax + B = C”, replace Ax with x, and divide C by A. Since you’ve marked this step incorrect, the computer would learn that that’s not a good rule to follow.” Agreed. But what’s the _misconception_ here? By itself, it looks to me more like a silly mistake! Maybe this student has made “similar” errors repeatedly, suggesting they really have a misconception, something like “You can divide _something_ on the left side of an equation by a number as long as you divide _something_ on the right side by the same number.” But no entity, human or computer, is going to know that from an isolated example, and no program I’ve ever heard of can generalize anything very well from any collection of examples of anything.

    Again, I’m not at all saying that programs like CTAT can’t be useful. But they’re still light-years away from having anything like real _concepts_. And _misconceptions_ are mistakes involving concepts.

  29. on 10 Apr 2014 at 12:11 pm Kevin Hall

    @Don, the mistake is a common one for students. Students understand that to solve 2x + 4 = 10, you eventually have to divide by 2. From the expert perspective, it’s easier to subtract 4 first. But some students mistakenly divide by 2 first. That can work, but when you do it, you have to remember to divide the 4 by 2 as well. The misconception is thinking you don’t have to distribute the division by 2 across both terms, the 2x and the 4.
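    The three solution paths Kevin contrasts can be checked with plain arithmetic; a quick sketch (just illustrating the numbers, nothing from Matsuda’s system):

    ```python
    # Correct path: subtract 4 first, then divide by 2.
    # 2x + 4 = 10  ->  2x = 6  ->  x = 3
    assert (10 - 4) / 2 == 3

    # Also correct: divide EVERY term by 2 first.
    # 2x + 4 = 10  ->  x + 2 = 5  ->  x = 3
    assert 5 - (4 / 2) == 3

    # The misconception: divide only 2x and 10 by 2, leaving the 4 alone.
    # 2x + 4 = 10  ->  x + 4 = 5  ->  x = 1   (wrong: 2(1) + 4 = 6, not 10)
    assert 5 - 4 == 1
    ```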

    Of course, when training Matsuda’s program, you need to feed it LOTS of examples, so it can generalize successfully. As you say, isolated examples won’t work. But that’s just a problem of scale.

    I guess I don’t really understand your point when you talk about computers not being able to have concepts. If you taught a computer the rule, “When you divide both sides of an equation by something, you must divide each term by that quantity”, is that not a concept? That essentially is the concept of distribution.

    It really doesn’t matter whether the concept was “discovered” by a learning program such as Matsuda’s, or was programmed in (which is how most cognitive tutors acquire their production rules).
