Tag: clt

Total 6 Posts

[Mailbag] Direct Instruction V. Inquiry Learning, Round Eleventy Million

Let me highlight another conversation from the comments, this time between Kevin Hall, Don Byrd, and myself, on the merits of direct instruction, worked examples, inquiry learning, and some blend of the three.

Some biography: Kevin Hall is a teacher as well as a student of cognitive psychology research. His questions and criticisms around here tend to tug me in a useful direction, away from the motivational factors that usually obsess me and closer towards cognitive concerns. The fact that both he and Don Byrd have some experience in the classroom keeps them from the worst excesses of cognitive science, which is to see cognition as completely divorced from motivation and the classroom as different only by degrees from a research laboratory.

Kevin Hall:

While people tend to debate which is better, inquiry learning or direct instruction, the research says sometimes it’s one and sometimes the other. A recent meta-analysis found that inquiry is on average better, but only when “enhanced” to provide students with assistance [1]. Worked examples can actually be one such form of assistance (e.g., showing examples and prompting students for explanations of why each step was taken).

One difficulty with discussing this topic is that people tend to disagree about what constitutes inquiry-based learning. I heard David Klahr, a leading researcher in this field, speak at a conference once, and he said lots of people considered his “direct instruction” conditions to be inquiry. He wished he had just labelled his conditions Conditions 1, 2, and 3, because it would have avoided lots of controversy.

Here’s where Cognitive Load Theory comes in: effectiveness with inquiry (minimal guidance) depends on the net impact of at least three competing factors: (a) motivation, (b) the generation effect, and (c) working memory limitations. Regarding (a), Dan often makes the good point that if teachers use worked examples in a boring way, learning will be poor even if students’ cognitive needs are being met very well.

The generation effect says that you remember better the facts, names, rules, etc., that you are asked to come up with on your own. It can be very difficult to control for this effect in a study, mainly because it’s always possible that if you let students come up with their own explanations in one group while providing explanations to a control group, the groups will be exposed to different explanations, and then you’re testing the quality of the explanations and not the generation effect itself. However, a pretty brilliant (in my opinion) study controlled for this and verified the effect [2]. We need more studies to confirm. Here is a really potent paragraph from the second page of the paper: “Because examples are often addressed in Cognitive Load Theory (Paas, Renkl, & Sweller, 2003), it is worth a moment to discuss the theory’s predictions. The theory defines three types of cognitive load: intrinsic cognitive load is due to the content itself; extraneous cognitive load is due to the instruction and harms learning; germane cognitive load is due to the instruction and helps learning. Renkl and Atkinson (2003) note that self-explaining increases measurable cognitive load and also increases learning, so it must be a source of germane cognitive load. This is consistent with both of our hypotheses. The Coverage hypothesis suggests that the students are attending to more content, and this extra content increases both load and learning. The Generation hypothesis suggests that load and learning are higher when generating content than when comprehending it. In short, Cognitive Load Theory is consistent with both hypotheses and does not help us discriminate between them.”

Factor (c) is working memory load. The main idea is found in this quote from the Sweller paper Dan linked to above, Why Minimal Guidance During Instruction Does Not Work [3]: “Inquiry-based instruction requires the learner to search a problem space for problem-relevant information. All problem-based searching makes heavy demands on working memory. Furthermore, that working memory load does not contribute to the accumulation of knowledge in long-term memory because while working memory is being used to search for problem solutions, it is not available and cannot be used to learn.” The key here is that when your working memory is being used to figure something out, it’s not actually being used to learn it. Even after figuring it out, the student may not be quite sure what they figured out and may not be able to repeat it.

Does this mean asking students to figure stuff out for themselves is a bad idea? No. But it does mean you have to pay attention to working memory limitations by giving students lots of drill practice applying a concept right after they discover it. If you don’t give the drill practice after inquiry, students do worse than if you just provided direct instruction. If you do provide the drill practice, they do better than with direct instruction. This is not a firmly-established result in the literature, but it’s what the data seem to show right now. I’ve linked below to a classroom study [4] and a really rigorously-controlled lab study [5] showing this. They’re both pretty fascinating reads… though the “methods” section of [5] can be a little tedious, the first and last parts are pretty cool. The title of [5] sums it up: “Practice Enables Successful Learning Under Minimal Guidance.” The draft version of that paper was actually subtitled “Drill and kill makes discovery learning a success”!

As I mentioned in the other thread Dan linked to, worked examples have been shown in year-long classroom studies to speed up student learning dramatically. See the section called “Recent Research on Worked Examples in Tutored Problem Solving” in [6]. This result is not provisional, but is one of the best-established results in the learning sciences.

So, in summary, the answer to whether to use inquiry learning is not “yes” or “no”, and people shouldn’t divide into camps based on ideology. The still-unanswered question is when to be “less helpful”, as Dan’s motto says, and when to be more helpful.

One of the best researchers in the area is Ken Koedinger, who calls this the Assistance Dilemma and discusses it in this article [7]. His synthesis of his and others’ work on the question seems to say that more complex concepts benefit from inquiry-type methods, but simple rules and skills are better learned from direct instruction [8]. See especially the chart on p. 780 of [8]. There may also be an expertise reversal effect in which support that benefits novice learners of a skill actually ends up being detrimental for students with greater proficiency in that skill.

Okay, before I go, one caveat: I’m just a math teacher in Northern Virginia, so while I follow this literature avidly, I’m not as expert as an actual scientist in this field. Perhaps we could invite some real experts to chime in?

Dan Meyer:

Thanks a mil, Kevin. While we’re digesting this, if you get a free second, I’d appreciate hearing how your understanding of this CLT research informs your teaching.

Kevin Hall:

The short version is that CLT research has made me faster in teaching skills, because cognitive principles like worked examples, spacing, and the testing effect do work. For a summary of the principles, see this link.

But it’s also made me persistent in trying 3-Acts and other creative methods, because it gives me more levers to adjust if students seem engaged but the learning doesn’t seem to “stick”.

Here’s a depressing example from my own classroom:

Two years ago I was videotaping my lessons for my master’s thesis on Accountable Talk, a discourse technique. I needed to kick off the topic of inverse functions, and I thought I had a good plan. I wrote down the formula A = s^2 for the area of a square and asked students what the “inverse” of that might mean (just intuitively, before we had actually defined what an inverse function is). Student opinions converged on S = SqRt(A). I had a few students summarize and paraphrase, making sure they specifically hit on the concept of switching input and output, and everyone seemed to be on board. We even did an analogous problem on whiteboards, which most students got correct. Then I switched representations and drew the point (2, 4) on a coordinate plane. I said, “This is a function. What would its inverse be?” I expected it to be easy, but it was surprisingly difficult. Most students thought it would be (-2, -4) or (2, -4), because inverse meant ‘opposite’. Eventually a student, James (not his real name), explained that it would be (4, 2) because that represents switching inputs and outputs. Eventually everyone agreed. Multiple students paraphrased and summarized, and I thought things were good.
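To see the math at issue compactly (the function notation below is mine, not part of Kevin’s lesson): inverting a function swaps input and output, in both representations.

```latex
% Equation form: the area function of a square and its inverse.
A = f(s) = s^2 \;(s \ge 0)
\quad\Longleftrightarrow\quad
s = f^{-1}(A) = \sqrt{A}
% Point form: the same swap applies to coordinates on a graph.
% If (2, 4) lies on the graph of f, then (4, 2) lies on the graph
% of f^{-1} -- not (-2, -4) or (2, -4).
```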

Class ended, but I felt good. The next class, I put up a similar problem to restart the conversation. If a function is given by the point (3, 7), what’s the inverse of that function? Dead silence for a while. Then one student (the top student in the class) piped up: “I don’t remember the answer, but I remember that this is where James ‘schooled’ us last class.” Watching the video of that as I wrote up my thesis was pretty tough.

But at least I had something to fall back on. I decided it was a case of too much cognitive load—they were processing the first discussion as we were having it, but they didn’t have the additional working memory needed to consolidate it. If I had attended to cognitive needs better, the question about (2, 4) would have been easier, and I should NOT have switched representations from equations to points until it seemed like the switch would be a piece of cake.

I also think knowing the CLT research has made me realize how much more work I need to do to spiral in my classroom.

Then in another thread on adaptive math programs:

Kevin Hall:

My intention was to respond to your critique that a computer can’t figure out what mistake you’re making, because it only checks your final answer. Programs with inner-loop adaptivity do, in fact, check each step of your work. Before too long, I think they might even be better than a teacher at helping individual students identify their mistakes and correct them, because as a teacher I can’t even sit with each student for 5 min per day.
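As a rough sketch of what “inner-loop” step checking could mean in practice, here is a minimal Python example for a one-variable equation-solving domain. The names (step_is_valid, work) and the use of SymPy are illustrative assumptions on my part, not details of any actual adaptive program.

```python
# A minimal sketch of inner-loop step checking: a step is "valid" if the
# student's new equation has the same solution set as the previous one.
import sympy as sp

x = sp.symbols('x')

def step_is_valid(prev, curr):
    """Compare the solution sets of two (lhs, rhs) equation states."""
    return sp.solveset(sp.Eq(*prev), x) == sp.solveset(sp.Eq(*curr), x)

# A student solving 2x + 3 = 11, one (lhs, rhs) pair per written line,
# with a sign error in the final step:
work = [(2*x + 3, 11), (2*x, 8), (x, -4)]
for before, after in zip(work, work[1:]):
    status = "ok" if step_is_valid(before, after) else "error introduced here"
    print(f"{before} -> {after}: {status}")
```

A checker like this can locate which written step broke the solution set, which is already more than final-answer grading; as Don argues below, though, locating a mistake is a long way from explaining why the student made it.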

Don Byrd:

I have only a modest amount of experience as a math teacher; I lasted less than two years – less than one year, if you exclude student teaching – before scurrying back to academic informatics/software research. But I scurried back with a deep interest in math education, and my academic work has always been close to the boundary between engineering and cognitive science. Anyway, I think Kevin H. is way too optimistic about the promise of computer-based individualized instruction. He says “It seems to me that if IBM can make Watson win Jeopardy, then effective personalization is also possible.” Possible, yes, but as Dan says, the computer “struggles to capture conceptual nuance.” Success at Jeopardy simply requires coming up with a series of facts; that’s highly data-based and procedural. The distance from winning Jeopardy to “capturing conceptual nuance” is much, much greater than the distance from adding 2 and 2 to winning Jeopardy.

Kevin also says that “before too long, [programs with inner-loop adaptivity] might even be better than a teacher at helping individual students identify their mistakes and correct them, because as a teacher I can’t even sit with each student for 5 min per day.” I’d say it’s likely programs might be better than teachers at that “before too long” only if you think of “identifying a mistake” as telling Joanie that in _this_ step, she didn’t convert a decimal to a fraction correctly. It’ll be a very long time before a computer will be able to say why she made that mistake, and thereby help her correct her thinking.

2013 Aug 14. Christian Bokhove passes along an interesting link summarizing criticisms of CLT.

Bloggers In The Media

  1. John Burk on the physics of Angry Birds. [Charlotte Observer]
  2. Frank Noschese on Khan Academy. [MSNBC]
  3. Me on applied math problems using multimedia. [NBC]

Featured Comments

Jay:

Check and mate.

Richard Hake:

The anonymous “Jay” links to the totally vacuous paper “Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching” [Kirschner et al. (2006)] and then states “Check and Mate.” [..] Anyone who thinks s(he) can even Check, let alone Mate, with that paper must be woefully ignorant of the literature — see e.g., Hmelo-Silver et al. (2007), Kuhn (2007), Schmidt et al. (2007), and Tobias & Duffy (2009).

JimP:

Holy Cow! Richard Hake just commented. There is physics education research royalty lurking around here.

Winter Quarter Wrap-Up / Spring Quarter Kick-Off

Brief Remarks Encapsulating Winter Quarter

  • Mentorship. This is new: I switched emphases from teacher education to math education. I’m retaining Pam Grossman (my current adviser in teacher education) but adding Jo Boaler (who is the math education professor at Stanford) to the Team Dan Meyer, Ph.D. roster. The education of new teachers and development of current teachers is still wildly fascinating to me, but I am asked with growing frequency to speak to and write for and work with math educators. I know enough about what I don’t know to know that I need to study up and work out some blind spots in my vision if I’m going to be effective in any of those roles.
  • Temptation. The private sector extended several invitations my way last quarter to leave Stanford – to cut a corner, basically, and go straight to work. Some of those invitations were easier to turn down than others. In every case, though, I was grateful for the opportunity to remind myself again of the reasons I committed to this difficult, frequently humbling work.
  • Music. I tend to wear out the grooves on a single record during finals week each quarter, playing the same songs over and over and over until they become useful white noise. Fall quarter it was Mumford and Sons. Winter quarter it was the soundtrack to The Social Network by Trent Reznor and Atticus Ross. Anyway.

Notes on last quarter’s classes:

  • Statistical Methods in Education. Key skill: analyze regression tables like this one for meaning. Prof. Stevens said in fall quarter he loves the moment when an author drops the tables in a paper because up until that point we’re just bobbing along with the author’s narrative. But the table tells its own stories.
  • Proseminar. One of my colleagues said it pretty well: “In any given week of proseminar, two thirds of the class simply don’t give a damn.” Which is to say the wonks don’t really care much about the pedagogy and the teachers don’t care much for policy and the social theorists have an entirely separate set of interests.
  • Casual Learning Technologies. This was a mixed bag. The field is really, really new (James Gee, the discipline’s flag-bearer, is a linguist by training who got interested in gaming all of six years ago) and has a lot of room to grow. Which is to say, I wasn’t dazzled by the literature. Remind me to post my group’s final project, though. That was fun.

Current Coursework

  • EDUC325C – Proseminar. David Labaree, Francisco Ramirez. Required. Labaree, in his initial remarks to the class: “You may have heard this course features too much reading, too much writing, that the criticism is too harsh, and our opinion of schools is too pessimistic. It’s all true.” (Labaree has written a few books of note.)
  • EDUC359F – Research in Mathematics Education. Jo Boaler. Elective.
  • EDUC424 – Introduction to Research in Curriculum and Teacher Education. Hilda Borko. Required.

Winter Quarter #GradSkool Tweets

  • Yes, this is #gradskool and, yes, Angry Birds is on the syllabus. http://yfrog.com/gzqghxsj 6 Jan
  • Today’s #gradskool throw-down: Who won in US schools and universities — Dewey or Thorndike? Great discussion. Lots of nuance. 18 Jan
  • Stats prof, reading the room: “I don’t know how to make this more lively. I really don’t know how to make this more lively.” #gradskool 23 Feb
  • Carol Dweck is speaking. I am listening. #gradskool yfrog.com/h4l7mjoj 8 Mar
  • Dweck has no slides. She’s four feet tall, sitting on a table, feet dangling beneath her, positively /owning/ the room. #gradskool 8 Mar
  • Five rows from Michelle Rhee. An unlikely mix of education and business grad students in the building. yfrog.com/h0wo8yhj 11 Mar
  • Rhee: “What we did definitely made people unhappy.” She literally seems to believe that diplomacy and efficacy are mutually exclusive. 11 Mar
  • Rhee: “Is there a less controversial way to do controversial things? I don’t know the answer to that.” 11 Mar
  • Rhee: “Chris Christie? I love him. He’s a Republican and I’m a Democrat. It’s not obvious we’d get along so well.” Seriously? 11 Mar
  • Rhee: “I worry about people going into the job with longevity as one of the goals. I’m not a big believer in longevity.” 11 Mar
  • GSB student: “Did you really eat a bee?” Rhee: “I did eat a bee.” Way to pitch her a fastball, Chuck. 11 Mar
  • These moguls were the most out of place contingent at the Rhee Q&A. Good luck finding the executive washroom, fellas. yfrog.com/gzz8vdcj 11 Mar

Michelle Rhee followed me on Twitter the next day. So look out, right?

Favorite Winter Quarter Papers

I spent a few weeks of my winter quarter trying to make sense of the PBL / anti-PBL scrum of 06/07. Those papers are below, in chronological order, with a closing paper pitched specifically at math educators.

Spring Speaking & Workshops

They Really Get Motivation, Don’t They?

I’m working on a review of the anti-PBL / pro-PBL fracas of 2006 and I just had the wind knocked out of me by this line from Sweller & Cooper in 1985:

It was assumed that motivation, while reading a worked example, would be increased by the knowledge that a similar problem would need to be solved immediately afterwards. (p. 69)

This is their seminal study that establishes (finally!) the best practice for math instruction: I work out an example, then you work out an example from the same family as the first.

The straw man on which they premise their study (which, in turn, has been the premise of two decades of direct instruction advocacy) has to be seen to be believed. Even if I suspend disbelief for a moment, though, here’s the question I can’t find anywhere in the literature on worked examples:

What if you manage to create a perfect system of worked examples, a perfect lecture, a perfectly-wound informational system, and no one cares? What if the perfect lecture provokes students to truancy? What if a year of perfect explanation produces students who don’t want anything to do with math later in life, whether or not they’re proficient in the near term? (Boaler, 1998).

But the cost-benefit analysis of the perfect lecture is left to the teacher. Sweller, Cooper, and their modern-day acolytes totally punt the issue. “We know what works for an eight-question experiment,” they say. “You figure out how to make it work every day for a year.”

Sweller and Cooper don’t fully discount the issue of motivation but their answer – “You’ll be motivated to watch me work out this example because you’ll be doing one in a moment.” – is simply stunning. This is why teachers find it so easy to dismiss researchers.

2011 Mar 14: Sweller and Cooper’s straw man. In this study, the experimental group is taking a test on a problem while looking at an example of the same kind of problem worked out at the top of the page. The control group just takes the test. Unsurprisingly, the experimental group performs better. Surprisingly, Sweller and Cooper take this as evidence against any amount of guidance less direct than their worked examples.