The Difference Between Pure And Applied Math

Henry Pollak, in his essay, “What Is Mathematical Modeling?”:

Probably 40 years ago, I was an invited guest at a national summer conference whose purpose was to grade the AP Examinations in Calculus. When I arrived, I found myself in the middle of a debate occasioned by the need to evaluate a particular student’s solution of a problem. The problem was to find the volume of a particular solid which was inside a unit three-dimensional cube. The student had set up the relevant integrals correctly, but had made a computational error at the end and came up with an answer in the millions. (He multiplied instead of dividing by some power of 10.) The two sides of the debate had very different ideas about how to allocate the ten possible points. Side 1 argued, “He set everything up correctly, he knew what he was doing, he made a silly numerical error, let’s take off a point.” Side 2 argued, “He must have been sound asleep! How can a solid inside a unit cube have a volume in the millions?! It shows no judgment at all. Let’s give him a point.”

What a fantastic dilemma.

Pollak argues that the student’s error would merit a larger deduction in an applied context than in a pure context. In a real-world context, being wrong by a factor of one million means cities drown, atoms obliterate each other, and species go extinct. In a pure math context, that same error is a more trivial matter of miscomputation.
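
The check Side 2 had in mind is a single inequality: a solid contained in the unit cube can occupy no more volume than the cube itself,

$$0 \le V_{\text{solid}} \le V_{\text{cube}} = 1^3 = 1,$$

so an answer in the millions fails before any units or real-world stakes enter the picture.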

The trouble is that, to the math teachers in the room, a unit cube is a real-world object. They can hold a one-centimeter unit cube in their hands and, more importantly, they can hold it in their minds.

The AP graders aren’t arguing about grading. They’re trying to decide what is real.

What a fantastic dilemma.

Featured Comment

Shannon Alvarez:

Whenever I wanted to give students the most partial credit, my co-op teacher would ask me the poignant question, “What exactly are you assessing?” I found this was a great question to continue asking myself. So, in the example you gave, are you assessing students’ ability to perform mathematical functions correctly or are you assessing their ability to connect those math functions to the real world?

Benjamin Dickman alerts us that Pollak’s piece is online, free, along with a number of his modeling tasks.


20 Comments

  1. The AP reading! Dan has discovered my wheelhouse!
    There’s another important consideration when assessing the student’s response: the student wasn’t solving a problem in a real-life setting; he was taking an exam with a ticking clock. While we certainly hope that students recognize that 500-foot-high mushrooms and gummy bears with volumes in the thousands of cubic feet are outside any reasonable domain, real-world math isn’t usually conducted under the artificial duress of an AP exam. In stats, we often (but not always) let these brain burps go if the underlying statistical reasoning is sound. I can’t speak for the grading principles of Calc.
    Developing a first draft, editing, and revising are crucial steps in the writing process – how do we make them equally important in math, and then provide opportunities to assess revision in math courses?

  2. After I switched from maths, operational research, and statistics into control systems engineering, my feelings on this matter changed dramatically. I would tell the students, “Not only do I not give you the answers, you have to convince yourself that they are correct and meaningful. One day someone is going to pay you for this.”

  3. What a great dilemma indeed, and yet I can’t help but see the other story here.

    When I look at a student solution, I feel that I need to factor in everything about that solution. When there is what seems to be a minor error, I ask myself, based on the rest of the student’s work: what would the reaction be if the error were pointed out to the student?

    Of course we want our students to pay attention to detail, and number sense is very important in applied math AND theoretical math. However, in the case of a student who solved a challenging problem during an AP test but made a calculation error and didn’t catch it, do we really think that student understood only 10% of the math? Or did the student understand 90% of the math? If we pointed the error out, would the student exclaim, “I can’t believe I missed that!”, or would that student look puzzled, shrug, and say, “I don’t get it”?

    The context of a student, knowing their background, previous work, and how they are doing on test day, isn’t a luxury that standardized-test graders have, and weighting it too heavily can lead to dangerously subjective and inaccurate grading. But it’s naive to ignore those factors; they are very real.

  4. My boss and I had this conversation back during the CST days. He wanted to know what we needed to teach students in order for them to score higher on the exam. I told him we needed to teach them to be perfect. It bothered me then (and still bothers me now) that students would be tested all or nothing on a process that requires multiple steps. My favorite example is finding the solutions to a 3×3 system. A student could perform all operations and steps correctly… except for one. If their wrong answer happened to match one of the multiple-choice answers, they were given a big fat WRONG. First of all, does this really tell us if the student understands the process of solving a 3×3 system? Any more than the student who is filling in the bubbles to create a happy face on his answer sheet? And how is this problem worth the same one point as a problem where they convert a log equation to an exponential one? How are those two problems even on a par with each other?
    Wouldn’t it be wiser and more indicative of a student’s understanding (on any test) if we were to say, “Say how you arrived at your answer and justify how you know your answer makes sense”? (A sketch of that kind of check, in code, follows the comments.)

  5. My favorite example along these lines was from back in my chemistry days, when I was TAing Physical Chemistry.

    There’s a concept called partial molal volume, which, if memory serves, tells you what volume 1 mole of a substance would take up if dissolved in an infinite amount of solvent. A mole of salt is about 2 tablespoons dry, but if you add it to water, it will take up less space than it does dry. For P Chem lab, you find the partial molal volume of salt (NaCl) by dissolving a quantity of salt in increasingly large amounts of water and basically extrapolating.

    This one kid who was bright but kind of lazy in his thinking made a mistake with units and ended up reporting the partial molal volume of salt as something like -10,000 liters. I love the idea of chucking 2 tablespoons of salt into a lake and POOF! 10,000 liters just DISAPPEARS!

    Still amuses me 25+ years later. Maybe it’s just me…

  6. I am a college chemistry professor, and yes, the student would get points off. An essential part of problem solving is asking if your answer makes sense.

    This is indicative of a possible weak link early in math education: the failure to check answers for reasonableness, not just for correctness. This has to be stressed very early on. I wish I had a dime for every time I saw a student solve and perfectly check the equation(s) they had created for a problem, only for them not to be the correct equation(s).

    I often went “off curriculum” and spent time getting high school students to devise equations (the likes of which they had never solved) for problems, watch me solve them, and then analyze the correctness of my answer. I was trying to direct their attention away from the “equation solving” and onto the rest of the problem-solving structure.

    After 30+ years in the classroom, where I tallied student errors on a monthly basis, over 90% of their errors were either reading errors or basic arithmetic errors.

    The student in question here was possibly weak on “checking” habits. This is an error that ought to be judged in connection with the rest of the student’s work (is it a common occurrence, an infrequent event, or a definite rarity?), and with the frequency of that style of error across the entire testing population in mind.

  8. Shannon Alvarez

    May 10, 2016 - 4:32 am

    I was a good math student in high school, but it wasn’t until I entered engineering school in college that I really started to focus on whether an answer makes sense or not. In engineering, all of that “pure math” became more real, and therefore it became more obvious that I should be searching for answers that make sense.

    I’m currently making the switch to the teaching world and did my student teaching this past winter. Whenever I wanted to give students the most partial credit, my co-op teacher would ask me the poignant question, “What exactly are you assessing?” I found this was a great question to continue asking myself. So, in the example you gave, are you assessing students’ ability to perform mathematical functions correctly or are you assessing their ability to connect those math functions to the real world?

  9. Love this post! As someone who has a BS in Applied Math and an MS in Paleontology, and who is a certified Math/Science teacher, I can definitely see both sides of the argument! It reminds me of the NASA engineers who, in making calculations for a probe, used different units (m vs. ft). The result was that the probe crashed into Mars and was a HUGE waste of money!

    When teaching my students, I emphasize that the major issue is not simple mistakes in your calculations (e.g., missing a negative, multiplying instead of dividing), but CHECKING your work to see that it makes sense. Then, once you check your answer, CORRECTING any mistakes!

    I would grade it by taking 1 point off for the initial error, and 4 for not checking his answer. That way, he loses half credit, mostly for not checking his answer.

    The main difference between applied and pure math is that kids often need the pure-math concepts to solve problems, but without the applied-math strategies to check that their answers are reasonable and precise enough, the end result could be disastrous in a real-world situation.

  10. David Garcia

    May 10, 2016 - 11:40 am

    I would like to reiterate what Robert wrote. During an AP test there is tremendous time pressure. Knowing nothing else, I would assume that the student did not have time to do anything about the mistake even if he or she recognized it. Would the student who wrote “I know this is wrong but I can’t figure out why :(” deserve more points than the student who doesn’t write this? Is identifying or checking the reasonableness of a response part of the scoring criteria for this course?

  11. This reminds me of a problem I gave my students one time about the cost of car tires. A student worked out the problem and said each tire had to cost something like $5,000. The other students said that didn’t make sense. The student replied, “It doesn’t have to make sense, it is just a math problem.”

  12. So you’re telling me that the AP graders didn’t have a rubric or any other type of systematic approach for grading open-ended problems? I hope they do now.

  13. Chester Draws

    May 11, 2016 - 1:42 am

    I think we need to be careful about accusing our students of “not checking”.

    I know that I am not alone in my inability to see mistakes in what I have just typed. To see mistakes I need to take a long break and come back and re-read it fresh. This is an adult who knows the sorts of mistakes he makes, mind you, and so should spot them. Not a kid who has no idea what he can’t do.

    Students rechecking their work generally make the same mistake again. So much so that I no longer ask students to check their work, just to learn to be more careful in the first place.

    In fact, if my students know that their answer is wrong, I tell them to do it again from the start. That is pretty much the only way they will do it right (and even then they’ll still often make the same mistake).

    In an exam situation students simply do not have the time and calmness to check properly. It is literally a waste of time to ask them to do so when they could be doing more questions (or the same ones more slowly).

  14. Maybe what is real has implications for catching obvious errors. From my understanding of the book Thinking, Fast and Slow, if the problem had been real for the student, brain System 1 would easily have caught that error. Maybe because it is unreal for the students, lazy System 2 let it slide.

  15. Chester Draws

    May 11, 2016 - 9:21 pm

    Carl, to clarify.

    Students should notice that they have made an error when the answer is clearly wrong in the context.

    My point is that going back and looking for where the error occurred is not necessarily practical in an exam.

    I often know that I have made an error in a worked answer I want to give students. In extreme cases it has taken me hours to track it down. The thing is, once I have made the error I keep making it again, because I am incapable of seeing it cleanly. Seeing it requires taking time off and coming back to it fresh.

  16. In a situation where the answer is so clearly wrong that it borders on the ridiculous, I typically take a point for the initial mistake and a point for not recognizing that the answer is wrong. If a student writes something like a question mark afterwards or “this doesn’t seem right but I don’t know what I did wrong,” I take that as recognizing the unreasonableness of their answer and take only the point for the error.

    Of course a student without anything written may also have realized it was wrong, but they should keep in mind that their job on the test is to fully communicate all of their understanding to me.

  17. FWIW: The MSRI piece by H.O. Pollak is from the COMAP Mathematical Modeling Handbook.

    Full copies are available for free online; here is a brief sample that includes the article (it begins on page 8 of the 25-page PDF):

    http://www.comap.com/modelingHB/Modeling_HB_Sample.pdf

    But here is a longer excerpt from the Handbook (though with an abbreviated version of Pollak’s intro):

    http://www.comap.com/modelingHB/CCSSModelingHB.pdf

    One reason to point this out is that the lessons are each followed by a section entitled “Teacher’s Guide — Extending the Model.” Although not attributed in the document, these were (essentially or entirely) written by Pollak.

    So, e.g., in the linked MSRI version Pollak writes:

    1. “An introduction to the modeling of epidemics can be associated with Viral Marketing.”

    2. “…Sunken Treasure has discrete, continuous, and even experimental aspects. Sunken Treasure, besides using a variety of forms of mathematical reasoning, even suggests using physics in order to do mathematics!”

    3. “For example, in connection with several modules involving probability and statistics, the notion of optimal stopping occurs more than once. It is the central idea in Picking a Painting…”

    I wrote (but did not edit…) the three lessons numbered above, and report with surety that Pollak’s extensions are much better than my contributions! (E.g., the “association” of modeling epidemics with Viral Marketing is totally due to Pollak.)

  18. This is a fantastic dilemma, but the dilemma arises from the practical task of assigning points to the student’s answer. I imagine those same teachers would largely agree about what the student’s work demonstrates about his mathematical thinking, or about what instructional approach might be appropriate to take next (although that’s a moot point for graders of AP exams). It’s the relative importance of the pure versus the applied, and how to assign point values to each, that creates the debate. In everyday teaching, how we adjust our instruction when students make mistakes like these matters more than how many points we take off. Let the AP graders get hung up on points.
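
A minimal sketch of the substitute-it-back check that commenter 4 calls for, assuming Python with NumPy; the particular 3×3 system and its numbers are invented for illustration:

```python
import numpy as np

# An invented 3x3 system for illustration:
#    2x +  y -  z =   8
#   -3x -  y + 2z = -11
#   -2x +  y + 2z =  -3
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

x = np.linalg.solve(A, b)  # candidate solution: [2., 3., -1.]

# The "justify that your answer makes sense" step: substitute the
# candidate back into every original equation, not just one of them.
residual = A @ x - b
if np.allclose(residual, 0.0):
    print("Answer checks out:", x)
else:
    print("Answer fails the check; redo the work:", residual)
```

The same habit works by hand: a candidate solution that fails even one of the three original equations signals an arithmetic slip somewhere upstream.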