Category: tech contrarianism


Big Online Courses Have a Problem. Here’s How We Tried to Fix It.

The Problem

Here is some personal prejudice: I don’t love online courses.

I love learning in community, even in online communities, but online courses rarely feel like community.

To be clear, by online courses I mean the kind that have been around almost since the start of the internet, the kind that were amplified into the “Future of Education™” in the form of MOOCs, and which continue today in a structure that would be easily recognized by someone defrosted after three decades in cold storage.

These courses are divided into modules. Each module has a resource like a video or a conversation prompt. Students are then told to respond to the resource or prompt in threaded comments. You’re often told to make sure you respond to a couple of other people’s responses. This is “community” in online courses.

The reality is that your comment quickly falls down a long list as other people comment, a problem that grows in proportion to the number of students in the course. The more people who enroll, the less attention your ideas receive, and consequently the less interested you are in contributing your ideas, a vicious cycle which offers some insight into the question, “Why doesn’t anybody finish these online courses?”

I don’t love online courses, but maybe that’s just me. Two years ago, the ShadowCon organizers and I created four online courses to extend the community and ideas around four 10-minute talks from the NCTM annual conference. We hosted the courses using some of the most popular online course software.

The talks were really good. The assignments were really good. There’s always room for improvement but the facilitators would have had to quit their day jobs to increase the quality even 10%.

And still retention was terrible. Only 3% of the participants who finished the first week’s assignment went on to finish the fourth week’s.

Low retention from Week 1 to Week 4 in the course.

The organizers and I had two hypotheses:

  • The size of the course enrollment inhibited community formation and consequently retention.
  • Teachers had to remember another login and website in order to participate in the course, creating friction that decreased retention.

Our Solution

For the following year’s online conference extensions, we wanted smaller groups and we wanted to go to the people, to whatever software they were already using, rather than make the people come to us.

So we used technology that’s even older than online course software, technology that is woven tightly into every teacher’s daily routine: email.

Teachers signed up for the courses. They signed up in affinity groups – coaches, K-5 teachers, or 6-12 teachers.

The assignments and resources they would have received in a forum posting, they received in an email CC’d to two or three other participants, as well as the instructor. They had their conversation in that small group rather than in a massive forum.
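
Mechanically, the batching is trivial. Here is a minimal sketch in Python, assuming only a list of signups tagged by affinity group; the field names and group size are my own invention for illustration, not the actual tooling we used:

```python
from itertools import islice

# Hypothetical signup records: (email, affinity group) pairs.
signups = [
    ("ana@example.com", "K-5"),
    ("ben@example.com", "K-5"),
    ("cam@example.com", "6-12"),
    ("dee@example.com", "coaches"),
    # ...
]

def cc_groups(signups, group_size=3):
    """Batch signups into small CC groups within each affinity group."""
    by_affinity = {}
    for email, affinity in signups:
        by_affinity.setdefault(affinity, []).append(email)
    for affinity, emails in sorted(by_affinity.items()):
        it = iter(emails)
        # Each batch becomes one email thread: the assignment goes out
        # to the batch, CC'd to the instructor, and replies stay in-thread.
        while batch := list(islice(it, group_size)):
            yield affinity, batch

for affinity, batch in cc_groups(signups):
    print(affinity, batch)
```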

Of course this meant that participants wouldn’t see all their classmates’ responses in the massive forum, including potentially helpful insights.

So the role of the instructors in this work wasn’t to respond to every email but rather to keep an eye out for interesting questions and helpful insights from participants. Then they’d preface the next email assignment with a digest of interesting responses from course participants.

The Results

To be clear, the two trials featured different content, different instructors, different participants, and different grouping strategies. They took place in different years and different calendar months in those years. Both courses were free and about math, but there are plenty of variables that confound a direct comparison of the media.

So consider it merely interesting that average course retention was nearly five times higher when the medium was email rather than online course software.

Retention was nearly five times greater in the email course than in the LMS.

It’s also just interesting, and still not dispositive, that the responses in email were on average twice the length of the responses in the online course software.

Double the word count.

People wrote more and stuck around longer for email than for the online course software. That says nothing about the quality of their responses, just the quantity. It says nothing about the degree to which participants in either medium were building on each other’s ideas rather than simply speaking their own truth into the void.

But it does make me wonder, again, if large online courses are the right medium for creating an accessible community around important ideas in our field, or in any field.

What do you notice about this data? What does it make you wonder?

Featured Comments

Leigh Notaro:

By the way, the Global Math Department has a similar issue with sign-ups versus attendance. Our attendance rate is typically 5%-10% of those who sign up. Of course, we do have the videos and the transcript of the chat. So, we have made it easy for people to participate in their own time. Participating in PD by watching a video, though, is never the same thing as collaborating during a live event – virtually or face-to-face. It’s like learning in a flipped classroom. Sure, you can learn something, but you miss out on the richness of the learning that really can only happen in a face-to-face classroom of collaboration.

William Carey:

At our school now, when we try out new parent-teacher communication methods, we center them in e-mail, not our student information system. It’s more personal and more deeply woven into the teachers’ lives. It affords the opportunity for response and conversation in a way that a form-sent e-mail doesn’t.

Cathy Yenca:

At the risk of sounding cliché or boastful about reaching “that one student”, how does one represent a “data point” like this one within that tiny 3%? For me, it became 100% of the reason and reward for all of the work involved. I know, I know, I’m a sappy teacher :-)

Justin Reich is extremely thoughtful about MOOCs and online education and offered an excellent summary of some recent work.

2018 Oct 5. Definitely check out the perspective of Audrey, who was a participant in the email group and said she wouldn’t participate again.

2018 Oct 12. Rivka Kugelman had a much more positive experience in the email course than Audrey, one which seemed to hinge on her sense that her emails were actually getting read. Both she and Audrey speak to the challenge of cultivating community online.

Learning the Wrong Lessons from Video Games

[This is my contribution to The Virtual Conference on Mathematical Flavors, hosted by Sam Shah.]

In the early 20th century, Karl Groos claimed in The Play of Man that “the joy in being a cause” is fundamental to all forms of play. One hundred years later, Phil Daro would connect Groos’s theory of play to video gaming:

Every time the player acts, the game responds [and] tells the player your action causes the game action: you are the cause.

Most attempts to “gamify” math class learn the wrong lessons from video games. They import leaderboards, badges, customized avatars, timed competitions, points, and many other stylistic elements from video games. But gamified math software has struggled to import this substantive element:

Every time the player acts, the game responds.

When the math student acts, how does math class respond? And how is that response different in video games?

Watch how a video game responds to your decision to jump off a ledge.

Now watch how math practice software responds to your misinterpretation of “the quotient of 9 and c.”

The video game interprets your action in the world of the game. The math software evaluates your action for correctness. One results in the joy in being the cause, a fundamental feature of play according to Groos. The other results in something much less joyful.

To see the difference, imagine if the game evaluated your decision instead of interpreting it.
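
To make the contrast concrete, here is a toy sketch in code. Everything in it is invented for illustration (no real product works this way), but it shows the difference between judging an answer and acting it out:

```python
# Toy sketch: two kinds of feedback for "the quotient of 9 and c".
# All names and behavior here are invented for illustration.

def evaluative_feedback(student_expr: str) -> str:
    # Judges the answer against the expected form, then stops.
    if student_expr.replace(" ", "") == "9/c":
        return "Correct!"
    return "Incorrect. Try again."

def interpretive_feedback(student_expr: str) -> str:
    # Acts out the student's expression at sample values, so a
    # misreading like "c / 9" becomes visible instead of just "wrong."
    lines = []
    for c in (1, 3, 9):
        value = eval(student_expr, {"__builtins__": {}}, {"c": c})
        lines.append(f"when c = {c}, your expression gives {value:g}")
    return "\n".join(lines)

print(evaluative_feedback("c / 9"))    # Incorrect. Try again.
print(interpretive_feedback("c / 9"))  # when c = 1, your expression gives 0.111111 ...
```

The second response gives the student something to reason about rather than a verdict.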

I doubt anyone would argue with the goals of making math class more joyful and playful, but those goals are more easily adapted to a poster or conference slidedeck than to the actual experience of math students and teachers.

So what does a math class look like that responds whenever a student acts mathematically, that interprets rather than evaluates mathematical thought, that offers students joy in being the cause of something more than just evaluative feedback?

“Have students play mathematical or reasoning games” is certainly a fair response, but bonus points if you have recommendations that apply to core academic content. I will offer a few examples and guidelines of my own in the comments tomorrow.

Featured Comments

James Cleveland:

I feel like a lot of the best Desmos activities do that, because they can interpret (some of) what the learner inputs. When you do the pool border problem, it doesn’t tell you that your number of bricks is wrong – it just makes the bricks, and you can see if that is too many, too few, or just right.

In general, a reaction like “Well, let’s see what happens if that were true” seems like a good place to start.

Kevin Hall:

My favorite example of this is when Cannon Man’s body suddenly multiplies into two or three bodies if a student draws a graph that fails the vertical line test.

Sarah Caban:

I am so intrigued by the word interpret. “Interpret” is about translating, right? Sometimes when we try to interpret, we (unintentionally) make assumptions based on our own experiences. Recently, I have been pushing myself to linger in observing students as they work, postponing interpretations. I have even picked up a pencil and “tried on” their strategies, particularly ones that are seemingly not getting to a correct solution. I have consistently been joyfully surprised by the math my students were playing with. I’m wondering how this idea of “trying on” student thinking fits with technology. When/how does technology help us try on more student thinking?

Dan Finkel:

I think that many physical games give clear [evaluative] feedback as well, insofar as you test out a strategy, and see if you win or not. Adults can ruin these for children by saying, “are you sure that’s the right move?” rather than simply beating them so they can see what happens when they make that move. The trick there is that some games you improve at simply by losing (I’d put chess in this column, even though more focused study is essential to get really good), where others require more insight to see what you actually need to change.

The Four Questions I Always Ask About New Technology in Education

A tweet where someone asks my impressions about Graspable Math.

A tool called Graspable Math found an audience on Twitter late last week, and a couple of people asked me for my opinion. I’ll share what I think about Graspable Math, but I’ll find it more helpful to write down how I think about Graspable Math, the four questions I ask about all new technology in education. [Full disclosure: I work in this field.]

1. What does it do?

That question is easier for me to answer with basic calculators and graphing calculators than with Graspable Math. Basic calculators make it easy to compute the value of numerical expressions. Graphing calculators make it easy to see the graphical representation of algebraic functions.

Graspable Math’s closest cousins are probably the DragonBox and Algebra Touch apps. All of these apps offer students a novel way of interacting with algebraic expressions.

Drag a term to the opposite side of an equality and its sign will change.

Move a term from one side of the equation to the other.

Double-click an operation like addition and it will execute that operation, if it’s legal.

Click to perform an operation like addition.

Drag a coefficient beneath the equality and it will divide the entire equation by that number.

Drag to divide by a coefficient.

Change any number in that sequence of steps and it will show you how that change affects all the other steps.

Change a number in one place in the sequence of steps and it will change it everywhere else.

You can also link equations to a graph.

Connect the equation with a graph.

2. Is that a good thing to do?

No tool is good. We can only hope to figure out when a tool is good and for whom and for what set of values.

For example, if you value safety, an arc torch is a terrible tool for a toddler but an amazing tool for a welder.

I value a student’s conviction that “Mathematics makes sense” and “I am somebody who can make sense of it.”

So I think a basic calculator is a great tool for students who have a rough sense of the answer before they enter it. (For example, I know that 125 goes into 850 six-ish times. A basic calculator is perfect for me there.)

A graphing calculator is a great tool for a student who understands that a graph is a picture of all the x- and y-values that make an algebraic statement true, a student who has graphed lots of those statements by hand already.

Basic and graphing calculators can both contribute to a student’s idea that “Mathematics doesn’t make a dang bit of sense” and “I cannot make sense of it without this tool to help me” if they’re used at the wrong time in that student’s development.

The Graspable Math creators designed their tool for novice students early in their algebraic development. Is it a good tool for those students at that time? I’m skeptical for a few reasons.

First, I suspect Graspable Math is too helpful. It won’t let novice students make computational errors, for example. Every statement you see in Graspable Math is mathematically true. It performs every operation correctly. But it’s enormously helpful for teachers to see a student’s incorrect operations and mathematically false statements. Both reveal the student’s early understanding of really big ideas about equivalence.

In one of their research papers, the Graspable Math team quotes a student as saying, “[Graspable Math] does the math for you – you don’t have to think at all!” which is a red alert that the tool is too helpful, or at least helpful in the wrong way.

Second, Graspable Math’s technological metaphors may conceal important truths about mathematics. “Drag a term to the opposite side of an equality and its sign will change” isn’t a mathematical truth, for example.

Move a term from one side of the equation to the other.

It’s a technological metaphor for the mathematical truth that you can add the same number (3 in this case) to both sides of an equal sign and the new equation will have all the same solutions as the first one. That point may seem technical but it underpins all of Algebra and it isn’t clear to me how Graspable Math supports its development.
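
Written out in symbols (the specific equation is my own example), the drag is shorthand for:

$$
\begin{aligned}
x - 3 &= 7 \\
x - 3 + 3 &= 7 + 3 \\
x &= 10
\end{aligned}
$$

Nothing “moves.” Each line is a new equation, equivalent to the one before it because the same quantity was added to both sides.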

Third, Graspable Math may persuade students that Algebra as a discipline is very concerned with moving symbols around based on a set of rules, rather than with understanding the world around them, developing the capacity for conjecturing, or some other concern. I’m speaking about personal values here, but I’m much more interested in helping students turn a question into an equation and interpret the solutions of that equation than I am in helping them solve the equation, which is Graspable Math’s territory.

These are all tentative questions, skepticisms, and hypotheses. I’m not certain about any of them, and I’m glad Graspable Math recently received an IES grant to study their tool in more depth.

3. What does it cost?

While Graspable Math is free for teachers and students, money isn’t the only way to measure cost. Free tools can cost teachers and students in other ways.

For instance, Graspable Math, like all new technology, will cost teachers and students time as they try to understand how it works.

I encourage you to try to solve a basic linear equation with Graspable Math, something like 2x – 3 = 4x + 7. Your experience may be different from mine, but I felt pretty silly at several points trying to convince the interface to do for me what I knew I could do for myself on paper. (Here’s a tweet that made me feel less alone in the world.)
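
For comparison, the pencil-and-paper version (my own working) is three short lines:

$$
\begin{aligned}
2x - 3 &= 4x + 7 \\
-3 - 7 &= 4x - 2x \\
-10 &= 2x \quad\Rightarrow\quad x = -5
\end{aligned}
$$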

Graspable Math performs algebraic operations correctly and quickly but at the cost of having to learn a library of gestures first, effectively trading a set of mathematical rules for a set of technological rules. (There is a cheat sheet.) That kind of cost is at least as important as money.

2018 Aug 8. Elizabeth Hernandez writes in the comments:

One thing I might add to the section about cost is that it is so important to find out how student data is being used. Resources that are labeled as “free” often make students and teachers pay with their data. That is unethical if the vendor doesn’t provide information about what data they collect and how it is used. Graspable Math is a no-go for me because I can’t find their terms of use or privacy policy. The only information I saw about data collected was one vague sentence when I clicked “login.”

4. What do other people think about this?

I spent nearly as much time searching Twitter for mentions of Graspable Math as I did playing with the tool itself. Lots of people I know and respect are very excited about it, which gives me lots of reasons to reconsider my initial assessment.

While I find teachers on Twitter are very easily excited about new technology, I don’t know a single one who is any less than completely protective of their investments of time and energy on behalf of their students. Graspable Math may have value I’m missing and I’m looking forward to hearing about it from you folks here and on Twitter.

BTW. Come work with me at Desmos!

If you find these questions interesting and you’d like to chase down their answers with me and my amazing colleagues at Desmos, please consider applying for our teaching faculty, software engineering, or business development jobs.

2018 Jun 11. Cathy Yenca pulls out this helpful citation from Nix the Tricks (p. 54).

An image showing the page from Nix the Tricks.

2018 Jun 11. The Graspable Math co-founders have responded to some of the questions I and other educators have raised here. Useful discussion!

Must Read: Larry Berger’s Confession & Question About Personalized Learning

Larry Berger, CEO of Amplify, offers a fantastic distillation of the promises of digital personalized learning and how they are undone by the reality of learning:

We also don’t have the assessments to place kids with any precision on the map. The existing measures are not high enough resolution to detect the thing that a kid should learn tomorrow. Our current precision would be like Google Maps trying to steer you home tonight using a GPS system that knows only that your location correlates highly with either Maryland or Virginia.

If you’re anywhere adjacent to digital personalized learning – working at an edtech company, teaching in a personalized learning school, in a romantic relationship with anyone in those two categories – you should read this piece.

Berger closes with an excellent question to guide the next generation of personalized learning:

What did your best teachers and coaches do for you—without the benefit of maps, algorithms, or data—to personalize your learning?

My best teachers knew what I knew. They understood what I understood about whatever I was learning in a way that algorithms in 2018 cannot touch. And they used their knowledge not to suggest the next “learning object” in a sequence but to challenge me in whatever I was learning then.

“Okay you think you know this pretty well. Let me ask you this.”

What’s your answer to Berger’s question?

BTW. It’s always the right time to quote Begle’s Second Law:

Mathematics education is much more complicated than you expected even though you expected it to be more complicated than you expected.

Featured Comment

SueH:

I have come to believe that all learning is personalized not because of what the teacher does but because of what’s happening inside the learner’s brain. Whatever pedagogical choices a teacher makes, it’s the student’s work that causes new neural networks to be created and pre-existing ones to be augmented or strengthened or broken or pruned.

Scott Farrand:

I’ll accept the risk of stating the obvious: my best teachers cared about me, and I felt that. Teaching is an act of love. A teacher who cares about each student is much more likely to, in that instant after a student responds to a question, find the positive value in the response and communicate encouragement to the student, verbally and nonverbally. And students who feel cared for are more likely to have good things going on in their brains, as described by SueH.

1,000 Math Teachers Tell Me What They Think About Calculators in the Classroom

Yesterday, I asked teachers on Twitter about their classroom calculator policy and 978 people responded.

I wanted to know if they allow calculators a) during classwork, b) during tests, and also which kinds of calculators:

  • Hardware calculators (like those sold by Texas Instruments, Casio, HP, etc.).
  • Mobile phone calculators (like those you can download on your Android or iOS phone).

(Full disclosure: I work for a company that distributes a free four function, scientific, and graphing calculator for mobile phones and other devices.)

I asked the question because hardware calculators don’t make a lot of financial sense to me.

Here are some statistics for high-end HP and Texas Instruments graphing calculators along with a low-end Android mobile phone.

| device | cost ($) | storage (MB) | memory (MB) | screen size |
| --- | --- | --- | --- | --- |
| TI Nspire CX | 129.99 | 100 | 64 | 320 x 240 |
| HP Prime | 149.99 | 256 | 32 | 320 x 240 |
| Moto G Unlocked Smartphone | 179.99 | 32000 | 2000 | 1920 x 1080 |

You pay less than twice as much for the mobile phone, and you get hardware that is between roughly 30 and 300 times more powerful than the hardware calculators. And the mobile phone sends text messages, takes photos, and accesses webpages. In many cases, the student already has a mobile phone. So why spend the money on a second device that is much less powerful?
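
If you want to check that arithmetic, here is the back-of-envelope computation, a quick sketch with the figures taken straight from the table above:

```python
# Back-of-envelope ratios from the table above.
phone = {"cost": 179.99, "storage_mb": 32000, "memory_mb": 2000}
calculators = {
    "TI Nspire CX": {"cost": 129.99, "storage_mb": 100, "memory_mb": 64},
    "HP Prime":     {"cost": 149.99, "storage_mb": 256, "memory_mb": 32},
}

for name, calc in calculators.items():
    print(f"{name}: "
          f"cost {phone['cost'] / calc['cost']:.2f}x, "
          f"storage {phone['storage_mb'] / calc['storage_mb']:.0f}x, "
          f"memory {phone['memory_mb'] / calc['memory_mb']:.1f}x")

# TI Nspire CX: cost 1.38x, storage 320x, memory 31.2x
# HP Prime: cost 1.20x, storage 125x, memory 62.5x
```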

1,000 teachers gave me their answer.


The vast majority of respondents allow hardware calculator use in their classes. I suspect I’m oversampling for calculator-friendly teachers here, by virtue of drawing that sample from a digital medium like Twitter.

734 of those teachers allow a hardware graphing calculator but not a mobile phone on tests. 366 of those teachers offered reasons for that decision. They had my attention.

Here are their reasons, along with representative quotes, ranked from most common to least.

Test security. (173 votes.)

It’s too easy for students to share answers via text or picture.

Internet access capabilities and cellular capabilities that make it way too easy for the device to turn from an analysis/insight tool to the CheatEnable 3000 model.

School policy. (68 votes.)

School policy is that phones are in lockers.

It’s against school policy. They can use them at home and I don’t have a problem with it, but I’m not allowed to let them use mobile devices in class.

Distraction. (67 votes.)

Students waste time changing music while working problems, causing both mistakes due to lack of attention and inefficiency due to electronic distractions.

We believe the distraction factor is a negative impact on learning. (See Simon Sinek’s view of cell phones as an “addiction to distraction.”)

Test preparation. (54 votes.)

I am also preparing my students for an IB exam at the end of their senior year and there is a specific list of approved calculators. (Phones and computers are banned.)

Basically I am trying to get students comfortable with assessments using the hardware so they won’t freak out on our state test.

Access. (27 votes.)

Our bandwidth is sometimes not enough for my entire class (and others’ classes) to be online all at once.

I haven’t determined a good way so that all students have equal access.

Conclusion

These reasons all seem very rational to me. Still, it’s striking to me that “test security” dwarfs all others.

That’s where it becomes clear to me that the killer feature of hardware calculators is their lack of features. I wrote above that your mobile device “sends text messages, takes photos, and accesses webpages.” At home, those are features. At school, or at least on tests, they are liabilities. That’s a fact I need to think more about.

Featured Comments

Jennifer Potier:

I work in a BYOD school. What I have learned is that the best way to disengage students from electronic devices is to promote learning that involves student sharing of discussion, planning, thinking, and solving problems. When the students are put “centre stage,” the devices start becoming less interesting.

Chris Heddles:

The restriction on calculation aids and internet connections still stems from a serious cultural issue we have in mathematics teaching – the type of questions that we ask. While we continue to emphasise the importance of numerical calculations and algebraic manipulation in assessment, electronic aids to these skills will continue to be an issue.

Instead, we should shift the focus to understanding the situation presented, setting up the equations, and then making sense of the calculation results. With this shift, the calculations themselves are relatively unimportant, so it doesn’t really matter how the students process them. Digital aids can be freely used because they are of little use when addressing the key aspects of the assessment tasks.

In many ways our current mathematics assessment approach is equivalent to a senior secondary English essay that gave 80% of the grade for neat handwriting and correct spelling. If this were the case, then they, too, would have to ban all electronic aids to minimise the risk of “cheating” by typing and using spell-checking software.

If we change what we value in assessment then we can open up better/cheaper electronic aids for students.

2017 Mar 24. Related to Chris’s comment above, I recently took some sample SAT math tests and was struck by how infrequently I needed a calculator. Not because I’m any kind of mental math genius. Simply because the questions largely concerned analysis and formulation over calculation and solution.

Featured Tweets