
My 2016 Speaking Schedule

Here is my speaking calendar for 2016. Some of these sessions are private, others have open registration pages (see the links), and others have waiting lists. Feel free to send an e-mail to dan@mrmeyer.com with inquiries about any of them. It’d be a treat to see you at a workshop or a conference.

BTW. After my keynote address at Nebraska’s state conference on September 9, 2016, I’ll have worked with teachers in every U.S. state. It’s been such a privilege getting to know so many interesting people doing so much interesting work. If you have attended any of my sessions, you’ve heard me express how indebted I am to participants from other sessions for the questions they ask about my ideas and the ideas they share themselves.

Fake-World Math: When Mathematical Modeling Goes Wrong and How to Get it Right

Fake-World Math was the talk I gave for most of 2014, including at NCTM. It looks at mathematical modeling as it’s defined in the Common Core, practiced in the world of knowledge work, and maligned in print textbooks. I discuss methods for helping students become proficient at modeling and methods for helping them enjoy modeling, which are not the same set of methods.

Also, a note on process. I recorded my screen throughout the entire process of creating the talk. Then I sped it up and added some commentary.

NCTM 2015 Schedule

Poking my head up briefly to let you know where I’ll be speaking at NCTM in Boston this week:

I really think we should spend Thursday evening together.

From 3:30 on you have a) the Ignite sessions, which are always fun, then you have b) ShadowCon, which is basically the future of NCTM conferences, after which c) your two favorite math education companies want to buy you a drink.

If you’re looking for help with the rest of your schedule, have a look at this list of Internet-enabled presenters as well as any of the names from my list last year.

For my part, if I can only make it to one session, it’s going to be this one.

Video Games & Making Math More Like Things Students Like

Here is the talk I gave at CMC-North last weekend: Video Games & Making Math More Like Things Students Like.

Students generally prefer video games to our math classes and I wanted to know why. So I played a lot of video games and read a bit about video games and drew some conclusions. I also asked my in-laws to play two video games in front of a camera so we could watch their learning process and draw comparisons to our students.

These are the six lessons I learned:

  1. Video games get to the point.
  2. The real world is overrated.
  3. Video games have an open middle.
  4. The middle grows more challenging and more interesting at the same time.
  5. Instruction is visual, embedded in practice, and only as needed.
  6. Video games lower the cost of failure.

Featured Comments:

Tim brings storytelling to the conversation:

As one of those weird AP Lit and AP Calc teachers – and a gamer – I think “story” is key in video gaming. Psychologists (like Willingham) and sociologists talk about the “story bias” of the brain. Nearly all long video games have a heavy story element. You are a character embedded in a story, be it open-ended or scripted. So often when I’m frustrated with bad game design I’ll push through because I’m committed to the story. So often when I finish the “missions” I give up on the well-designed “side-quests” because the story has rushed out of the game and it’s just a task-garden again.

I’ll play Angry Birds for a few minutes. I’ll play Temple Run till I beat my friend’s score. But I won’t put 20 hours into a game until I find a story I want to be invested in. (In the same breath, I’ll say that – in the sense of “story” that Willingham uses it – Angry Birds and Temple Run have their stories, too. Far more than many “story” problems in math books like to pretend they have.)

Not sure how you get rich story into math. How to become characters whose adventures we become invested in, not the scripted Jane who is trying to maximize the area of her pasture or the open-ended John who is trying to find a good way to estimate the number of people in a photo.

Anyway – the first lesson I learn from video games is: humans will spend hours on a good yarn.

My Panama Canal metaphor was just a joke from the outset, so I had to admire Joshua Greene’s continued debunking.

Why People Didn’t Like Your Conference Presentation

I attended the California Math Council’s annual conference in Palm Springs last week along with nearly 3,600 other attendees. I presented a session there along with nearly 240 other presenters. At the end of our sessions, attendees could use PollEverywhere to send in their responses to three statements:

  1. The speaker was well-prepared and knowledgeable.
  2. The speaker was an engaging and effective presenter.
  3. The session matched the title and description in the program book.

Attendees scored each statement on a scale from 0 (disagree!) to 3 (agree!). Attendees could also leave written feedback.

At the end of the conference, presenters were sent to a public site where they could access not just their own session feedback, but the feedback for other presenters also. This link started circulating on Twitter. I scraped the feedback from every single session and analyzed all the data.

This is my intention:

  • To learn what makes sessions unpopular with attendees. It’s really hard to determine what makes a session good. “Great session!” shows up an awful lot without elaboration. People were much more specific about what they disliked.

This isn’t my intention:

  • To shame anybody. Don’t ask me for the data. Personally, I don’t think it should have been released publicly. I hope the conference committee takes the survey results seriously in planning future conferences but I’m not here to judge anybody.

Overall

There were 2,972 total feedback messages. 2,615 were correctly formatted. With 3,600 attendees attending a maximum of eight sessions, that’s 28,800 feedback messages that could have been sent, for a response rate of about 10%.
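The tallying above can be sketched in a few lines. The PollEverywhere export format isn’t shown in this post, so the message shape below is an assumption: suppose a “correctly formatted” message begins with three 0–3 scores followed by an optional written comment.

```python
import re

# Assumed message shape: three scores from 0-3, then optional written
# feedback, e.g. "3 2 3 Great session!". Anything else counts as one of
# the incorrectly formatted messages.
SCORE_RE = re.compile(r"^\s*([0-3])\s+([0-3])\s+([0-3])\s*(.*)$")

def parse_messages(messages):
    """Return (scores, comment) pairs for well-formed messages only."""
    parsed = []
    for msg in messages:
        m = SCORE_RE.match(msg)
        if m:
            q1, q2, q3, comment = m.groups()
            parsed.append(((int(q1), int(q2), int(q3)), comment.strip()))
    return parsed

# Response rate: 3,600 attendees x at most 8 sessions each.
possible = 3600 * 8        # 28,800 possible messages
response_rate = 2972 / possible  # roughly 10%
```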

How Much Feedback Did You Get?

Most presenters received between 10 and 20 feedback messages. One presenter received 64 messages, though that was across several sessions.

How Did You Do On Each Of The Survey Questions?

Overall the feedback is incredibly positive. I’m very curious how this distribution compares to other math education conferences. I attend a lot of them and Palm Springs is on the top shelf. California has a deep bench of talented math educators and Palm Springs is a great location which draws in great presenters from around the country. I’d put it on par with the NCTM regionals. Still, this feedback is surprisingly sunny.

The data also seem to indicate that attendees were more likely to critique the presenters’ speaking skills (statement #2) than their qualifications (statement #1).

How Did You Do On A Lousy Measure Of Overall Quality?

For each presenter, I averaged the responses they received for each of the survey questions and then summed those averages. This measure is problematic for loads of reasons, but more useful than useless I think. It runs from 0 to 9.

62 presenters received perfect scores from all their attendees on all measures. 132 more scored above an 8.0. Even granting the lousiness of the measure, it points to a very well-liked set of presenters.
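As a sketch of the measure described above (the function name is mine, not from the original analysis): average each presenter’s responses per question, then sum the three averages, giving a score from 0 (every attendee answered 0 on everything) to 9 (every attendee answered 3 on everything).

```python
def composite_score(responses):
    """Sum of per-question mean scores for one presenter.

    `responses` is a list of (q1, q2, q3) score triples, each score 0-3.
    Ranges from 0 (all zeros) to 9 (all threes).
    """
    n = len(responses)
    return sum(
        sum(triple[q] for triple in responses) / n
        for q in range(3)
    )
```

A presenter rated 3-3-3 by every respondent scores a perfect 9.0, which is how the 62 perfect scores above arise.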

So why didn’t people like your session? The following quotes are all verbatim.

What People Said When You Weren’t “Well-Prepared Or Knowledgeable.”

If someone rated you a 0 or a 1 for this statement, it was because:

  • he was late and unprepared.
  • frustrating that we spent an hr on ppl sharing rationale or venting. I wanted to hear about strategies and high leaveage activities at the school.
  • went very fast
  • information was scattered.
  • A lot of sitting around and not do much.
  • This presentation was scattered and seemed thrown together at the last minute.
  • Unfortunately, the presenter was not focused. There was no clear objectives. Please reconsider inviting this presenter.

What People Said When You Weren’t “An Engaging Or Effective Presenter.”

If someone rated you a 0 or a 1 for this statement, it was because:

You didn’t offer enough practical classroom applications.

  • I wanted things students could use.
  • philosophy more than application. I prefer things I can go back with. I already get the philosophy.
  • very boring. Too much talking. I wanted more in class material.

Your style needs work.

  • very dry and ppt was ineffective
  • very disappointing and boring
  • Arrogant,mean & full of himself
  • BORING. BORING. He’s knowledgable, but dry. Not very interactive.
  • knowledgeable but hard to hear
  • he spoke very quickly and did not model activities. difficult to follow and not described logistically.
  • more confused after leaving

Not enough doing.

  • not as hands-on as I would have hoped
  • too much talking from participants and no information or leadership from the presenter. Everyone had to share their story; very annoying.
  • I could do without the justification at the beginning and the talking to each other part. I already know why I’m here.
  • I would have liked more time to individually solve problems.

Too much doing.

  • while they had a good energy, this session was more of a work time than learning. It did not teach me how to facilitate activities
  • I didn’t think I was going to a math class. I thought we would be teachers and actuall create some tasks or see student work not our work.
  • it would be nice to have teachers do less math and show them how you created the tasks you had us do.

What People Said When Your Session “Didn’t Match The Title And Description In The Program Book.”

You were selling a product. (I looked up all of these session descriptions. None of them disclosed the commercial nature of their sessions.)

  • The fact that it was a sales pitch should have been more evident.
  • only selling ti’s topics not covered
  • a sales pitch not something useful to take back to my classroom
  • Good product but I was looking more for ideas that I can use without purchasing a product.
  • I was hoping for some ideas to help my kids with fact fluency, not a sales pitch.
  • didn’t realize it was selling a product
  • this is was nothing more than a sales pitch. Disappointed that I wasted a session!
  • More like a sales pitch for Inspire
  • I would not have gone to this session if I had known it required graphing calculators.

You claimed the wrong grade band.

  • good for college methods course not for math conference
  • not very appropriate for 6,7 grade.
  • disappointed as it was too specific and unique to the HS presented.
  • Didn’t really match the title and it should have been directed to middle school only.
  • This was not as good for elementary even though descript. said PreK-C / a little was relevant but my time would have been better used in another
  • the session was set 2-6 and was presented at grades k-5.
  • this was not a great session for upper elementary grade and felt more appropriate for 7-12.

You didn’t connect your talk closely enough to the CCSS.

  • not related to common core at all. Disappointing
  • unrelated to ccss

You decided to run a technology session, which is impossible at math ed conferences because half the crowd already knows what you’re talking about and is bored, while the other half doesn’t know what you’re talking about and is overwhelmed.

  • gave a few apps but talked mostly about how to teach instead of how to use apps or what apps would be beneficial
  • good tutorial for a newbie or first time Geogebra user but I already knew how to use Geogebra so I found most of this pointless. Offer an advanced
  • Good information, but we did not actually learn how to create a Google form. I thought we would be guided through this more. It doesn’t help to
  • apps were geared to higher lever math not middle school

People Whose Sessions Were Said to Be the “Best of the Conference!”

23-way tie! Perhaps useful for your future conference planning, however.

  • Armando Martinez-Cruz
  • Bob Sornson
  • Brad Fulton
  • Cathy Seeley
  • Cherlyn Converse
  • Chris Shore
  • David Chamberlain
  • Douglas Tyson
  • Eli Luberoff
  • Gregory Hammond
  • Howard Alcosser
  • Kasey Grant
  • Kim Sutton
  • Larry Bell
  • Mark Goldstein
  • Michael Fenton
  • Monica Acosta
  • Nate Goza
  • Patrick Kimani
  • Rachel Lasek
  • Scott Bricker
  • Vik Hovsepian
  • Yours Truly

Conclusion

The revelations about technology and hands-on math work interest me most.

In my sessions, I like to do math with participants and then talk about the math we did. Too much doing, however, and participants seem to wonder why the person at the front of the room is even there. That’s a tricky line to locate.

I would also like to present on the technology I use that makes teaching and learning more fun for me and students. But it seems rather difficult to create a presentation that differentiates for the range of abilities we find at these conferences.

The session feedback here has been extremely valuable for my development as a presenter and I only hope the conference committee finds it equally useful for their purposes. Conference committee chair Brian Shay told me via email, “Historically, we use the data and comments to guide our decision-making process. If we see speakers with low reviews, we don’t always accept their proposal for next year.”

Great, if true. Given the skew of the feedback towards “SUPER LIKE!” it seems the feedback would be most useful for scrutinizing poor presenters, not locating great ones. The strongest negative feedback I found was in reaction to unprepared presenters and presentations that were covertly commercial.

CMC has the chance to initiate a positive feedback loop here by taking these data seriously in their choices for the 2015 conference and making sure attendees know their feedback counted. More feedback, and more thoughtful feedback, will result.

Full Disclosure

I forgot to tell the attendees at my session my PollEverywhere code. Some people still sent in reviews. My practice is to post my most recent twelve months of speaking feedback publicly.

2014 Nov 4. It seems I’ve analyzed an incomplete data set. The JSON file I downloaded for one presenter (and likely others) contains fewer responses than he actually received. I don’t have any reason to believe there is systematic bias to which responses were included and excluded but it’s worth mentioning this caveat.