Why People Didn’t Like Your Conference Presentation

I attended the California Math Council’s annual conference in Palm Springs last week, along with nearly 3,600 other attendees. I presented a session there, as did nearly 240 other presenters. At the end of our sessions, attendees could use PollEverywhere to send in their responses to three statements:

  1. The speaker was well-prepared and knowledgeable.
  2. The speaker was an engaging and effective presenter.
  3. The session matched the title and description in the program book.

Attendees scored each statement on a scale from 0 (disagree!) to 3 (agree!). Attendees could also leave written feedback.

At the end of the conference, presenters were sent a link to a public site where they could access not just their own session feedback but the feedback for every other presenter as well. That link started circulating on Twitter. I scraped the feedback from every single session and analyzed all the data.
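
For the curious, here is roughly what that step looks like in code. This is a minimal sketch, not my actual script: the directory layout and the “scores” and “comment” field names are hypothetical stand-ins, since I’m not documenting PollEverywhere’s export format here.

    # A minimal sketch, assuming each session's feedback was downloaded as a
    # JSON file containing a list of response objects. The field names below
    # ("scores", "comment") are hypothetical, not PollEverywhere's actual schema.
    import json
    from pathlib import Path

    responses = []
    for path in Path("feedback").glob("*.json"):   # one file per session (assumed)
        for r in json.loads(path.read_text()):
            responses.append({
                "session": path.stem,
                "scores": r.get("scores"),         # e.g. [3, 2, 3] for the three statements
                "comment": r.get("comment"),       # free-text feedback, possibly empty
            })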

This is my intention:

  • To learn what makes sessions unpopular with attendees. It’s really hard to determine what makes a session good. “Great session!” shows up an awful lot without elaboration. People were much more specific about what they disliked.

This isn’t my intention:

  • To shame anybody. Don’t ask me for the data. Personally, I don’t think it should have been released publicly. I hope the conference committee takes the survey results seriously in planning future conferences but I’m not here to judge anybody.

Overall

There were 2,972 total feedback messages, 2,615 of which were correctly formatted. With 3,600 attendees each attending a maximum of eight sessions, as many as 28,800 feedback messages could have been sent, for a response rate of about 10%.
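
The arithmetic behind that response rate, spelled out:

    # Back-of-the-envelope response rate, using the figures above.
    attendees = 3600
    max_sessions = 8
    possible = attendees * max_sessions   # 28,800 possible messages
    received = 2972                       # total messages actually sent
    print(f"{received / possible:.1%}")   # 10.3%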

How Much Feedback Did You Get?

[Figure: distribution of the number of feedback messages received per presenter.]

Most presenters received between 10 and 20 feedback messages. One presenter received 64 messages, though that was across several sessions.

How Did You Do On Each Of The Survey Questions?

[Figure: distribution of attendee ratings for each of the three survey statements.]

Overall the feedback is incredibly positive. I’m very curious how this distribution compares to other math education conferences. I attend a lot of them and Palm Springs is on the top shelf. California has a deep bench of talented math educators, and Palm Springs is an appealing location that draws strong presenters from around the country. I’d put it on par with the NCTM regionals. Still, this feedback is surprisingly sunny.

The data also seem to indicate that attendees were more likely to critique the presenters’ speaking skills (statement #2) than their qualifications (statement #1).

How Did You Do On A Lousy Measure Of Overall Quality?

For each presenter, I averaged the responses they received for each of the three survey questions and then summed those averages, yielding a score that runs from 0 to 9. This measure is problematic for loads of reasons, but more useful than useless, I think.
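
In code, the measure amounts to something like this (a sketch of the calculation just described, not the actual analysis script):

    # Average each of the three survey questions across a presenter's
    # responses, then sum the three averages. Each response is a triple
    # like (3, 2, 3), so the composite runs from 0 to 9.
    def composite_score(responses):
        n = len(responses)
        averages = [sum(r[i] for r in responses) / n for i in range(3)]
        return sum(averages)

    # e.g. two attendees rating (3, 3, 3) and (2, 1, 3):
    print(composite_score([(3, 3, 3), (2, 1, 3)]))  # 7.5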

[Figure: distribution of presenters’ summed scores on the 0-to-9 measure.]

62 presenters received perfect scores from all their attendees on all measures. 132 more scored above an 8.0. Even granting the lousiness of the measure, it points to a very well-liked set of presenters.

So why didn’t people like your session? The following quotes are all verbatim.

What People Said When You Weren’t “Well-Prepared Or Knowledgeable.”

If someone rated you a 0 or a 1 for this statement, it was because:

  • he was late and unprepared.
  • frustrating that we spent an hr on ppl sharing rationale or venting. I wanted to hear about strategies and high leaveage activities at the school.
  • went very fast
  • information was scattered.
  • A lot of sitting around and not do much.
  • This presentation was scattered and seemed thrown together at the last minute.
  • Unfortunately, the presenter was not focused. There was no clear objectives. Please reconsider inviting this presenter.

What People Said When You Weren’t “An Engaging Or Effective Presenter.”

If someone rated you a 0 or a 1 for this statement, it was because:

You didn’t offer enough practical classroom applications.

  • I wanted things students could use.
  • philosophy more than application. I prefer things I can go back with. I already get the philosophy.
  • very boring. Too much talking. I wanted more in class material.

Your style needs work.

  • very dry and ppt was ineffective
  • very disappointing and boring
  • Arrogant,mean & full of himself
  • BORING. BORING. He’s knowledgable, but dry. Not very interactive.
  • knowledgeable but hard to hear
  • he spoke very quickly and did not model activities. difficult to follow and not described logistically.
  • more confused after leaving

Not enough doing.

  • not as hands-on as I would have hoped
  • too much talking from participants and no information or leadership from the presenter. Everyone had to share their story; very annoying.
  • I could do without the justification at the beginning and the talking to each other part. I already know why I’m here.
  • I would have liked more time to individually solve problems.

Too much doing.

  • while they had a good energy, this session was more of a work time than learning. It did not teach me how to facilitate activities
  • I didn’t think I was going to a math class. I thought we would be teachers and actuall create some tasks or see student work not our work.
  • it would be nice to have teachers do less math and show them how you created the tasks you had us do.

What People Said When Your Session “Didn’t Match The Title And Description In The Program Book.”

You were selling a product. (I looked up all of these session descriptions. None of them disclosed the commercial nature of their sessions.)

  • The fact that it was a sales pitch should have been more evident.
  • only selling ti’s topics not covered
  • a sales pitch not something useful to take back to my classroom
  • Good product but I was looking more for ideas that I can use without purchasing a product.
  • I was hoping for some ideas to help my kids with fact fluency, not a sales pitch.
  • didn’t realize it was selling a product
  • this is was nothing more than a sales pitch. Disappointed that I wasted a session!
  • More like a sales pitch for Inspire
  • I would not have gone to this session if I had known it required graphing calculators.

You claimed the wrong grade band.

  • good for college methods course not for math conference
  • not very appropriate for 6,7 grade.
  • disappointed as it was too specific and unique to the HS presented.
  • Didn’t really match the title and it should have been directed to middle school only.
  • This was not as good for elementary even though descript. said PreK-C / a little was relevant but my time would have been better used in another
  • the session was set 2-6 and was presented at grades k-5.
  • this was not a great session for upper elementary grade and felt more appropriate for 7-12.

You didn’t connect your talk closely enough to the CCSS.

  • not related to common core at all. Disappointing
  • unrelated to ccss

You decided to run a technology session, which is impossible at math ed conferences because half the crowd already knows what you’re talking about and is bored, and half the crowd doesn’t know what you’re talking about and is overwhelmed.

  • gave a few apps but talked mostly about how to teach instead of how to use apps or what apps would be beneficial
  • good tutorial for a newbie or first time Geogebra user but I already knew how to use Geogebra so I found most of this pointless. Offer an advanced
  • Good information, but we did not actually learn how to create a Google form. I thought we would be guided through this more. It doesn’t help to
  • apps were geared to higher lever math not middle school

People Whose Sessions Were Said to Be the “Best of the Conference!”

23-way tie! Perhaps useful for your future conference planning, however.

  • Armando Martinez-Cruz
  • Bob Sornson
  • Brad Fulton
  • Cathy Seeley
  • Cherlyn Converse
  • Chris Shore
  • David Chamberlain
  • Douglas Tyson
  • Eli Luberoff
  • Gregory Hammond
  • Howard Alcosser
  • Kasey Grant
  • Kim Sutton
  • Larry Bell
  • Mark Goldstein
  • Michael Fenton
  • Monica Acosta
  • Nate Goza
  • Patrick Kimani
  • Rachel Lasek
  • Scott Bricker
  • Vik Hovsepian
  • Yours Truly

Conclusion

The revelations about technology and hands-on math work interest me most.

In my sessions, I like to do math with participants and then talk about the math we did. Too much doing, however, and participants seem to wonder why the person at the front of the room is even there. That’s a tricky line to locate.

I would also like to present on the technology I use that makes teaching and learning more fun for me and students. But it seems rather difficult to create a presentation that differentiates for the range of abilities we find at these conferences.

The session feedback here has been extremely valuable for my development as a presenter and I only hope the conference committee finds it equally useful for their purposes. Conference committee chair Brian Shay told me via email, “Historically, we use the data and comments to guide our decision-making process. If we see speakers with low reviews, we don’t always accept their proposal for next year.”

Great, if true. Given the skew of the feedback towards “SUPER LIKE!” it seems the feedback would be most useful for scrutinizing poor presenters, not locating great ones. The strongest negative feedback I found was in reaction to unprepared presenters and presentations that were covertly commercial.

CMC has the chance to initiate a positive feedback loop here by taking these data seriously in their choices for the 2015 conference and making sure attendees know their feedback counted. The result will be more feedback from attendees, and more thoughtful feedback at that.

Full Disclosure

I forgot to tell the attendees at my session my PollEverywhere code. Some people still sent in reviews. My practice is to post my most recent twelve months of speaking feedback publicly.

2014 Nov 4. It seems I’ve analyzed an incomplete data set. The JSON file I downloaded for one presenter (and likely others) contains fewer responses than he actually received. I don’t have any reason to believe there is systematic bias to which responses were included and excluded but it’s worth mentioning this caveat.


18 Comments

  1. (I looked up all of these session descriptions. None of them disclosed the commercial nature of their sessions.)

    It might just be that there is nothing worse than a conference session that is a poorly constructed commercial. Some conferences are worse than others at this, I think. In my experience nobody stacks a conference full of ads for consulting services the way SUNGARD does at their Banner conferences, but at least there I just learned to stop going to anything listed as being presented by an employee of that company.

    It’s really a sign, I think, of how few people go to a conference on their own dime. I was always annoyed to have an hour of my time wasted that way, but at least they were cheating my employer out of my sizable registration fee rather than me. I suspect that if most conference goers were paying their own registration fee we’d see a lot more tar and torches when someone offers a session like that.

  2. Thanks Dan,

    This is valuable information. I will be presenting at CMC North for the first time and I will definitely be taking this information into consideration. Appreciate your time in compiling this.

  3. I wonder about the response rate of feedback based on room location (there was little to no cellular service in the basement of the Hard Rock) & time (little to no battery power left in devices by the end of the day for those who didn’t carry around a charger).

    I agree with you about the top shelf. I was struck by how eager everyone was to participate actively. I’ve been in too many conference sessions where those who attend want to learn by sitting and watching instead of wanting to learn by doing. I don’t know if I just happened to choose the right sessions or not, but I was most impressed by the “community of learners” feel.

  4. Thanks Dan for the compilation and analysis, although I still think this type of assessment/feedback leaves a lot to be desired. For starters, three pieces of information I’d love to have are:

    1) Percentage of attendees who submitted evaluations and how this varied from session to session & some sense of the reason people didn’t submit evaluations (no time, no device, no strong feelings one way or another, a principled opposition to conference feedback, laziness, …)

    2) Some actually useful feedback from participants who left less than fully satisfied. I received two non-333 ratings, one of which was a 323 rating with no feedback and the other had the feedback “Technology?”. This isn’t useful for me moving forward.

    3) Most importantly, I don’t want feedback to be a measure of how well I entertained my audience. What I REALLY want to know is whether anyone is still thinking about my session a week later–whether anyone has changed or plans on changing some aspect of their classroom teaching.

  5. I’m going to push back a little on some of the reasons people gave for not liking a session. I’ll defend some of the presenters. Not having presented at CMC-S, I can do so without being accused of being defensive.

    philosophy more than application. I prefer things I can go back with. I already get the philosophy.

    The all too common “I wanted something I could use Monday morning” critique. Like the not enough/too much doing line, the balance between philosophy and application can be tough to find. (The feedback quoted above flies in the face of your thoughts coming out of TMC, no?)

    the session was set 2-6 and was presented at grades k-5

    This fixation on grade band seems a little narrow-minded to me. Was this a Grade 6/Sixth Grade teacher who, again, didn’t have his/her need for that Monday morning lesson plan met? A Grade 2 teacher — or Grade 6, for that matter — who wasn’t interested in seeing how a concept is developed in K/1 and how that connects to his/her grade level? Since our standards (Western Canada) don’t necessarily line up with CCSS, I’ve just grown accustomed to not paying too close attention to advertised grade bands. Having said that, I do get how this could be a valid complaint. I’ve attended “elementary” sessions that really didn’t offer much for K-2. Early numeracy is a different beast.

    gave a few apps but talked mostly about how to teach instead of how to use apps or what apps would be beneficial

    I shall wear this like a badge of honour. We talked pedagogy when you wanted 50 of my favourite math apps in 50 minutes? Unless “50 of My Favourite Math Apps in 50 Minutes” was the title of my session, no apologies.

    In your not enough doing category I was expecting to see more of “He talked at us for 75 minutes without coming up for air.” I suggest there’s a subcategory based on the quotes there: Too much room. “The room is the expert” is a phrase I often hear. Hasn’t been my experience. Looking at you, EdCamp.

  6. Steve Leinwand

    Sure there is a slew of additional data and detail we would love to have, but as a frequent speaker, I would love to have EVERY program chair responsible for sending back data like this for every conference along with the traditional “thank you” e-mail. Some of us ought to give some thought to the 5 most appropriate and helpful questions from which we’d love to get feedback and share them widely. Thx Dan for your love of data.

  7. Out of curiosity, I looked at the scans of your most recent feedback.

    My favorite response:

    “Suggestions for future inservice offerings: subject, speaker, etc.: More of this guy”

  8. Avery:

    What I REALLY want to know is whether anyone is still thinking about my session a week later—whether anyone has changed or plans on changing some aspect of their classroom teaching.

    Why stop there? What I really want to know is whether any kids like or know math better after my talk. But this runs aground on logistics. There’s no practical way to measure what I want to measure.

    Similarly, what is a practical way to measure teacher change?

  9. The big question, I think, is how transparent is our own learning to us? Do we know what good learning feels like? Or are we easily duped?

    I don’t know of any research on this question from the perspective of conference attendees, but there’s a literature on this from the perspective of student evaluations. And it shows that people are fairly biased in their evaluations.

    If you’re a man, you’re going to receive higher ratings. If you defy expectations in any way, you’re likely to receive lower ratings. If you’re black, you’re likely to receive lower ratings. There’s little, if any, correlation between what you learn and how you’re rated. (source)

    In the face of all this, how should we react?

    1. When it comes to conferences (and school), maybe we have no choice but to aim to please. People can stop coming to our sessions. And maybe our livelihoods depend on people attending our sessions. Or maybe we just want people to be happy about their experiences.

    The issue is that we have little reason to think that this will correspond to how much our participants (or students) actually learn. So…

    2. We need other kinds of feedback to determine the quality of the session. Maybe exit tickets? Maybe we want to send emails and ask about the content of the session? Who knows.

  10. Michael Pershan:

    When it comes to conferences (and school), maybe we have no choice but to aim to please.

    There are two different audiences for the feedback: the conference organizers and conference presenters. I suspect for conference organizers, short-term attendee satisfaction is sufficient.

    We need other kinds of feedback to determine the quality of the session. Maybe exit tickets? Maybe we want to send emails and ask about the content of the session?

    The post-conference email strikes a nice balance between practicality and usefulness. I’ll bet the response rate wouldn’t be any lower than the 10% I found for the text messages.

    Asking attendees to name from memory the speakers they attended would be interesting. Asking them to mention their most important takeaway messages and from whom they learned those messages might be even better.

  11. Dan, thanks for sharing your thoughts, and for all the work that went into scraping and analyzing the data. Great stuff to chew on as a presenter.

    I had several conversations with other participants at CMC South about whether the response rate would be higher or lower using Poll Everywhere (compared to those little slips of carbon paper I’ve seen at other conferences). Most of us suspected lower… I love that the feedback that DID roll in is digital (I always seem to lose little bits of paper), but I suspect that shoving a small sheet of paper in front of participants would have generated larger returns. Not sure on the best way forward here.

    Another thought: Are Google Forms a viable option here? Seems like the data would easier to share with specific people there. Of course, we’d be excluding those without smartphones or laptops. (Do a lot of attendees have flip phones and no other device?)

    As for drawing conclusions from what feels like an incomplete and overly positive set of responses, what about this as an alternative measure of presenter effectiveness:

    (average score) – (number of responses) ÷ (room capacity)

    I know I’m more likely to complete a feedback form if I have something positive to say. It doesn’t mean all silence is damning, but I wonder if others are wired in similar ways.

    Of course, there should probably be a square root somewhere in that formula, just for good measure.

  12. Chris Hunter already left a similar comment, but this scares me:

    gave a few apps but talked mostly about how to teach instead of how to use apps or what apps would be beneficial

    We cannot grow as a professional group unless we focus on the teaching first. Tech often has a front seat at big conferences, so we get to talk about shiny toys and one-off activities, but classroom practice rarely changes as a result. Conferences have managed to create a positive (negative?) feedback loop that feeds itself from year to year.

  13. Funny … Teachers want the same thing from presenters that students want from teachers: be prepared and to the point, be relevant, and have a palatable ratio of lecture to activity. And don’t be boring.

    All good comments above.

    CMC South is definitely Top Shelf. Thanks to all who make it happen.

  14. I was disappointed with the feedback that I received. I had 40 attendees and only 6 reviews. Does this mean the other 34 felt my session was not worth reviewing?

    I had one person tell me personally that I had presented an excellent session. I gave participants a technology activity that they could use on Monday morning, with hints about what I learned and links to electronic copies of the documents that I used. I also tied it to why I decided to use this activity and the math practices it addressed. This activity has been used from elementary to high school. I also presented an activity that could be used with a little more prep, along with all of the documents that I used. Both had student work examples.

    This was my first time presenting at CMC. Not sure what I need to do to improve. I have applied to do the same presentation at CUE.

    Also, there needs to be an alternative to sending a text for evaluating the session. I don’t have that service on my phone and I had some sessions I wanted to review. I wonder how many participants had the same issue?

  15. It sounds like authentic and honest feedback is something that is desirable and wanted. It could also be helpful for both the speaker and the attendees to learn about the impact of the content presented during the conference session. As a professional development consultant/coach, I’ve seen that non-judgmental feedback offered through reflective conversation benefits not only classroom teachers but also conference speakers/presenters.

    I’m wondering if a pre-conference conversation, online or by phone, might help the presenter clarify what their session will be about, including the target audience, while helping the committee correctly identify the grade-level span. I have found email conversations particularly helpful because they give people a chance to see their thoughts in print and put them down clearly. This might reduce those negative comments referring to presentations that are scattered or disorganized.

    After the presentation, if desired, there could be a post-conference conversation, face to face, by email, or by phone, with the person who conducted the pre-conference. This would take time and expertise, and perhaps it could be part of a leadership strand on coaching and professional development. This building of professional educational leadership capacity is something I would love to see happen and would love to be a part of.

  16. By the way, thank you Dan for compiling the data which has opened up this conversation around professional development and feedback about the PD.

  17. This reminds me of a feedback form I saw on another teacher’s blog: Just Mathness.

    For students to do at the end of each class:

    2 things learned today or thought about in a new way
    1 question about the material or 1 new question inspired by the material
    1 piece of feedback on today’s lesson

    I’ve had a mixed experience asking these of my students.