
Can you help me shuffle my thoughts on teacher data dashboards?

The Current State of Teacher Data Dashboards

Generalizing from my own experience and from my reading, teacher data dashboards seem to suffer in three ways:

  • They confuse easy data with good data. It's easy to record and report the amount of time a student had a particular webpage open, for instance, but that number isn't indicative of all that much.
  • They aren't pedagogically useful. They'll tell you that a student got a question wrong or that the student spent seven minutes per problem but they won't tell you why or what to do next beyond "Tell the student to rewind the lecture video and really watch it this time."
  • They're overwhelming. If you've never managed a classroom with more than 30 students, if you're a newly-minted-MBA-turned-edtech-startup-CEO for instance, you might have the wrong idea about teachers and the demands on their time and attention. Teaching a classroom full of students isn't like sitting in front of a Bloomberg terminal with a latte. The same volume of statistics, histograms, and line graphs that might thrill a financial analyst with few other demands on her attention might overwhelm a teacher who's trying to ensure her students aren't setting their desks on fire.

If you have examples of dashboards that contradict me here, I'd love to see screenshots.

We Tried To Build A Better Data Dashboard

With the teacher dashboard on our pennies lesson, the Desmos team and I tried to fix those three problems.


We attempted to first do no harm.

We probably left some good data on the table, but at no point did we say, "Your student knows how to model with quadratic equations." That kind of knowledge is really difficult to autograde. We weren't going to risk assigning a false positive or a false negative to a student, so we left that assessment to the teacher.

We tailored the dashboard to the lesson.

We created filters that will be mostly useless for any other lesson we might design later.


We filtered students in ways we thought would lead to rich teacher-student interactions. For example:

  • If a student changed her pennies model (say, from linear to quadratic or vice versa), we thought that was worth mentioning to a teacher.
  • We made it easy to find out which students filled up large circles with pennies and which students found some cheap and easy data by filling up a small circle.
  • We made it easy to find out which students had the closest initial guesses.
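Filters like these are just small predicates over per-student lesson data. As a rough sketch of what lesson-specific filters might look like in code — every field name and threshold here is hypothetical, not Desmos's actual data model:

```python
# Hypothetical sketch of lesson-specific dashboard filters for the pennies
# lesson. All field names ("model_history", "data_points", etc.) and the
# large-circle threshold are invented for illustration.

def changed_model(student):
    """True if the student switched model families (e.g. linear -> quadratic)."""
    families = [m["family"] for m in student["model_history"]]
    return len(set(families)) > 1

def filled_large_circle(student, threshold_inches=5.0):
    """True if the student collected data from at least one large circle."""
    return any(d["diameter"] >= threshold_inches for d in student["data_points"])

def closest_guessers(students, n=3):
    """The n students whose initial guess was closest to the actual count."""
    return sorted(students,
                  key=lambda s: abs(s["initial_guess"] - s["actual_count"]))[:n]

# The dashboard maps a human-readable label to each predicate.
FILTERS = {
    "Changed model": changed_model,
    "Filled a large circle": filled_large_circle,
}

def apply_filters(students):
    """Group student names under each lesson-specific filter."""
    return {label: [s["name"] for s in students if predicate(s)]
            for label, predicate in FILTERS.items()}
```

The point of the sketch is that each predicate encodes a pedagogical judgment about this one lesson — which is exactly why the code doesn't generalize to the next lesson.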

These filters don't design themselves. They require an understanding of pedagogy and a willingness to commit developer-hours to material that won't scale or see significant reuse outside of one lesson. That commitment is really, really uncommon for edtech startups. It's one reason why the math edublogosphere gets so swoony about Desmos.


Contrast that with filters from Khan Academy, which read, "Struggling," "Needs Practice," "Practiced," "Level One," "Level Two," and "Mastered." Broadly applicable, but generic.

We suggested teacher action.

For each of those filters, we gave teachers a brief suggestion for action. For students who changed models, we suggested teachers ask:

Why did you change your model? Why are you happy with your final choice instead of your first choice?

For students who filled up large circles, we suggested teachers say something like:

A lot of you filled small circles with pennies but these students filled large circles with pennies. That's harder and it's super useful to have a wide range of data when we go to fit our model.

For students who filled up small circles, we suggested teachers say something like:

Big data help us come up with a model, but so do small data. A zero-inch circle is really easy to draw and fill with pennies, so don't forget to collect it.

Even with this kind of concise, focused development, one teacher, Mike Bosma, still found our dashboard difficult to use in class:

While the students were working, I was mostly circulating around the classroom helping with technology issues (frozen browsers) and also clarifying what needed to be done (my students did not read directions very well). I was hoping to be checking the dashboard as students went so I could help those students who were struggling. The data from the dashboard were helpful more so after the period for me. As I stated above, I was very busy during the period managing the technology and keeping students on track so I was not able to engage with what they were doing most of the time.

So we'd like to hear from you. Have you used the pennies task in class? Have you used the dashboard? What works? What doesn't? What would make a dashboard useful – actually usable – for you?

Featured Comments

Tom Woodward, arguing that these platforms are tougher to customize than the usual paper-and-pencil lesson plan:

The other piece I worry about is the relatively unattainable nature of some of the skills needed for building interesting/useful digital content for most teachers. I really want to provision content for teachers and then be able to give them access to changing/building their own content. While many are happy consuming what’s given, there are people who will want to make it their own or it will spark new ideas. I hate the idea that the next step would be out of reach of most of that subset.

And there’s Eric Scholz looking for exactly that kind of customization:

I would add a “bank” of variables at the top of the page that teachers could choose from when building their lesson plan the night before. This would allow for a variety of objectives for the lesson.

Bob Lochel, being helpful:

While many adaptive systems propose to help students along the way, they are often mis-interpreted as summative assessments, through their similarities to traditional grading terms and mechanisms.

Tom Woodward, also being helpful:

There could/should be some value to a dashboard that guides formative synchronous action but it’d have to be really low on cognitive demand.

22 Responses to “Teacher Data Dashboards Are Hard, Pt. 2”

  1. on 12 Sep 2013 at 6:01 pm Eric Scholz

    I have not done the lesson, but have tinkered around with the dummy dashboard. I have to agree that dashboards are tough to design and have a bunch of ideas that worked for me and a bunch that would work for me.

    I really like the idea of displaying a minimal amount of data per student and having automated filters for categories of evidence. The UI is really a clean operational view for teachers on the run. I have not seen anything of this quality yet online.

    That being said, I would add a “bank” of variables at the top of the page that teachers could choose from when building their lesson plan the night before. This would allow for a variety of objectives for the lesson. Some additional variables to consider for me would be low estimate, high estimate, variance from curve of best fit, a visual range with the guess as a point (like a number line) and other statistical measures. The types of evidence that are currently used for filtering do not really make sense to me. If other students already tested out large circles, I would use their data and not worry about creating my own. This to me reduces the value of the circle information or not collecting data. I would like to know if students changed away from quadratic, but do not know what to do with the data about students changing in general. I would also like a classroom dashboard to toggle to. This could include range of circle sizes and data points. Whole Class questions could be included to promote a wider span of circle sizes.

    Ok, so I would also like a Ferrari. I know many of these things require a lot of work. I love the effective combination of technology and math and see it far too infrequently.

    -Eric

  2. on 12 Sep 2013 at 6:20 pm Kevin Hall

    While I like what you and Desmos have done, I’m not sure I get the critique of the KA dashboards as having categories that are too generic. We agree that one limitation of an interface like KA’s is that it can’t discern WHY the student is getting questions wrong. But would it really help if the interface said of student performance on solving equations, “Lucas and Michelle get these wrong because they combine like terms on opposite sides of the equals sign; Marcus and Pamela get them wrong because they often subtract something to the left side twice and do nothing to the right side; Angela and Brian…” The way I read your post, the limitation is that teachers don’t have time to respond in class to all these individual needs anyways.

    I use KA for some things in my classroom. I just have to have students show their work, so when they’re getting things wrong (which I don’t detect via the dashboard–I just see them getting questions marked wrong as I walk around the room) I can see where their mistake is. Obviously, I can’t solve everyone’s problems this way, but what I notice is that my top 30% figure out their own mistakes before I can get to them. In that sense, the function of the data isn’t to tell the teacher what’s going on–it’s to tell the student that she’s getting them right or wrong.

    That frees me up marginally to work with the others. I can’t catch everything, but then I have some involved parents who will sit with their kids and correct stuff as long as they know the exact questions the kid needs help with, and I can call kids out of their homerooms and work with them for 15-20 minutes at a time, and and and. Basically, I’m hustling, but not for money. I find that KA simplifies the process, because when a kid shows up during homeroom for 15 minutes, I don’t have to ask, “what do you need to work on?” And I don’t have to keep some kind of complicated spreadsheet. I just have them log in to KA and we go to where they were stuck. Generally, they even remember the exercise name themselves, which shows more self-awareness than I used to get in the old days when I’d ask them what they needed to work on and they’d dig through a binder full of crumpled papers for 5 minutes trying to find an old quiz. Since every minute does count, making this process more frictionless has been valuable to me. For those who don’t know me, I also teach MTBoS-style lessons, and students NEVER work on a Khan exercise until after conceptual development has been done through lessons I stole from Dan, Fawn, etc.

  3. on 12 Sep 2013 at 11:04 pm Bill Fitzgerald

    Hello, Dan,

    I’ve done a fair amount of research on dashboards, and have built a dashboard or three as well.

    Most dashboards suck because they make too many assumptions about what people coming to them will want to know (and your point about managing a class of 30 – or 5 classes of 30 – is right on). Understood this way, most dashboards are one-way conversations that provide tools for people to filter down.

    I’ve come to the perspective that dashboards are really a focused search tool – rather than telling you what you need to know, a good dash will let you ask questions about the information it contains. Some of this is text-based search, and some of this is filtered (based on facets that are either structured into the app’s architecture, or inferred from student responses) – but in any case, a good dash will reflect the assumption that success can look different for different students.

    I also like the idea of a dash tailored per-problem. Looking at this from an application design perspective, it would be pretty straightforward to allow instructors to set up custom categories on a per-problem basis.

    RE: ” They require an understanding of pedagogy” – absolutely. This is a level of nuance largely absent from what passes as Edtech.

    RE: “and a willingness to commit developer-hours to material that won’t scale or see significant reuse outside of one lesson.”

    Once the use case is defined, it becomes easier to abstract out types of filters that could potentially scale. In your example, you have a set of filters that appear to be based on numerical comparisons, and some that are potentially based on classroom observation. But really, once the pedagogical goals are defined, it becomes possible to trap for the conditions that would allow for better questions to be asked of the dashboard. But if we can define the conditions we want to examine (good initial estimate, unreasonable answer, collecting a certain number of data points) the dash can bend with the needs of the problem, and the code behind the dashboards can become more general.

  4. on 12 Sep 2013 at 11:15 pm Bill Fitzgerald

    And to clarify, when I say “it would be pretty straightforward” what I mean is “clarifying the terms of the comparisons would be pretty complex, and by comparison writing code that exposed those conditions to end users would be easier.”

    So, yeah, not simple, but not the hardest thing in the world, either :)

  5. on 13 Sep 2013 at 1:37 am Tom Hoffman

    I agree with the emphasis on the need for much more fine grained customization to context than is generally regarded as necessary or economically feasible.

    On SchoolTool we’re stuck at the opposite extreme, since our mission is to make something adaptable to a wide range of school systems, so I know the pain of having to make everything overly generic.

    In fact, trying to get to generic solutions as quickly as possible is often a false economy, since then you tend to just make things that don’t work well enough for anyone to really use.

    One thing that I’m constantly struck by is the extent to which different disciplines, grade levels, etc. have such different requirements for this kind of thing. You’d need a completely different approach in English class.

    In particular, I don’t know how you’d break down something like this: “Evaluate the advantages and disadvantages of using different mediums (e.g., print or digital text, video, multimedia) to present a particular topic or idea.” At least, halfway sanely. And I don’t mean because it is too vague, but because it implies a level of specificity that just isn’t there.

  6. on 13 Sep 2013 at 5:51 am Matt Clark

    I haven’t done the pennies problem in class, but have tried some of the others. I like the clean look and concise information you have provided in your dashboard, but the biggest problem I keep running into with them is that they are on a computer screen. Class time is limited and if I’m looking at a dashboard trying to figure out who is struggling, then I’m losing time in class. Ideally I’d like to go back and delve into the data at a later time, but realistically with 150 students it doesn’t happen… there just isn’t enough time. Your dashboard does make it easier to quickly scan results which in turn will assist in making quick decisions on who needs help, who needs more time, and who needs extension questions. Keep going with it please!

    If I had the time, money, and know-how, I’d try to do some combination of your dashboard with Google Glass. Imagine looking at a student and seeing the data pop up in front of you! I’ll keep dreaming.

  7. on 13 Sep 2013 at 10:01 am Carl Malartre

    Great post & great job, Dan & Desmos! Keep up the good work!

  8. on 13 Sep 2013 at 10:27 am Kevin Hall

    Okay, I just tried the activity for the first time today, and it’s great. And I totally get the point of the dashboard. I will use it with my class in the next month, and I’ll get back to you regarding the dashboard after that. But I think the dashboard will really help the post-activity debrief. I wonder how hard it would be to add a similar dashboard to Estimation 180? My students don’t enter their answers online, so I couldn’t use it, but for classes with iPads it would be great to track who always selects really unreasonable upper limits, etc. Nice job.

  9. on 13 Sep 2013 at 11:54 am Dave

    Just thinking out loud, in case I stumble across anything helpful:

    It seems like things could be kept streamlined to just the most actionable information. What steps of a teacher’s process can be simplified/helped by surfacing the right aspects of the data?

    1) What problems were most common across each class? Actionable: what concepts do I need to revisit with the entire class? (I like looking through the filters, but ultimately this is what I want to know.)

    2) What are some interesting things to say when we talk about this activity? “Some students did it this way, some did it this other way, why do you think that was. Everyone estimated low, except for Mary who estimated very close but a little high — do you remember how you came up with your estimate Mary?” (I think browsing through your custom filters is the best way to accomplish this right now, and that’s probably good.)

    3) Which students are having major struggles? Actionable: who needs individual attention or tutoring? (Again, I like the ability to browse through everyone, but I especially want an idea of who needs help the most.)

    4) What standards are good, bad, or stale? Actionable: I magically have some extra class time (ha!), what should I do with it? What have we not practiced in a long time (or at all)?

    I think I want the ability to get that information easily, but I don’t necessarily want a screen that just answers those 4 questions with no other information.

    And something that I’m kind of on the fence about: a “generate some small groups” feature. — This sounds cool, but would get used rarely, I think, and would clutter the interface the rest of the time. The idea is for the tool to automatically generate small groups in a way that you specify — either somewhat balanced mixed ability groupings, or groupings of similar ability level. Based on one problem or many problems. Then you could use it for teams, individualized homework or instruction, project groups, etc. I’d want a way to very easily display this on a computer projector and maybe to save a given generation to a file. (This might be entirely outside the scope of what you’re envisioning, though.)

    I’m also thinking about how I’m going to use this. Am I going to sit at the computer during planning? Am I going to have it open on a wifi-enabled tablet while I walk around the classroom? Are there times when I want to be able to show this on the computer projector? Am I going to use this as a reference while I write down a rough lesson plan on paper, or am I going to print this out and then use the print out while I type lesson plans on the computer? Thinking about what use cases you prefer might help define what you want to show on dashboard screens.

    I’m on the fence about whether I would want to know how my students compare with others. If 90% of nationwide students used big circles, but 90% of my students did NOT make any big circles, is that useful or significant enough to have pointed out to me? It seems interesting, but ultimately I’m always aiming for all students to understand as much as possible, so it might be a red herring.

  10. on 13 Sep 2013 at 2:10 pm Vishakha

    Talk to the folks at the DataGames project – the Desmos guys know them. Especially talk to Bill Finzer and ask him for the data dashboard that he created for one lesson –

    It was exactly the kind of work you describe – detailed, very specific to one task, not scalable to many activities, needing a deep understanding of the pedagogy and lesson goals, and hard to productize but deeply valuable in the classroom.

    I manned a “manual” dashboard during a couple of class sessions where I would read out to the teacher signs of struggling students so he could walk to them. I imagined a digital version of me that the teacher could have on their iPads so maybe what Mike needs is a digital data interpreter that will alert him by flashing a light on top of the student – hey I can dream!

  11. on 13 Sep 2013 at 3:36 pm Bob Lochel

    There’s just so much here, it’s so hard to be succinct. But I’ll try.

    I appreciate the shift in data you propose here. I don’t need to see time spent on task: how many quadratic equation questions a student happened to answer from your bubbles, or your idea of “proficiency”. I can see that on a daily basis if I am doing my professional best. Rather, I would like to know how my students respond to complex tasks. What comments they make. The vocabulary they use. And I would like it all in one place.

    This is my first year trying some “flipping” in my AP Stats class. Along with short videos, I have a Google form where students respond to quick prompts. It keeps them “honest”, and I get some data about misunderstandings I need to confront in my next class meeting. In my last video, I asked students to describe a data distribution. As a class, we looked at some (anonymous) responses, and critiqued them. Such a valuable activity to look at writing of peers in class! Examining student thoughts via your interface would be an equally valuable discussion.

    At the same time, I am reminded of the work of Dylan Wiliam, who proposes real, meaningful feedback as a means of formative assessment. And this is where I think your work is the most helpful. While many adaptive systems propose to help students along the way, they are often mis-interpreted as summative assessments, through their similarities to traditional grading terms and mechanisms. What I would like to see is my ability, as a teacher, to respond to student attempts and responses and provide real feedback.

    For example: this year, many of my students are submitting math work on Edmodo. I have found that I am providing much more detailed feedback, simply because the technology makes it easier for me to type and construct helpful responses than before (my handwriting is awful), and that I have an archive of my comments. Migrating student responses into a similar system from your interface would be equally compelling and attractive to me, and helpful in providing real feedback to students.

    I don’t believe I have done justice to all of the work you and the Desmos team have done here. Keep at it…and can I buy some stock in your company please?

  12. on 14 Sep 2013 at 9:23 am Tom

    To further complicate things, I wonder if Mike Bosma’s comment reflects the need for an entirely different view of data to help teachers in the moment. It seems most dashboards are built for a post-lesson reflection or based on the idea that the teacher is not there in the moment. There could/should be some value to a dashboard that guides formative synchronous action but it’d have to be really low on cognitive demand.

    It seems like some of these issues parallel issues with the LMS as well. We’d like to create fairly common, simple tools with low thresholds and high ceilings. I think most attempts shoot for this. The more I look at what happens when teachers are in these environments, the less effective this appears to be. When building content in an LMS I inevitably find myself trying to work around the limits of the system to get the subject specific tools/capabilities I feel are needed. There are some efforts lately to let the LMS become the glue that holds together lots of outside tools but it’s a slow and ugly process. This will only get progressively uglier as schools move towards digital content and begin to try to bind this huge mess of content, teacher tools, student tools, and other systems into some sort of cohesive environment.

    The other piece I worry about is the relatively unattainable nature of some of the skills needed for building interesting/useful digital content for most teachers. I really want to provision content for teachers and then be able to give them access to changing/building their own content. While many are happy consuming what’s given, there are people who will want to make it their own or it will spark new ideas. I hate the idea that the next step would be out of reach of most of that subset.

  13. on 15 Sep 2013 at 3:46 am Kevin Hall

    @Dan, curious what you think a dashboard would look like to assist teachers with implementing SBG. Or do you think no dashboard would work for that?

  14. on 15 Sep 2013 at 9:51 am Dan Meyer

    Thanks for all the commentary, team. A couple of follow-up questions here, a couple of follow-up comments there, and then a couple of instances where I just want to throw two comments against each other and watch the sparks:

    For instance, here’s Tom Woodward:

    The other piece I worry about is the relatively unattainable nature of some of the skills needed for building interesting/useful digital content for most teachers. I really want to provision content for teachers and then be able to give them access to changing/building their own content. While many are happy consuming what’s given, there are people who will want to make it their own or it will spark new ideas. I hate the idea that the next step would be out of reach of most of that subset.

    And earlier there’s Eric Scholz looking for exactly that kind of customization:

    I would add a “bank” of variables at the top of the page that teachers could choose from when building their lesson plan the night before. This would allow for a variety of objectives for the lesson.

    Bob Lochel, being useful:

    While many adaptive systems propose to help students along the way, they are often mis-interpreted as summative assessments, through their similarities to traditional grading terms and mechanisms.

    Tom Woodward, being useful:

    There could/should be some value to a dashboard that guides formative synchronous action but it’d have to be really low on cognitive demand.

    Dave really throws down the gauntlet with his product roadmap. I tend to think most of the data he’d like a tablet to supply is way, way outside the capabilities of most adaptive systems. (The really interesting stuff anyway – #1 and #2.) Regardless, I’ll keep that roadmap in the back of my head. Give us all a few decades, okay.

    Kevin Hall:

    But would it really help if the interface said of student performance on solving equations, “Lucas and Michelle get these wrong because they combine like terms on opposite sides of the equals sign; Marcus and Pamela get them wrong because they often subtract something to the left side twice and do nothing to the right side; Angela and Brian…” The way I read your post, the limitation is that teachers don’t have time to respond in class to all these individual needs anyways.

    Assuming the computer’s judgment was correct 90% of the time? Yeah, it would really help. There’s the data a teacher can use in class, which is a subset of the data a teacher can use in general. Knowing why Marcus and Pamela are struggling to solve equations would definitely help with lesson planning, lunchtime remediation, etc., even if a lot of teachers would struggle to put that information to work immediately.

    Bill Fitzgerald:

    I’ve come to the perspective that dashboards are really a focused search tool – rather than telling you what you need to know, a good dash will let you ask questions about the information it contains. Some of this is text-based search, and some of this is filtered (based on facets that are either structured into the app’s architecture, or inferred from student responses) – but in any case, a good dash will reflect the assumption that success can look different for different students.

    Any examples you can share?

    Kevin Hall:

    @Dan, curious what you think a dashboard would look like to assist teachers with implementing SBG. Or do you think no dashboard would work for that?

    Something like the last image on this page here.

    Okay, j/k sort of. But SBG information is the archetypal not-useful-for-the-teacher-in-the-heat-of-the-moment kind of information. Riley Lark created a nice SBG system called ActiveGrade (since acquired by someone). It just managed and displayed the information. What I liked most about it was that the teacher had to make the judgment that a student had achieved mastery, rather than relying on Khan’s thirteenth-ish attempt at defining mastery algorithmically.

  15. on 15 Sep 2013 at 10:11 am Ben

    The Discovery Assessment platform does a decent job of providing a respectable response to your first and third points. While I don’t have a screenshot to offer (although I will have many in about a month), it tries to break student data down into relatively simple color-coded indicators of which skills students are deficient at. While it can provide all of the minutiae like time spent on problem, most teachers using the data prefer to stick with the simple indicators of which skills students need work on. I haven’t personally used it in a classroom, but the teachers in my district that have all used it enjoy the connections that it makes to resources inside of Discovery Education’s website, and how easy it makes it to assign certain tasks to the students.

    As this now sounds like an advertisement for Discovery Education, I’ve also had many teachers just take the data and use it with existing interventions (non Discovery-provided) and teaching strategies within small groups in their classrooms.

  16. on 15 Sep 2013 at 10:19 am Kate Nowak

    Never underestimate the time-suckage of frozen browsers and students-not-reading. Also, #first #gurrrlllll.

  17. on 15 Sep 2013 at 7:47 pm Kevin Hall

    @Dan, but remember that even with an expert teacher such as yourself using SBG, you were only 47% accurate: http://blog.mrmeyer.com/?p=2877. I would venture to say that KA is now more accurate than you were. And, as you say at the end of the post you linked to above (http://blog.mrmeyer.com/?p=1558), you were putting in 58-hour weeks implementing that. If that’s considered standard operating procedure, SBG will never really take off.

    Your point about KA being on their thirteenth-ish attempt to define mastery is well taken, however. Part of the problem was their initial reluctance, for reasons I couldn’t understand, to engage with anyone from the researcher community. I assumed it had to do with a Silicon Valley mindset that they could figure out anything on their own. So they ended up getting all excited about training a machine-learning logistic model, when the rest of us were looking at them and thinking, um, did you just rediscover Item Response Theory? (See the comments on this blog post: http://david-hu.com/2011/11/02/how-khan-academy-is-using-machine-learning-to-assess-student-mastery.html )

    But I think they are past that attitude now–they seem to be more aware of the need to read and respond to the literature. I mean, you might not agree with everything they do, but they clearly read your blog, and I think they take suggestions and feedback seriously.

    I’d also venture to guess that their model is now pretty good, since lots of modeling methods seem to get close to the theoretical maximum predictive accuracy, according to this article from the recent conference on Educational Data Mining: http://www.educationaldatamining.org/EDM2013/papers/rn_paper_04.pdf. Check out sections 4 and 5 of that paper for the main results and a summary of the problems in educational data mining that we still don’t know how to solve. I won’t pretend to have read or understood the methods section…

  18. on 15 Sep 2013 at 7:52 pm Kevin Hall

    @Dan Oh, I’d also point out that your ideal SBG dashboard (the last picture on this post: http://blog.mrmeyer.com/?p=1558 ) looks like it uses broadly applicable, but generic categories similar to the KA ones. So I think generic categories can be good for SBG-type dashboards. But I think activity-specific dashboards like the one for your Desmos pennies lab are what you need for in-the-heat-of-the-moment data.

  19. on 18 Sep 2013 at 4:27 am Pam

    Not about the dashboard specifically, but about the problem. Looking at the results through the dashboard, I noticed that a lot of my students’ reasons for choosing the quadratic model over the others was “because it fit the points.” I wonder, then, if it would be worth including a cubic model, or a generic power model y=ax^n with a slider for the exponent. Then we could really get into why a quadratic would be better than any other model. Because I bet they could make a cubic model fit, depending on the data that comes up.

  20. on 18 Sep 2013 at 11:15 am Tim Stirrup

    Your comment about measuring what is easy strikes a chord. At Mathspace (which is very new; you have probably not heard of it!) we give and record steps on a step-by-step basis as students solve a problem. These tend to be more closed question/answer tasks than the ones you show above.

    But the step-by-step approach means that students see their working and have it checked as they go along, so they know if they are on the right course.

    The dashboard for the teacher then shows progress for all students, across all the questions for an assignment. The teacher can see who has gone through with no errors but, crucially, can see exactly how each student has solved a problem (or not). In this way, the teacher can use the information to correct any misconceptions for the class or for the individual.

    The new adaptive functionality released this weekend does measure progress towards some ‘mastery’ – but the analytics on this one is a work in progress.

    There’s a very brief video on this here http://www.youtube.com/watch?v=MyOvRpUcrGw

    But we try not to provide too much data, just the data that is needed by the teacher.

  21. on 20 Sep 2013 at 12:20 pm anonpls

    Given what Mr. Bosma shared from his attempts at using this system, I wonder…

    Do yall have the capability to just go ahead and provide students digitally with the follow up prompts you’ve written for teachers to discuss with students in person?

    I understand that you’re trying to facilitate teacher-student interactions, but I don’t think giving students the opportunity to consider those extension questions/comments on their own would detract from the potential in-person conversations.

    On the other hand, if the teacher does end up with too many fires to put out dealing with tech issues, explaining directions, or whatever… at least the students would have a chance to re-consider looking at circles of a different size (or what have you).

  22. on 21 Sep 2013 at 12:04 pm Dan Meyer

    There doesn’t seem to be a lot of downside there. With paper, those extra questions stare back at the student, freaking them out. With the digital environment, we can unload those questions progressively, at less cost to the student’s brain.
