Deans, who teaches English, outlines his "small experiment with long-delayed course assessments, surveys that ask students to reflect on the classes that they have taken a year or two or three earlier."
I've been considering such evaluations ever since I went through the tenure [process] a second time: the first was at a liberal arts college, the second two years later when I moved to a research university. Both institutions valued teaching but took markedly different approaches to student course evaluations. The research university relied almost exclusively on the summary scores of bubble-sheet course evaluations, while the liberal arts college didn't even allow candidates to include end-of-semester forms in tenure files. Instead they contacted former students, including alumni, and asked them to write letters. ...But how to get that kind of longitudinal feedback at a big, public university?
Deans then wrote a six-question survey on SurveyMonkey and e-mailed the link to students from courses he had taught one year and three years earlier. I was surprised by the rate of return Deans got: 60 percent, not markedly worse than I sometimes get for my regular end-of-term evaluations. As Deans puts it, he was interested "to know what stuck -- which readings (if any) continued to rattle around in their heads, whether all the drafting and revising we did proved relevant (or not) to their writing in other courses, and how the service experience shaped (or didn't) any future community engagement." I won't go into the details of the results Deans got, but suffice it to say that he got a powerful picture of which assignments and readings made an impact and which didn't.
Deans' efforts are laudable, and they raise an issue I've long thought about: the timing of student evaluations. Why should we suppose that students are best situated to evaluate their learning experiences immediately after they take place (or in some cases, as they are still taking place)?
I understand that the proximity of student evaluations to the final exam in particular tends to influence how students evaluate the course, but beyond this, I wonder if various situational factors lead students to evaluate their own learning experiences in distorted ways. The student who, at the end of term, is laboring under a ton of deadlines is probably going to say that the course workload is too heavy. The student who came into the course afraid of essay writing and just got an A on the most recent assignment is more likely to say positive things about such assignments. And so on. This isn't to say that situational factors won't also influence students evaluating a course a year or two after it took place, but I would speculate that hindsight, while not 20/20, is still clearer than students' immediate perception of their learning experiences. (I'm reminded of that message on a car's side mirrors: "Objects in mirror are closer than they appear.")
Think of it this way: We're asking students to evaluate their learning experiences. Just after those learning experiences occur, it's likely that their evaluations will be shaped by their memories of the experiences. As time passes (and students have more information about themselves as learners, their needs, etc.), the particulars of the experiences will recede and the learning may come to the fore. Or so I would hypothesize.
So I find myself very tempted to follow Deans and create my own instruments for gathering longitudinal feedback. Are others similarly tempted? Should we expect this feedback to be more insightful, accurate, and useful than the feedback gathered as courses reach their conclusions?
I'm not sure that the delayed evaluations would be more insightful, accurate, or useful, but surely they would be differently insightful, accurate, and useful. I can think of no reason not to try such an experiment. More kinds of data are better than fewer.