Tuesday, January 12, 2010

Evaluating your teaching evaluations

Inside Higher Ed's Rob Weir doles out his own thoughts on what to do once you get back those student teaching evaluations. Here are Rob's general thoughts, followed by my own:

There's inevitably something negative. Weir notes:
Stay in the profession long enough and you’ll soon learn that it’s impossible to please everyone. Even if your class featured naked fire-jugglers, at least one student would still complain it was “boring.” You’ll also learn that some complaints are simply reflexive. When have students not grumbled that the workload was too heavy? Or that some courses were scheduled too early in the day? And even if you held office hours 23 hours per day, someone would complain you were hard to reach.
Too true -- someone always has a negative experience in your class. When this happens, I remind myself that there are a few students who frankly aren't suited for college life. No amount of effort or engagement on my part will please those who flat-out hate the educational experience.

Give the evaluations just the importance your institution does. If you're on the tenure track, you should have a clear picture of exactly how much student evaluations count toward the assessment of your teaching.

Look for the trends in the data. The overall picture matters much more than scores in one course or on one particular question.

Here are some additional thoughts I'd add:
Go after the low-hanging fruit. Most student evaluations I've seen ask big-picture questions ("Was this course a valuable learning experience?") alongside more directly behavioral ones ("Were the lectures organized?", "Did the instructor return graded material promptly?"). Your best bet for improving your evaluations is to focus on those specific behavioral criteria.

Keep your audience in mind. Students in your upper-division or majors courses are more likely to find the material you're teaching engaging, but Gen Ed students can be tougher to reach. Expect lower evaluations from underprepared and less interested students.

Don't sweat small differences. If your institution is like mine, student evaluations are quantitatively compacted, i.e., they tend to fall within a fairly small numerical range. One implication of this is that a small swing in raw numerical results can lead to a larger swing in comparative or percentile scores. So (hypothetically) if 3 students in a class of 35 had rated you one level higher on a given question, you would have ended up in the 70th percentile among the instructors you're being compared to instead of the 50th percentile. That's the sort of small difference you shouldn't take too seriously. Again, look at the overall patterns in the data, not minute variations that are likely to be statistical noise.
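To see how a compacted scale produces that kind of swing, here is a minimal sketch. Every number in it (class size aside) is invented for illustration; the point is only that when peer instructors' means all sit in a narrow band, a few bumped ratings can move you many percentile points.

```python
# Hypothetical illustration of compacted evaluation scores: all ratings
# and peer means below are made up, not real data.

def mean(xs):
    return sum(xs) / len(xs)

def percentile_rank(score, comparison_scores):
    """Percentage of comparison scores at or below the given score."""
    below = sum(1 for s in comparison_scores if s <= score)
    return 100 * below / len(comparison_scores)

# A class of 35 students rating one question on a 1-5 scale.
ratings = [4] * 20 + [5] * 10 + [3] * 5
mean_before = mean(ratings)              # about 4.14

# Three students rate one level higher (4 -> 5).
bumped = ratings.copy()
for i in range(3):
    bumped[i] += 1
mean_after = mean(bumped)                # about 4.23

# Compacted comparison group: 50 instructors whose means all fall
# in the narrow band 4.0-4.3 (again, invented numbers).
peers = [4.0 + 0.3 * i / 49 for i in range(50)]

print(percentile_rank(mean_before, peers))  # 48.0
print(percentile_rank(mean_after, peers))   # 76.0
```

A shift of less than a tenth of a point in the raw mean moves this hypothetical instructor from roughly the 48th to the 76th percentile, which is exactly the kind of difference that is mostly noise.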

That being said, I'm neither a skeptic nor an uncritical booster concerning student evaluations of teaching. Student evaluations vary in design, and some will identify good teaching better than others. What students say is one element in a larger body of evidence that can tell us something about quality teaching.

Incidentally, Terry Doyle at Ferris State University has written an excellent summary of the research on the validity and effectiveness of student evaluations. Great advice, and definitely worth checking out.

So how do other people interpret their evaluations? Any other advice you'd share?


  1. This is a nice article. I strongly echo the 'trust your instincts' aspect (assuming, of course, that you have well-developed instincts). Also, answers to questions are only as good as the questions asked.

    If you can, provide input on the sorts of questions that would get at the information you'd find helpful to improve your teaching.

    At my institution, each department has its own course evaluations, and each professor can add questions that aren't on the departmental form, so this has been fairly easy.

    Talking to students, as mentioned in the article, is always a good way to get feedback as most (at least where I teach) tend to truly want to help you improve the course.

  2. Very practically speaking, one thing I do right away is enter all the scores (can't do this with the comments, obviously) into an Excel file I keep with all my previous scores on the same or similar classes. Sometimes you can spot downward (or upward!) trends that way that you might not see otherwise.
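The commenter's Excel habit can also be scripted. Here is a minimal sketch of the same idea in Python; the course name and semester scores are invented placeholders, and "trend" here is just the latest semester compared against the average of the earlier ones.

```python
# Sketch of tracking per-semester mean evaluation scores for a course
# and flagging a trend. All course names and scores are invented.

history = {
    "Intro Stats": [4.1, 4.0, 3.9, 3.7],  # oldest -> newest semester
}

for course, scores in history.items():
    # Simple trend check: compare the most recent score with the
    # average of all earlier semesters.
    baseline = sum(scores[:-1]) / len(scores[:-1])
    latest = scores[-1]
    direction = "down" if latest < baseline else "up or flat"
    print(f"{course}: baseline {baseline:.2f}, latest {latest:.2f} ({direction})")
```

Anything more sophisticated (moving averages, plotting) is easy to layer on once the scores live in a plain data structure rather than scattered evaluation reports.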


If you wish to use your name and don't have a blogger profile, please mark Name/URL in the list below. You can of course opt for Anonymous, but please keep in mind that multiple anonymous comments on a post are difficult to follow. Thanks!