There's inevitably something negative. Weir notes:

Stay in the profession long enough and you'll soon learn that it's impossible to please everyone. Even if your class featured naked fire-jugglers, at least one student would still complain it was "boring." You'll also learn that some complaints are simply reflexive. When have students not grumbled that the workload was too heavy? Or that some courses were scheduled too early in the day? And even if you held office hours 23 hours per day, someone would complain you were hard to reach.

Too true -- someone always has a negative experience in your class. When this happens to me, I remind myself that there are a few students who frankly aren't suited for college life. No amount of effort or engagement on my part is going to please those who flat-out hate the educational experience.
Give the evaluations just the importance your institution does. If you're on the tenure track, you should have a clear picture of just how much student evaluations count when your teaching is assessed.
Look for the trends in the data. The overall picture matters much more than scores in one course or on one particular question.
Here are some additional thoughts I'd add:
Go after the low-hanging fruit. Most student evaluations I've seen ask both big-picture questions ("Was this course a valuable learning experience?") and more directly behavioral ones ("Were the lectures organized?" "Did the instructor return graded material promptly?"). Your best bet for improving your evaluations is to focus on those specific behavioral criteria.
Keep your audience in mind. Students in your upper-division or majors courses are more likely to find the material you're teaching engaging, but Gen Ed students can be tougher to reach. Expect lower evaluations from underprepared and less interested students.
Don't sweat small differences. If your institution is like mine, student evaluations are quantitatively compacted, i.e., they tend to fall within a fairly small numerical range. One implication of this is that a small swing in raw numerical results can lead to a larger swing in comparative or percentile scores. So (hypothetically) if 3 students in a class of 35 had rated you one level higher on a given question, you would have ended up in the 70th percentile among the instructors you're being compared to instead of the 50th percentile. That's the sort of small difference you shouldn't take too seriously. Again, look at the overall patterns in the data, not minute variations that are likely to be statistical noise.
That being said, I'm neither a skeptic nor an uncritical booster concerning student evaluations of teaching. Student evaluations vary in design, and some will identify good teaching better than others. What students say is one element in a larger body of evidence that can tell us something about quality teaching.
Incidentally, Terry Doyle at Ferris State University has written an excellent summary of the research on the validity and effectiveness of student evaluations. Great advice, and definitely worth checking out.
So how do other people interpret their evaluations? Any other advice you'd share?