A summary, prepared by the division dean, was attached to the front. Five of my seven classes were evaluated; in all, 211 students turned in evaluations. Students could rate their profs as "Outstanding", "Good", "Average", "Poor", or "Failing." My ratings (and you'll just have to take my word for it) were
Now, lest you think I write only to brag, note what else was in my summary: the college and departmental averages for all full-time faculty. The college reports the following ratings for some four hundred professors evaluated campus-wide last fall:
Clearly, grade inflation works both ways! 93% of the faculty rank above average: 65% of us are outstanding, and the remaining 28% are merely good, which leaves barely 1% for the bottom two categories combined. That 65% raises an obvious question about what it is that so many of us can be standing out from! And what on earth does "average" mean when only 6% of full-time faculty fall into that category?
Is Schwyzer right? Are we the beneficiaries of evaluation inflation as pernicious as the grade inflation so many of us deride? To the extent that student evaluations of teaching are valid (and, to show my hand here, my view is "more valid than people think, but limited and not perfectly reliable"), does this phenomenon, if real, render them useless? I grant that the compression of results toward the high end makes it hard to distinguish the genuinely good instructors from the merely competent ones, but maybe the real utility of these evaluations lies at the bottom end: if "outstanding" is the norm, how terrible an instructor do you have to be to get dubbed "poor" or "failing"?
I have to say that Schwyzer's post motivates me to go back and look at my own evaluation data and at the disciplinary norms at my institution. How are matters where you teach? Has the Lake Wobegon effect, as Schwyzer dubs it, kicked in?