Suppose that I’ve become a better teacher. Suppose, for instance, that I’ve used the same bank of test questions over the years and that, due to implementing certain non-substantive changes in my PHI X course, students taking that course from me this year are doing a better job answering those questions than students who took the same course from me in previous years. So the material that I’m trying to get the students to learn and the technique that I’ve been using for assessing whether they’ve learned it haven’t changed, but I’ve become more effective in that my current students are, on average, leaving the course with a better understanding of the material than students who took the same course from me in previous years.
The question, then, is: Should I (A) adopt higher standards with respect to what level of understanding I expect from them so as to earn certain grades or (B) keep the same standards and give higher grades on average than I had been giving in previous years?
To some extent, this sounds like a rehash of debates about grading on a curve: Are we supposed to be evaluating mastery (no curve) or comparative performance (curve)? Notice that my university, in its official description of what an 'A' grade is, mashes these two together:
Indicates originality and independent work and a thorough mastery of the subject matter/skill; achievement so outstanding that it is normally attained only by students doing truly exemplary work.
So an 'A' indicates mastery, but also 'exemplary' (i.e., unusually good in comparison to what is normal) work.
I've always opted for the mastery approach over the comparative approach. Ideally, grades should reflect how much or how richly students learn. Suppose I graded on a curve and my students performed very poorly; I would nevertheless be compelled to give A's to the top 10% of the class. I don't think I should convey to the larger world that those students have a 'thorough mastery of the subject matter'. I'd then be attesting that they can do things they can't in fact do.
Yet Doug's post also highlights an uncomfortable fact: If we adopt the mastery approach, then we have to acknowledge that grades are not (ideally) a reflection only of student learning performance. They are also measures of instructor teaching effectiveness. This may sound funny to the ear: Aren't grades 'earned' by, and given to, students? And is it at all plausible that instructors whose courses have higher grades are more effective instructors? To the former question, I can only say 'no.' Imagine a (wonderful, spectacular, idyllic) world without grades. When students do exemplary things in that world, would we be the slightest bit reluctant to divide the credit between them and their instructors? Of course not. The only reason why we want to distill the instructor contribution to learning from the student contribution is the credentialing function of the modern university, of which grades are the most prominent symbol. Because learning is a partnership, it's essentially impossible to identify the partners' respective contributions to the end result with anything like the precision we expect from grades. But better, in my estimation, to give up the assumption that grades are purely evaluations of student performance.
As to the latter question, the answer is 'of course not'. There's no reason to think that high grades in a course are a tribute to instructor teaching effectiveness. Instructors can inflate grades and thereby detach evaluation from learning. But that underscores the wisdom of divorcing teaching from evaluating — a topic worthy of discussion in its own right.