Tuesday, November 1, 2011

An R1 faculty member spills some beans on teaching and learning

Michael O'Hare, a professor of public policy at UC-Berkeley, shares some intriguing observations about how his colleagues perceive teaching. Since Berkeley is the sort of Research 1 place where you get brownie points almost exclusively for research rather than for teaching, it's gratifying to see O'Hare being candid about the issue and why his R1 colleagues are wary of calls for accountability or improved teaching quality at the university level. Curious to know reader reaction to his observations ...

Like many, O'Hare laments the lack of serious, formal training for faculty about how to teach, as well as "the complete absence of a quality assurance program for teaching that anyone from industry (service or manufacturing) would recognize":

I’ve asked where it is again and again, and everyone – including the chair of the Committee on Courses of Instruction – says they don’t know. For research, we have a fairly good QA system with the equivalent of quality circles, collaboration, watching each other work and talking about it, peer feedback, and the other basics. But for teaching, where our political life hangs by a thread…well, there was a one-off program for about 40 faculty last spring on “how students learn”, and a seminar that twelve faculty a year can join...
O'Hare senses an ambivalence amongst his colleagues about the task of teaching:
I sense a great deal of resistance to taking teaching seriously among most of my colleagues (though everyone asserts, on cue, that we care about teaching, and sometimes that we are very good at it). This resistance has two main sources. The first is subconscious. As we are mostly all aware that student course evaluations, useful and important as they are, are uncorrelated with learning, and they are all we get, we have never had evidence of a type we respect as scholars that we are any good at it, and we are as insecure about our abilities – especially abilities in a field with a strong affective component – as the next person. Seriously engaging with improving teaching is just scary; why would I start to play a game I may not be able to get any good at? ...

The second is a correct perception that there is a production possibility frontier across teaching and research, and an incorrect perception that we are operating on it and therefore any gain in student learning will be at the cost of research productivity.
I have less to say about the second issue, except to note that anything calling itself an educational institution is seriously off track if another responsibility (research) is so heavily incentivized as to crowd out education. As to the first: Yes, exactly. The way to encourage and reward quality teaching without freaking people out is to develop instruments that measure it in an equitable and (to the extent possible) accurate way. One of the baleful effects of student evaluations is that even the reliable ones, if that's all the evidence faculty are evaluated on, only tell us about the end product, and imperfectly at that. As analytical tools, they are often clumsy or obscure, and as evaluative metrics, they are easily tricked by contingent variables instructors cannot control.

O'Hare notes that the cheapest way to improve teaching may be very simple: Watch someone else.
Here’s one example: break the profound isolation of the teaching profession (only a pathologist in a dark room with his microscope, or maybe a forest ranger in a watchtower, has as little day-to-day peer and partner support as we do). A typical course around here meets for fourteen weeks, twice a week, in plenary session with the prof. Let’s imagine two of those weeks, about six hours per semester, redirected from meeting with the students to visiting another prof’s class thrice, briefly writing up three things she’s doing well that (i) I should try to copy in my own course (ii) she should be aware of as effective practice, be proud of, and keep doing; and three things that would make the class sessions [even] more effective. I still have 90 minutes left: this might be a lunch meeting to schmoose about what everyone saw in these visits (maybe in groups of four rather than pairs). After a couple of years of this, given the minimal base of collaboration and mutual coaching we’re starting from – let me emphasize, we never see each other work and never talk about what we do in this area – I guarantee that student learning would increase by way more than the 14% lost from so-called ‘contact hours’. 
Of course, I'm saddened to hear that O'Hare's colleagues never talk about teaching! But surely he's spot on that one of the great oddities of our profession is just how little we know about how others fulfill their pedagogical responsibilities.


  1. Reminds me of a paraphrased movie line: "I'm shocked, shocked to find that no teaching is going on in here!"

  2. Thank you for a very thought-provoking post, and for bringing this article to our attention! I, too, am at what one might call an "R1" university (University of British Columbia), and so some of what O'Hare says sounds familiar. However, I feel fortunate that there are at least more opportunities for improving teaching and learning at my university than he cites. We have an entire centre devoted to providing short workshops, three-day intensive workshops, and even a year-long one that I'm taking right now, all focused on providing information about how to improve teaching and learning. The only thing missing from most of these (with the exception of the year-long one) is a focus on providing faculty with the research underpinning the recommendations. I think that's crucial--we are trained to produce and listen to quality research, and if we are to change our teaching ways, I think that hearing about "what works" is most effective if we also hear the data that supports it. Or at least, that we are given bibliographies so we can go find it if we wish.

    UBC has also recently started to develop a significant program of peer review of teaching, where faculty members (esp. those pre-tenure) will be visited by two others, one from the dept. and one from outside the dept., as a way of trying to supplement student evaluations as the previously main method of judging quality of teaching. Of course, most departments already did peer reviews, but this program tries to improve upon and standardize somewhat the peer review practices in various departments, based on some of the "best practices" of peer review of teaching in the literature on education research. (I have a blog post on this if anyone is interested: http://blogs.ubc.ca/chendricks/?p=111).

    Still, I do find that many people, including myself, find it daunting to have someone come observe and comment on their teaching. I hadn't thought about it before, but this is different from many people's attitudes towards having their research peer reviewed. Most of those I've spoken to recognize the importance of that practice and that it can improve our work (even if it is uncomfortable at times). But peer review of teaching seems much more uncomfortable to many people, almost as if it's more invasive. Occasionally I have heard talk amongst some in the university that sounds as if criticisms of teaching border on a violation of academic freedom. It's interesting to consider why peer review of teaching seems to some more problematic than peer review of research.

  3. Derek Bok has a great anecdote at the beginning of his wonderful book "Our Underachieving Colleges" about a university head (one can only imagine it is him) putting a question about critical thinking on the course reviews, only to have it taken off the next year. To say R1 universities are allergic to fostering collaboration on teaching and learning measures, or even to promoting thinking about them, is an understatement.

