In all of the previous threads we've started on Academically Adrift, there's a common theme: the crisis in higher education and the need for us to wake up to it, analyze it, and figure out some solutions. Becko says that there's a "disengagement compact" (I like this phrase) in which parties are given incentives not to ask for, demand, or offer good teaching; Mike says that part of the problem is that we don't recognize the desperate need to institute virtues into the curriculum, and Jason, mindful of the crisis, wants us to be careful in our attempt to capture "good teaching" (it's not something that can be captured simply in quantitative terms), pointing out that we also need the humility to acknowledge that many undergraduate teachers simply do not know how to teach well.
That's a lot to chew on. To be honest (and I'm in agreement with all of their concerns), it's a bit depressing and overwhelming! The problems are so large and systemic that it's hard to imagine any way out. Below the fold, after a brief overview of what I take to be the main claim of the book, I want to move to a more empowering subject: how to take some stabs at fixing the problem. I don't have a lot of solutions, but I know where, as teachers, we need to devote our attention: tenure and promotion procedures. Let's talk about them.
Before taking some stabs at "what to do", let's take stock of some of the larger claims that Arum and Roksa make in the book. As I see it, and as I argued in an earlier post, their main claim (or at least a central one) is that critical thinking (and likely education as a whole) results from strong habits. The biggest predictors of CLA gains were, they argue, (a) the presence of strong and rigorous habits in the K-12 time frame (supported by schools, parents, communities, etc.), (b) one's previous CLA score, and (c) whether the habits developed and cultivated in (a) were built upon and reinforced in college. Much of A&R's attack on the undergraduate experience, when it fails, is a way of pointing out how colleges are not doing enough to fulfill the mission of (c).
I don't want to get caught up too much on the "CLA" or "using critical thinking as the measure of learning" (I think Jason makes some good critical points about too much focus on this), but I think we can all agree that, at the heart of the successful educational enterprise, are habits -- virtues as Mike stressed (intellectual and otherwise, I don't want to quibble, that's a separate debate).
Unfortunately, there are powerful incentives working against the development of good habits in a college setting. Parents don't want to see their kids fail (as long as the college is seen as hard, it's fine if it's really easy), and they have a vested and personal interest in seeing their kids acquire a credential that will get them a job. Employers don't try to ascertain in any meaningful way whether potential employees can think critically or whether they've actually learned much; they mostly go by credentials and interviews. Students see parents and educators pushing for credentials, and so they push for the disengagement compact; professors get pressure from students not to push them too hard, recognize that research -- not teaching -- is the inter-collegiate stamp of exchange value, and realize that administrators are often more concerned with perceptions of good teaching than with actual good teaching anyway. Why? Well, for the most part, perception drives enrollments, and administrators have to keep enrollments up; moreover, Boards care about endowments, and those are driven by tuition, enrollment, and donations. And none of those is really driven by great teaching. As long as the school is seen as reasonably demanding -- a perception -- it's all good.
That's a thick matrix of bad incentives. How do you get your foot in this door? Of course, as professors, we can individually "hold the line" and just insist on doing what's right, but that's not a solution, it's a suicide pact, especially for people without tenure. So I would propose that we need to change tenure and promotion. Two clear areas in which this needs to be done:
A. Less emphasis on research, more emphasis on teaching.
B. Teaching must be evaluated in a thoroughly different way.
I would say that (A) and (B) probably have little chance of ever happening at large research institutions. So I'm going to hold out here for smaller liberal arts colleges, where teaching is supposed to matter in the first place (at the very least, I suspect that changes at research institutions would be forced - if ever - by first changing liberal arts colleges and creating a successful market pressure for such things). Even still, at liberal arts colleges, I don't hold out much hope for (A). At least not right away. If people disagree, feel free to do so! I'd love to be wrong. Instead, I'm going to move to (B).
How can we evaluate teaching in a different way - one that gives strong incentives for good teaching? This is a huge question, and I don't have any quick answers. I have only some simple intuitions and vague directions. For one, I think the numerical system of evaluation, which is here to stay and won't be changed, would need to be restructured so that the questions actually get at quality teaching. So why not start there? Imagine:
1. Questions that get at how many hours a week the student needed to study, or think about the concepts in the course, in order to earn the grade they suspect they will get.
2. Questions about the level of feedback one received from the teacher on assignments.
3. Questions that get at the level at which the classroom experience itself was demanding - did you need to read the texts/do preparatory work in order to follow along? (how many hours of prep reading did you do? What was your level of comprehension during lectures?)
4. Questions that aim at finding out the percentage of the material the student actually read over the course of the semester.
5. Questions that aim at finding out whether students felt grades were assigned in close tandem with the standards laid out in the syllabus.
These are just some questions. What I'd like to hear from people here is: what questions would you add? What current questions in numerical teaching evaluations should be dropped? What I've tried to do here, albeit briefly, is point out the need to get away from "so and so is an excellent teacher" or "such and such is an excellent course" questions, which tend to be the ones that T&P and administrators focus on. As one might suspect, these questions are likely not meaningful in terms of learning, and probably reflect more than anything whether the student likes the person teaching, or whether they had fun, or whether they found the course "agreeable" in some general way. They are more perception-driven. We need to move away from such questions, and towards questions that get at habits, and whether teachers are cultivating them in the ways that they are teaching their courses. We also need questions that aim to see whether students who do put time and effort into courses are succeeding in those courses (if they are not, this could be evidence of bad teaching).
In a sense, we need to be willing to do things that annoy our students, and this means being demanding and being willing to hand out lousy grades. Annoying students is dangerous. That means that T&P must actually develop ways to assess teaching that reward such teaching when it is effective. The instructor must see it as worthwhile, in terms of professional incentives, to make some students unhappy. The only way to do that is for us to push for such changes in teaching evaluations, and to get P&T committees to highly value and recognize such things. Not a small task.
This would not be an easy thing to do, make no mistake. But it's something in our control that we could try for, something that makes the effort towards good teaching not a suicide pact. Moreover, such an institutional shift in priorities (on P&T) would have the inevitable effect of leading to an institutional climate shift. Students would know that "this is a pain in the ass university where the teachers will work you to death". They'd expect it and eventually the incentive for the disengagement compact might lessen, as they would realize that "kicking their asses" is a value too widely held across campus. I would assume that the incentive on the part of students to push for the disengagement compact increases as they suspect that "kicking their asses" is not a universally valued aim across campus. Hell, who knows - administrators might actually - eventually - see the value in it and try to figure out how to market it. But the important point would be that teachers are rewarded for doing the right thing.
I don't want to make things too simplistic. Even doing what I'm talking about above is tremendously complicated, and I don't want people to think I've overly simplified things. Hey, it's a dream. It's got to start somewhere -- a blog thread, perhaps.
Chris,
How are any of these sensible reforms to be made when 66% of all college professors are adjuncts? I'm sure you realize that many of us teach 6-10 classes each term to make ends meet. And the class sizes are increasing too. Mine at one institution have gone from 30 to 55 "students" in a year's time. My only contact with academics is here. We need to get at the root of the problem, which is the part-timization of the professoriate! When all professors have the opportunity to participate in policy making you will see improvements made. Until then, the money grubbers and power mongers will merrily ignore our concerns.
Hi Robert,
You're right -- it (my proposal) doesn't address the concerns of educators working as adjuncts. However, you've got to start somewhere. It's a complicated mess. I'm simply proposing one place to begin hammering away. Clearly you can attack the problem in multiple places simultaneously.
I don't doubt, by the way, that the part-timing of the professoriate is a deep issue here. At the same time, it's a deep issue that seems, to me, to be at the other end of the problem, so to speak: it's the end where the ways in which people, students, and academics think of education are having real-world consequences on hiring decisions.
I'm not sure we can get people to stop engaging in those sorts of hiring practices until we get to the other side of the problem -- the way in which people are thinking of education in the first place.
In any event, I'm with you on the problem; I'm just not convinced that's the place to start. That's not meant in any way to diminish the importance and impact it has on things. This solution - the one I'm discussing here - could of course be applied partly to the situation with adjuncts too. It would involve not tenure evaluation but yearly evaluation. The problem is that, whereas tenured professors have the power to force such changes to evaluation, adjuncts, unfortunately, do not.
As I mentioned, it's a huge problem.
I agree that teaching is not evaluated very well currently, but I'm not sure what the best way to fix this is. At my school we switched from a fairly standard, numerical kind of evaluation form (which was designed by people with what they claimed was relevant academic training) to something with no numbers, lots of space for students to write thoughtful comments, and questions along the lines you propose here. I would say that the results have been mixed. Sometimes I get very helpful comments that give me a better sense of what is working and what isn't. But many students leave the questions blank or write something with little useful content ("great prof!!!" if you're lucky, for instance). Almost as many seem to particularly dislike the very things that others make a point of praising. And not having any numbers makes it very hard to form an accurate overall impression of whether students are satisfied or not. Satisfaction is not the only goal, of course, but it's about the only thing these forms seem to gauge at all well.
One question we ask is how many hours a week you had to work in this course. Some students write "0" even in courses that required them to write several papers -- so this cannot be an accurate answer. Others write equally impossible answers, such as 100 hours. (I think these tend to be the "great prof!!!" students.) In other words, as far as I can tell from my experience, no matter what questions you ask, the answers you get reflect things like popularity and satisfaction, not what you want them to reflect.
A problem with your question 2 is that what students want (lots of praise and/or justification of the grade given) is not what good teachers provide (I have been told more than once). The advice I have always been given is not to write a lot on student papers but to comment selectively and provide constructive criticisms. I have also been advised that justifying the grade is not what comments are for. So I can imagine students judging negatively feedback that is actually of exactly the right kind.
But I don't mean to be entirely negative. It's a difficult problem to solve. Maybe one solution would be to ensure that teachers know the administration values good teaching and will not penalize individuals, departments, or programs if attempts to be rigorous lead to unpopularity with students. Another might be to evaluate teachers on the basis of assessment tests (showing 'value added') rather than evaluation forms. I wouldn't do away with evaluation forms entirely, but their value can certainly be overemphasized.
Chris, I'm not fond of seeing teaching and research as dichotomous. Practically speaking, they are, since time and energy devoted to one is necessarily time not devoted to the other. I myself advocate a 'teacher-scholar' model of faculty work that integrates these two forms of labor. (Here's a description: http://bit.ly/eskAWh)
That aside, I can't disagree that teaching with high expectations, etc. shouldn't be a suicide pact; i.e., teaching that leads to learning should be recognized and rewarded. So what should we ask on student evaluations? I agree with you that global perception questions aren't very insightful. My other beef is with questions that require expertise students haven't got (asking about the instructor's knowledge of the course content, the quality of the textbook, etc.).
I favor statements/questions that ask about instructor behaviors that we have good reason to think conduce to student learning. Examples:
"The course helped me improve my F, G, H, etc, skills." (where F, G, H are going to be discipline-specific)
"The instructor provided me timely and informative feedback that helped me improve my performance."
"The assignments, exams, and tasks helped me progress toward meeting the course learning objectives."
"Taking this course increased my ability to study similar topics in the future."
"This course was a challenging learning experience." (Given the range of student abilities, etc., the desirable outcome here is a range of responses -- not challenging for a few, very challenging for a few, fairly challenging for most).
My justification for this is that we should not be asking merely about students' sense of their own learning (students invariably think they learn!), nor should we "develop ways to assess teaching that reward such teaching when it is effective." My point here is that we should see the evaluation of teaching in terms of best practices, and reward teaching that follows them, even if that teaching isn't effective inasmuch as very little learning occurs. There are simply too many variables (read: students!) to make it fair to evaluate teachers on the basis of actual learning. Think of this as my 'no pedagogical luck' approach to evaluating teaching.
Chris, you also suggested questions about student behaviors: I favor that too. One issue there is the place of this sort of question in something that's an evaluation of an instructor. I would favor completely jettisoning "student evaluations of teaching" in favor of "evaluations of student learning," but I'm aware that's rather idealistic on my part.
"best practices" approach
Hi DR,
I agree that there are all sorts of problems with numerical evaluations. Part of me would love to dump the whole system - but administrators will never go for that. They want numbers that are easy to pull out. So you have to work with what you have. Of course, a more holistic system of evaluation would be best, drawing from a number of sources.
On Q2, it's not about grade justification; that's not what I was thinking about. I was thinking more that many profs will give an A or B and then really offer only a few "good"s and "explain"s, and that's it. I'm not sure that does the student any good, learning-wise.
In the end, though, we agree: we need to move towards *some* kind of system that provides incentives for good teaching, and which recognizes that good teaching and happy students don't always go together (sometimes they do, sometimes they don't).
Hi Michael -
I wasn't aware that I had treated teaching and research as dichotomous. I don't think they are. I'm a big advocate of the need to do research (this has not made me popular in some cohort circles). Instead, I was arguing that the primacy of research needs to be taken down some pegs. I do think *that* is true, especially at liberal arts colleges.
On many levels, this is an unavoidable problem. For one, the number of publications you have gives you external value. Being a good teacher, for the most part, has in-house value. Instructors know that. Add to that the knowledge that P&T wants pubs, and just wants you to be a decent teacher. Can we talk disengagement compact? I know I've seen it, I'm sure you have too. Basically, universities (well, especially liberal arts institutions) need to figure out who they are, and stick to that. That doesn't mean no research - it means acknowledging its importance, but putting a primary emphasis on good teaching, not just "they aren't complaining" teaching.
I totally agree with you that the "expertise" questions have to go. I always shake my head when students write on my evals "Panza really knows his stuff!" Now come on. How do they know? I could be BSing the whole time in a confident manner. They'd never know.
I like these:
a) "The instructor provided me timely and informative feedback that helped me improve my performance."
b)"The assignments, exams, and tasks helped me progress toward meeting the course learning objectives."
I'm not sure about this one: (c) "Taking this course increased my ability to study similar topics in the future."
...only because I'm not sure how they'd know that (an expertise question? it could be that they'd know it in hindsight perhaps).
I also like "This course was a challenging learning experience."
I'm not sure why you object to "develop ways to assess teaching that reward such teaching when it is effective." This sounds like best practices to me. Otherwise how would you do the assessment? The main point here is that, say, it may well be understood that in discipline X, building good academic habits is done through practices Y and Z. Well, then you should reward Y and Z when you see them, regardless of how the students feel about it.
In your last paragraph, though, I think I see your point and I don't think I disagree: if you are looking to build habits, then it may well not be possible to point to "learning outcome successes" course-by-course. Instead, you simply look to see if the right practices are in place. If they are, you can assume, I would hope, that learning will successfully take place, but this would have to be evaluated in a more longitudinal manner.
That seems to me an acceptable tweaking, though as you well know, it would be a very radical point to get T&P committees to embrace.
Chris, sorry to seem critical on either score. My point about research vs. teaching wasn't so much directed at your post but just at the idea (common enough, I think) that an academic career must be 'about' either one or the other.
A small clarification: I don't think we should be evaluating instructors for how effective their teaching is, if by that is meant how well their students actually learned. Again, too many variables here: students themselves, plus disciplinary differences, etc. In contrast, a best practices approach doesn't measure teaching in an outcome-based way, but based on whether instructors implement pedagogical techniques, etc. that tend to produce learning (even if, in some actual cases, they don't). My own opinion is that outcome-based assessment of learning works best at a macro or institutional level, but not so much at a micro, course, or instructor level.
Michael -
No worries on the criticism (bring it!). I was just sure I hadn't said that (about research).
I like your idea about outcomes assessment vs., say, habit-practices assessment. I think it would drive evaluators nuts (which is fine, it's a good idea), but I think it's a very valuable addition to the "how do we endorse good teaching?" conversation in T&P.
Students know that these evaluations are a tool to reward teachers they like and punish teachers they dislike. If they love the fact that Prof. Snoozewell gives everyone an A, they're still going to praise his classes for being so challenging, if they know that's what's expected. If they are furious that Prof. Higround gave them a D, they'll put down that she provided no feedback on assignments, however much she actually gave.
I agree that we should thoroughly reconsider how teaching is evaluated, but I think that altering the questions we ask students is only tweaking the process. Monitoring students' perceptions should be part of the process, and it's worth thinking about what questions should be asked, but something more radical is needed to shake things up - for example, more emphasis on peer-evaluation of teaching.
I disagree that liberal arts colleges ought to put less emphasis on research and more on teaching. First, from my own experience teaching at an LA college and from what I hear from my colleagues at other LA colleges, there isn't much of an emphasis on research. Yes, one needs the requisite handful of articles to get tenure. But one's contribution to one's field is often at best ignored and at worst penalized in many LA colleges because it "takes away" from the institution. This is part of the overall hostility to the life of the mind - the anti-intellectualism - that we used to find rampant outside academe and is now firmly entrenched inside it as well.
Indeed, I think that an increased emphasis on rigor in student learning ought to be accompanied by an unabashed love for intellectual and scholarly pursuits. Or, put another way, I think that the devaluation of scholarship at LA colleges is not unrelated to the overall emphasis on non-academic, non-intellectual values and pursuits. Great teachers should be great scholars. Great philosophy teachers should be great philosophers (which most of us - but not all of us - accomplish through writing).
Of course I agree with part of the proposition: that there should be an increased emphasis on student learning (and good teaching practices). But as Michael pointed out and as you agreed, this need not be at the cost of or inconsistent with being an excellent philosopher (again, typically - but not exclusively - accomplished by contributing written work in one's field).
I agree with you about evaluations and I like Michael's suggestions. My least favorite question on our form asks about how "enthusiastic" the teacher is. Naturally, this invites students to think of learning in terms of entertainment.
Evaluations should not encourage subjective reactions - they should not be a poll about how much a student liked a class. If they are used to measure, they should measure measurable things, i.e., number of hours spent studying, kinds and diversity of assignments, acquisition and development of particular skills, etc. Evaluations should always include the grade the student actually received in the course. This can be done while retaining anonymity.
Most importantly, evaluations should never be the sole measure of teaching (as they are at my institution). There should be an office of teaching development offering a range of free training to all faculty throughout the year and it should include a service for helping faculty introduce, keep track of, and evaluate the pedagogical practices they use.
Chris,
Yes, I think we pretty much agree. On Q2 I meant that what students want and what is good practice might not be the same thing, so that if a question asks whether the professor provided useful or helpful or valuable feedback, the students might say No when other professors (if they saw the comments) might say Yes. I agree that a few "good"s here and there are not helpful, but if the paper got an A then the student might not care. If we ask whether the professor provided many comments, then those who try to provide a few, well-chosen comments might be penalized, especially by students who received grades lower than they thought they deserved. So it would be a hard question to word effectively. That is the point I was trying to make. Having a set of questions all trying to get at the same thing in different ways might help.
Michael makes some good points about assessment of outcomes, but if we're going to do such assessment I would like to see it used to help teachers (when it comes to T&P, or just not being laid off) who don't get good evaluations but do produce good outcomes. I don't know how we do this unless the assessment is somehow individualized. But I see the dangers. This might be another reason to think that teaching should be evaluated in multiple ways.
Ben, Becko, (and others),
Always been curious about this, so I'll ask: How would you feel about going away from anonymous student 'bubble forms' to something like smaller focus groups of students who try to arrive at a consensus about instructor performance? I've always found the latter to be far better in terms of the feedback you get as an instructor, and perhaps by making the process intersubjective we'd remove some of the subjectivity, grade-related bias, etc. that's been referenced here?
Ben, I wonder what you mean when you say: "students know that these evaluations are a tool to reward teachers they like and punish teachers they dislike"
Are you saying this is what students do with evaluations or that students do this knowing that the institution will punish or reward teachers...?
Here's a post from December on this topic, indicating that students think their institutions pretty much ignore the evaluations: http://bit.ly/eEsc9I
Becko -
Split into two pieces:
_RESEARCH_
We may have to agree to disagree, or we may have to make sure that we're talking about the same thing, or maybe we have different experiences.
I *don't* mean that LACs should stop placing importance on research. Research is *essential* to what we do, both as thinkers and as pedagogues, for a whole host of - in my opinion - obvious reasons. I do research myself, am driven by it, value it highly, and see it as a necessary component of my pursuit of professional excellence. As I noted to Michael, this stance has not won me friends in my "you should just teach, teach, teach" cohort.
What I'm saying is that T&P needs to de-emphasize it, not that teaching loads should be increased, pubs ignored, or whatever. In my own LAC experience, every year the bar for scholarship is raised a bit higher. On its own, and in the abstract, this is not problematic. However, it is problematic if teaching is the primary mission of the school. At some point, something has to give. There are only 24 hours in the day, people have families, there are institutional needs, and so on. To make space for one, at some point, you have to take from the other. It's not a question here of one or the other, which would be ill-advised (on both ends). It's a question of the identity of the LAC, and what balance best achieves that within a realistic time and workload framework.
To be honest, I have not had the experiences you have listed at your LAC. I can assure you, no one at my school sees their pubs marginalized as irrelevant to the school. Quite the opposite: they are lauded, and resources (of a variety of types) flow to those who do more research, and less to those who teach. If it means putting an "at some SLACs" caveat in front of the proposal (I'm not at all of them!), no problem. That said, the problem of increasing "disengagement compacts" as a result of increased research emphasis simply exists. Maybe not everywhere, but it does certainly exist. We (on the whole) need to talk about it.
_TEACHING COMMENTS_
I totally agree that evals should not be polls, and that they should be a mere component of total evaluation. However, in my experience, they almost always are polls, and although T&P and administrators always _say_ that evals are a mere component of total evaluation, that's not entirely true.
The big question here for me, though, is not necessarily whether this or that type of evaluation is part of a holistic assessment process, but rather whether the assessments that we are using give the proper incentives for actual teaching in the sense that _Academically Adrift_ seems to be suggesting is not really taking place. Basically: how do we break the disengagement compact by changing the P&T process?
DR:
I see your point on Q2 - the question is a bit subjective, and that is a problem. I wonder: can some of this be solved by explaining (in the syllabus or elsewhere) what the learning objectives for the course are, and what kinds of comments to expect that might help one to achieve those goals? Still surely subjective, but a step in the right direction.
I agree on good outcomes and bad evals. I would like to see a similar thing. But I wonder if part of what I'm calling for above helps. If students list "I studied 5 hours a month for this course" but such students got Ds, whereas those who studied 5 hours a week got Bs, that's useful info, and it serves to show that something is going right in this class, even if there are lots of Ds and bad general evals. Questions that get at "are you being asked to work in this course?" and "what grade do you expect to get?", and then looking for helpful correlations in the answers, would add a great deal to our picture of the complexity of what is going on in the classroom.
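To make that concrete, here's a rough sketch (my own illustration, not drawn from any real evaluation data) of how two such behavior questions could be checked against each other once the forms come in; the numbers, the grade scale, and the question wording are all assumptions for the example:

# Hypothetical sketch: correlate self-reported weekly study hours with the
# grade each student expects, using responses from one course's eval forms.
# Everything below (the data, the grade scale) is made up for illustration.
from statistics import correlation, mean  # correlation() needs Python 3.10+

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

# Each tuple: (hours of study per week the student reports, grade they expect)
responses = [
    (1, "D"), (2, "D"), (5, "B"), (6, "B"), (4, "C"),
    (7, "A"), (3, "C"), (5, "B"), (1, "F"), (6, "A"),
]

hours = [h for h, _ in responses]
points = [GRADE_POINTS[g] for _, g in responses]

print("mean reported hours/week:", round(mean(hours), 1))
print("hours vs. expected grade (Pearson r):", round(correlation(hours, points), 2))
# A clearly positive r is some evidence that effort is being rewarded in the
# course, even when the raw grade distribution (or the "did you like it?"
# numbers) looks bad.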
(sorry about last comment, made a cut/paste mistake)
Ben,
I agree it only tweaks the process. I wasn't really suggesting more -- we have to start somewhere. But that said, two points to consider:
1. Changing the evaluation process here via evals does much, much more than change the questions students are asked to answer. It changes (a) the way teachers think about (and what they feel comfortable doing in the name of) effective pedagogy, (b) the way P&T thinks about it, and (c) it pushes back against administrators who are happy to let student-driven perceptions of entertaining professors stand in for good professors.
That's significant, even for a tweak.
2. I hate to encourage anything misleading, but I don't think students think that much about those questions, which are frankly typically buried in the evals that currently exist. So why not do this: leave the form as is, and then later glean out the (to them) seemingly irrelevant questions -- the plus being that they will be answered more truthfully.
Right now, I teach at a bottom-tier university, mainly just a required Critical Thinking course since I've given up trying to get students to read the short selections and go through draft and rewrite processes needed for them to write even half-way decent papers in other Philosophy courses.
I've tried different pedagogical strategies in my courses -- including using CLA Performance Tasks (with which I've had a lot of involvement at my institution -- for an example of what can be done with faculty-driven CLA, see http://tinyurl.com/4u62cmx).
I'm in agreement with your focus on the necessity to identify and develop virtues, "strong and rigorous habits" if there are going to be any measurable gains in Critical Thinking.
What has been lost is any sense that students are not blank slates, that if they lack virtues it is likely because they already have vices. The "disengagement compact" both reflects those vices and reinforces them. There is a "thick matrix of bad incentives" (a great phrase, btw), but student performance and learning at some point are also a product of students' choices and commitments.
This is essentially a moral problem: how to move those who already possess vicious dispositions away from those and ultimately towards virtuous dispositions. It's a very difficult problem because those very vicious dispositions keep us from being able to adequately use the sorts of incentives or reasoning that might move better-disposed students.
In my Critical Thinking classes, we've done a lot of different activities aimed at getting students to reason practically and make some commitments about studying: their ends (which are pretty much "get a good job"), the necessary means to those ends (like actually having demonstrably developed the skills that employers are demanding), their other goals, temptations, etc. I use the inevitable failure of most of the students on the first test (and the first test is amazingly easy -- if one has done some study, using the sample problems, handouts, and review sheets provided) to bring these matters home to them. All of this seems to get very little traction -- why?
So many of my students come into the classroom already morally damaged by their K-12 education, having deeply rooted dispositions which lead to failure. Being able to show up somewhere on time with a minimal amount of preparation -- that is a habit, as is its opposite. Looking carefully at and following all instructions -- another habit most of my students not only lack, but of which they have the vicious opposite.
The questions that you suggest for evaluating teaching -- how would those play out in the sort of situation I and so many of my peers teach in?
We can assign work, show the students that success in the class depends on spending a certain amount of time studying, even set up learning activities taking this as their theme, and the majority of the students can still say: Eh, not my thing, not going to do it. You can sit down individually with many of these students and show them how they are on a trajectory to flunk, and they'll smile, tell you that they're really going to try, but not change their behavior at all. I'm actually a surprisingly popular professor despite being tougher than the other profs, and I have repeat students taking my classes every semester, and continuing to fail.
I don't have a solution to provide. I suspect much more drastic measures will be required to break these cycles of mutually supportive vicious dispositions and incentive systems. A good first step would be calling vices by their names, and being bold enough to ascribe them to students.
While I agree that better measures of teaching are needed, and I'm receptive to some of the measures you suggest, those measures cannot be indexed to the amount of anything students have control over through their choices, or else teachers in lower-tier institutions are simply screwed.
Thanks Chris for a very interesting post. This has been an interesting discussion. However, unless I am missing something, much of this discussion is resting on the unargued assumption that students have the expertise to judge teaching. What expertise do they possess that enables them to accurately judge the expertise of their teachers? This is a different question than asking what we, as teachers, are looking for when we want feedback on our effectiveness. Individually, we can partially judge our effectiveness by the test scores, quality of writing, class discussions, etc. of our students. What we are looking for is feedback on how we teach -- are we effective at interacting with our students in ways that actively engage them in the learning process?

For that I think we should have teaching specialists who go into classrooms and observe the interaction that is taking place between teachers and students. This would be followed up by a formal written report to the teacher, and if part of a formal review, this report would also go to the department chair and/or review committee and become part of the teacher's official record. If problems are identified that need corrective action, the teacher would meet with the teaching specialist and develop a plan of action that would be part of the report. There would be follow-up as needed. I would also suggest that institutions offer training sessions on teaching effectiveness, different teaching methodologies, best practices, etc., throughout each semester that would be required for all faculty members as part of their professional development.
GB -
I agree with most of what you say here. However, I wasn't so much arguing that we should be convincing students of anything, actually - rather that we should simply *do* the practices that we know will be effective, and then we should be judged (in part) on whether we do those practices, and whether we do them well. That's in isolation from whether students like the practices, or whether they are "sold" on their benefit.
I think many people are underselling the massive sea change this sort of alteration in focus on 'what counts as good teaching' would result in. This is no small change. Teachers would be protected and rewarded and given proper incentives to teach right, and students would learn things.
I think you can judge how radical it is by the number of *faculty* that would reject it. And I think a lot would, for a variety of reasons I think you can figure out.
This approach, by the way, would in itself be an assault on the vicious habits already possessed by some students. At this level I'm not sure it is necessary to engage that vice explicitly in conversation. Instead, you simply attack it, institutionally, at the core: the practices themselves.
Have the conversation later.
Hi Chris, nice post, I'd like to address a couple points: one about employers asking for essays on the spot during interviews, and another about administering alternative student feedback forms.
First: At one point in your post, you say,
"Employers don't try to ascertain in any meaningful way whether potential employees can critically think or whether they've actually learned much, they mostly go by credentials and interviews."
It seems to me that more employers are demanding essays on the spot, in light of recognizing that many of their new hires have atrocious writing skills. I've been trying to collect evidence of this; when I find a relevant article, etc., I pass it on to students.
Right now this is just an informal, on the side, "when I notice it" project. Just last week, I brought the following article in to my classes, and read some excerpts out loud to them: http://online.wsj.com/article/SB10001424052748703409904576174651780110970.html
And I posted a little bit about this over on my blog (http://philosophy-teacher.blogspot.com/2011/02/not-degree-skills.html).
What do you think...if it is the case that more employers are starting to do this, could this be one way of motivating students and administrators to take critical thinking and analytical writing more seriously?
Second: Alternative Student Evaluation.
At times, I have administered my own feedback forms to students, both at the midterm, and at the end of the term. (When I have time...which is not the case, lately.) This serves two purposes: first, it gives me a more accurate and more helpful response, and second, it gives me a "back up" should administrators come a callin'. Which hasn't happened yet. But you never know. As I elevate my standards, students get nastier.
Unfortunately, much of the feedback that the standard forms solicit, at my schools, tends to be pretty unhelpful. Similar to what someone else pointed out before, those forms tend to ask students for feedback they are not at all qualified to give.
On the informal evaluation forms I hand out, in class, I try to ask questions similar to some of the questions posed by folks in the response posts above.
What do you think? Have you ever just administered your own feedback form for students, as an alternative?
Perhaps they wouldn't hold up "in court" (deans, administrators, etc.), but at least it gives us instructors a supplementary take on what students benefited from, what they did not benefit from, etc (assuming the alternates we hand out are well designed).
Karla
John, you ask: "What expertise do they [students] possess that enables them to accurately judge the expertise of their teachers?"
First, I don't suspect that Chris or any of the other commenters are assuming that student evaluations should be the only information we receive about teaching performance. Your proposals for other avenues to gauge teaching performance don't require us to junk student evaluations.
Second, I don't think there's any single individual or constituency who's in a position to be an "expert" about someone else's teaching. Even an expert teacher isn't in a position to give expert feedback about some facets of my teaching, e.g., the clarity or accuracy of my explanations, unless she is also a disciplinary expert. And I'm even skeptical that my disciplinary colleagues are expert evaluators of my teaching. My own experience is that faculty evaluate their colleagues' teaching in terms of how they themselves teach, not in terms of whether the teaching is effective.
The question is: are there components of teaching with respect to which students are "expert," things we could ask them about that no one else is in a position to know and which would result in reliable results and constructive feedback? I agree with you that students aren't expert at judging "teaching" quality as a whole. But they are expert at instructor behaviors and their own learning experiences. The questionnaire items in my Mar 19 11:13 comment are intended in that spirit: what can students legitimately contribute to our understanding of teaching performance?
Karla,
I need to check out that link on employers. If this is a trend, I think it is awesome. When students actually realize that they'll have to display their education in interview settings, as opposed to just having that education assumed by employers, we'll be in a different place. Not the best of all possible worlds, but one step closer.
I like the alternative comments and feedback suggestions, I think they are all useful. However, in this particular post I'm not really focused on ways that teachers can get better feedback, but rather on ways that excellent teaching can be rewarded and the incentives for bad teaching removed. I'm going to guess that most teachers know exactly what they should do in order to better their courses, but many are unwilling to really go out and do all of those things, because there are few rational incentives to do so.
Chris, thanks for the response.
ReplyDelete"However, in this particular post I'm not really focused on ways that teachers can get better feedback, but rather on ways that excellent teaching can be rewarded and the incentives for bad teaching removed."
Good point.
But...do you think if we can more effectively (more effectively than the often ineffective evaluation constructs already in place do) prove that many of us are truly making a difference (in all the ways that really count), that many of us are teaching well already, then maybe:
1. Excellent teaching will more likely be noticed >>>
2. Excellent teaching more likely rewarded >>>
3. Students benefit from having those kinds of instructors in place >>>
4. Businesses, society, communities benefit from students being prepared to write, think, speak well...
In other words, could somehow demonstrating that the evaluation constructs already in place do a poor job of recognizing good teaching encourage administrators to develop better constructs, and then...take us one step closer to a better system for all?
Whattya think?
:) Karla
Karla,
I think the main problem lies with this:
"...do you think if we can more effectively (more effectively than the often ineffective evaluation constructs already in place do) prove that many of us are truly making a difference (in all the ways that really count)."
In my experience, current evaluative practices do not center in on evidence for these things. Instead, they tend to center in on evidence of other things. That's the problem.
What you don't want is a situation where teachers who do the job that they think actually will make a difference have to then argue -- *in spite of* the way their evaluations are typically handled -- that they are actually doing a great job.
I believe that administrators, parents, students, teachers, and so on already *know* (each in their own way) that the current typical practices do not maximize student learning. So it's not that they need these things proven to them. It's that they have bad incentives.
For instance: I don't doubt that many administrators know that student perceptions of good teaching don't yield good teaching. But student perceptions of good teaching are too valuable to them. If instructors were truly hard asses (let's say), student perceptions might drop even as learning goes up. But most administrators have no incentives to push for that.
Chris,
ReplyDelete"So it's not that they need these things proven to them. It's that they have bad incentives."
Great point there (and I recognize this is the point you were making in your original post too).
Okay...well I will continue working on collecting evidence that more employers are demanding essays on the spot, etc. As you concurred earlier, this could be one way of moving forward!
And I'll keep demanding high standards in my classes. Despite the fact that sometimes my doing so leaves students so resentful and vitriolic ("other teachers don't require this! Why do you!?"), that I question my own sanity, due to the toll this takes, at times, on my own emotional health!
Karla -
ReplyDelete"And I'll keep demanding high standards in my classes. Despite the fact that sometimes my doing so leaves students so resentful and vitriolic ("other teachers don't require this! Why do you!?"), that I question my own sanity, due to the toll this takes, at times, on my own emotional health!"
And that's my point! Why are we in such a state that doing the right things not only leads to resentment and hostility on the part of students, but also causes teachers to question their own sanity?
The incentive structure is all messed up at the core. What we need are some small, but important, incremental steps to change the aspects of that bad incentive matrix that are within our control. Some are not, some are.
That said, I'm not for changes that require only one gear in a 10-gear machine to move. We need partnerships with at least one other gear, or we're wasting our time and energy for sure.
I posted a longish comment on the statistics of bubble-dot surveys, anonymously. Unfortunately I did not save it. Was there a technical problem with it? Was it deleted?
Thanks!
Anonymous -
It never showed up for me, so I don't think it was deleted. Perhaps a tech glitch?
I absolutely hate when that happens. I always do a copy of it before sending, just for that reason!
Anon Mar 14 4:17: Here's the text of your comment. Sorry it got eaten!
<
I want to weigh in on the issue of numerical, bubble-dot student evaluations. My thesis is that, except in rare cases, they are useless and meaningless. Trying to use them to measure performance is like using a Geiger counter as a lie detector.
Student evaluations are not "best practices." Off the top of my head, I don't know of any other workplace (though there must be some) in which customers (I know students aren't customers, but I'm building an analogy) are directly and regularly asked for "quantifiable" data regarding their evaluation of the employees they happen to encounter (who are usually not the ones responsible for various policies and workplace practices), in such a way that it is expected that that data will be forthcoming and will be used in decision-making.
In a class of 100 or fewer students, a representative sample of that population is 100. If one or a few students do not fill in the bubbles, or mess it up so that their responses have to be disregarded on their own terms, then you cannot generalize about that population. For a class of 500, the sample size needs to be 50%. Since a lot of classes have fewer than 100 students (plenty do not), those student evals of those classes are useful only if ALL the students fill in the forms correctly. If not, they should be round-filed. I know it is "impossible" to tell administration bureaucrats this, but they are simply wasting time with this stuff, and to the extent that they make decisions about tenure and other matters on these bases, their "objectivity" is really arbitrariness.
I'm leaving aside here all the myriad problems with the nature of the survey questions that might be found on any given bubble-dot survey. Suffice it to say they are not typically well-formulated.
The bubble-dot system has to go. So while the kinds of questions Chris Panza proposes to help fix it in the original post are great, they do nothing to fix the basic flawed character of the bubble-dot survey system. It's junk.
Student input that is based on invalid measures and insufficient or biased samples is *meaningless*. If I am a manager evaluating a subordinate's performance, I want meaningful information about that performance, as unsullied as possible by personal-political-etc. attempts to exert pressure. Given that they are often inherently meaningless, their function inevitably becomes that of putting pressure on faculty to do the things that students and administration want them to do--things that are often inimical to good education.
My alternative would involve greater levels of on-going peer evaluation and mutual mentoring, perhaps with students involved in that process conversationally. And, of course, students should always have fair and responsive access to a responsible party if instructors are abusing their position, inexcusably failing in their duties, and so on.
I'm anonymous so that I can keep my job.
<
Oh! Thank you so much Michael. The internet has quite an appetite.
Anon:
The internet most certainly does! It has fed on my comments on many occasions.
I half agree with your post here. I agree that bubble-dot forms can turn out to be statistically unreliable, but I wouldn't go so far as to say that the process itself, or the instrument, is inherently so. Moreover, I certainly think (as I've noted above in a few places) that any evaluation needs to be holistic, clearly. Relying only on course evals, for a whole host of reasons, is short-sighted and incompetent.
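To put some rough numbers on the reliability worry, here's my own back-of-the-envelope sketch (not from any real eval data, and it charitably assumes the responders are a random sample of the class, which they often aren't):

# Sketch: how uncertain is the class-average of a 1-5 bubble item, given how
# many students respond? Uses a finite-population correction because a class
# is small. All of this is a simplifying illustration, not a real analysis.
import math

def rating_margin_of_error(sd, n_responses, class_size, z=1.96):
    if n_responses < 2 or n_responses > class_size:
        raise ValueError("need between 2 and class_size responses")
    fpc = math.sqrt((class_size - n_responses) / (class_size - 1))  # finite-population correction
    return z * (sd / math.sqrt(n_responses)) * fpc

# Illustration: a 30-student class whose ratings have a spread (SD) of 1.0 point.
for n in (3, 10, 20, 30):
    print(f"{n:2d} responses -> average uncertain by about +/- "
          f"{rating_margin_of_error(1.0, n, 30):.2f} points")
# With a handful of responses the average is mostly noise; only near full
# participation does it pin anything down -- which is Anon's worry, though it
# also suggests the instrument isn't *inherently* meaningless, just fragile.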
Those things said, a few quick comments:
1. As I've tried to argue above, any changes to the system need to work in partnership with other interested parties. In this case, neither the administration nor the faculty at large is going to embrace a rejection of the bubble forms. So I think we need to work with this instrument, and alter what it looks at.
2. I think the bias issues are less of a concern when you ask questions that are not geared to perceptions of teaching excellence. When you ask "is so and so an excellent teacher?" (for example) you are not asking a student a question that is meaningful, because they are not qualified to answer it. It's more "did so and so entertain you well?" or "did so and so seem personable?"
However, asking "how many hours a week did you study?" or "how many pages were you asked to write?" and "what grade do you expect you'll get?" (for instance) are meaningful, when analyzed rightly. These are not really perception questions, but questions focused more on behaviors.
My aim here in this general post is only partially to claim that this or that should be used for evaluation. My core focus and intent here is to suggest that whatever we use, or whatever way we collect data, it needs to focus on items that give strong incentives for people to teach well, even if this makes students uncomfortable, makes them mad, or makes them fail. Currently, such incentives do not exist; in reality, the reverse incentives are given.
My question is: how do we break the matrix of bad incentives in a way that can get teachers and administrators, say, to agree? Realizing that this will likely happen in P&T, my question is: how do we change P&T to make this happen?
Michael
I agree with what you said regarding student evaluations. I should have been clearer about what I was referring to: evaluations that do ask students questions about teacher knowledge, competence, etc., which I am familiar with from previous institutions where I taught.
Also, regarding peer evaluations - I agree with you. I also suspect that they can be very political and misused. But that is not what I had in mind. At GVSU there was a department dedicated to helping teachers identify their strengths and weaknesses and to working with them to improve teaching effectiveness.
Michael - as far as rewarding or punishing is concerned, at my previous institution there was a financial crunch. A decision was made that faculty numbers would have to be reduced, and the faculty member who had consistently had the lowest teaching evaluations was the first to go. (Nobody had tenure; we were all on one-year contracts). So I always tell students, and I mean it sincerely, that these evaluations really matter to faculty and the institution.
However, at my current institution, we've recently switched from paper evaluations, filled out during class time, to on-line evaluations filled out in the students' spare time. Consequently, very few students fill in the evaluations at all. I think then that students suspect that these evaluations are not taken seriously. And in fact, because hardly any evaluations are filled in, the university cannot and does not place very much weight on them.
Because so few students fill the forms in, I get a good idea of how they do so. If only two students fill in the form, and 50% rate me as Excellent, 50% as terrible, it is clear that one loved me, one hated me. The pattern that I see is that students tend to like an instructor, and give all positive, or dislike, and give all negative. Even with more subtle and revealing questions, I think you would still get, essentially, a 'like/dislike' response.
That's why I really like the idea of a focus group; sounds like a lot of work, but potentially much more revealing.
I am anonymous from above.
Chris, thanks for responding. I guess I am a little bit more worried about statistical significance (I teach empirical research methodology so this sort of thing is a bee in my bonnet), but perhaps too much so. Focusing on student reports of objective circumstances and specific expectations would, bracketing that other issue, be a great improvement. "Is X an excellent teacher?" is a terrible question, even when posed to students who are not interested in punishing the boring or those who catch them plagiarizing. One tries to teach so that all can learn, but we all know that sometimes cognitive styles just don't mesh--so diversity at that level is a problem.
I do like the idea of focus groups--perhaps rather than doing it for each class, one is picked at random, to cut down on work and schedule interference. The results could be worked up for the instructor to ensure student anonymity.
I know this thread is ending, but I am curious: Do you ever just ask your students, say in class or in the small groups that linger at the end, how they think the class is going, what they might want to learn or do that you're not doing... those sorts of questions, but put in an impersonal way of course?
Anon 3/19 8:16: I do a mid-quarter self-evaluation in many of my courses. This is an occasion for students to reflect on how well they're mastering the course learning objectives, as well as a chance for them to give me feedback about how the course is going. I find this to be at least as informative as end-of-term evaluations.