I'm going to be trying something new this quarter and would be interested to hear initial reactions to my plan.
In my interdisciplinary Gen Ed course on death and dying, the students receive a weekly writing assignment. They are required to complete five of the assignments during the term. The assignments themselves will be ungraded, but I plan to give collective feedback along the lines I described in an earlier post.
I'm then requiring the students to tell me which three of the five completed writing assignments they'd like me to count for their term grade. In short, they pick their best work to be counted toward their grade.
Now there are a number of reasons I'm trying this. For one, it requires the students to write a fair amount without swamping me with continual grading. Second, it broadcasts what I hope is a positive, hopeful message to students: I won't evaluate you based on your failures. After all, people trying to learn will fail sometimes. Instead, I'll grade based on what you can do.
Lastly, it's a step toward helping students take a metacognitive stance on their own writing. Instead of relying on my judgment about what makes for quality writing, students have to interrogate their own writing and articulate, at least in their own minds, the standards they believe their writing should meet.
What do you think?
I like this idea a lot. You might encourage them to grade their own work throughout the semester, in light of your collective feedback. That will make it easier for them to identify their best work at the end of the semester -- and it will also make them do the metacognitive work throughout the semester, rather than just at the end.
This is an interesting way of doing things, and I may be interested in doing this in the future as well. But I am wondering about transparency and logistics. Will it be relatively easy for the students to know how they're doing just from looking at the collective feedback? I would be worried about this in the case of a student who is very intelligent but who continually argues against an opposition made of straw (or continually makes some other fallacious mistake like confusing 'if, then' with iff). In this kind of case, a student may be quite impressed with her work while I may think that it's actually not worthy of an A. The other thing I'm wondering about is when the grading will get done. The collective feedback is supposed to alleviate the burden of all the grading, but if you have to actually sit down and grade three papers per student at the end, it sounds like a lot of work. Would you be assigning grades to everything as you go along and then "drop" whichever two grades the student doesn't choose as her best work?
You could also ask them to write up something about why they picked these 3 of the 5, i.e., why they are best or whatever, instead of just having them think about it. Sounds good.
Thinking over the metacognitive aspect of the assignment, particularly in light of the collective feedback model introduced in the other post, what if students had to submit their best work to be graded, along with a short (1-2 page?) writing assignment arguing for / explaining why they picked those papers to resubmit. You could require (or at least strongly encourage) that they make reference to the collective feedback in formulating their position. That would help ensure that students actually applied the general feedback from your comments to each specific essay. What's more, it would help guard against students being unable to articulate why they pick an essay, and picking it based on the writing experience (e.g., it felt good when they wrote it) or guesses about what they thought you liked best.
David - Good idea. Maybe provide a rubric they can use for self-evaluating?
Anon - I'm not sure I want it to be "relatively easy" for students to use the collective feedback to improve their own work. Part of the point here is to have students do more of the discovery concerning the quality of their own writing. There's some good research supporting the idea that the lessons are more likely to sink in if students discover their strengths and weaknesses than if teachers just point them out. At the same time, I don't want students to be utterly frustrated in their efforts to self-evaluate or to routinely evaluate themselves more positively (or more negatively) than their work warrants. So I guess I want a balance between compelling the students to do meaningful self-evaluation on their own and my guiding them through the process.
In any case, I hear everyone saying (a) that students need clear evaluative criteria if they are going to do this metacognitive work successfully, and (b) that they need at least minimal guidance in order to get going.
Hi, Michael,
I've done something like what you've described, but with the metacognitive element that Jeff Maynes mentioned included.
This semester, I'm trying something different, but that's motivated by a hope like the one you're describing: for each category of assignment (e.g., quizzes, short papers, reading comprehension papers), I make the first one required but ungraded -- it's a practice run. I then have one-on-one conferences with each student to go over those first submissions, explaining my criteria and how that first submission would fare when judged by those criteria. Their second (and subsequent) submissions *will* get a grade.
The conferences have, I confess, been both tiring and difficult to schedule, but I think they'll pay off. (Important to note is that I'm talking about two sections, each consisting of twenty students. I KNOW that this wouldn't work for people teaching larger sections!)
Michael,
Yes, a rubric might help. But it might not be necessary, depending on the level of "collective feedback" that you offer. The closest thing I've ever done is this: Students wrote a brief summary of a passage (from Hume, if I recall). I wrote a "model response." It wasn't an outstanding response, just good -- something that most of the students in the class could have done. I then gave detailed, written comments on the model response. The comments indicated common mistakes that students made in their summaries, along these lines: "Sentence 1 says that Hume believes that p. Lots of students wrote that Hume believes p* or p**, but that's incorrect because.... Sentence 3 says that Hume argues that q. Some students omitted this, but it's important because...." I then gave students the chance to rewrite their summaries for extra credit. Most did so. Most improved dramatically. The students reported that the comments helped a lot. I take my experience as anecdotal evidence that students can assess the quality of their own work if given a model to which to compare it and some detailed commentary on the strengths of that model.