The Problem with Student Course Evaluations
It’s the time of year when we start filling out student evaluations. Instructors pass around pencils and leave the room, or the forms are completed online. You might fill out the 1-through-5 quantitative questions and write a few words on the qualitative ones, but do you know where they go afterward? Do you ever think about what your instructors make of them? Do you know that they are quite controversial?
I’ll never forget my favorite and least favorite evaluations. My favorite was “Funny like Sesame Street.” (Educational and entertaining!) I don’t think I would be allowed to publish the language in my least favorite evaluation here at Everyday Sociology. Students can get quite inventive with language, and the negative evaluations always stick in our heads more than the positive ones. It’s important to remember that we’re querying students at the most stressful point of the semester: the end!
At the same time, there is a growing consensus that student evaluations are quite problematic. There are two major issues. The first is that student learning and students’ evaluations of faculty are not related. That is, a student may have learned a great deal of the course material but still rate the instructor negatively. Conversely, a student might evaluate an instructor positively in part because the student wasn’t challenged enough in class.
Indeed, this creates a rather perverse incentive for a junior faculty member to lower expectations in class in order to raise student contentment. Evaluations more often measure how much students like the instructor, not how much students learn. (Here’s something that is somewhat counter-intuitive: students aren’t the best judges of whether or not they’ve learned course content!) Are the evaluators students or customers?
The second issue is even more troubling. Research consistently shows an overwhelming bias in favor of white and male faculty (like me), as compared with scholars of color and women. Student evaluations are strongly associated with the gender of the instructor. Students are, for example, more likely to punish female faculty than male faculty when a course is required. Students also give higher scores to women who are seen as “nurturing” and to men who are seen as “amusing.” (Remember my evaluation comparing me to Sesame Street? Doesn’t that just scream amusing? My evaluations often include comments about my humorous approach to teaching.)
Don’t believe me? Yes, Rate My Professor isn’t the same as official student evaluations, but history professor Ben Schmidt created an interactive chart of the words used to describe faculty, by gender, on Rate My Professor. Check it out. Male faculty are strikingly more likely to be described as “funny” than their female colleagues in every single discipline. Male faculty are “cooler” and “smarter” than women as well. Only female faculty are considered “shrill,” “bossy,” and “cold.” Go ahead and search for words to find which terms show stark gender-based differences and which do not.
Online courses are a good test of these suspicions. In one study, researchers found that, if students believed that their instructor was a man, they were more likely to report that their instructor was “prompt” than if they believed the instructor was a woman.
Because these evaluations shape how faculty are hired, promoted, and given raises, these social biases should be deeply troubling. They are troubling for us, and they should be troubling for you as students!
The good news is that there are other models of evaluation that might be more useful. Peer evaluation (of classes and of teaching materials) is one. Surveying past students, rather than students who are frantically studying for their finals, is another. Evaluating teaching based upon the use of teaching strategies that correlate with student success is yet another.
Another example is what we at UMass call the Midterm Assessment Program. This process brings in a moderator from our Institute for Teaching Excellence and Faculty Development, who gauges student comprehension of the course objectives and assesses the success of the course through small group workshops and a learning questionnaire. The moderator then compiles the data and gives the instructor a report. Unlike end-of-the-semester feedback, this process not only mitigates some of the bias by relying on a skilled, trained professional staff person; it also allows faculty to adjust their course and really use student feedback in real time.
The bad news is that these forms of assessment aren’t the norm. They take time, and they are expensive to implement compared with the standard end-of-semester survey.