My understanding of RateMyProfessors.com changed because of something my daughter told me. My position had been one of unadulterated contempt, as it is for just about everything that seeks to ground pedagogy in the Internet. For it goes without saying that RateMyProfessors has precisely the accuracy you'd expect from a database that gleefully overlooks sampling error and disregards even basic protocols of survey measurement, like the control group. It is structurally positioned to attract contributions that run almost exclusively to the passionate highs and lows, and so it tells you almost nothing about how students feel at that sustaining middle register where students learn and teachers teach. It positively endorses "easiness" as a category of approbation: as though wisdom did not come hard. And worse, RateMyProfessors has never afforded me a chili pepper. This hurts because it has given peppers to several colleagues I believe to be entirely undeserving.

What changed my thinking was my daughter's observation that students make more use of the RateMyProfessors database at the University of British Columbia, where she studies, than they do at the University of Alberta, where I teach. The reason, she told me, is that UBC makes course-evaluation results formally available to students only through a structure of teacher volunteerism. In practice, this means that there's no Internet dirt on anyone at UBC, for, needless to say, bad teachers tend not to volunteer their inadequacies, and good teachers are capable of foregrounding their triumphs selectively. At the U of A, however, students can log on to a Students' Union website and view the results of all course evaluations, which are mandatory and university-wide. Everyone knows that RateMyProfessors isn't accurate, my daughter told me, but at UBC "it's better than nothing." And so they read and write.

This helps me understand that, at least in small part, RateMyProfessors is a forum for the expression of the demotic. I remember my first sight of this particular common-voiced instrument: it was in my second year at Queen's University, in 1972, and course evaluations were just being proposed--by and through the Students' Union--as a way of providing information to students about the kinds of courses they might want to take and about the professors with whom they might want to take them. The proposal, needless to say, was controversial. The Registrar's Office feared that this kind of information might unsettle course enrolments. The professoriate thought the practice invasive and impertinent--then, as now, the classroom was understood largely as a private space for teaching, connected to other teaching spaces only through a general principle of systemic non-relation. Thrillingly, however, the Students' Union won the right to assess teaching and to share its findings with its student membership. An instance of seventies counter-cultural radicalism seemed, to our young eyes, genuinely to have been secured.

Within a few years, teaching evaluations were to become ubiquitous in North American universities, but only because students lost control of them. University administrators took them over because they provided an easy way of translating pedagogical practice into tabulated performance indicators: course evaluations became a modality for enabling surveillance from above, not knowledge from below. And so students, curiously, became epiphenomenal to the practice of teaching evaluation.
The questions asked on student evaluations moved away from the things prospective students of a course or teacher might want to know--"Was this course worth it? …