What we need is another list…

It’s pretty common knowledge that US News’ college and university rankings distort institutional behavior, and not always for the better. Rankings, best-of lists, and the like necessarily measure specific things rather than the more nebulous (and subjective) thing we like to call “quality,” and so they tend to fix people’s attention on those specific things, often to the detriment of overall quality. A friend of mine in grad school called it “process adulterating practice.”

What, then, can we expect from this? (Official Press Release here)

On Tuesday, the Princeton Review released a list of the top 300 undergraduate professors. The list appears in a new guidebook titled “The Best 300 Professors,” put out by Random House/Princeton Review. The book profiles the nation’s top 300 undergraduate professors as identified by students; the professors ended up coming from 122 colleges and universities.

As always, calling something the “best” doesn’t make it so.  What exactly are they measuring here?

Some 42,000 professors were rated by students using the ratemyprofessors.com site. From there, the Princeton Review reduced the top number to 1,000 and then followed up by surveying administrators and students at the professors’ schools to garner more detail.

Rate.  My.  Professors.  Dot.  Com.

That sound you just heard was the last bit of respect I had for the Princeton Review slipping out the window and ending up as an oozing puddle in the gutter. That other sound you hear is every single one of the 122 colleges running their websites and printing presses as fast as they can to preen in the reflected glory of a published list.

Look, I suppose there may be some sort of crude wisdom of crowds at work here. If a professor has extremely good RMP ratings, they’re probably not a droning dinosaur or a nasty misanthrope. And if they have 100 negative ratings over 7 years with nary a kind word said, you’re probably wise as an undergrad to avoid them.

And the fact that a friend of mine from college, Professor Miriam Liss, made the list means that the methodology can’t be entirely awful. She’s brilliant and loveable and funny and a bag of chips. She may even be one of the best professors in the country.

But if I wanted to know that, I don’t think I’d look at student evaluations.  I mean, are the professors who are the best at customer satisfaction really the best professors?  Or are they just the best mentors, or the most personable professors, or the most charming instructors, or something like that?  Isn’t there more to being a professor than getting good student reviews?  Shouldn’t it matter, for instance, which students like you?  Mightn’t you think that scholarship has something to do with who the best professors are?  What about political “juice” — the ability to get your students cushy internships, fellowships, and admissions?

I guess my point is just this: aren’t there already enough incentives for professors to destructively chase after high student evaluations?  Do we really need this?