What we need is another list…

It’s pretty common knowledge that US News’ college and university rankings distort institutional behavior, and not always for the better.  Rankings, best-of lists, and the like, which necessarily measure specific things rather than the more nebulous (and subjective) thing we like to call “quality”, tend to fix people’s attention on those specific things, often to the detriment of overall quality.  A friend of mine in grad school called it “process adulterating practice.”

What then can we expect from this?  (Official Press Release here)

On Tuesday, the Princeton Review released a list of the top 300 undergraduate professors. The list appears in a new guidebook titled “The Best 300 Professors,” which is put out by Random House/Princeton Review. The book profiles the nation’s top 300 undergraduate professors, identified by students; the professors come from 122 colleges and universities.

As always, calling something the “best” doesn’t make it so.  What exactly are they measuring here?

Some 42,000 professors were rated by students using the ratemyprofessors.com site. From there, the Princeton Review reduced the top number to 1,000 and then followed up by surveying administrators and students at the professors’ schools to garner more detail.

Rate.  My.  Professors.  Dot.  Com.

That sound you just heard was the last bit of respect that I had for Princeton Review slipping out the window and ending up as an oozing puddle in the gutter.  That other sound you hear is every single one of the 122 colleges running their websites and printing presses as fast as they can to preen in the reflected glory of a published list.

Look — I suppose there may be some sort of crude wisdom in crowds.  If a professor has extremely good RMP ratings, they’re probably not a droning dinosaur or a nasty misanthrope.  And if they have 100 negative ratings over 7 years with nary a kind word said, you’re probably wise as an undergrad to avoid them.

And the fact that a friend of mine from college, Professor Miriam Liss, made the list means that the methodology can’t be entirely awful.  She’s brilliant and lovable and funny and a bag of chips.  She may even be one of the best professors in the country.

But if I wanted to know that, I don’t think I’d look at student evaluations.  I mean, are the professors who are the best at customer satisfaction really the best professors?  Or are they just the best mentors, or the most personable professors, or the most charming instructors, or something like that?  Isn’t there more to being a professor than getting good student reviews?  Shouldn’t it matter, for instance, which students like you?  Mightn’t you think that scholarship has something to do with who the best professors are?  What about political “juice” — the ability to get your students cushy internships, fellowships, and admissions?

I guess my point is just this: aren’t there already enough incentives for professors to destructively chase after high student evaluations?  Do we really need this?


  1. I asked my former students at these colleges about this list and got a very positive response. Those listed seem to show the qualities you would want in a good instructor in the classroom (according to my former students), with the addition that they go above and beyond outside the classroom as well. They give guest lectures, help out incoming freshmen, have flexible office hours, and participate in activities that show that they are passionate about the subject matter.

  2. Assuming the professor’s point of view is the only point of view worthy of consideration, no, we don’t really need this.

    But students who are trying to decide on a school or a class aren’t helped by a university that is indifferent to the choice they make, as long as it’s a choice from among those the university offers for its own purposes. From the university’s perspective, a professor who isn’t brilliant and isn’t lovable and isn’t funny and isn’t a bag of chips is just as good as a professor who’s all those things. From the student’s perspective? There’s a difference.

    So the university’s needs are served by keeping students in ignorance of which professors are good and which aren’t. Web sites such as ratemyprof.whatever help fill the information void that universities have chosen to leave open. As that vacuum is filled with information, there will be distortions of the previously comfortable situation, and were I a professor, especially a lousy professor, I might be upset by those changes. Good.

    If some good professors get unfairly caught in that change, is that valid cause to try to impede the dissemination of information about professors? I don’t think so. The ignorance of students, or parents, doesn’t serve their interests or the interests of society.

    • Michael E. Lopez says:

      Well, I can certainly agree with you about the flow of information.

      But if I totaled up how many napkins the restaurants in my city keep in reserve, and then released that as a list of the “best” restaurants, you might say I’d severely misunderstood what makes a “good” restaurant. I’d be identifying something real, and getting more information out is (within reasonable signal-to-noise limits) a good thing.

      But despite that, I’d have made a real mistake if I thought I’d identified the best restaurants.

      And that’s my point: go ahead and publish the data. Visit RMP if it helps you. (As I admitted, there is some crude wisdom in these things.) All I’m objecting to is the audacity of slapping the phrase “Best Professors” on this book.

      Call them the “Most Engaging” professors. Or the “Most Well-Liked” professors. Or even (it’s a stretch, but a reasonable one) the “Most Loved” professors.

      But none of those are synonyms for “best”.

      • Something else that might happen to your list of “best” restaurants, as determined by reserves of napkins, is that it would make an excellent dust collector.

        The value of your list isn’t determined by you but by the consumers of the information. If your list presents valueless information, it’s not going to be terribly popular among people seeking valuable information. That means my list, which counts the number of napkins used, is going to be more popular, since it’ll be an indication of which restaurants are vigorously patronized.

        Similarly, if the information in that “Best Professors” book is lousy, follow-ons will have a tough time unless they come up with some means of distinguishing themselves as presenting better information, which is the reason people would buy such a book to begin with.

        See how that works, Michael? People looking for good information will accept information of uncertain value in preference to ignorance, and, over time, good information will drive out bad.

        But let’s not stray too far from the fact that US News has a market for its information, regardless of its value, because universities benefit from education-seekers groping around in ignorance. The bad actor here is higher education and, comparatively, US News is the hero.

  3. Cranberry says:

    I think it’s a great idea, if used in conjunction with other available data points.

    I find it interesting to contrast US News & World Report’s rankings with the Washington Monthly rankings. USNWR’s rankings stem, in part, from surveys of college professors. Are those respondents the best judges of colleges they may never have visited? I don’t think so.

    Caveat Emptor, and all that. It’d be great to hear the opinions of professors who’ve worked at various colleges.