Let’s talk about college rankings. Start by listening to Malcolm Gladwell’s podcast episode “Lord of the Rankings” and its sequel, “Project Dillard.” I learned about the ranking methodology U.S. News uses while studying independent educational consulting at UC Irvine (it’s also posted on usnews.com): 20% of the ranking is based on a peer assessment survey of academic “reputation”; 22% on retention and graduation rates; 7% on the percentage of faculty with terminal degrees.
In the peer assessment, the president of College X rates a peer institution, College Y, based on assumptions about that college’s quality, even if they’ve never attended, taught at, or set foot on the campus. Retention and graduation rates correlate with the socioeconomic status of the admitted students. And earning a terminal degree, often a doctorate, is not the same as being a great teacher; in fact, most PhDs are awarded for research, not for excellent teaching.
That’s the methodology behind rankings that hold outsized influence over American students and their parents. But beyond the methodology, the very concept of ranking is flawed. Consumer Reports doesn’t rank a toaster against a washing machine. Is rare sirloin steak the best dinner? Not for a vegetarian. You get the idea. Princeton cannot be the best college for every student. It’s preposterous to reduce colleges to a single linear list of bests.