College rankings—specifically the broad "best of" lists based on self-reported data—are more often a form of bragging rights than a reliable resource for prospective students. They have evolved into a game schools play against each other, with the prize being prestige.
Some college administrators have even faced criminal charges for trying to defraud the system to improve their institution's rank. People have gone to such extraordinary lengths because these lists carry measurable positive and negative consequences, particularly financial ones, for the schools.
Simply ranking among the top 25 schools can lead to a measurable increase in applications. Conversely, falling in rank, as Columbia University recently did on the U.S. News & World Report list, can cause applications, and the revenue they bring, to drop.
Rankings are wielded in vastly different ways by different stakeholders: as a profitable product for publishers, a powerful marketing tool for schools, and a guidebook of sorts for prospective students and families.
This would be a non-issue if all stakeholders were aligned on the primary purpose of rankings: an overview of the best schools, based on a comprehensive set of factors, reflecting an array of perspectives, and built on rigorously tested statistical methodologies.
But they are not, and prospective students ultimately lose because of it. Concepts like "best" or "most valuable" are exceedingly difficult to quantify when there is so much variety among schools and student experiences.
Rankings built on one-size-fits-all metrics, limited data, and flawed methodologies are not pursued by schools or assembled by publishers first and foremost as a resource, but rather as a product to sell. Former Stanford University President Gerhard Casper criticized the rankings' methodology as far back as the 1990s, and still, they persist.
However, some players, including Yale and Harvard, are refusing to participate in the game. As college rankings face a more high-profile reckoning, EDsmart explored why college rankings are, and always have been, a controversial practice.
College rankings began as a way to reach new markets
U.S. News & World Report has dominated the college ranking market since 1983. Its first ranking was a list of just 10 schools, derived by asking a group of university presidents across the country to name, in their opinion, the top schools in their field.
With the dawn of the internet, the market for higher education grew from regional to national and even international. Soon, higher education rankings became a way for schools to highlight their value to students from around the world.
Today, the annual U.S. News & World Report list captures 1,500 schools and ranks them based on self-reported data such as graduation rates, test scores, student debt, and class sizes.
U.S. News & World Report also uses a peer assessment survey as part of its ranking, in which the president, provost, and dean of admissions at each school must rate the quality of the academic programs at their peer schools, as well as their own.
One of the biggest criticisms of this process is the incentive for biased reporting: schools may report favorably on their own programs and rate their competitors unfairly to gain an advantage.
Generically "best" school rankings don't capture the nuance of the student experience
Many school rankings don't account for factors central to the student experience, like the cost of living, social fit, diversity among the student population, or success after graduation, which can look different to every student.
Some of the most prominent rankings don't even poll enrolled students to capture their perspectives. Additionally, rankings' methodologies often change with each new annual list, an often-criticized aspect of the U.S. News & World Report rankings, making it difficult to interpret the results and any changes from one year to the next.
Assembling "best" lists encourages homogeneity in student populations
Wealth plays a deeper role in college rankings than many might suspect. Applicants with the means to apply to, and choose among, any school often gravitate toward those at the top.
A 2017 report by the New York Times found that at nearly 40 of the most prestigious schools, those clustered in the top spots of college rankings, more students came from households in the top 1% of the income distribution than from the bottom 60%. Data shows that wealth, more than ability, often determines who attends these schools.
When rankings emphasize overall graduation rates as part of their "best" calculations, this can incentivize institutions to perpetuate these admissions practices. Admission decisions can shift to bolster rankings in other categories like test scores and the GPA of incoming students.
More schools are dropping out of generic rankings and more specific rankings have emerged
Despite the popularity of broad national rankings, many schools are opting out of the game. Several prominent law schools, including those at the historically high-ranking Yale, Harvard, and the University of California, Berkeley, announced in 2022 that they would no longer participate in U.S. News & World Report rankings, citing concerns over misleading classifications, data omissions, and admissions disincentives.
More focused rankings, like those for liberal arts colleges, are being published to highlight specific categories instead of the broader "best" schools rankings, but they may still fall short of capturing the nuance of every student's needs.
This story originally appeared on EDsmart and was produced and distributed in partnership with Stacker Studio.