When MBA student Nitin Bishnoi decided he wanted to go to business school, there was no doubt where he would start his search: the rankings. Because – well, who wouldn’t? They have become so embedded within higher education that the idea of not checking the rankings before choosing a degree seems outlandish. “I don’t worry about their reliability,” says Bishnoi. “I’m sure they have done their research and due diligence.”
Bishnoi certainly isn’t alone in his view. We possibly place more importance on rankings in higher education than in any other area of life. But the more you think about it, the more outlandish that idea becomes. Choosing a degree isn’t like choosing a new phone, for instance, where you can directly compare things like RAM, megapixels, screen size and so on. How could you ever compare the quality of teaching, career advancement, classroom experience, or the basic life lessons that a degree teaches you?
But that doesn’t stop the various providers from trying to do just that. Rankings remain highly influential, a vital marketing tool for schools, and genuinely useful in many ways – but are also increasingly disliked by academics, mistrusted by deans, and seen as less reliable than ever by students.
So how did rankings become so important, what issues are they facing, and where do they go from here?
“As humans, we just love numbers, right?” says Michael Barbera, an adjunct lecturer and chief behavioural officer at Clicksuasion Labs. “Numbers are a great way to classify and get our attention. When we see that a publication released their list of the top 100 universities, we say: ‘Oh, cool, this college is here again – that means they must be a really good school.’ But we never ask what makes them a good school.”
Nat Smitobol is an admissions counsellor at the educational consultancy firm IvyWise. He believes that lack of critical appraisal goes deeper than rankings. “In our society, it’s so easy to just look at what’s out there, what’s available as information and then not be critical of that information itself. We just take it at face value: ‘Oh my God, that’s number one!’”
He adds that the importance of rankings is part of the “capitalist-centric mindset” that pervades the US and other countries. It has led to the idea that education is another commodity that can be compared and ranked – just like that new smartphone.
Universities might not want to showcase everything. They just want to show their best side
Nitin Bishnoi, MBA student
Schools, too, have their own issues with rankings. “They’re a subject that deans love to hate,” laughs Marion Debruyne, dean of Vlerick Business School in Belgium. “In a university ranking, number nine is supposed to be better than number 15, and number 19 is supposed to be better than number 20. But I think that is a bit of an illusion because, in my experience, the differences between those schools are so minor.”
Yet those wafer-thin margins can make a massive difference to universities. When the University of Sheffield fell out of a global top 100 last year, the BBC suggested it was one of the reasons for the school’s drop in applications. The situation even led to accusations that the university was “gaming the rankings” to retain its top 100 position. It’s not the only institution to face such accusations, and it certainly won’t be the last.
As for students like Bishnoi, looking at the rankings is still a non-negotiable. But overall trust in them appears to be falling. According to a 2024 survey by Kaplan, while an overwhelming majority (97%) said they remain an important factor in deciding where they will study, 55% believe that rankings have lost some of their prestige over the last few years.
So what are the specific issues facing rankings? Ron Duerksen is executive director of the International Masters Program for Managers and was a senior administrator at HEC Paris and McGill University. He believes that because so many aspects of higher education are so difficult to measure, rankings tend to focus on the easiest things to measure – like salary. And that can be exaggerated.
“Using the Financial Times ranking as an example, you have a certain percentage of respondents that you need from a class,” he explains. “Schools can educate the alumni that are filling out the survey on how the ranking works: if your salary is very high, it’ll contribute well to the ranking. If it’s low, it won’t contribute so well. And so you can end up with a pool of people that are in the higher echelons of satisfaction and salary that answer the survey.”
That’s certainly something that Bishnoi is aware of. “Universities might not want to showcase everything,” he notes. “They just want to show their best side.”
Other important metrics, such as acceptance rate, can be a little misleading. Smitobol says it’s a measure that naturally favours the bigger, wealthier, more prestigious schools. “If you’re just outside the top 50, there’s a net positive to breaking into that next rung of institutions. That’s going to raise your visibility and applicant pool. And the bigger your applicant pool, the more kids that you can not accept.”
This fixation on ranking-boosting metrics can also lead to what Debruyne terms “the lemming effect” – where all schools follow the same strategies to try to improve their ranking. “The danger is that there are things that are worthwhile to invest in that may not necessarily help you move up in the rankings, but you should be doing anyway.”
Despite these issues, rankings are still a vital tool for students and schools. They helped to take Bishnoi from his native India to Canada’s McGill University, where he’s currently studying for an MBA at the school’s Desautels Faculty of Management. And for a school like Vlerick, which already has a strong reputation domestically, they can form an important channel of communication with international students. “A great opportunity that rankings can provide is to make your school known,” says Debruyne, “because it’s a way of signalling to prospective candidates and the broader outside world.”
She explains that in an industry that lacks solid data, rankings can help schools benchmark themselves against other institutions. They can act as a kind of verifier; a confirmation that you’re performing well in certain areas. That can motivate you to do better.
Still, pretty much everyone seems to agree that rankings could be improved – and some providers are starting to address those concerns. For instance, the Positive Impact Rating groups schools into tiers rather than arranging them in an ordered list, which minimises the impact of falling out of the top 20, 50, or 100. It also means that small changes to the methodology won’t cause huge changes to the ranking, as they sometimes do now.
Every university has a slightly different mission. To put them all in one ranking when their mission may not be aligned with the ranking criteria is really unfortunate
Ron Duerksen, International Masters Program for Managers
Another possible improvement is a move towards more specialised rankings that measure a school’s excellence in a specific area, rather than grouping hundreds of metrics together in one list. Examples include the THE Impact Ranking, launched in 2019, or the QS Sustainability Ranking, launched in 2022.
Duerksen agrees that these specialised rankings could signal a path forward for the industry. “One of the biggest opportunities for rankings is to be more focused on what really matters to the school and to the students,” he says. “Every university has a slightly different mission. To put them all in one ranking when their mission may not be aligned with the ranking criteria is really unfortunate.”
Universities have their part to play, too. Barbera believes that they could do more to show their working when it comes to rankings. If students don’t tend to dig into the methodology themselves, schools must lay it out in front of them. “We’ve used the ranking data in a marketing campaign to say: this is what allows us to be number six, number 22 or number 27 in the list.”
But perhaps a cultural shift is what’s really required here: away from easy-to-measure metrics related to salary, and towards metrics that truly measure the impact of a degree on a student’s life. “Let’s look at the number of Fulbrights that a school produces, or the number of students that climb the socioeconomic background ladder,” Smitobol says. “Why is that not in a ranking somewhere?”
Education is too complex, too multi-faceted, and too life-changing to ever be summarised in a bunch of data points and indicators. But maybe that’s OK. Instead, perhaps we should all learn to temper our expectations when it comes to rankings – because they’re here to stay.
“If I say don’t look at the rankings, it doesn’t make people not look at the rankings,” says Smitobol. “They’re not going to disappear. But use them as one of the multiple snapshots that you’re using to make a decision on where you want to attend.”