
Do we need a league table of scholars produced by Silicon Valley?



One recent afternoon, one of us received an email from a business unknown to us, ScholarGPS. It offered congratulations on “exceptional scholarly performance” and on our being placed “in the top 0.05 per cent of all scholars worldwide”.

Our initial reaction was largely indifference: we dismissed (and deleted) it as predatory junk mail.

But as colleagues in other countries publicly shared that they had received the same message, we collectively decided to dig a little deeper.

What is this ranking?

ScholarGPS appears to be a system that utilises data mining and data scraping to produce a ranking of individual scholars based on metrics like productivity, quality, and impact. According to the company (we’re not linking to it; we are certain you, our readers, can find it), it profiles approximately 30 million scholars, drawing connections between citations, collaborations, and institutional affiliations to create a hierarchical system of academic performance.

As a recent addition to the ongoing marathon of metricisation, ScholarGPS exemplifies a pre-existing trend.

It aims to enhance scholar “visibility” by ranking individual academics on quantitative metrics. It is no secret that in academia, rankings, metrics, and data-driven assessments of research often dominate the evaluation of individual and institutional performance, and drive institutional chasing of “excellent” ratings (as in the Research Excellence Framework). While such systems are alleged to offer objective measures of success, they are frequently instead reductive and exploitative. We would point, for example, to cogent critiques of the very notion of “excellence”, as well as to inventories of the cost of individualism, and of a lack of solidarity, to the academic sector as a whole.

The prestige economy

The suggestion that people participate in individual league tables of scholarship has appeal in a context where academia functions as a prestige economy, one in which the number of publications and the visibility of research output are inextricably linked to perceived success, and so to professional advancement. ScholarGPS and similar platforms offer scholars not just metrics but validation and an (illusory) sense of security amidst growing fears of job precarity and the relentless pressures of academic performance. ScholarGPS’s three main metrics – productivity, quality, and impact – become currency within this prestige economy, enabling scholars to acquire badges of “excellence” with which to further promote themselves and (in theory) develop their careers.

Rankings can create the illusion of a meritocratic system where hard work and talent alone dictate success. But evidence suggests that these rankings do more to reinforce privilege than to promote genuine merit.

For example, if you look at the lists of top-ranked scholars on platforms like ScholarGPS, they are overwhelmingly affiliated with traditionally elite, well-funded institutions. When we looked at STEM and humanities examples in the UK and US, the top-ranked scholars were also primarily white, with a high percentage of men. It is plausible that what the rankings actually reflect is the degree of institutional support rather than individual merit.

Scholars from resource-rich universities enjoy better access to research funding, infrastructure, and opportunities, as well as the impact of years of systemic privilege. Rather than providing an objective measure of merit, we would suggest that ScholarGPS (and other) rankings showcase and potentially amplify existing inequalities in the academic system.

One size

Platforms like ScholarGPS not only reinforce privilege; they also marginalise non-traditional scholars. Independent scholars, freelance researchers, and those operating in non-traditional academic roles often produce valuable research, collaborate across disciplines, and contribute to public life in ways that are not easily captured by institutional metrics. We should not fall into the trap of mistaking that which is measurable (institutional academic performance) for that which is valuable.

The absence of scholars working outside the institutional bounds of academia from tools that prioritise productivity, quality, and impact further limits the already scant recognition of diverse and often unconventional paths of scholarship. Meritocracy, as represented by indices and ranking systems, then not only perpetuates a narrow view of scholarly worth but also reinforces the ivory tower walls that keep out those whose work does not fit into neatly quantified metrics. Yet their exclusion from being counted does not mean their contribution does not matter. Rather, it reveals the inequity of a system of reward and recognition that fails to account for activity outside the tower itself.

A challenge

The emergence of platforms like ScholarGPS and the increasing focus on individual metrics pose a challenge to a vision of academia that values collaboration, critical inquiry, and the open dissemination of knowledge. As rankings grow more granular and pervasive, they threaten to strengthen mechanisms that render independent scholars, and those in non-traditional roles, invisible and excluded. Such rankings also work to further precaritise scholars who are in traditional roles but are increasingly left to scrape together individual cases for their own security.

ScholarGPS and similar ranking platforms present an appealing (to some) but risky (to all) illusion of meritocracy in academia. Reducing academic worth to a series of impersonal metrics risks not only obscuring genuine scholarly contributions but also reinforcing the very inequities such platforms claim to address. In a sector that urgently needs diverse perspectives and collaborative efforts to solve pressing global challenges, academia’s obsession with rankings threatens to alienate and exclude voices from non-elite institutions and non-traditional backgrounds.

Prioritising quantifiable individual success over qualitatively meaningful contributions erodes the principles on which scholarship is built, and puts up barriers to a more inclusive academy built on solidarity and collective action. As a sector we must reject a narrow definition of success and instead embrace a holistic, community-driven vision of achievement.

And with regard to ScholarGPS: in an already highly measured academic landscape, one often monitored by the academy itself, is a “Silicon Valley start-up”, potentially aiming to extract profit from an underfunded sector, best placed to introduce yet another layer of league tables?


