Compilers of university league tables would usually find themselves in their busiest period of the annual cycle in June.
Data from the preceding academic year would have been purchased and passed through various algorithms, and, right about now, we would be negotiating with HE providers about the data points that we have deemed non-credible.
Instead of this hubbub of activity, which dies down as September publication dates approach and then picks up again, we compilers find ourselves at something of a loose end. Delays to data availability as a consequence of Data Futures have provided an unusual lull and an opportunity to reflect on what we do.
How to improve in league tables?
As a compiler of league tables, it is a question that I am sometimes cagey about answering. After all, the purpose of the guides we produce is to inform student choice, not to offer an insight into university performance. Having spent over 20 years in strategic planning departments, however, I am only too aware of how league table position is often interpreted as a shorthand verdict on areas of strength and weakness, improvement or decline. It’s not what they are for, but it’s how they are sometimes used, so let’s set about considering the question.
There are three main avenues for improving an institution’s league table position, and they increase progressively in timescale, power, permanence, overlap with other priorities and, bluntly, honesty.
The first avenue is “optimising” reporting. This operates to the fastest timescale because it affects how past activity is reported, at the last point in time at which a provider can influence how that activity will be perceived outside the organisation.
Beyond checking accuracy, precision and alignment, which is in the interests of all parties, this optimisation can deliver rapid but temporary gains. About 80 per cent of the effort that goes into producing a university ranking is dedicated to the detection of errors and deliberately distorted data, and then either negotiating focused amendments or altering the methodology to counteract any unfair advantage that is gained by a more widespread interpretation of reporting rules. Examples of this kind of activity include over-reporting staff in support roles as having an academic employment function, reporting all manner of expenditure – even recruitment agent fees! – as “academic services”, and artificially minimising student numbers.
The second avenue is addressing student outcomes, placing an unhealthy focus on the precise win conditions that metrics use as proxies for positive outcomes. Goodhart’s law is at play here, and league table compilers and other producers of sector metrics create the conditions in which pressure is applied through binary interpretation of outcomes.
Examples of this are internship schemes that focus on the precise period in which graduate occupations are surveyed, progression rules that prolong a struggling student’s period of engagement to their one-year anniversary, various efforts to satisfy students when they are about to complete the NSS, and grade inflation.
This avenue delivers over the medium term and can provide more stable benefits. The immediate costs of some schemes can outweigh the benefits, while others carry a less tangible cost in the form of devalued standards and mis-prioritisation.
A better way
The third avenue is by far the best. It delivers only on the slowest timescales but, when it does deliver, it hits multiple priorities at once. Creating and nurturing a culture of quality enhancement is not an objective dedicated solely to league table performance, and it extends beyond the timescale of most institutional strategies. This avenue focuses on fulfilling an institution’s mission and maintaining confidence that its accomplishments will be recognised, directly or indirectly, by league tables and other external entities that can affect a reputation.
Easier said than done. This statement basically amounts to ‘be brilliant and you’ll (at least) look great’. But there are practical activities that those with an interest in league tables can undertake to help make progress down this avenue, and one way of describing the more analytical side of this activity is institutional research.
Institutional research is not a term that is often used in the UK but it encompasses much of the analytical work that can guide an institution along this third avenue. I’m talking about things like:
- Evaluating the impact of innovative teaching practices
- Setting admissions policy to maximise a cohort’s chances of successful outcomes, and balance this against competing pressures
- Getting beneath top-level performance metrics to interpret and express positive outcomes at a more meaningful level
- Using predictive analytics to model student behaviour and anticipate risk of adverse outcomes while intervention is still possible
- Finding means of expressing and measuring educational gain
- Deploying a business intelligence strategy to ensure that long-term key performance indicators can be articulated as localised, manageable lead indicators and are complemented by supporting metrics that balance focus between priorities
- Studying patterns of student engagement and how these differ between diverse student groups, revealing barriers to progress and supporting evaluation of efforts to address these
- Anticipating the effects of potential portfolio developments and potential uptake of new qualification options
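To make one of these activities concrete, the predictive analytics item might look something like the following minimal sketch: a tiny logistic regression, trained on engagement measures for past cohorts, used to flag current students at risk of withdrawal while intervention is still possible. Everything here is hypothetical – the features (attendance, VLE logins, submission rate), the data, and the 0.5 cut-off are invented for illustration, and a real deployment would involve far more care over data quality, ethics and validation.

```python
# Hypothetical sketch of "predictive analytics to anticipate risk of adverse
# outcomes": a tiny logistic regression fitted by gradient descent on invented
# engagement features. All names, data and thresholds are illustrative only.
import math

# Each row: (attendance rate, VLE logins per week / 10, assessment submission
# rate) for a past student, with 1 = continued, 0 = withdrew. Data is made up.
history = [
    ((0.95, 0.8, 1.00), 1),
    ((0.90, 0.6, 0.90), 1),
    ((0.80, 0.5, 0.85), 1),
    ((0.85, 0.7, 0.95), 1),
    ((0.40, 0.2, 0.50), 0),
    ((0.30, 0.1, 0.40), 0),
    ((0.60, 0.3, 0.70), 0),
    ((0.50, 0.2, 0.60), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, lr=0.5, epochs=2000):
    """Fit logistic-regression weights and bias by simple gradient descent."""
    n = len(rows[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in rows:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss with respect to the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def continuation_probability(w, b, x):
    """Predicted probability that a student with engagement profile x continues."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train(history)
# Flag current students whose predicted continuation probability is low,
# while there is still time to intervene. The 0.5 cut-off is arbitrary.
at_risk = continuation_probability(w, b, (0.45, 0.2, 0.55)) < 0.5
engaged = continuation_probability(w, b, (0.92, 0.7, 0.95)) >= 0.5
```

The point of the sketch is the shape of the work, not the model: historical outcomes train a scoring rule, the rule is applied to current students, and the output feeds a support conversation rather than a league table.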
Institutional research is conducted both by academics and by members of professional services. In the UK there is a strong overlap with the analytical functions often found in strategic planning departments, but IR tends to be spread more diffusely across academic teams and professional services, especially those dedicated to student support, widening access, learning and teaching development, and quality assurance.
HEIR network and events
The UK and Ireland Higher Education Institutional Research (HEIR) Network was established in 2008 to bring together those colleagues with an interest in IR across HE in the UK and Ireland. The network fosters knowledge exchange, collaboration, and good practice sharing among its members, holding free online sessions throughout the year and an annual conference. A session last Friday, focused on measuring educational gain, was particularly relevant to the next round of TEF submissions but also connected back to the “value added” scores that are used in the Guardian University Guide.
This year’s conference will be taking place at Buckinghamshire New University on 12-13 September and the theme is equitable student progression: a simple term that actually connects to all the stages of the student journey that university league tables seek to break down into metrics and present as a single view on what good looks like.
As colleagues from a wide range of roles share their institutional research into the patterns of student behaviour and engagement that they have discerned, their interpretation of what this implies about the needs of different student groups, their evaluation of efforts to address these needs, and the ultimate effect on student outcomes, participants will gain insights into how IR can help guide progress along the third avenue of institutional progress, the one that coincides with league table success.
Matt Hiely-Rayner is Director of Intelligent Metrix, the company that produces rankings for the Guardian University Guide, and is co-chair of the HEIR network’s planning group.