Publishing what works on access and participation



Within the latest Access and Participation Plan (APP) guidance, the Office for Students (OfS) expects higher education providers to:

publish the results of its evaluation, both of what works and what does not work, to build the scale and quality of the evidence base for access and participation activity across the sector.

Regulatory Advice 6 lists several channels through which providers can share their evaluation findings, including academic journals, conference presentations and learning lunches.

This provides a pleasing (and perhaps surprising) level of flexibility, allowing providers to choose publication options which are best suited to the purposes of sharing the evaluation.

Sharing findings

But evaluators might legitimately ask what these options mean in practice, and when one channel would be more appropriate than another. These judgements need to be underpinned by an assessment of how publications can make meaningful contributions to the “what works” evidence base and inform practice, rather than being a tick-box exercise for APPs. Institutions also need to consider what resource commitments they should make to deliver this publishing, given the diversity of our sector in size, scale and capacity.

A subgroup of the Russell Group Widening Participation Evaluation Forum has worked together to create some guidance in this space. The document, available on the Evaluation Collective website, details a range of evaluation publication types and platforms that are available, as well as some key considerations for each. This is intended to be a practical resource that supports practitioners and evaluators to reflect on the purpose and audiences of their evaluation, and to identify effective and appropriate ways to raise awareness of their findings.

However evaluation findings are presented, we should expect a significant increase in the number of publicly accessible outputs from across the sector over the course of the next APP cycle. This raises several questions about what lies ahead as the sector mobilises to build the evidence base for the efficacy of equality of opportunity and student success initiatives.

Why should we publish our evaluations?

The opening quote highlights the expected main benefit of developing a rich evidence base on access and participation: knowledge of what works. Many professionals in the sector will be familiar with the struggle to find compelling, published evaluation linking access and participation activities to student outcomes. Increasing the amount and accessibility of good-quality evaluation could help us to make evidence-based decisions about which activities to offer to ensure that our students have the best possible experiences of higher education. The steer from OfS to publish more represents a catalyst for meaningful knowledge exchange and learning – if evaluation is done, written and disseminated well.

Publishing in a variety of formats will help to demystify the evaluation process for those who are new to the game. Academic research and evaluation publications are often full of complicated and technical language (or rather, they are replete with convoluted academic discourse), and the nitty-gritty of the necessary steps is often obscured by assumptions about readers’ research knowledge. Evaluation requires critical reflection on what has been done, how it has been done and why.

“Publishing evaluation” should therefore include reflective process pieces. We want to hear how you’ve grappled with different evaluation methods, how your interim findings are shaping your projects, and the unexpected knowledge and lessons you’ve learned along the way. It is not just evaluation outcomes that we can learn from. Similarly, sharing findings in accessible and engaging formats (as well as reports and academic papers) will increase the potential audiences of our evaluation, and our capacity, as a sector, to improve.

When we share our experiences, we create opportunities to connect with our peers, to find common ground, interests and expertise and to receive feedback. Publishing more evaluation could enable valuable critical friendships and institutional collaborations which are often lacking in this space.

Publishing evaluation is also an exciting opportunity for widening participation champions to celebrate their successes, to elevate their students’ voices by sharing their stories with a captive audience and to gain buy-in from a range of stakeholders by evidencing impact. Developing valuable, transferable skills in undertaking and writing up quality evaluation will support those preparing for professional development opportunities and will further improve and professionalise access and participation practice.

Who assesses what’s good and fit for practice and purpose?

Although we welcome the regulatory emphasis on making evaluation outcomes public, there are also several challenges we need to consider.

Producing quality, engaging evaluation outputs at scale will be challenging for HE providers given the complexities of gaining ethical approval, the additional workload that publication will impose on practitioners delivering activity, and the skills required to communicate findings effectively. Concerns around reputational risks and unwillingness to reveal “trade secrets” in competitive, niche fields may inhibit sharing of evaluation, leaving us with a rather incomplete picture of what does or might work.

Once we have navigated the complexities of actually doing our evaluation, written up our findings to a high standard and explored the various publishing platforms, what happens next? How do we ensure the myriad puzzle pieces that will emerge fit together as a coherent, robust evidence base?

If OfS maintains its requirement for the sector to publish all, or at least most, evaluation outputs, interesting findings – be they positive or negative – may be drowned out in a deluge of less interesting or less robust ones. The resulting tidal wave of reports on disparate topics, no matter how well written, seems destined to end up floating in the ether. Even if we recruited a handful of enthusiastic individuals to collate these outputs, without a coordinated approach at the planning stage of evaluation we can expect a kaleidoscope of methodologies, approaches and institutional contexts, presented with varying degrees of transparency, to emerge. While this pluralism will be valuable in some sense – creating a space to compare what works in “what works” and avoid groupthink – it will also make it harder to identify common ground, unless we find good ways of structuring ongoing debate and synthesis.

If developing sector knowledge is the primary purpose of publishing evaluation, we need sector-wide structures to coordinate and ensure effective knowledge sharing, matched by an equal effort from institutions to engage with those structures (which may, again, be more challenging for smaller providers). We are intrigued by OfS’ aim to create a repository where providers can submit their evaluation findings. However, it is not yet clear what the scope of the repository will be, nor the timeline for its development and sector-wide implementation.

Increasing cross-institutional collaboration would also address some of these challenges, improving efficiency, minimising duplication of effort and ensuring coordination. Building on the work of TASO and SEER in this area, we need to find additional ways of bringing universities together to evaluate similar activities. We also need to increase the capacity of third-party organisations that can play a role in this coordination and add value through their own expertise while remaining critical.

In the absence of such structures, we see a real risk that results become technically public but have no impact on practice. This is particularly the case because OfS now expects providers to commit firmly to publication plans and dates in their APPs – making failure to meet these commitments, in our understanding, a reportable event (see Regulatory Notice 1). This approach drives providers towards more cautious promises, for example by incentivising easy-to-use, fully controllable channels such as our own websites, given that many alternatives, such as conferences or academic journals, bring additional uncertainty: outputs may spend a long time in peer review and may ultimately be rejected.

Common ground and promoting diversity

We hope that our guidance encourages providers to choose not only the safest, but the most impactful methods for disseminating their evaluation findings. The avenues explored to host evaluation outputs, and the associated communications activity, will determine how value is created through evaluation work across the sector. The challenge is to create a space in which multiple and potentially contradictory findings can coexist, and where discussions about knowledge creation and evaluation practices can thrive. Equally important is ensuring that there is space for diversity of evaluation approaches, allowing us to highlight what is most insightful without discarding valuable findings due to simplistic “gold-standard” thinking. The deep-dive qualitative case studies and small-N methodologies more likely to be achievable for smaller providers should not be devalued in favour of Type 3 (causal) evidence, which is largely unattainable in the smaller context.

Our publishing guidance compiles a diverse list of publication types and platforms available to those in the sector seeking to publish their evaluations. As we begin to publish more evaluation across the sector, we can support each other by making conscious choices to share findings which others can usefully learn from, using the outputs and channels which are most appropriate for engaging our audiences.

The full guidance can be downloaded here. It is not intended to be an exhaustive list of all outputs and channels, but please do get in touch if we have missed anything important. We welcome feedback and further examples of good practice which could be included, and will update the guidance periodically.

The authors would like to thank Samantha Child and Emma Thomas from Applied Inspiration for providing feedback on this article and the detailed guidance, and for raising concerns relevant to small and specialist providers.


