I spoke at the Research Impact Academy Research Impact Summit (Twitter #RISummit) this week, a fabulous free annual event — make sure to check it out! As a follow-up on Twitter, I was asked by @BellaReichard about my comments on ‘shiny’ vs. ‘authentic’ case studies. I tried and failed to write a short Twitter response, so I’ve expanded here to better express what I mean. Thanks Bella for asking and giving me the impetus to outline my thoughts a little more.
Impact is, at its heart, making a difference through research. But within the sector, formal agendas (such as, but not restricted to, the REF) generally necessitate curated accounts (e.g. impact case studies) which tell the story of successes. These accounts carry financial or reputational weight — i.e. the stronger the story, the bigger the win — and are subsequently often also used as the basis of research to ‘understand how impact works’. The REF 2014 impact database has been used fairly extensively for that purpose, both within research and within university strategy development.
However, impact is a far more complex, engaged and risk-filled process than these accounts bear witness to. Let’s be frank: it’s in no institution’s interest to say ‘we could’ve had this impact, but XYZ went wrong’, so this is no criticism in that respect. However, the effect is to continuously present impact as big and ‘shiny’, absent of challenges, and collectively imply that anything falling short of these goliaths ‘isn’t impact’. It’s analogous to the publication bias against null findings, heightening the risk of us repeating mistakes and introducing considerable ethical implications into the research arena.
The relative absence of ‘authentic’ accounts of impact — those inclusive of barriers, challenges, misunderstandings, lost opportunities, etc. — compounds this. I’ve seen so many colleagues convinced of their inadequacy, convinced of the pointlessness of pursuing smaller effects, and convinced that a lack of impact is a failure on their part rather than a consequence of more contextual factors. So much of the sector memory on impact is about ‘what works’, and collectively muting ‘what doesn’t’ stalls our learning, dooms us to repeat misjudgements, and continues to allow individuals to mark themselves against an often unachievable benchmark.
Basically, impact isn’t always ‘big and shiny’, despite the wealth of accounts to the contrary, and we need to more fully (authentically) understand it to do it well.
So… if it’s not in the interest of institutions to shout about what goes wrong, and by extension it’s a risk for academics to ‘admit to their failures’, how can we do this? I can’t see it being realistic anytime soon for page-limited case studies to be infused with the inherent messiness of impact. And perhaps it serves little purpose if you consider case studies to be more like competition entries than comprehensive accounts. So instead, practically, we need to do several things to lift people’s understanding of what impact is (and isn’t), stop people being made to feel like failures, and strengthen our overall connection with society:
- Explore, collect and share experiences of ‘what doesn’t work’, valuing the insights these offer instead of fuelling perceptions of ‘failure’
- Ensure our research, practical and sector-wide discussions of impact take account of the incomplete nature of dominant accounts (i.e. recognise that shiny case studies only tell one part of the story)
- Listen to, and elevate, the voices of non-academics on how to connect research with their needs. We will continue to shiny-fy (now a word) impact if we only ever hear from academics.
We have such a wealth of collective learning. Let’s connect it 🙂