
New patient-centred outcome measures for venous thromboembolism

Some of you will know about my blood clot history. Potted version – a larger-than-should-be-fair deep vein thrombosis after having a baby in 2008, unsuccessful vein bypass surgery in 2010, a stupid and unexpected clot in 2016, a few other clots here and there until TA-DA I’m fixed by venous stents in 2018.

Whilst the whole clot thing is literally a complete pain, I’ve been enormously lucky to be able to use the experience in a positive way after years of it being pretty horrid. I’ve now been involved in various activities as a patient advocate for a while, including being part of an International Consortium for Health Outcomes Measurement (ICHOM) working group to develop a standard set of patient-centred outcome measures for Venous Thromboembolism (VTE).

The experience of this has been fascinating, partly as a patient, partly as a health psych and partly as an impact person. In a nutshell, the process has involved a series of meetings over a number of months, where clinical experts in VTE and a number of patients living with venous conditions review, assess and vote on what outcomes “matter most to people (≥16 years old) with Pulmonary Embolism, Deep Vein Thrombosis, and other related conditions”. The first thing that struck me throughout this was the equal weighting with which my voice as a patient was included in decision making. The ICHOM team and the clinicians involved really did try to ensure the patient voice was brought to the fore in all conversations. The second thing was the way the process effectively combined rigour with democracy, giving open platforms to discuss issues which were subsequently fed into group-wide votes. Thirdly, whilst I can’t pretend to have always understood the clinical terminology, care was taken to clarify in the meetings or offer additional discussions to explain things further. I know I have an advantage as a patient already being ‘in health and research’ as it were, but that notwithstanding, at no point did I feel my inclusion was tokenistic, rushed or glossed over. Quite the opposite.

The final ICHOM VTE outcome set has now been publicly launched and is free to access. It includes measures across four categories – patient-reported outcomes, long-term consequences of disease, complications, and treatment-related complications – covering both the experience of care and what it’s like to live with venous conditions. The site provides a series of support materials, such as reference guides, and the set is a new contribution to the existing collection of 40 (and growing) outcome measure sets for other conditions.

My hope is that this work will herald a new way of supporting patients with VTE, combining clinical excellence with patient experiences. Venous disorders can absolutely wipe quality of life from under your feet, but with a more values-led, comprehensive and standardised set of measures, over time we might just be able to make life a bit better. I’m enormously proud to have been part of this, and my huge and personal thanks go to the Chairs – F.A. (Erik) Klok (Leiden University Medical Center) and Stephen Black (King’s College London), the ICHOM project team and the working group members who worked so hard to get this right for VTE patients across the world.

ICHOM VTE Standard Set (image source: https://connect.ichom.org/patient-centered-outcome-measures/venous-thromboembolism/)

Shiny vs. authentic impact

I spoke at the Research Impact Academy’s Research Impact Summit (Twitter #RISummit) this week – a fabulous free annual event, make sure to check it out! As a follow-up on Twitter, I was asked by @BellaReichard about my comments on shiny vs. authentic case studies. I tried and failed to write a short Twitter response, so I’ve expanded here to better express what I mean. Thanks Bella for asking and giving me the impetus to outline my thoughts a little more.

Impact is, at its heart, making a difference through research. But within the sector, formal agendas (such as, but not restricted to, REF) generally necessitate curated accounts (eg. impact case studies) which tell the story of successes. These accounts carry financial or reputational weight – ie. the stronger the story, the bigger the win – and are subsequently often also used as the basis of research to ‘understand how impact works’. The REF 2014 impact database has been used fairly extensively for that purpose, both within research and within university strategy development.

However, impact is a far more complex, engaged and risk-filled process than these accounts bear witness to. Let’s be frank, it’s in no institution’s interest to say ‘we could’ve had this impact, but XYZ went wrong’, so this is no criticism in that respect. However, the effect is to continuously present impact as big and ‘shiny’, absent of challenges, and collectively to imply that anything falling short of these goliaths ‘isn’t impact’. It’s analogous to the publication bias against null findings, heightening the risk of us repeating mistakes and introducing considerable ethical implications into the research arena.

The relative absence of ‘authentic’ accounts of impact – those inclusive of barriers, challenges, misunderstandings, lost opportunities (etc) – compounds this. I’ve seen so many colleagues convinced of their inadequacy and of the pointlessness of pursuing smaller effects, certain that a lack of impact is a failure on their part rather than a consequence of more contextual factors. So much of the sector memory on impact is about ‘what works’, and collectively muting ‘what doesn’t’ stalls our learning, dooms us to repeat misjudgements, and continues to allow individuals to mark themselves against an often unachievable benchmark.

Basically, impact isn’t always ‘big and shiny’, despite the wealth of accounts to the contrary, and we need to more fully (authentically) understand it to do it well.

So….if it’s not in the interest of institutions to shout about what goes wrong, and by extension it’s a risk for academics to ‘admit to their failures’, how can we do this? I can’t see it being realistic anytime soon for page-limited case studies to capture the inherent messiness of impact. And perhaps it serves little purpose if you consider case studies to be more like competition entries than comprehensive accounts. So instead, practically, we need to do several things to lift people’s understanding of what impact is/isn’t, stop people being made to feel like a failure, and strengthen our overall connection with society:

  1. Explore, collect and share experiences of ‘what doesn’t work’, valuing the insights these offer instead of fuelling perceptions of ‘failure’
  2. Ensure our research, practical and sector wide discussions of impact take account of the incomplete nature of dominant accounts (ie. recognise shiny case studies only tell one part of the story)
  3. Listen to, and elevate the voices of non-academics about how to connect research with their needs. We will continue to shiny-fy (now a word) impact if we only ever hear from academics.

We have such a wealth of collective learning. Let’s connect it 🙂

Questions from DHP: some responses!

The questions below are a summary of queries raised in the DHP session, with some responses from me 🙂

Is theory building impact?

Impact is the provable benefit of research in the real world – ie. the effects felt by people, business, the economy, the environment (etc) which arise somehow from our research. The way we get there is varied, connected and can be immediate or take a long time. Applied research tends to be a more direct pathway, for example with interventions being trialled or used by people, seeing benefit pretty much straight away. For research at the more exploratory or basic end of the continuum, the path is invariably more indirect. This kind of research can be analogised as providing the ‘building blocks’ of knowledge for applied research, or as providing the first baton pass in the impact marathon. So is theory building impact? Not in the formal definition of impact, no. But is it a vital part of the puzzle? Absolutely yes.

What resources are available for supporting impact planning (and what does a good plan look like)?

There are so many resources now available for impact, a result of how the agenda has cemented and matured across the sector. I’ve put a range of resources on my blog post, but as a quick crib sheet:

A good impact plan is strategic (has a sense of goals and the methods to get there), is rooted in the needs of users (the ‘so what’ aspect), and strikes a balance between an achievable plan without being unreasonably ‘certain’ of what’s possible in a changing environment.

What’s the role of participatory research in impact?

Participatory research is so incredibly valuable for impact. It helps identify the base ‘problem’, shape the research process, identify any necessary ‘course corrections’ throughout the process, and ensure a meaningful line of sight to effects and ways to measure them. Not all research is participatory, so there should be no presumption of precisely what relationship is needed between academics and non-academics, but if your work needs to be ‘used’, it needs people at the heart of it. If you’re starting out, find academics who’ve published in the area and follow their work / social media / training events, look outwards to other countries which have centralised knowledge mobilisation and co-production (eg. The Co-produced Pathway to Impact Describes Knowledge Mobilization Processes), and draw on broader (non-research) good practice for engaging outside of academia (eg. plan, monitor and evaluate participatory methods).

Whose impact counts?

I’ve slightly paraphrased this question, as in its original form it related to tensions between stakeholders and academics in determining what the focus of an intervention should be. There isn’t a single simple answer to this, as there’s no single simple way to say whose voice counts most. In any situation there may be a myriad of goals people want to focus on, or think are important, and it’s likely to be a process of negotiation and discussion, particularly when you don’t hold all the cards. I’d say always centre the needs of the main beneficiary (eg. patient), and fairly and accurately determine what the intervention could reasonably achieve. It’s all well and good people wanting an intervention to change the world, but if in reality it can only raise awareness or help build self-efficacy, any impact goals outside of that may well need to be achieved by other means.

How might someone scale up a case study intervention? Should you revisit the ‘problem’, and ascertain if the problem is the same in other settings first?

Simply put, if you’re ‘relocating’ an intervention to another location (eg. another service, community, venue etc), you should sense-check whether the problem and conditions are still a match. This can be light touch, for example speaking with the service manager of the new location, or a heavier-duty needs assessment, as suits. Checking the ‘problem is still the problem’ means you can repeat the intervention with confidence it’s addressing the right thing. Similarly, by checking that the context is equally conducive, you can avoid unanticipated problems (eg. if your intervention requires gym access and you’ve run version 1 in the middle of a busy city, trying to repeat this as version 2 in a geographically spread rural location may not be as successful).

What top tips would you give for building impact for the next REF, and how do we best engage others who might not be aware of or interested in REF?

For so many of us, REF has been rough, and has left a legacy of a community conflating impact with assessment and hating it as a result. For the next REF we need to do a few things. Firstly, we need to heal from this one (my thoughts here!). Secondly, we need to set in place more supportive, literate and healthy institutional practices to build an inclusive environment. Thirdly, we need to recognise, and help everyone at all levels of the sector recognise, that ‘making a difference’ needs an investment in people, skills and connections with non-academia. Building engagement with impact needs to start with ‘making a difference’, and not with the agendas that oversimplify (or complicate!) what counts.

How do we best present qualitative evidence?

Qualitative evidence is so important – it shows the depth and the meaning of the change. This is almost always strongest when it a) uses the voice of the person who benefitted (eg. quotes), b) is articulated with phrasing indicating the nature/direction of the change, and c) is connected back to the ‘so what’. The more we, as a community, can convey meaningful change through qualitative data, the more normalised it will be.

A common issue with interventions (especially tech based) is low usage and high attrition, which may influence efficacy. Any tips?

Thankfully there’s already an awesome paper on this: Beyond Adoption: A New Framework for Theorizing and Evaluating Nonadoption, Abandonment, and Challenges to the Scale-Up, Spread, and Sustainability of Health and Care Technologies

How well has impact been received by academic and non-academic people? What types of challenges are you facing?

Let’s say it’s a mixed bag….! Some academics love impact but hate REF. Some hate impact and can’t see why it should be applied to their research. The most heart-breaking bit is when people feel they’re being told their research has no value unless it has impact, or that their impact ‘isn’t enough’. Sure, REF might have specific expectations and institutions might have to pick their ‘stars’, but that is fundamentally different to a statement of the value any specific piece of research has. Some research can’t ever reasonably be expected to deliver impact in the way it’s so often simply conceived. A minor soapbox moment – notwithstanding the amazing work of those whose work is showcased in case studies, too many people are feeling inadequate because of the myths and unchecked assumptions about impact, and that can’t be right. Non-academics have been unbelievably helpful, and the REF agenda has engineered an academic community more ‘primed’ to find better ways to connect with them. But it remains a challenge to do this without placing such a burden on them (providing evidence) that it sours relationships.


DHP session: Impact, Health Psychology and you.

This post accompanies a Division of Health Psychology BREATHE pre-conference workshop, June 2021

I have always felt immensely lucky to call myself a Health Psychologist. I mean, like legitimately, not some kind of niche fancy dress situation. Anyway, one of the things that has always kept me gravitationally pulled to health psychology (HP), even as my career has headed impact-wards, is the core premise of ‘making a difference’. Be it through research, practice, teaching or any aspect of the breadth of work HP covers, at its heart HP is about working out better, stronger and fairer ways to support people to make positive changes. I suspect many in the profession are in it for much the same reason.

I was delighted to be invited to deliver a workshop for the Division of Health Psychology 2021 annual conference. DHP is my academic home, but one I’ve probably wandered away from a little too long doing this impact thing. It’s been a scenic route, and it’s great to be back in the fold.

The aim of this session was threefold: to cover what impact is (and isn’t), to look at it through the lens of HP, and to help people find themselves in this thing called impact. Moreover, I suppose I wanted to break down some of the confusion, myths and frustrations around impact, and give people space and time to look at how it fits meaningfully, appropriately and authentically within their work.

My slides and a bunch of references and resources are below. Enjoy!

UPDATE: Some responses to questions raised in the session now here.

Slides


Next steps for REF? We need to repair the sector’s health first

This post accompanies a talk at the Westminster Forum Projects | Next steps for the REF – independence and positive research environments, delivering and measuring impact, and the future of open access event, 23/3/21. Slides available here


A while ago I was invited to speak at the Westminster Forum in a panel session entitled “Research environments in the REF – stimulating positive cultures and wellbeing, academic independence and interdisciplinary research”. When I first accepted the invitation we were pre-COVID, some time ahead of the REF submission, and the prospect of talking about ‘next steps’ seemed eminently sensible. However, with the rescheduled conference now clashing with the final throes of REF (no criticism, simply an artefact of REF date extensions + challenges of arranging a conference in the midst of a pandemic), I find my mindset has changed. Not because we shouldn’t think about next steps, but because if we don’t take stock of the damage across the sector first, we can never really reach a point of wellbeing.

Before I start, it’s important to note that we shouldn’t pretend that REF can be blamed for everything – that would be an immensely simplistic and scapegoating way to explain all the ills of the sector – but with an intentional focus here on REF and impact, it’s essential that we acknowledge the collateral damage felt by so many. REF is undoubtedly a double-edged sword, certainly for impact; it drove the need for jobs in this space (my own included) and legitimised those working in more ‘applied’ fields, yet simultaneously formalised and scrutinised impact to an arguably harmful level. Impact has been, to a very large extent, conflated with REF, and whilst the broader impact pilot light hasn’t gone out, impact strategies are now immeasurably flavoured by anxieties about ‘what counts’ and ‘what’s biggest’. We talk about impact as a whole, yet screen out the weaker chaff from the stronger wheat to maximise our chances of income. Whilst that seems an enormously sensible strategy for an institution under assessment, it takes no account of the damage and disenfranchisement of those not picked for the Case Study team. Impact is for everyone. Go do impact. No not that way, that’s not enough. Move aside for those doing better stuff. As much as we’d like to pretend we don’t, we still trade off impact star players for our cases with no recognition of how many others were put on the subs bench.

REF has introduced terms into our academic lexicon that we will struggle to unlearn. Outputs, impacts and people are appraised in terms of how ‘REFable’ they are. Evidence has – much to the chagrin of my international counterparts – become both a verb (‘can it be evidenced?’) and a noun (‘we need the evidence’). Yet its language legacy is not matched by sustained capacity or expertise. A 2020 survey led by ARMA showed that 58% of impact personnel were on short-term contracts, with 72% of contracts finishing at the end of REF. 72%. We grew an army of people to deliver REF impact, now or soon to be disbanded, with those left burned out and wondering how to re-energise a tired and distrusting sector.

I talk routinely about the need for impact literacy (the understanding of impact) and institutional health (the infrastructure needed to support healthy practices). However, these need to take a temporary backseat before we think about ‘next steps’, whilst we recognise how the sector is feeling. I’m aware that focusing on ‘feelings’ may appear to be a superficial and transient indulgence given sectoral pressures to secure ever more limited funds, but if we don’t genuinely take stock and understand why such committed people are so burnt out, so despondent, we will not only lose vital knowledge and skills, but also irrevocably stain the relationships between academia and society.

The sector is not well

Ahead of the talk I reached out to colleagues and was saddened, yet not at all surprised, by their level of despondency. Within impact, people who have fought so hard over the years to drive a positive impact culture are now exhausted and planning to leave their job or even the sector. Tired of the narrowing of impact to page length and font compliance. Exhausted by the discord in rhetoric between ‘impact matters’ and ‘only if it’s big’, and disillusioned by the tensions arising from conflicting rules and the disparity between weightings for impact and the underlying environment. It says it all that when I asked them for images to illustrate REF, I received pictures of burning buildings and frayed rope. I also reached out more widely to colleagues in the academic community* to invite comments on ‘next steps’ for REF, and was inundated with stories of demotivation, damage and despondency. There’s no way to do justice to the extent or depth of these issues, but they are perhaps best encapsulated by one comment that “the damage done perpetuates many harms and maintains toxic working practices”. Issues include:

  • Inequalities cemented and deepened; those with the capacity to work longer hours and travel, who are physically well and have no care responsibilities, are more able to meet REF-related progression criteria and thus ‘climb the ladder’. Those who can’t, including part-time academics, disproportionately struggle
  • Academic methodologists and non-research staff made invisible, their work pivotal for, but omitted from accounts of impact glory.
  • Anxieties related to rule interpretation, risks of accidental non-compliance, second guessing reviewer expectations, seeking to perfect cases without knowing what ‘perfect’ looks like, and marrying authenticity of accounts within rules and template space.
  • The making of an unrelenting engine; Excessive administrative burden, substantial time demands beyond standard workload, continual internal deadlines, multiple iterations of cases and review points, excessive process time and energy, all of which prevent full consideration of the consequences of decisions taken.
  • Disciplinary disprivilege; Despite recognition of subject-based differences in the relationship between research and impact, certain kinds of research/impact remain privileged by the exercise (eg. ICS template unsuitable for more iterative participatory or practice-based research)
  • Disillusionment; early optimism that social engagement would be valued (alongside outputs) swiftly replaced with despondency over requirements to instrumentalise research and commodify partnerships
  • Pausing rather than promoting research; Instructions to intentionally delay publication when there’s already ‘enough’ for REF and wait for the next cycle.
  • Bullying, harassment and damage to mental health, limited support (worsened by COVID). Stories of REF being used to “threaten, control, shame and otherwise exploit workers”, with people made to feel inadequate or a “failure” if their work isn’t included.
  • Contractual precarity and employment barriers; Short term contracts, teaching-only contracts, blocks on appointments or roles extended only so long as to complete a case study
  • Short termist REF framed approaches: institutional strategy scheduled in REF cycles, with research value conflated with its value within assessment
  • Overall: The efforts of trying to manage, negotiate and de-toxify these issues

Beyond the need to address these fundamental problems, colleagues also called for:

  • Practical necessities; clear and non-contradictory assessment guidance needed sooner, reduced scale of bureaucracy to learn
  • Extending focus; on team science, including those not on research contracts (techs etc)
  • Fuelling positive research culture not just assessing research environment
  • Embedding meaningful approaches to and measures of EDI
  • True recognition of interdisciplinarity
  • Support for early non-academic engagement without expectation of a specific return
  • Focus on systemic inequity, with resources focused on coaching and support
  • Recognition of the consequences of midstream funding cuts (eg. ODA projects)

I hear fairly routinely the phrase ‘keep going, nearly there’ at the moment (ie. ahead of the 31st March deadline, just over a week and counting), and have done for months. It positions REF as some kind of endurance race, with an inevitable sense of relief and doubtless a celebration event or two at the end. This motivational chant is no doubt meant well, and for many is an accurate homily, but it belies the deep scars and potentially undoable damage for many. Are we really upholding the principles of social good by wearing down the people who fuel its development? The academics whose knowledge underpins change. The impact specialists and research managers who sit alongside, intermediating between a drive for social change and compliance with assessment rules. Disregarding the real-world effects on colleagues tasked to make real-world impact? Is there genuinely a belief that assessment doesn’t change impact behaviour? Impact cannot just be positioned as academic duty, nor having ‘no impact’ considered some sort of defiance of sector expectations. We’ve traded too long off the motivation of people who want to make a difference, but the personal toll of doing that whilst meeting requirements for every other academic monolith is just too high.

The need to repair

It would of course be overly idealistic, and arguably impractical, to simply stop assessments, particularly as they do offer at least a scripted and largely transparent process to allocate public funds. It is similarly simplistic to blame university management when there are many examples of supportive and inclusive practice. There have always been philosophical debates about ‘what counts’ and what is ‘excellent’, particularly across disciplines, so a one-size-fits-all approach cannot fit everyone, nor am I advocating an oversimplified alternative. There are noises that the future won’t simply be REF mark 3, but will actually look to address some fundamental dilemmas about how we assess research. That is an immensely welcome prospect if true. But to what extent is there really going to be flex in a system ultimately reportable to Treasury? Reducing meaningful sector engagements into comparative and scorable scenarios, with results not upgradable for 7 years, is a continuing and troubling pressure on an already exhausted sector.

The equation that gets us to a healthier position must include new variables. Thus far there has been dangerously little consideration of the resource burden on universities and the toll on people, with rhetoric idyllically expectant that universities can just ‘cream off’ the best examples of impact. However, this misses several fundamental points.

Firstly, the rule book(s) for REF run to hundreds of pages, across multiple documents and woven into multiple FAQs. Even where universities can ‘cream off’ the best cases, the necessary checks and balances require people to develop an expert-level, legalesque memory of specific points of guidance, where to find it, and to what extent it is mandatory (vs. open to interpretation). By way of clear illustration of the complexity, Dr Anthony Atkin (University of Reading) recently mapped the multiple checkpoints needed to determine a single point of eligibility:

The Spider’s Web of REF impact rules. Dr Anthony Atkin (ARMA Protagonist Winter 2019)

Secondly, particularly for smaller universities, there is not simply a ‘pool’ of strong cases to draw from. If we need two cases, we have to create two cases, and often cannibalise resources from elsewhere to do so. Rather than cast for the biggest impact fish, we have to set in motion a full engine of activity to get membership of a suitable pool. The capacity burden on institutions where research – or departments – are much more newly instated is far in excess of that needed for longstanding, socially partnered and challenge-led initiatives already underway.

Thirdly, assessment, or more specifically the curated, sanitised and positivist cases created for submission, creates a false dichotomy between knowledge and application. Research does not simply catalyse into impact. There has been a tendency since 2014 to use the Impact Case Study database as an exemplar dataset, displaying countable effects on policy, society, the economy and more. But these cases obfuscate what doesn’t work, and how much effort is wasted or otherwise screened out of the final story. The sector becomes held against an unhealthy benchmark of achievement, in much the same way that photoshopped celebrities drive an unhealthy view of ‘what’s beautiful’.

For many of us whose roles extend beyond REF, the task ahead of us is immense. Patching the wounds of this REF, disconnecting the now conditioned response between meaningful impact and evidential compliance, and doing so as our own attitudes to impact are at best diluted. A post REF future must recognise the ghosts of REF past. ‘Next steps’ cannot presume either a blank canvas or a sector somehow warmed by their achievements thus far. We need impact literacy. We need institutional health. We need to remember what impact is truly about and mentally and practically unbind it from REF.

The sector is reeling in so many ways, and there’s no way to do justice to the issues in a single post. But I do know this….

We need a break. We need to learn from the past. And we need to repair.

*with special thanks to WIASN for offering such important and candid commentary.


Where the Pathway ends: taking impact off-road

The original version of this article was first published in Research Professional’s Funding Insight service on 6th February 2020

So that’s it. On 26 January the government confirmed its intention to cut impact sections from grant applications. RIP Pathways to Impact, then. As we move swiftly through the five stages of collective grief (although according to my Twitter feed many have rapidly bypassed denial and anger and jumped ecstatically to acceptance) we are left wondering what a less tokenistic and administratively lighter impact-afterlife looks like.

Since UK Research and Innovation’s announcement, we have had a series of comprehensive and thoughtful responses from, for example, James Wilsdon, Kieran Fenby-Hulse, Research Impact Canada, the London School of Economics, and the Institute for Development Studies. These and others have summarised many of the key reflections and questioned if impact is still alive (spoiler: yes). Notwithstanding the nuanced commentary of each, they broadly concur on three main things:

  1. Impact pathways were reductionist and flawed, but did offer a leverage point to plan engagement and routes for research implementation.
  2. The problem wasn’t just in Pathways to Impact, but in pursuing impact within a complex and unbalanced ecosystem.
  3. Removal of Pathways to Impact both reflects, and provides opportunities for, a more impact-mature sector, but we’re far from being fully impact-literate yet.

The last decade has witnessed a significant growth in impact knowledge, capacity and expertise. Impact now routinely forms a key part of research office function, and impact specialism is a far more established area of professional practice. While arguably in the UK this has much to do with Research Excellence Framework-related investment (and, frankly, REF-related anxieties), impact expertise is now diffused across the research system in specialist roles and support infrastructure. Research managers are more routinely involved in impact throughout the research lifecycle, but the experience of supporting impact on the ground suggests we should approach the post-Pathway brave new world with caution.

Thinking ahead

Pathways and REF Impact Case Studies have always been, in a conceptual but practically untidy way, opposite ends of an impact spectrum. Research implementation is a complicated business, and Pathways was often one of the few points of contact to support researchers’ thinking about implementation realities. If speculation is correct, Pathways to Impact will be replaced with a more integrated research-with-impact case for support, an increased emphasis on logic models, and raised expectations for impact to be embedded more strongly in institutional strategy.

If this reinforces the need for researchers and research institutions to review why, how, if and when research can contribute to socially meaningful goals—including challenges and risks—then we’ve stepped forward. However, if this presumes project-level planning is unnecessary, or magnifies existing system biases around institutional ‘high achievers’ or impact being a natural consequence of excellent research, then we really haven’t learnt much at all.

While UKRI’s decision seems to herald recognition of impact achievements thus far, the suggestion that the sector is now sufficiently impact-literate to lose Pathways without ramification is concerning. There are of course many examples of impact excellence and impact-related skills are much more prevalent than at the inception of Pathways. However, sparkly stories of impact achievement belie the patchwork nature of knowledge, engagement and support.

The need for healthy connections

Impact is, and has always been, more than a pathway document or a case study. It is, at its heart, a way to honour the university’s role within society. Universities have other ways of doing this; for example, at the University of Lincoln there is an ongoing drive to support our region as a Civic University, and to act as a “Permeable” university, breaking down barriers with wider society across all university functions.

Impact, however, has too often been unhealthily segmented away from core business, and the separation of impact within a separate Pathways section was indicative of this. Systemically we invest more in impact because we’re assessed more on it. We produce great stories of impact because the small stories don’t win financial rosettes. We partition the component parts of people’s roles into measurable chunks to make assessment practicable. And the sector’s memory for impact is undermined by the short-termism of professional impact roles and their REF-tied end dates.

The announcement does not and should not signify a downturn in the impact agenda, but instead should act as a catalyst for more comprehensive and less siloed approaches.

Next steps

The question really is what’s next? Will presumptions of sector maturity divert us from the development still needed? Will there be investment which drives impact in all its shapes and sizes (not just the shiny unicorn type)? Can we build an ecosystem which actually helps drive and ensure skilled judgment of meaningful impact? And in the midst of all these questions we need to remember that there are many other funders besides Research Councils for whom impact plans remain an important part of the application process.

Whether you’re overjoyed about no longer having to ‘pathway’ research impact, or concerned about the incoming impact-replacement service, March 2020 symbolises change. We have many years of experience, and extensive expertise to draw on to ensure that the promise of societal impact from research is fulfilled. Whatever the Pathways to Impact afterlife looks like, let’s get it right.

 

Chasing the ‘impact unicorn’ – myths and methods in demonstrating research benefit

An earlier version of this post appeared on the National Institute for Health Research (NIHR) blog

Whilst academics and clinicians alike are well aware of the need to ‘make research useful’, formal expectations around impact have pushed us to assume only large scale effects are ‘worthy’.

With continued pressure to secure funding and ‘do more with less’, assessment-driven thinking and impact measures such as the Research Excellence Framework 2014/2021 risk overshadowing the most basic of principles – that research of any type, scale or subject can do good in the world.

NIHR has always been anchored in improving patient care and wellbeing, and so investigators have a genuine opportunity to connect research with patient benefit. The challenge is how this can be done. How do we get back to basics in this pressured environment? In my experience as an academic, impact lead and former Association of Research Managers and Administrators (ARMA) impact champion, there are numerous unhelpful myths which derail impact. So let’s rebuild.

First the myths……

Myth 1: Impact is something big which happens at (or beyond) the end of a research project.

No. Impact is a change, irrespective of its size, nature or timing. Impact is the provable benefit of research in the real world. Of course we want the biggest and best effects we can get, but if we only gaze at a longer-term fantasy we’re blinkered to the smaller, stepwise changes that get us there. We need to reset our thinking to recognise the value of those necessary milestones (such as improved clinician knowledge and skills) which pave the way to something bigger (including improved accuracy of diagnosis and treatment). Unless we focus on realistic steps, we will forever chase an elusive impact unicorn.

Myth 2: Only applied research has impact

Compared to applied research, fundamental research undoubtedly requires several more steps in the translational chain before it reaches impact. However, even though it can take many years to mature, such research often starts an impact marathon with multiple baton passes: new knowledge may be cited by those in another discipline, which forms the basis for a new method, which is integrated into a new technique, which is trialled in practice and so on. The challenge (and opportunity) is to map those forward steps.

Myth 3: You can’t plan impact

It’s true impact cannot be templated. Analysis of REF case studies showed over 3,700 distinct impact pathways, proving there’s no one-size-fits-all. However, it isn’t true that it can’t be planned. Whilst there is always the possibility of unexpected impact, planning impact can help us to identify:

  • What effects are possible and most appropriate, when they may happen, and what measures or indicators might be used (eg. Patient Reported Outcome Measures (PROMs))
  • Stakeholders, including public and patient involvement
  • Risks to getting research into practice – what regulatory hurdles need to be overcome? Who might object to the work? How likely is it that the research will enter the care pathway?

Towards opportunities….

As the sector’s impact learning curve accelerates, two key opportunities for strengthening our impact are clear:

Opportunity 1: Building impact literacy

The opportunity for all those involved in health-related research is to become impact literate. That is, to understand what impact the research may have and for whom, how research can be mobilised to action, and what skills are needed to make this happen. More fundamentally, thinking about impact needs to start from ‘why’, understanding the meaning, purpose and ethics which should lead decisions about impact possibilities.

Since we first published on impact literacy in 2017, impact has been cemented further into research consciousness, and it’s clear that deeper understanding is needed at both the individual and institutional levels. Earlier this year we published a new impact literacy paper, detailing both individual and organisational dimensions, alongside how levels of impact literacy can be developed. The new model is shown in Figure 1 below.

Figure 1: Revised model of impact literacy (2019*, updated from 2017)

Opportunity 2: Building competencies

Alongside developing understanding we must develop skills. Impact doesn’t just happen, people make it happen. This process of translating research into tangible effects takes effort, and professional development is crucial for strengthening impact across the research community.

………………..

So let’s return to basics. Impact is a change, of whatever magnitude, type or flavour. It is the shorthand for ‘doing good from research’ and depends on us thinking about the chains, connections and people between research and effects. We can empower ourselves with the skills and understanding to judge how impact best works for our research, and develop fair, measured and proportionate expectations.

Ask yourself: how can you make impact fantasy into reality?

 

*Bayley, J and Phipps, D (2019). Extending the concept of research impact literacy: levels of literacy, institutional role and ethical considerations [version 1; peer review: 1 approved] Emerald Open Research, 1:14 (https://doi.org/10.12688/emeraldopenres.13140.1)

Notes from the BPS Northern Ireland branch conference

Thanks to all those who came to the impact literacy session at the BPS Northern Ireland conference (April 2019). References to everything discussed in the talk are below (selected slides to follow!).

IMPACT LITERACY AND SKILLS

Impact literacy workbook and Impact Institutional Healthcheck available at https://www.emeraldpublishing.com/resources/

Bayley, J.E. and Phipps, D. (2017). Building the concept of research impact literacy. Published online in Evidence & Policy. Available online: http://www.ingentaconnect.com/content/tpp/ep/pre-prints/content-ppevidpold1600027r2

Bayley, J.E, Phipps, D., Batac, M. and Stevens, E. (2017) Development and synthesis of a Knowledge Broker Competency Framework. Evidence and Policy. Available online https://doi.org/10.1332/174426417X14945838375124 (OA version: https://pure.coventry.ac.uk/ws/portalfiles/portal/7270403/PRE_REVIEW_Knowledge_Broker_competencies_for_repository_OPEN.pdf)

REF

REF 2014 impact case study database – http://impact.ref.ac.uk/CaseStudies/

REF 2021 guidelines – http://www.ref.ac.uk/publications/2018/draftguidanceonsubmissions201801.html

MODELS AND FRAMEWORKS

Buxton, M., & Hanney, S. (1996). How can payback from health services research be assessed? Journal of Health Services Research, 1(1), 35-43

Donovan, C. and Hanney, S. (2011). The ‘payback framework’ explained. Research Evaluation, 20(3), pp.181-183. Available at http://jonathanstray.com/papers/PaybackFramework.pdf

Phipps, D.J., Cummings, J., Pepler, D., Craig, W. and Cardinal, S. (2015). The Co-Produced Pathway to Impact describes Knowledge Mobilization Processes. J. Community Engagement and Scholarship. See http://jces.ua.edu/the-co-produced-pathway-to-impact-describes-knowledge-mobilization-processes/

Michie, S. Atkins, L, and West, R. (2014). The Behaviour Change Wheel: A Guide to Designing Interventions. London: Silverback Publishing. See www.behaviourchangewheel.com

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179-211. Further information available at http://people.umass.edu/aizen/tpb.diag.html

Bartholomew-Eldredge, L.K., Markham, C.M., Ruiter, R.A., Kok, G. and Parcel, G.S., 2016. Planning health promotion programs: an intervention mapping approach. John Wiley & Sons. Further information at https://interventionmapping.com/

Craig, P., Dieppe, P., Macintyre, S., Michie, S., Nazareth, I., & Petticrew, M. (2008). Developing and evaluating complex interventions: the new Medical Research Council guidance. British Medical Journal, 337, a1655 Available online https://mrc.ukri.org/documents/pdf/complex-interventions-guidance/ NB UPDATED GUIDANCE WILL BE OUT IN 2019

MY BLOGS

Avoiding imposter syndrome and impact

Chasing the impact unicorn

(Impact) life beyond REF

BROADER READING AND RESOURCES

Responsible metrics: www.responsiblemetrics.co.uk

Open Access via Unpaywall add on : unpaywall.org

CASRAI (information standards) https://casrai.org/

Analysing REF case studies: https://www.kcl.ac.uk/sspp/policy-institute/publications/Analysis-of-REF-impact.pdf

London School of Economics blog http://blogs.lse.ac.uk/impactofsocialsciences/

Evidence and Policy journal  https://policypress.co.uk/journals/evidence-and-policy

Research Evaluation journal  https://academic.oup.com/rev/

 


Chronic (sector) health and getting back our mojo

I’ve taken a step back in recent times from Twitter. Well, social media in general to be honest. It felt like I needed to, but I couldn’t at the time articulate why. I have, for the large part of late 2018 and early 2019, been fairly unwell, so that’s probably the main issue. The stents have worked, but the nerve pain is new, and that’s by definition more distracting than a well-practised pain with a 10-year heritage. Add to that a number of sick bugs from school (thanks kids) and basically I’m differently wonky with a hint of nausea. Anyway, the thing you become aware of with any chronic health issue is how much of you it dilutes – everything is effortful, laboured and takes a disproportionate toll on whatever you try to do.

With social media, I was – I realise – getting utterly worn out by the continual stories about bad practice within the sector. Not tired of people telling the stories (they absolutely need telling), but tired of us seemingly never getting past a sector-eats-itself situation. Stories abound about contract changes for REF / reluctance to employ early career researchers / systematic barriers to equality and diversity (etc etc) and the continued corrosion of research(er) wellbeing in the pursuit of rankings. In short, the sector is chronically unwell.

We seem to continue to find new and inventive ways to eat our young and marginalise those with less ranking ‘currency’. We’re increasingly legitimising universities as the sole dominion of research (category A anyone?) and continuing to deify metrics despite epiphanies about responsible practice. We have re-paradigmed research through our various ratings systems such that only dramatic step changes in knowledge (4* anyone?) are ordained at the altar of worthiness, and the peripheralisation of ‘smaller’ research, ‘lower level’ outputs and ‘limited effects’ is leaving so many in the sector feeling overwhelmed, overlooked and undervalued.

This week I heard news of significant redundancies in my previous institution. Whilst I don’t know the details (nor the strategy on which the decision is based), I do know that, as in so many other examples, good people are feeling betrayed. We all know there are no Elysian Fields in which everyone gets funded and impact never dies, but for many, Dante’s Inferno would be a more adequate metaphor. Where loyalty is penalised and territorialism rewarded. Where overwork is perversely incentivised and wellbeing reduced to tokenistic suggestions to ‘do more exercise’. Where stress and depression are considered unfortunate but unavoidable consequences, and where positive things happen only because good people keep other good people going. I maintain that we are enormously privileged in academia to have a voice and the opportunity to make a difference, but I’m hearing people ask more and more if it’s worth it. Everyone is fighting so hard – often to stand still – and whilst it’s to their absolute credit that they keep going, that isn’t sustainable strategically or psychologically.

My self imposed twitter detox has – in hindsight – reflected a sense of helplessness in addressing such pervasive problems. It’s perhaps no surprise that in parallel my professional attention has shifted significantly towards un/healthy practice in all its many guises and finding ways to rebalance things.  The sector voice is loud on the problems, and it’s time to step back into the ring and pick up the fight.

Ultimately this post is my weary, reflective and hopeful call for ‘better’. In whatever way that’s needed. Not shinier or bigger, but more decent and more meaningful across the piece. We all know the research landscape is complex, but we shouldn’t need to adopt a Hunger Games strategy  just to survive.

I’m professionally in a far healthier place, and hoping to re-find my twitter mojo soon, but for now my diluted energy is focused on trying to help salve a few things. The sector diagnosis might be chronic, but we’re not at terminal stage yet and that gives me enormous hope.

*Hugs it out*

J

Post Thrombotic Syndrome, Nice and me.

I’m sat in Nice airport having just spoken at an event where I was invited to talk about my (patient) experience of Post Thrombotic Syndrome. Basically, if you’ve ever heard me mention ‘my leg’, that’s shorthand for ‘veins-battered-by-multiple-DVTs-leaving-me-in-constant-pain-and-struggling-to-walk’. Otherwise known as PTS.

Last year I had venous stents fitted – a fairly new(ish) procedure where stents are inserted into the veins to open them up and help blood flow. Many of you kept me sane whilst I stayed in hospital for a week having a ‘re-do’, when one blocked and I had to have my blood basically turned to water and another stent added as a fix. Firstly, thank you (you know who you are), and secondly, several months on it’s clear the stents are doing their job. I have some annoying ongoing nerve pain, sure, but that will hopefully resolve when I actually get my backside back to the gym again and lose some Christmas-overdoing-it-with-chocolate weight.

Anyway, today I was part of a session about making ‘meaningful change’ (I wasn’t even there doing impact, but what do you know, it’s everywhere). I had the joy of speaking in the closing plenary with my nurse (the wonderful Vanessa), and meeting some fabulous Boston Scientific people (shout-out to the fabulous Jodie). It was a wonderful opportunity to stand in front of those working internationally to develop/sell technology (eg. my stents), and explain what difference it can make. Not in terms of sales figures, or patency rates, or broad statements about quality of life, but in actual real human terms. All I did was tell the story of my life since 2008 (abridged, of course, albeit they had to see some of my holiday photos), the limitations PTS brings and the opportunities venous stents create. It was an immensely easy story to tell, but the reaction (apparently there were tears!) told me how important it is never to lose sight of the patient. What technologies and procedures and interventions mean to them. We can throw around whatever metrics we want, but ultimately it comes down to being able to take your kids to the park and to say yes to opportunities in life rather than no.

I was offered the chance to do the talk via video link (rather than take 4 flights in two days) but there was a very simple reason I flew to Nice and spoke in person.

Because I could.

Thank you stents.