Measuring research impact


World-renowned public health expert and advocate for increasing the use of knowledge in healthcare policy and practice, Professor Trisha Greenhalgh OBE is the guest speaker at our public lecture at the University of Sydney on 19 March.

Professor Greenhalgh will be joined by an Australian leader in measuring research impact, Professor Anne Kelso, CEO of the NHMRC.

In this Q&A we ask Professor Greenhalgh about this important nexus of research and practice.

We are hosting this special event with The Sax Institute, The Australian Prevention Partnership Centre and the Partnership Centre for Health System Sustainability (due to overwhelming response the event is full, but like us on Facebook or follow us on Twitter where we will feature the live webinar registration link).

Would you give us a sneak peek into the themes of your guest lecture in Australia and why these matter to you now?
Academics are under increasing pressure to achieve “impact” – though this term means different things to different people. I’m going to look at how different ways of defining research impact drive different activities in higher education. In a nutshell, if we define impact in terms of publishing in high-status academic journals, that’s all most academics will do.

But we could define impact more broadly – in terms of research that changes policy, or, indeed, affirms existing policy by strengthening the underpinning evidence base, or research that contributes to a better society such as promoting public debate, enabling better use of societal resources or contributing to sustainability of the planet. Some universities are embracing a bold and progressive definition of impact, but this may imply a radical realignment of the mission and values of higher education.

Is the UK Research Excellence Framework a model we could adopt or learn from in Australia for measuring research impact in healthcare?
Yes potentially, though I understand that the initial REF impact work drew on work from Australia! Any major national initiative has to be fit for purpose and has an element of path-dependency, so the REF approach will not graft seamlessly onto the Australian setting. Nevertheless, there is much to learn from the story of the REF.

Broadly speaking, what are some of the strengths and weaknesses of this model in measuring research impact in healthcare?
A huge strength is that there is both a “carrot” and a “stick” to incentivise impact-related activity by academics. In the UK, researchers are now following through on their research activity to try to drive their work into practice and policy. They are writing lay versions of their findings in plain English, for example in the online journal ‘The Conversation’ (which has both Australian and UK editions), spending time talking to policymakers and being ‘in the room’ when key decisions are made, and adapting their recommendations so that they are workable in clinical practice.

One weakness is that short-term, easily measurable impacts are much easier to document, so some of the long-game work may sometimes get overlooked. An obvious example is that it’s easier to change a paragraph in a guideline than ensure that the guideline is actually followed.

In your experience, how can we reduce the time it takes for research to be implemented in practice?
That’s a question with no simple or universal answer. I’ve written a whole book on this topic, ‘How to Implement Evidence-Based Healthcare’ (Wiley 2018). In the book, I cover a different unit of analysis in each chapter (for example, Evidence, People, Teams, Organisations, Technologies etc.) and suggest how we might reduce the evidence-practice gap by attention to each dimension.

Can you give us a recent example of a successful translation of knowledge into practice within the UK healthcare system?
Yes! I recently developed an evidence-based, multi-level framework for supporting the implementation of technology-based change programmes in health and social care. It’s called NASSS (non-adoption, abandonment and challenges to scale-up, spread and sustainability).

The framework seeks to explain failed technology programmes – but also reduce the failure rate of new ones! I was recently contacted by NHS England, the implementation arm for national health policy, which is now working with me to use the NASSS framework to inform the procurement and contracting process for new technology-based programmes in the NHS.

What, in your opinion, are some of the perennial blockers to the translation of knowledge into practice and policy? And what have we learned about how to overcome these?
Let me use the previous example to illustrate this. I think the successful translation of the NASSS framework has begun to happen for three reasons. First, we had policymakers on the steering group for the research that produced the NASSS framework. Second, we tested the framework on numerous NHS projects and programmes to ensure it had real-world authenticity. And third, we allocated time and resources to the work of translating our academic findings into something the policymakers could actually use.

I’m now working with a senior contact at NHS England to generate a self-assessment questionnaire, based on NASSS, to give to commercial companies who approach the NHS with a technology they believe is going to “save lives” or “save money”. The ‘blocks’ we’ve hopefully overcome, then, are a) academics working in isolation from policymakers until the project is nearly finished (always a bad idea); b) ignoring real-world issues in the quest for some kind of ‘pure’ scientific finding (often a bad idea); and c) assuming that research, once generated, will ‘flow’ into practice (it won’t).

Last year you published ‘How to Implement Evidence-Based Healthcare’ – who should be reading this book and what should they expect to get from it?
The book is intended for a broad audience. I’ve tried to reach out to the young and mid-career academic, the clinician on the ward, the manager at the front line, the executive in the boardroom, the technology designer in the start-up, the policymaker who is struggling with budgets, deadlines and competing demands – and the patient/citizen/taxpayer. Different chapters will be relevant for different people and groups.

It’s over 20 years since you wrote the bestseller ‘How to Read a Paper’ – when it comes to translating knowledge into practice and policy within a healthcare system what have we learned since then?
Yes, that book goes on and on, doesn’t it? I wrote the first edition when my younger son was a toddler and now he’s a junior doctor using the book to revise for postgraduate exams! Back in the mid-1990s when ‘How to Read a Paper’ first came out, there was a prevailing assumption that knowledge ‘flowed’ from the research community into the world of practice and policy. All the evidence accumulated since suggests that it doesn’t.

The most effective research – in my opinion – is research that is co-developed and also collaboratively delivered by researchers working WITH policymakers, clinicians, managers and patients/citizens. This may not be universally true (there’s still a major role for lab-based basic science of course) but for health services research I’m keen on the co-creation model. I think there is beginning to be a shift, for example, from ‘classical’ randomised controlled trials (in which everything is designed to generate a generalisable effect size) to pragmatic RCTs (in which real-world messiness is taken into account). The purists will always favour the former but the policymakers are much more interested in the latter!