On Impact

Recently, I was asked to write a commentary for the Journal of the American Medical Association. I read their advice for commentary authors and was interested to find an injunction against use of the word ‘impact’. My work as an NIHR ARC Director is all about achieving ‘impact’. Why would we be paid to do applied research if not to have impact?

The JAMA instructions did not really explain the reason for eschewing the word ‘impact’, other than to say it should be restricted to events like car crashes.

I have therefore reflected on what may be problematic about using ‘impact’ to describe the practical consequences of applied health research, and in particular about the requirement to demonstrate impact.

The first point that came to mind concerned null results. Much applied research produces null results. In that case, the status quo remains – nothing needs to change. That does not mean that a null result is without impact. It may still have impact in the sense that the counterfactual intervention does not come into use. Impact for sure. But it requires mental effort to discern this counterfactual impact; it is not immediately accessible to the imagination. It does not create the same feeling as discovering a new and successful treatment, like embolectomy for stroke. Consider the scenario in which the embolectomy trials had been null. The message would have been “carry on with current treatment.” That may feel like an attenuated form of impact.

My second reflection builds on the first – if positive results yield greater perceived impact than null results, then it follows that research is rewarded according to outcome rather than quality. In turn, this could create a perverse incentive to ‘talk up’ the positive effects of findings. To be sure, a null trial has impact in the sense that it supports the evidence-based practice movement. But the authors of a particular null study will find it hard to demonstrate impact.

A further thought concerns the relationship between a research project and ‘impact’. Insistence on impact might generate the illusion that change (or lack of change) turns on just one study. While there are examples of such impactful research – the CRASH-2 trial perhaps – most change results from multiple studies. Take embolectomy for thrombotic stroke – the change in practice (impact) resulted from three separate randomised trials, all published in the New England Journal of Medicine (not to mention all the enabling work that preceded the trials). The demand for impact that can be hypothecated to a particular piece of research is, by and large, unrealistic and, again, creates an incentive to over-claim. The truth, for most research, is that it contributes to impact but is not impactful in isolation.[1]

Lastly, insistence on impact prioritises certain types of research over others. Much applied health research is ‘upstream’, aiming to influence the determinants of effectiveness and safety rather than tackling them directly. Take safety research. Some focusses on particular threats to safety, such as prescribing error or falls in hospital. However, other research targets more distal determinants, for example reducing staff conflict or promoting continuing professional development.[2] The requirement to demonstrate impact incentivises the former type of research over the latter. Take, as another example, Marmot’s studies of wealth and health. The studies have been hugely influential, but there is no straight, observable line from research to impact. While the work was important, it is difficult to hypothecate any particular impact to the findings.

I do not know why impact ‘bugs’ the editors of JAMA, but my reflections make me feel that they could be on to something. Perhaps we valorise impact too much. Or perhaps we should continue with the word but lessen the requirement to demonstrate impact. With all that said, those who pay for research will always want to show that the money is well spent. Maybe the onus should not be on the researcher to demonstrate (talk up?) the impact of their research, but on funders or independent appraisers to examine return on investment in the round.

— Richard Lilford, ARC West Midlands Director


References:

  1. Schmidtke KA, Evison F, Grove A, Kudrna L, Tucker O, Metcalfe A, Bradbury AW, Bhangu A, Lilford R. Surgical implementation gap: an interrupted time series analysis with interviews examining the impact of surgical trials on surgical practice in England. BMJ Qual Saf. 2023; 32(6): 341-56.
  2. Aunger J & Turner A. Setting Initial Research Directions for the Core Research Theme of the Midlands Patient Safety Research Collaboration (PSRC). NIHR ARC West Midlands & Midlands PSRC News Blog. 2024; 6(2): 4-9.