Our focus in this paper is to illustrate the perceptions and experiences of public health researchers in demonstrating and delivering on non-academic impact from their work. This is captured in section (ii) of our results. First, to provide some background and overview of the public health research portfolio at the NIHR, in section (i) we describe the general trends observed from the full dataset we received of projects that were reported on Researchfish and were tagged as public health projects by the NIHR team.
Mapping the funding mechanisms and impacts for public health research
In this section we describe the general trends observed by mapping the data from the projects we received (those that had data available within the Researchfish reporting dataset).
A variety of funding mechanisms support public health research
As noted above, we identified 1386 projects funded between 2000 and 2016 as being public health focussed for our analysis. We mapped the funding streams for each of these projects (Fig. 1) and note that many funding mechanisms within the NIHR support public health research activity. In addition to the NIHR School for Public Health Research and the NIHR Public Health Research Programme, 89% of the 1386 projects were funded via other programmes, demonstrating the diversity of funding streams that support public health-related research. A large proportion of these were funded through the Health Services and Delivery Research (HS&DR) Programme. It is worth noting that several of the large funding streams (e.g. the NIHR HTA programme) support a significant number of projects classified as public health (Fig. 1). Although this may be due, in part, to the length of time that some of the programmes have been running, it also captures the diversity of ‘public health’ and the range of types of research that can contribute to public health outcomes.
A variety of impacts arise from public health research
We also mapped the types of impacts reported by researchers in the dataset we received. Within the Researchfish online interface, all individual entries are labelled as ‘outputs’ of research – including academic and non-academic outputs and any wider outcomes that may be considered ‘impact’ – and are entered by researchers themselves. Looking at these self-reported data – a subset of 857 studies matched within the NIHR Public Health Overview portfolio – we see a diverse range of research activities and outputs (Fig. 2). In line with previous analyses of data captured via the Researchfish platform, we found that investigators reported academic publications more frequently than any other output category. After publications (not included in the table), ‘engagement activities’ was the most commonly reported item in Researchfish (3383 instances), followed by ‘collaborations’ (1692). However, we also note a smaller but still substantial number of impacts on patients (724) and on policy and practice (658). These figures rely on data self-reported by researchers, so the emphasis given to particular types of research activity may have determined what was reported within Researchfish. Each researcher may also have interpreted their activity differently, recording an entry under ‘engagement activity’ that someone else may have considered ‘policy and practice’ or ‘collaborations’. Through our qualitative interviews, the findings of which are described in section (ii), we were keen to identify what activities such as ‘engagement activities’ or ‘collaborations’ entailed.
Researchers’ perspectives on pathways and mechanisms to impact
In this section we report on the findings from the qualitative interviews, focussing on researchers’ experiences in producing and articulating the impact from their work.
Different interpretations of the meaning of impact from public health research
The diversity we observed in the quantitative data was reflected in our in-depth case studies. We found that researchers had different interpretations of what public health could include as a research discipline, and several of those we contacted questioned whether their research should be classified as public health at all.
One of the main themes arising from our interviews was the diversity of interpretations of how evidence created from public health research can lead to impacts in national policy. Interviewees felt that the diverse forms of evidence produced by public health research do not always correspond with those required by policy makers to effect change. For larger, drug-based interventions, randomized controlled trial methodologies may still generate the most appropriate form of evidence, but this is not the case for many interventions in public health, especially those related to lifestyle factors. Our participants reported a lack of clarity as to what constitutes appropriate evidence to achieve impact. As one researcher noted, “there are different expectations for different fields about what counts as robust evidence. You could argue that you are unlikely to do harm in a community by making their park easier to use, but we are held to same standard as clinical drug trials. You can’t easily do RCTs in this field, but that’s the standard to which health evidence is held.” Interviewee 1.
We also found that researchers struggled to ascertain clarity on the level of evidence required to influence public health policy. Having worked on a study that was deemed by a national committee not to have produced sufficient evidence to bring about a policy change, one interviewee told us that they had subsequently asked what would be needed to achieve this, but were not given a tangible answer. Reflecting on this, the researcher commented “[policymakers] are understandably cautious … so it makes it difficult for an academic to know how much evidence you need in order to actually effect the change” Interviewee 2. For this researcher, the uncertainty surrounding the level of evidence required left them feeling that it was a struggle to make changes in a timely way.
The uncertainty regarding the type of evidence required for change was a recurring theme emerging from our discussions with researchers, one of whom had specifically developed techniques to model the economic value of generating further evidence regarding screening and treatment strategies to prevent infections in early infancy. Reflecting on how to tease out whether and how further evidence was needed, the researcher explained:
“‘What do you want to do now, given the evidence such as it is?’ and, ‘Do you need more evidence to inform that choice in the future?’ By separating those two questions out, you can have a sensible answer to both, to completely move away from hypothesis testing.” Interviewee 3
Reflecting on the way researchers discussed national policy impact, we noted that their responses often suggested that their main interpretation of what ‘impact’ means is evidence that produces change in national policy. When interviewed, a few of the researchers were cautious in describing the impact of their projects, emphasising that they had not yet produced all the evidence needed to reach policy. Yet when we probed further, we found that small, unintended benefits from their work were directly observed within the non-academic organisations with whom they worked, such as health service delivery or local government. When describing the relationships developed during research projects (described further in the next section), they noted that these relationships themselves had the capacity to influence change directly within an organisation. For example, one researcher commented that while they considered the potential primary impact of their research to occur at a national level, the more immediate benefits occurring at a local level were the “unintended impact.” Interviewee 4.
The challenges in getting evidence into policy and practice apply to both national and local contexts, as pointed out by several of the researchers we interviewed (whose perspectives reflect the UK context). Reflecting on the devolved nature of public health in the UK, one participant told us:
“The problem is that the research has been rather dissociated from the practitioners … Cost effectiveness, timing, relevance and generalisability have been rather lost … We’re learning how to work with local government, making relationships … We’d done all that over 50 years with the NHS, and now we’ve got to do it with local government.” Interviewee 5
In spite of enthusiasm for interdisciplinary collaborations, uptake in practice is not always possible due to the silos in the way health, and other services that support health, are delivered throughout the country. One interviewee referred to the lack of cross-departmental collaboration on public health issues:
“We have multidisciplinary/interdisciplinary research findings, so to achieve practice based on them you need intersectoral budget management to reflect delivery demands. I don’t see any sign of that happening.” Interviewee 1
The responses in this section suggest that there are different interpretations of what constitutes the evidence required for public health research impact, especially if the impact is intended to occur at national policy level. Smaller, localised benefits from public health research are also acknowledged, although not always counted as ‘impact’ by the researchers themselves.
Engaging external stakeholders to facilitate impact
Following academic publications, engagement activities were the most commonly reported item in the Researchfish outcomes data we received (Fig. 2). Our interviews enabled us to explore the nature of these activities. One of the most significant impact mechanisms reported by our interviewees was the relationships they developed with a range of external stakeholders, including hospital trusts, the Department of Health and the medical technology industry. Several of our participants suggested that these relationships were a way to navigate the complexities of the public health landscape. We noted that these relationships appeared to be most effective if they were in place from the start of the research, and several of our interviewees reported calling on relationships with external stakeholders whom they had known professionally for many years.
One researcher informed us that their team had been chosen by the Department of Health to respond to a public health emergency on the basis that the researchers and their work were already known and trusted by those in a position to implement their findings. This researcher also noted that good relations between the Joint Committee on Vaccination and Immunisation (JCVI) and researchers meant that research findings were able to inform practice much sooner:
“There are good links between academia and JCVI which makes the UK well-positioned in being able to access the necessary data quickly in order to inform critical decisions without having to wait for things to be published.” Interviewee 6
As a counterpoint to the intersectoral silos identified in the previous section, several interviewees demonstrated initiative in reaching out to organisations outside the health sector, pointing out that these could be the people to help facilitate impacts over the lifetime of the research study and beyond:
“If you want something done about the environment, you have to be working with parks managers in local authorities and organisations like the Forestry Commission or the National Trust - people who give grants and who are going to make physical difference to real environment - if you want your research to inform policy and practice.” Interviewee 1
Similarly, another one of our participants emphasized the importance of generating the necessary relationships with a broad range of external stakeholders to garner the necessary support for a new health initiative:
“You have to set it up with the policy makers first, and then the funders, do the work and gather the evidence to make the change. I wouldn’t go directly to NIHR unless I had support from the screening committee of England. You need to lay the groundwork. That’s one way to impact, getting opinion leaders on board, getting the community behind you and the professional bodies and policy makers”. Interviewee 2
Several of the researchers we interviewed pointed out the importance of these relationships in hitting the right policy or impact ‘window’, sharing examples of where timing had been a principal factor for both policy-makers and practitioners. One researcher expressed concern that findings were not informing practice in time to be useful to practitioners:
“Practitioners are completely uninterested in research findings in 5 years’ time.” Interviewee 3
Although they emphasized the need to reduce the time it takes for research findings to reach practitioners, we also found that research that is ‘ahead of the curve’ may not attract interest from policy makers until years later, particularly if these relationships are not already in place. Another researcher told us that although their project initially failed to gain the necessary support in Public Health England (PHE) to effect the desired change in national screening guidelines, they were approached years later and asked to contribute their expertise on the screening technique their team had advocated, at a time when the need was felt to be greater and the benefits of the technology better understood. Here we see a different scenario, in which researchers had to wait for a ‘policy window’ years down the line:
“Interestingly, PHE have now realized they have a real manpower problem and so have come back to me nine years after I published evidence on reading screening mammograms. Sometimes it’s about timing. Maybe we were ahead of the game at that point … it’s about timeliness.” Interviewee 2
While generating relationships with external stakeholders was often rewarding, researchers also commented on how this can take time and is “resource intensive.” Interviewee 7. It can therefore be frustrating when high turnover in policy or hospital practice means that personal relationships with relevant organisations are lost or have to be rebuilt over time. The challenge with building relationships, and indeed with engagement activities over and above writing and disseminating an academic paper, is that these usually need to continue after a funded project has closed, and finding the resources to sustain them is tricky.
Dissemination mechanisms to facilitate impact
For each of the in-depth case studies, we found that researchers employed a variety of different dissemination activities beyond academic publications. Our participants provided detailed perspectives on these activities, and these discussions suggested that many of the presentations were made to non-academic audiences using infographics, animation and web-based media to communicate main headlines clearly, or engaging with mainstream media as appropriate. Messages and mechanisms were often tailored to those in a position to drive implementation forward:
“It is not science sitting in isolation; it’s science sitting in a complex group of stakeholders. The pure science piece is the submission to NIHR. But how it is disseminated and spread will have to be very carefully undertaken.” Interviewee 8
Several of our researchers highlighted mainstream media as a means to facilitate impact, offering examples of where a news story had helped drive change. One researcher told us:
“I’ve had experience of high profile bits of work which have effected policy change, and that has involved a lot of exposure on the media, you know, almost putting pressure on the policy makers to do something about it.” Interviewee 9
Another researcher told us that they had given an interview about a high-profile research project for a popular monthly magazine, and subsequently received a telephone call late at night from a patient overseas who wanted to discuss the findings after having read about the study in the magazine.
Being mindful of the right communication channels also meant not investing in activities that were not appropriate for a particular project – for example, not soliciting mainstream media coverage if individual conversations and meetings with stakeholders were more important and would help drive adoption in practice. One researcher told us:
“We were directly addressing those to the audiences that needed to hear them, either through presentations or the reports. So we stopped there, and actually I think that’s appropriate. I think those messages needed to be agreed with and then owned by others in order to take them forwards. I think it would have been inappropriate for us to be pushing [media engagement].” Interviewee 10
Several researchers also noted that engaging with social media could be challenging, with some having had negative experiences which had made them more cautious about this kind of dissemination mechanism:
“It’s a jungle out there. It all got very nasty very quickly, and actually my poor junior researcher who happened to be corresponding author and the first author on the paper, he had the most awful time with Facebook campaigns against him, and it was a really nasty business.” Interviewee 9
Overall we observed a balanced approach on the part of most researchers to selecting and engaging with different dissemination mechanisms. However, when reflecting on their experiences with media engagement, several researchers noted that bad experiences with both mainstream media and social media would make them more cautious about using these mechanisms in future.
Acknowledging “negative” findings
Several of our case studies revealed the difficulties associated with achieving impact with a negative research finding, especially if findings ran contrary to current thinking and practice. As one researcher explained:
“Positive results can hold attention and enthusiasm because they can be scaled up [ …] so that’s in your interest as an academic. Negative results are harder [ …] This is a particularly difficult space for research for evaluating someone’s innovation—the air can go out of the room.” Interviewee 7
The same researcher discussed the wider culture surrounding positive and negative findings in terms of research impact, emphasizing the danger of incentivizing positive results:
“If you want impact, you need positive results, and that’s dangerous for research [...]. Having these incentive structures puts academics in a difficult situation: you need something new and exciting that works, and that can’t or doesn’t always happen.”
The potential value of impact derived from negative findings was illustrated by another of our case studies, where researchers were able to implement their negative findings successfully. This project used a multi-parameter evidence synthesis to examine the value of screening as one of a number of interventions to prevent infection in early infancy. The study findings indicated that conducting a larger cluster randomized controlled trial to evaluate the efficacy of screening, planned at a cost of £12 million, would not be worthwhile. The researchers commented that:
“The single most specific impact of the study was to stop the larger cluster randomised controlled trial from going ahead.” Interviewee 10
This saved a significant amount of money, which could then be invested in other public health research projects.
Researchers as drivers of impact
One of the most striking elements to emerge from our case studies was the role of researchers’ own perceptions and skills in determining research impact. We found a range of different opinions among our participants about when and how impact is achieved, with some emphasizing the role of presentations and collaborations, and others suggesting that the publication of research findings in academic journals was the main springboard for impact. One researcher noted that impact occurs once findings have been written up:
“Once the organizational, institutional stuff has been properly written up we can then put in some recommendations or guidelines…That’s the plan, but we haven’t got there yet.” Interviewee 9
Other interviewees highlighted the impacts that occur earlier in the research process, with one noting that participants in their trial benefitted directly from the research in addition to the longer term impacts that they were aiming to have at a national level:
“The main beneficiaries at the time were the individuals who were vaccinated in the trial, and then more broadly it was the availability of the data to guide the department of health on how to move forwards.” Interviewee 6
We also observed a number of different views on the drivers of impact from the perspective of researchers. One researcher highlighted passion as a key factor motivating public health researchers:
“Ultimately, we are not just curious. I come from a discipline where we plan to make a difference in people’s lives in practical ways.” Interviewee 1
Another commented that the most effective means to achieve research impact was through conducting research that addresses important public health challenges of particular interest to policy makers, the media and the public, in addition to the academic community:
“I think [effecting change is about] just having some interesting research. You know, something that other people can relate to and it’s important.” Interviewee 9
Some researchers felt duty-bound to ensure that they facilitated impact from their research. As one researcher commented:
“Retreat if you did a bad study, but if you did a good study, then it’s your responsibility to push that out.” Interviewee 8
However, while researchers felt that it was important that their work be disseminated in ways that would effect positive change, some raised questions as to how far researchers should be responsible for impact. One of our workshop participants noted the potential for conflict of interest if researchers felt the need to advocate impacts from their own research, and suggested that a neutral third party could take on the responsibility for advocacy.
Similarly, one of our interviewees commented that although researchers should make their findings clear to those with the capacity to implement change, they felt that researchers should stay removed from decision-making, and did not necessarily have the skills to engage in impact activities:
“I wouldn’t expect impact to be straightforward or simple. I’m not entirely sure I’m skilled enough for that or that it’s my job. Not to say it’s not an important role or responsibility. I feel our role there is to be available, accessible and clear about what we found. In terms of decision making, that is several steps away from me and that’s the way it should be I think. They are responsible. They have to go to their local elected representatives.” Interviewee 7
We noted that there was also an acknowledgement that the skills required for engagement beyond academic peers, be it with social media, mainstream media, or indeed other forms of communication, are not always readily available to researchers. As one researcher commented:
“We have capacity issues in public health researchers, particularly those with clinical qualifications.” Interviewee 8
Our conversations with researchers suggested that they would benefit from support for impact and engagement, both in terms of building skills, and also through building impact elements more explicitly into the research process. One interviewee explained that researchers are not always taught about impact and how to effect change, and suggested that one way to support them in doing that would be to make impact elements a feature of funding applications:
“We’re very good at teaching people research methodology, but we’re not very good at teaching them how to influence…In the funding applications… it could be not only have we discussed this with patients and the public but we have discussed this with policy holders, and we have checked that if this is successful then this is what would need to be done. I don’t want to put another barrier in to people getting research money, but it could be useful in getting people to think…” Interviewee 2