- Research article
- Open Access
Gathering opinion leader data for a tailored implementation intervention in secondary healthcare: a randomised trial
BMC Medical Research Methodology volume 14, Article number: 38 (2014)
Health professionals’ behaviour is a key component in compliance with evidence-based recommendations. Opinion leaders are an oft-used means of influencing such behaviours in implementation studies, but identifying them reliably and cost-effectively is not straightforward. Survey- and questionnaire-based data collection methods have potential, and carefully chosen items can – in theory – both aid the identification of opinion leaders and help in the design of the implementation strategy itself. This study compares two methods of identifying opinion leaders for behaviour-change interventions.
Healthcare professionals working in a single UK mental health NHS Foundation Trust were randomly allocated to one of two questionnaires. The first, slightly longer questionnaire asked for multiple nominations of opinion leaders, with specific information about the nature of the relationship with each nominee. The second, shorter version asked simply for a list of named “champions” with no additional information. Using chi-square statistics, we compared both the questionnaire response rates and the number of health professionals likely to be influenced by the identified opinion leaders (i.e. the “coverage” rates) across the two questionnaire conditions.
Both questionnaire versions had low response rates: only 15% of health professionals named colleagues in the longer questionnaire and 13% in the shorter version. The opinion leaders identified by both methods had a low number of contacts (range of coverage, 2–6 each). There were no significant differences in response rates or coverage between the two identification methods.
The low response and population coverage rates for both questionnaire versions suggest that alternative methods of identifying opinion leaders for implementation studies may be more effective. Future research should seek to identify and evaluate alternative, non-questionnaire based, methods of identifying opinion leaders in order to maximise their potential in organisational behaviour change interventions.
Healthcare delivery and outcomes vary enormously within many developed healthcare systems . Even accounting for the “irreducible uncertainty”  associated with healthcare, much of this variation can be explained by the attitudes, clinical judgements and decisions, and consequent behaviours of health professionals . One of the most powerful means of shaping these influential variables is the opinions of other healthcare professionals – particularly peers [4, 5].
Particularly influential practitioners can be thought of as “opinion leaders” (OLs) [6–9]. Despite their widespread use, OLs are not universally effective. In a Cochrane review of 18 opinion leader studies, OLs were associated with a median adjusted absolute increase of 12 per cent in health professionals’ compliance with recommendations . However, the range across studies varied from a 15 per cent decrease to a 72 per cent increase in compliance. The mechanisms by which they operate are only just beginning to be demonstrated and understood , but common features include generating consensus , increasing the observability and reducing the potential risk of new clinical behaviours , and producing more efficient learning [13–15].
Opinion leaders have been widely used in a variety of primary and community health care contexts; for example, to influence patient behaviour ; and as a change mechanism in health professional  and online [17, 18] communities. In “diffusion of innovation” based theories [6, 19, 20], opinion leaders play a key role in increasing the uptake of recommendations and adoption of innovations - for a classic example see , and more recently [22–24].
Identifying opinion leaders
Despite a range of tested techniques [13, 25], identifying OLs is challenging. Methods need to be robust and reliable, require few resources, and identify high quality opinion leaders; to be useful in implementation programmes, they must also be usable alongside other research tools. Identification of OLs for use in behaviour change interventions is often informal , but more formal techniques include:
Key-informant - asking a smaller number of individuals who are knowledgeable about a network to identify influential individuals;
Self-designating - self-reporting of own opinion leader status. This can be limiting as it does not guarantee that the OL is credible within the community and does not ensure that the OL shares the agenda of community members and researchers;
Selection based on an individual’s “objective” position or status, for example a celebrity or elected official , or someone who has published on a topic or held a key position in the organisation or geographical area .
Identification methods recognise the role of social networks in opinion leadership. Sociometric techniques, in which community members nominate opinion leaders, are valid, reliable and “sophisticated” (sic.) [13, 15] methods of collecting opinion leader information . Other methods, such as self-designating, do not always identify individuals perceived as opinion leaders by their peers. Sociometric techniques are also associated with higher response rates [26, 29]. Network analysis can help identify who is most central to a community and therefore more influential . All individuals are ‘actors’ within a communication network, and through the nomination of influential individuals, connections between actors can be described [30, 31].
Opinion leaders in implementation and “coverage”
A recent review of the use of social networks in healthcare implementation studies examined 52 papers  and revealed growing interest in the technique, but limited application to implementation work . Network data collected are often of insufficient quantity (i.e. too few contacts named) or insufficient quality (i.e. the contacts identified are not suitable for use in information dissemination). Insufficient quantity can be captured and described using the ‘coverage statistic’ : a rate describing the degree to which identified OLs have contact with (and, therefore, influence over) the adopter population. Coverage rates are the proportion of the (whole) population that names at least one nominated opinion leader . Coverage rates are rarely, and idiosyncratically, reported in intervention studies .
Capturing more than just an OL name requires longer, and more costly, questionnaires. Typical extensions of the most basic network capture methods include multiple full name nominations of influential peers and additional information, such as the frequency of contact  and the direction in which information flows between the respondent and the nominated peers . Thus, the researcher seeking richer network data, with potential utility in an implementation strategy, faces a trade-off: better quality data versus potentially lower response rates (longer questionnaires are less likely to be completed) and higher production and transaction costs.
This study formed part of a larger implementation programme (see Hanbury et al.  for details) to improve the provision of support for families of people with a diagnosis of psychosis. The programme had three phases. In the first, multiple dimensions from Greenhalgh’s framework  were mapped as part of a diagnostic analysis to identify barriers to the use of family interventions for service-users with schizophrenia. In the subsequent implementation phase, interventions were developed to target each barrier specifically; these included an education event, the promotion of outreach sessions to generate interest in the topic, promotion of the relevant clinical pathway, and the development of a register of co-workers. Within this phase, we sought to identify opinion leaders for use in an intervention to change professional behaviour, aiming to capture good quality data without sacrificing coverage and quantity. The final phase aimed to evaluate the impact of the interventions on process of care measures .
Accordingly, we asked two questions:
Does requesting additional information about nominees affect the response rates to questionnaires collecting opinion leader information?
Is it possible to collect sufficient respondent ‘coverage’ rates for exploitation in implementation programmes?
Ethical approval was granted for this study by Leeds (West) Research Ethics Committee (Reference 10/H1311/1).
This was a randomised controlled trial design (Figure 1) of two questionnaire-based approaches to collecting opinion leader information within a larger implementation study. The trial took place in a single, large (~4000 staff), mental health and learning disability NHS Foundation Trust in the North of England.
All healthcare professionals in the NHS Trust involved in the care and management of patients with a psychosis diagnosis were eligible to participate. The Trust’s Medical Director provided the names of eligible staff. See Table 1 for details of participants.
The “intervention” took the form of one of two differing OL identification questions embedded in otherwise identical questionnaires:
The first ‘sociometric’ variant (Additional file 1) asked respondents to nominate individuals with whom they discussed the topic of schizophrenia over the past twelve months. Respondents were asked to provide names and job roles of their nominated contacts as well as the ‘direction’ of contact (who usually gives or receives advice) and the frequency and mode of communication. We planned to use this information to help design methods to influence attitudes, norms and intentions and eventually, professional behaviour.
Variant 2 – the ‘Brief’ nomination tool (Additional file 2): respondents were asked to name anyone they perceived to ‘strongly influence’ local practice in the area of schizophrenia. The only additional information requested was the contact’s job role and whether they were part of the respondent’s own team. We asked the team question to identify OLs whose influence extended between teams and across formal structures.
The primary outcome was the rate of successfully completed questionnaires in each intervention arm. Rates were calculated from all questionnaires received by the end of April 2011. The secondary outcome measure was the coverage rate for each questionnaire variant. The formula developed by Grimshaw and colleagues  was used to calculate coverage. No changes were made to these outcome measures after the trial commenced.
Data collection, sample size and randomisation
Questionnaires were administered to 695 individuals during March and April 2011. Participants were randomly allocated to either the sociometric or the brief version of the questionnaire using simple randomisation via the RAND function in Excel. To maximise response rates, participants were given a choice of paper and online (accessed via a hyperlink in an email) versions of both questionnaires. Individualised pre-notification letters were sent . A reminder email with a link to the online questionnaire was sent two weeks later to those who had not responded. Three weeks later, a paper-based reminder with a paper copy of the questionnaire was sent to all remaining non-responders. We marketed the survey via the NHS Trust’s electronic newsletter and postcard-sized promotional leaflets, and senior managers in the Trust were asked to encourage colleagues in meetings and committees to participate.
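The simple (unrestricted) randomisation described above, performed in the study with Excel’s RAND function, can be sketched as follows. This is an illustrative Python equivalent, not the study’s actual allocation code; the seed and arm labels are assumptions for reproducibility of the example.

```python
import random

def allocate(participants, seed=None):
    """Simple randomisation to two questionnaire arms.

    Each participant is independently assigned with probability 0.5,
    so arm sizes are not forced to be equal (as with Excel's RAND).
    """
    rng = random.Random(seed)
    arms = {"sociometric": [], "brief": []}
    for p in participants:
        arm = "sociometric" if rng.random() < 0.5 else "brief"
        arms[arm].append(p)
    return arms

# Example: allocate the 695 eligible staff (identified here only by index).
arms = allocate(list(range(695)), seed=1)
```

Simple randomisation is adequate at this sample size; block randomisation would be needed only if exactly equal arm sizes were required.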
Response rates for each questionnaire version were calculated. We calculated separate rates for the questionnaire minus the OL identification section and for the OL section alone for each questionnaire. A list of contacts nominated by respondents was generated, to which we applied the definition of ‘Opinion Leader’ employed by Grimshaw et al. : an individual nominated by more than one respondent [6, 37]. This created a distinction between ‘contacts’ named by one respondent only and ‘Opinion Leaders’ nominated by more than one respondent.
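Under the Grimshaw et al. definition, separating ‘contacts’ (one nomination) from ‘Opinion Leaders’ (more than one nomination) is a simple counting exercise over the nomination lists. A minimal sketch, using hypothetical respondent and nominee names purely for illustration:

```python
from collections import Counter

# Hypothetical nomination lists returned by four respondents.
nominations = {
    "respondent_1": ["Dr Ayre"],
    "respondent_2": ["Dr Ayre", "Nurse Brook"],
    "respondent_3": ["Nurse Brook"],
    "respondent_4": ["Dr Cole"],
}

# Count distinct respondents nominating each contact
# (set() guards against a respondent naming the same person twice).
counts = Counter(name for names in nominations.values() for name in set(names))

# An Opinion Leader is any contact nominated by more than one respondent.
opinion_leaders = {name for name, n in counts.items() if n > 1}
contacts_only = {name for name, n in counts.items() if n == 1}
```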
Analysis had four main stages:
Response rates for each questionnaire version were calculated and compared using non-parametric chi-square tests of significance.
Respondent coverage rates were calculated: the percentage of survey respondents nominating at least one of the identified OLs. To prevent double counting, each respondent was counted only once, even where they had nominated more than one OL.
Population coverage rates for OLs were also calculated : the proportion of the whole population linked to an OL, obtained by dividing the number of respondents linked to an OL by the total survey population (n = 695).
The maximum coverage rate for any single OL was calculated last: the coverage achieved by the individual opinion leader nominated by the most respondents. Coverage rates were calculated for each questionnaire version separately and then combined.
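The coverage calculations in the last three stages amount to simple proportions. A sketch, assuming a nominations mapping and a pre-identified set of OLs; the choice of respondents as the denominator for the maximum single-OL rate is an assumption for illustration:

```python
def coverage_rates(nominations, opinion_leaders, population_size):
    """Coverage rates in the style of Grimshaw and colleagues.

    nominations: mapping of respondent -> list of nominated names.
    opinion_leaders: set of names nominated by more than one respondent.
    """
    # Respondents linked to at least one OL; each counted once even if
    # they nominated several OLs (no double counting).
    linked = {r for r, names in nominations.items()
              if opinion_leaders & set(names)}

    respondent_coverage = len(linked) / len(nominations)
    population_coverage = len(linked) / population_size

    # Maximum coverage for any single OL: the largest share of
    # respondents nominating one individual leader.
    per_ol = {ol: sum(1 for names in nominations.values() if ol in names)
              for ol in opinion_leaders}
    max_single = max(per_ol.values(), default=0) / len(nominations)
    return respondent_coverage, population_coverage, max_single
```

For example, three respondents nominating {X}, {X, Y} and {Z}, with X the only OL and a survey population of 10, give a respondent coverage of 2/3 and a population coverage of 0.2.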
Seventy-five questionnaires were returned because the staff concerned had left the NHS Trust. A qualitative research study undertaken after the trial indicated that the Trust’s own human resources data were of poor quality, so the number of incorrect contacts in the sampling frame may be considerably higher. Table 2 indicates the characteristics of respondents included in the analysis.
Respondent characteristics/Numbers analysed
Primary outcome: the response rate
Response rates were not significantly higher (χ2(1, N = 173) = 1.36, p = .244) for the brief version (30%, N = 93) than for the sociometric version (25%, N = 80). There was also no statistically significant difference between the sociometric and the brief questionnaire in responses to the OL section specifically, or in the number of respondents providing at least one full name. More full names were provided by respondents to the longer version than by the single-item technique, but not significantly so (Table 3).
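The comparison reported here is a standard Pearson chi-square on a 2×2 table of responders and non-responders per arm. A sketch of the calculation; the per-arm denominators of 310 used in the example call are illustrative assumptions, not figures reported above:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for a 2x2 table laid out as [[a, b], [c, d]], e.g.
    [[brief responders, brief non-responders],
     [sociometric responders, sociometric non-responders]].
    """
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# 93 and 80 responders, assuming (hypothetically) 310 invitees per arm.
chi2 = chi2_2x2(93, 310 - 93, 80, 310 - 80)
```

In practice a library routine such as a contingency-table test from a statistics package would also report the p-value; the hand-rolled statistic is shown only to make the arithmetic explicit.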
Coverage: Table 4 provides coverage rates for each variant. Thirteen individuals met the criterion for opinion leaders (more than one nomination): three were identified by the brief questionnaire alone, three via the sociometric questionnaire alone, and a further seven were nominated by staff completing both versions. Of these opinion leaders, seven were nominated by just two respondents and the remaining six were named by three or more. The percentage of respondents linked via these OLs (respondent coverage) was calculated for each version separately: 13.3% for the brief survey and 11.25% for the sociometric, giving a total respondent coverage rate of just 12.14%. The population coverage rate was 3.85% for the short survey and 2.9% for the long. The maximum coverage rate of a single OL was 5%.
Comparing two questionnaires, we found that response rates to questions about opinion leaders did not differ between the two approaches. Our second finding was that the opinion leaders identified via these questionnaires reached only a small proportion of the population. These results reflect those of Grimshaw and colleagues , but our coverage is far lower than that of Cosens et al. , who achieved a 58% population coverage rate. With low response rates, it is unsurprising that population coverage rates were also very low. Questionnaire length and the amount of information requested did not affect response rates. Optimistically, this means that implementation strategy designers seeking to identify opinion leaders can feel comfortable also asking for information of value to designing behaviour change interventions. The value that this additional information can add to implementation may be worthwhile if, as here, the approach does not incur any extra cost in response rates.
Less optimistically, however, response rates were low for both approaches. Low response rates may be attributed to the length of the questionnaire overall: that our response rate was lower than in some similar studies (Doumit et al.  achieved a 38% response rate, whilst Cosens et al.  achieved 61%) may be due to the opinion leader section forming just one component of a longer questionnaire exploring multiple determinants of innovation adoption. Questionnaire approaches that focus only on gathering opinion leader data may achieve higher response rates; for example, Cosens et al.  achieved a 40% response rate to the opinion leader section in their study. We hypothesise that these better response rates may be due to the reduced burden on participants. If implementation intervention designers are wedded to questionnaire-based approaches to OL identification, then this trade-off between the need to reduce burden and the need to collect data on several factors simultaneously (such as attitudes, norms and intentions, as well as OL nominations) will be unavoidable. Even those health professionals who were motivated to respond to the questionnaire were less willing to respond to questions about their contacts; a pattern also identified by Grimshaw and colleagues, who found some respondents perceived the notion of opinion leaders as too “abstract”, making OL identification questions difficult to answer . A reluctance to provide names may present a significant challenge to collecting opinion leader information from community nominations, regardless of the technique used.
Increased response rates with questionnaires should be possible . Whilst proven techniques were applied in this study (an incentive of entry into a prize draw was offered to all respondents, respondents could complete either a paper or an online copy of the survey, and reminder emails and letters were sent to non-respondents), we still had a poor response. It is likely, then, that the questionnaire length, together with a reluctance to name individual colleagues, reduced response rates.
These findings should lead implementation experts to ask whether gathering opinion leader information using questionnaires is a suitable approach; especially given the burden placed on respondents by multidimensional questionnaires. Social networks and opinion leaders operate within a wider implementation context . By focussing on systematically collecting questionnaire based opinion leader data this wider context may be missed. Whilst individual actors and organisational context are both important, it is increasingly recognised that the interplay between these is critical . Face to face collection of social network data permits a better understanding of the context in which peer communication takes place and of the precise nature of the communication and relationships.
Low response and coverage rates mean that data on opinion leaders cannot be usefully applied to implementation efforts. Even with high quality data and high coverage rates, subsequent utilisation of opinion leaders in implementation strategies is challenging. For example, administrative delays in implementation work can make it difficult to harness the support of opinion leaders ; identifying individuals is not sufficient if they do not endorse the innovation ; those identified may not view the innovation to be adopted positively ; and opinion leader status may be temporary . Thus, the implementation designer faces the challenge of collecting sufficient information about opinion leaders, which in turn requires a more comprehensive tool and thus induces a greater sense of burden in respondents.
The Trust’s own contact list was out of date and inaccurate; this seriously hampered the construction of a good quality sampling frame. As an external team of researchers we were reliant upon the information provided by the NHS Trust. Our misplaced assumption that this information was comprehensive and up-to-date meant that our questionnaire did not reach a proportion of the population, which in turn affected our sample size and response rates.
We did not collect economic data on the time and financial resources consumed by both approaches. This was an omission and prevented a more informed assessment of the relative cost effectiveness of each approach.
The study took place within a changing organisational context, which may have influenced response rates to the questionnaire, and makes generalising these findings to more stable organisations difficult. The relationship between response rates and coverage rates is unknown: higher response rates may have generated different coverage rates. Lastly, using a questionnaire that focused solely on gathering opinion leader data may have provided better response rates.
Theoretically and empirically, informal communication channels are a valuable means of diffusing innovation [12, 19, 20, 37]. However, collecting sufficient information using social network techniques is resource intensive and requires high levels of engagement from both researcher and respondents. Our results suggest that questionnaire based approaches to OL identification lead to information that is almost unusable in the context of designing a theoretically-informed behaviour change intervention. The study reinforces Grimshaw and colleagues’  assertion that there is limited empirical evidence to support the collection of opinion leader data. Other researchers may wish to consider future studies which compare, and assess the cost-effectiveness of, OL-only questionnaires vs. embedded approaches, or questionnaire vs. qualitative/observational techniques.
The Dartmouth Atlas of Healthcare. http://www.dartmouthatlas.org/,
Hammond KR: Human Judgement and Social Policy: Irreducible Uncertainty, Inevitable Error and Unavoidable Injustice. 1996, Oxford: Oxford University Press
Eddy D: Variations in physician practice: the role of uncertainty. Health Affairs. 1994, 3: 15-
Bridgewater B, Keogh B: Surgical “league tables”. Heart. 2008, 94 (7): 6-
Flodgren G, Parmelli E, Doumit G, O’Brien MA, Grimshaw J, Eccles M: Local opinion leaders: effects on professional practice and health care outcomes (Review). Cochrane Database Syst Rev. 2011, 8
Grimshaw JM, Eccles MP, Greener J, Maclennan G, Ibbotson T, Kahan JP, Sullivan F: Is the involvement of opinion leaders in the implementation of research findings a feasible strategy?. Implement Sci. 2006, 1 (3):
Cain M, Mittman R: Diffusion of innovation in healthcare. iHealth Reports. 2002, California: California Healthcare Foundation
Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O: Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004, 82 (4): 581-629. 10.1111/j.0887-378X.2004.00325.x.
van Eck PS, Jager W, Leeflang P: Opinion leaders’ role in innovation diffusion: a simulation study. J Prod Innovat Manag. 2011, 28: 16-
Effective Practice and Organisation of Care (EPOC): EPOC Resources for review authors. 2013, Oslo: Norwegian Knowledge Centre for the Health Services, http://epocoslo.cochrane.org/epoc-specific-resources-review-authors,
Locock L, Dopson S, Chambers D, Gabbay J: Understanding the role of opinion leaders in improving clinical effectiveness. Soc Sci Med. 2001, 53: 745-757. 10.1016/S0277-9536(00)00387-7.
Jippes E, Achterkamp M, Brand P, Kiewiet D, Pols J, van Engelen J: Disseminating educational innovations in health care practice: training versus social networks. Soc Sci Med. 2010, 70: 1509-1517. 10.1016/j.socscimed.2009.12.035.
Valente TW, Puampuang P: Identifying opinion leaders to promote behavior change. Health Education and Behaviour. 2007, 34 (6): 881-896.
Borbas C, Morris N, McLaughlin B, Asinger R, Gobel F: The role of opinion leaders in guideline implementation and quality improvement. CHEST. 2000, 118: 8-10.1378/chest.118.1.8.
Valente TW, Davis R: Accelerating the diffusion of innovations using opinion leaders. AAPSS. 1999, 566: 55-67. 10.1177/0002716299566001005.
Young JMHM, Ward J, Holman CJ: Role for opinion leaders in promoting evidence-based surgery. Arch Surg. 2003, 138 (7): 6-
Jaganath D, Gill HK, Cohen AC, Young S: Harnessing Online Peer Education (HOPE): integrating C-POL and social media to train peer leaders in HIV prevention. AIDS Care. 2012, 24 (5): 7-
Young SD, Konda K, Cacares C, Galea J, Sung-Jae L, Salazar X, Coates T: Effect of a community popular opinion leader HIV/STI intervention on stigma in Urban, Coastal Peru. AIDS Behav. 2011, 15: 7-
Rogers E: Diffusion of Innovations. 2003, London: Free Press, 5
Greenhalgh T, Robert G, Bate P, Kyriakidou O, Macfarlane F, Peacock R: Diffusion of innovations in health service organisations: a systematic literature review. 2005, Oxford: Blackwell BMJ Books
Coleman J, Katz E: Social processes in physicians’ adoption of a new drug. J Chronic Dis. 1959, 9: 19-
Arling P, Doebbling B, Fox R: Leveraging social network analysis to improve the implementation of evidence-based practices and systems in healthcare. 44th Hawaii International Conference on System Sciences. 2011
Cohen A, Glynn S, Hamilton A, Young A: Implementation of a family intervention for individuals with Schizophrenia. J Gen Intern Med. 2009, 25 (Suppl 1): 32-37.
Norman CD, Huerta T: Knowledge transfer and exchange through social networks: building foundations for a community of practice within tobacco control. Implement Sci. 2006, 1 (20):
Grol R, Grimshaw J: From best evidence to best practice: effective implementation of change in patients’ care. Lancet. 2003, 362 (9391): 1225-1230. 10.1016/S0140-6736(03)14546-1.
Kravitz RL, Krackhardt D, Melnikowa J, Franz CE, Gilbert WM, Zache A, Paternitia DA, Romanoa PS: Networked for change? identifying obstetric opinion leaders and assessing their opinions on caesarean delivery. Soc Sci Med. 2003, 57: 2423-2434. 10.1016/S0277-9536(03)00137-0.
Valente TW: Social network thresholds in the diffusion of innovations. Soc Networks. 1996, 18: 20-
Fitzgerald L, Ferlie E, Wood M, Hawkins C: Interlocking Interactions, the diffusion of innovations in healthcare. Hum Relat. 2002, 55 (12): 20-
Cosens M, Ibbotson T, Grimshaw J: Identifying opinion leaders in ward nurses: a pilot study. J Res Nurs. 2000, 5 (2): 7-
Wasserman S, Faust K: Social Network Analysis: Methods and Applications. 1994, Cambridge: Cambridge University Press
Scott J: Social Network Analysis: a handbook. 2000, London: Sage
Chambers D, Wilson PM, Thompson C, Harden M: Social network analysis in healthcare settings: a systematic scoping review. PLoS ONE. 2012, 7 (8):
van Duijin MAJ, van Busschbach JT, Snijders TAB: Multilevel analysis of personal networks as dependent variables. Soc Networks. 1999, 21: 12-
Yousefi-Nooraie R, Dobbins M, Brouwers M, Wakefield P: Information seeking for making evidence-informed decisions: a social network analysis on the staff of a public health department in Canada. BMC Health Serv Res. 2012, 12 (118):
Hanbury A, Thompson C, Wilson PM, Farley K, Chambers D, Warren E, Bibby J: Translating Research into Practice in Leeds and Bradford (TRiPLaB): protocol for a programme of research. Implement Sci. 2010, 5 (37):
Edwards PJRI, Clarke MJ, DiGuiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S: Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009, 3: MR000008-
Doumit G, Wright F, Graham I, Smith A, Grimshaw J: Opinion leaders and changes over time: a survey. Implement Sci. 2011, 6:
Rycroft-Malone J, Seers K, Chandler J, Hawkes CA, Crichton N, Allen C, Bullock I, Strunin L: The role of evidence, context, and facilitation in an implementation trial: implications for the development of the PARIHS framework. Implement Sci. 2013, 8 (28):
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/14/38/prepub
This article presents independent research funded by the National Institute for Health Research (NIHR) through the Leeds York Bradford Collaboration for Leadership in Applied Health Research and Care. The views expressed in this publication are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.
The authors declare that they have no competing interests.
KF participated in the design of the project protocol, developed the questionnaire, conducted the analysis presented here, and drafted the manuscript. AH designed the project protocol, developed the questionnaire, and drafted the manuscript. CT designed the project protocol, developed the questionnaire and reviewed the manuscript. All authors read and approved the final manuscript.
Katherine Farley, Andria Hanbury and Carl Thompson contributed equally to this work.
Electronic supplementary material
Farley, K., Hanbury, A. & Thompson, C. Gathering opinion leader data for a tailored implementation intervention in secondary healthcare: a randomised trial. BMC Med Res Methodol 14, 38 (2014). https://doi.org/10.1186/1471-2288-14-38
- Coverage Rate
- Opinion Leader
- Behaviour Change Intervention
- Questionnaire Version
- Long Questionnaire