Methodological approaches to study context in intervention implementation studies: an evidence gap map
BMC Medical Research Methodology volume 22, Article number: 320 (2022)
Within implementation science studies, contextual analysis is increasingly recognized as foundational to interventions' successful and sustainable implementation. However, inconsistencies between methodological approaches currently limit progress in studying context, and guidance to standardize the use of those approaches is scant. Therefore, this study's objective was to systematically review and map current methodological approaches to contextual analysis in intervention implementation studies. The results would help us both to systematize the process of contextual analysis and identify gaps in the current evidence.
We conducted an evidence gap map (EGM) based on literature data via a stepwise approach. First, using an empirically developed search string, we randomly sampled 20% of all intervention implementation studies available from PubMed per year (2015–2020). Second, we assessed included studies that conducted a contextual analysis. Data extraction and evaluation followed the Basel Approach for CoNtextual ANAlysis (BANANA), using a color-coded rating scheme. Also based on BANANA and on the Context and Implementation of Complex Interventions (CICI) framework—an implementation framework that pays ample attention to context—we created visual maps of various approaches to contextual analysis.
Of 15,286 identified intervention implementation studies and study protocols, 3017 were screened for inclusion. Of those, 110 warranted close examination, of which 22% reported on contextual analysis.
Only one study explicitly applied a framework for contextual analysis. Data were most commonly collected via surveys (n = 15) and individual interviews (n = 13). Ten studies reported mixed-methods analyses. Twenty-two assessed meso-level contextual and setting factors, with socio-cultural aspects most commonly studied. Eighteen described the use of contextual information for subsequent project phases (e.g., intervention development/adaptation, selecting implementation strategies). Nine reported contextual factors' influences on implementation and/or effectiveness outcomes.
This study describes current approaches to contextual analysis in implementation science and provides a novel framework for evaluating and mapping it. By synthesizing our findings graphically in figures, we provide an initial evidence base framework that can incorporate new findings as necessary. We strongly recommend further development of methodological approaches both to conduct contextual analysis and to systematize the reporting of it. These actions will increase the quality and consistency of implementation science research.
Successful implementation of interventions in real-world settings depends on the dynamic, multi-dimensional, multi-level interplay between context, intervention and implementation strategies [1, 2]. Therefore, a thorough understanding of the implementation context is critical. This is true not only for the initial implementation, but also for sustainability and scale-up [3,4,5,6,7]. Filling this need is the role of contextual analysis, i.e., the mapping of multi-dimensional and multi-level contextual factors relevant for the implementation of an intervention in a specific setting.
Within an implementation science project, we understand contextual analysis as a separate study. It starts well before implementation and continues throughout the project. The in-depth contextual knowledge informs subsequent phases of the project, especially the development or adaptation of an intervention and choices of implementation strategies [8,9,10]. Within that setting, contextual analysis helps to interpret the studied intervention's effectiveness and implementation outcomes and guides choices of sustainability strategies [11, 12].
Although the importance of context has been widely emphasized regarding implementation, little attention has been paid to its assessment in studies, partly because funding frameworks do not normally recognize this phase's importance [13,14,15]. Conceptual and methodological challenges additionally hamper the assessment of context. Even the concept of context is only partially mature [16,17,18]: a recent systematic review revealed inconsistencies in current theoretical and operational definitions.
No unifying definition of context yet exists. Instead, we see terms including setting—sometimes divided into inner and outer setting—environment, or system characteristics, with each signifying a slightly different perspective [16, 19, 20]. Further, no explicit methodological guidance yet describes how to assess, analyze or report context and setting.
Within a postpositivist paradigm, researchers tend to focus on single factors (commonly referred to as facilitators and barriers) to the exclusion of those occupying multiple levels and dimensions [18, 20, 21]. These factors are often selected without theoretical support; and even where contextual analyses are conducted, the findings are rarely used to inform subsequent project phases (e.g., implementation strategy choices). Additionally, no specific methods to study context are described, the range of psychometrically sound measurement tools (particularly to assess macro-level factors) is limited, and reporting guidelines (e.g., the Standards for Reporting Implementation Studies (StaRI) [22, 23]) are ambiguous regarding how to report contextual analysis [18, 24].
Based on a methodology reported by Stange and Glasgow within a series of patient-centered medical home research for the US Agency for Healthcare Research and Quality (AHRQ), we developed the Basel Approach for CoNtextual ANAlysis (BANANA) and applied it successfully in two implementation science projects [25,26,27]. BANANA provides methodological guidance for contextual analyses and can point to relevant aspects in reporting contextual analyses. This approach's theoretical grounding is the Context and Implementation of Complex Interventions (CICI) framework, a meta-framework incorporating insights from previous frameworks (e.g., the Consolidated Framework for Implementation Research), but also filling previous gaps (e.g., differentiating between context and setting, focusing more on macro-level factors, considering how other interventions can affect implementation). Starting from an ecological perspective, the authors conceptualized context as a “set of characteristics and circumstances that consist of active and unique factors, within which the implementation is embedded”, whereas setting refers to the physical location in which an intervention is to be implemented and interacts with both context and implementation. Context “is an overarching concept, comprising not only a physical location but also roles, interactions and relationships at multiple levels”. Contextual factors can be grouped into geographical, epidemiological, socio-cultural, socio-economic, political, legal or ethical domains, and include, e.g., the social structure, financial aspects, or the political climate.
To guide contextual analysis in implementation science projects, BANANA includes six components: (1) choosing a theory, model or framework (TMF) to guide contextual analysis. (To enhance analytical granularity, the TMF can be complemented with one that is setting-specific.); (2) reviewing empirical evidence about relevant contextual factor(s), including facilitators and barriers, as well as practice patterns related to the implementation and intervention; (3) involving relevant stakeholders in the contextual analysis. This includes implementation agents, i.e., individuals (or organizations) targeted or affected by the implementation of an intervention (target group, e.g., patients, family caregivers), who implement an intervention (implementers, e.g., healthcare professionals) or who decide on the implementation of an intervention (decision makers, e.g., policy makers and funders). Other stakeholders can include experts with advisory roles within the project (e.g., for intervention development); (4) collecting and analyzing data, combining qualitative and quantitative methods where appropriate; (5) identifying and describing the relevance of contextual factors for intervention co-design, implementation strategies and outcomes; and (6) reporting the contextual analysis. To strengthen the methodology for contextual analysis in implementation science, we recognized that it would be essential first to understand the key methods currently in use. Therefore, we set out to gather an evidence base. To identify gaps in that base, we systematically reviewed and mapped the methodological approaches described. More specifically, first, we aimed to determine the percentage of published intervention implementation studies reporting on contextual analysis. Second, we aimed to assess, map and evaluate those studies that reported on contextual analysis.
We focused on a) which methodological approaches were used for contextual analyses and what gaps exist in current approaches, and b) which results were used to inform subsequent phases of the associated studies.
To draft an evidence gap map (EGM), we reviewed and categorized the methodologies applied to contextual analyses in the identified studies. This process centered on a systematic search surveying the current state of methodological approaches to contextual analysis. As the name implies, an EGM is well suited to identifying gaps in those approaches [28,29,30]. As for the mapping aspect, the results are presented in a user-friendly format, usually combining tables or visualizations and descriptive reports to summarize existing evidence and facilitate methodological improvements regarding the topic—in this case, contextual analysis [28,29,30,31]. We reported our findings according to the Preferred Reporting Items for Systematic reviews and Meta-Analyses–Scoping Reviews (PRISMA–ScR) Checklist (Additional file 1).
Scope of the evidence gap map (EGM) and development of research questions
As a first step, to develop comprehensive, relevant research questions, this study's authors—all experienced implementation scientists—discussed the scope and focus of the EGM [31, 33]. As noted, a stepwise approach helped us identify relevant literature and provide a comprehensive overview of the available evidence (Additional file 2): First, we aimed to identify intervention implementation studies and assessed whether they included contextual analyses (Step 1). Second, focusing exclusively on studies that reported contextual analyses, we mapped both the researchers' methods (Step 2a) and how they used the results to inform further phases of their projects (Step 2b).
Inclusion/exclusion criteria
In Step 1, we applied ten inclusion criteria to the prospective sample. We included (a) peer-reviewed articles or study protocols (b) concerning intervention implementation studies (c) if they employed experimental or quasi-experimental designs (d) to test intervention effectiveness (e) in real-world settings. They also needed (f) to include at least one of Brown et al.'s "7 Ps", i.e., programs, practices, principles, procedures, products, pills, and policies, and (g) to report on the evaluation of the implementation pathway. This included qualitative or quantitative information on the implementation process and/or on at least one implementation outcome as defined by Proctor et al. (Additional file 2). During the screening we identified a large number of feasibility studies that did not fit the scope of our study. Therefore, we decided to include feasibility studies only (h) if they assessed at least one additional implementation outcome (e.g., acceptability in addition to feasibility). Further, only papers (i) written in English or German and (j) with available full texts were included. Because the level of detail of contextual analysis in study protocols is usually limited, we used the "cited by" function in PubMed to determine whether the intervention study had already been published and contained further information on contextual analysis. In cases where we identified both the study protocol and the related intervention implementation study, only the intervention study was included in the review. Further, we excluded studies reporting on context exclusively as part of the process evaluation or retrospectively.
Systematic searching – search strategy development
We applied Hausner et al.'s empirically based approach to develop our search strategy. Following a four-step process, we first used a precise search string to identify a subset of 163 articles in PubMed that met our EGM's inclusion criteria (Additional file 3). Those articles were randomly assigned to a development (n = 81) or a validation set (n = 82). Second, using PubMed ReMiner (https://hgserver2.amc.nl/cgi-bin/miner/miner2.cgi), we identified the search terms (keywords and MeSH terms) most commonly used in the development set articles. The identified search terms were used to develop a search string. In a third step, this string was tested against the validation set. The final search string consisted of 22 keywords (MeSH and free terms) and achieved a sensitivity of 95.1% (i.e., it identified 78 of the 82 validation records). The fourth step consisted of documenting the search string development (Additional file 3).
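The validation step above amounts to a simple set calculation. A minimal sketch follows; the PMID values are hypothetical placeholders, not records from our actual sets:

```python
# Sensitivity of a search string against a known validation set:
# (validation records retrieved by the string) / (all validation records).
def sensitivity(retrieved: set[str], validation: set[str]) -> float:
    """Proportion of validation-set records that the search string retrieves."""
    return len(retrieved & validation) / len(validation)

validation = {f"pmid{i}" for i in range(82)}            # hypothetical validation set (n = 82)
retrieved = {f"pmid{i}" for i in range(78)} | {"extra"} # hypothetical search result
print(round(sensitivity(retrieved, validation) * 100, 1))  # → 95.1
```

Any revision of the search string can be re-scored the same way until the target sensitivity is reached.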
Our main aim was to identify and map gaps in the current evidence base on approaches to contextual analysis and not to provide an exhaustive overview on all existing evidence. Therefore, we searched only the PubMed electronic database. Further, to maximize timeliness, we limited our search to the past six years (2015–2020). Using a random number generator, we then selected a random sample of 20% of the articles identified from each year. No further filters were applied.
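The per-year 20% draw described above can be sketched as follows. The record identifiers, per-year counts and seed are hypothetical, and we make no claim about the actual random number generator the study used:

```python
import random

# Draw a fixed fraction of records from each publication year, without replacement.
def sample_per_year(records_by_year: dict[int, list[str]], fraction: float = 0.2,
                    seed: int = 42) -> list[str]:
    rng = random.Random(seed)  # fixed seed keeps the draw reproducible
    sample = []
    for year in sorted(records_by_year):
        records = records_by_year[year]
        k = round(len(records) * fraction)     # 20% of that year's records
        sample.extend(rng.sample(records, k))  # random draw without replacement
    return sample

# Hypothetical example: 100 records per year for 2015-2020.
records = {year: [f"{year}-{i}" for i in range(100)] for year in range(2015, 2021)}
print(len(sample_per_year(records)))  # → 120 (20% of 600)
```

Sampling within each year, rather than from the pooled set, keeps every publication year represented proportionally.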
For Step 1, using the web application Rayyan (https://rayyan.qcri.org), two reviewers (JM, TB) independently screened titles and abstracts of the randomly selected implementation science papers against the described inclusion criteria. Second, each reviewer (JM, TB) independently screened the full texts of all included papers. In case of disagreement between the two reviewers, a third reviewer (SDG) was consulted to reach consensus. For Step 2, the first two reviewers (JM, TB) independently screened the full texts of previously included intervention implementation studies against the respective eligibility criteria. Again, the third reviewer (SDG) was consulted in case of disagreement.
Data extraction and analysis
We extracted the general data of all included intervention implementation studies (e.g., design, setting). Guided by BANANA, specific characteristics of studies including contextual analyses were extracted, including general information (e.g., whether context was analyzed at various timepoints, TMFs used), implementation agents involved in each analysis and methods applied to conduct contextual analysis (i.e., quantitative and qualitative methods). Further, we assessed the results of the contextual analyses, i.e., we noted how those results were used for subsequent study phases and whether the researchers had considered how contextual factors might influence implementation and summative outcomes (Additional file 2). As it quickly became clear that few studies explicitly reported the use of hybrid designs, we used Curran et al.'s description to categorize the remaining studies, i.e., as hybrid type 1, 2 or 3 designs. All extracted data were charted in an Excel file. General study characteristics were analyzed descriptively, calculating frequencies and percentages.
Mapping of identified methodological approaches
We mapped the identified approaches to contextual analyses against the components of BANANA. To provide a user-friendly format, we created color coded tables and depicted the information graphically (i.e., in an EGM). The structure of the tables follows the BANANA approach and provides a comprehensive overview of all relevant information. More detailed information on the assessed approaches can be found in the Additional files 4 and 5.
To provide an overview of contextual factors assessed, an EGM was developed using two software tools: EPPI-Reviewer Version 126.96.36.199 and EPPI-Mapper Version 1.2.5. As terminology and conceptualization of contextual factors varied widely across the identified studies, with none differentiating between context and setting, we used the CICI framework to categorize identified micro-, meso- and macro-level aspects. Contextual factors were grouped into the seven CICI context domains (i.e., geographical, epidemiological, socio-cultural, socio-economic, political, legal and ethical) and subcategories further specifying contextual domains (e.g., infrastructure, organization structure, leadership). Setting factors as part of the context (i.e., those referring to the physical location in which an intervention is implemented) were categorized into three domains: work environment, physical characteristics and practice patterns. Since the included studies did not differentiate setting as a part of context, JM inductively categorized all identified setting factors for each domain (e.g., pertaining to work flow, capacity, availability of resources) to clearly structure and summarize them. These choices were then reviewed by TB. Inconsistencies were discussed with SDG and FZ. Using dots, the evidence map concisely depicts which aspects of context and setting were assessed in each implementation and at which level. Each dot's color indicates whether the method used was quantitative or qualitative; its size indicates how many studies investigated this aspect. That is, the larger the dot, the more studies have considered this specific aspect. As the evidence map is interactive, categories can be shown or hidden to provide simpler or more complex views. The respective studies' references (including abstracts) can also be displayed.
Evaluation of identified methodological approaches
To critically evaluate the methodological approaches to contextual analysis reported in the included studies, we grouped the extracted data via five of the six components described in the BANANA approach. The sixth component of BANANA was not evaluated, as it refers to the reporting of the contextual analysis, which was an inclusion criterion for the assessed studies. We applied color-coding to indicate whether each study clearly addressed a component (green), only partly mentioned it (yellow), or failed to address it (red). The color coding was done independently by two researchers (JM, TB). In cases of disagreement, a third researcher (SDG) was consulted to decide on the rating.
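The consensus logic of this rating procedure can be sketched as follows; the ratings shown are hypothetical, and the actual coding was of course done manually:

```python
# Color-coded component ratings: two independent raters, with a third
# deciding whenever the first two disagree.
RATINGS = {"green": "clearly addressed",
           "yellow": "partly mentioned",
           "red": "not addressed"}

def consensus(rating_a: str, rating_b: str, third_rater: str) -> str:
    """Return the agreed rating, deferring to the third rater on disagreement."""
    assert {rating_a, rating_b, third_rater} <= RATINGS.keys()
    return rating_a if rating_a == rating_b else third_rater

print(consensus("green", "green", "yellow"))  # → green (raters agree)
print(consensus("green", "red", "yellow"))    # → yellow (third rater decides)
```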
We used a two-phase sampling process. In Phase 1, our PubMed search returned 15,286 records. After removing duplicates, we randomly sampled 20% of the remaining studies from each of the six selected publication years (2015–2020) (n = 3017). In Phase 2, we screened this sample via the inclusion criteria noted above. Figure 1 presents a flow chart of the screening process. This left 110 intervention implementation studies for data extraction. For Phase 1, our inter-rater reliability was 76.7%; for Phase 2 it was 91.1%. As the included articles were both original studies and study protocols, in the interests of readability, we will describe all results in the past tense.
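If inter-rater reliability is computed as simple percent agreement between the two screeners (an assumption; the statistic is not specified above), it can be sketched as follows, with hypothetical screening decisions:

```python
# Percent agreement: share of records on which both reviewers made the same call.
def percent_agreement(reviewer_a: list[str], reviewer_b: list[str]) -> float:
    assert len(reviewer_a) == len(reviewer_b), "both reviewers rate every record"
    matches = sum(a == b for a, b in zip(reviewer_a, reviewer_b))
    return 100 * matches / len(reviewer_a)

# Hypothetical decisions for four records; reviewers disagree on the third.
a = ["include", "exclude", "exclude", "include"]
b = ["include", "exclude", "include", "include"]
print(percent_agreement(a, b))  # → 75.0
```

Note that percent agreement, unlike chance-corrected measures such as Cohen's kappa, does not adjust for agreement expected by chance.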
General characteristics of included studies (Step 1)
Of the 110 extracted articles, the majority were study protocols (n = 90); most (n = 82) were either from North America (n = 45) or Europe (n = 37) (Table 1). The studies were conducted in a wide range of settings, the most common being primary care (n = 20), community care (n = 15), the health care system (n = 13) and schools (n = 12). Eighty-four of their designs were experimental; twenty-six were quasi-experimental. Further details of the studies are described in Additional file 4.
Characteristics of studies reporting on contextual analysis and methodological approaches applied (Step 2)
Of the sample's 110 studies, 24 (21.8%) reported conducting contextual analyses (Table 2). As authors of seven studies had released further information or results elsewhere, we located and extracted those records (n = 15) as well. Based on Curran et al.'s definitions, we identified (or categorized if not described) 17 hybrid type 1, five hybrid type 2, and two hybrid type 3 designs. Seven of the 24 assessed context at one timepoint; 12 assessed it at two, and five at three timepoints during their projects (Additional file 5).
TMFs used and empirical evidence considered for contextual analysis
The included studies used eleven distinct TMFs. These can be broadly categorized into process models (e.g., Knowledge-to-Action models), determinant frameworks (e.g., the Consolidated Framework for Implementation Research (CFIR)), or classic theories (e.g., social cognitive theory). One, the RE-AIM (reach, effectiveness, adoption, implementation, maintenance) Planning and Evaluation Framework, is a process and evaluation framework that includes a determinant component. Only one study specifically described how it used a TMF (CFIR) for contextual analysis and how that TMF guided it. The others (n = 15) referred more generally to their TMFs guiding their overall implementation process, with RE-AIM (n = 7) and CFIR (n = 3) cited most often. Four studies reported combining two TMFs, e.g., CFIR and RE-AIM. In addition, seven considered empirical evidence about relevant contextual factors (Fig. 2).
Consideration of implementation agents
Only nine studies collected data from all three types of implementation agents, with implementers most often involved in the assessment of context (n = 19) (Fig. 2). In some cases (n = 11), stakeholder groups were established that functioned as expert panels or advisory boards throughout the project. These included, e.g., health care providers from various medical fields, people affected by specific illnesses or health conditions, leaders and administrators, and delegates from non-profit organizations or government departments (Additional file 5).
Methods applied for data collection and analysis
Of the 24 studies that reported conducting contextual analyses, 23 clearly described their methods. Of these 23, 13 applied combinations of quantitative and qualitative methods, although only ten explicitly reported using mixed-methods analysis. The remaining ten applied either quantitative (n = 2) or qualitative (n = 8) methods alone (Fig. 2). Quantitative data collection methods included purpose-designed surveys (n = 15), behavior mapping (n = 1), and retrospective use of national survey (n = 1) and surveillance (n = 1) data. Seven qualitative data collection methods were used: individual interviews (n = 13), focus groups (n = 13), observations (n = 2), photovoice methodology (n = 2), telephone interviews (n = 1), yarning (n = 1) and site visits (n = 1).
Contextual and setting factors assessed
We identified 43 separate factors. Following the CICI framework, we first categorized these as either context (n = 30) or setting factors (n = 13), then mapped them on an evidence gap map (Additional file 6). In general, meso-level factors (n = 22) were most commonly assessed, accounting for almost half of all mentions. The remainder were roughly equally divided between macro- (n = 13) and micro-level factors (n = 12). Fifteen studies considered context on at least two levels. We report a detailed overview of all assessed factors in Additional file 7.
Contextual factors. Within context, socio-cultural factors were most commonly assessed (e.g., knowledge and perceptions, lifestyle, social structure) (n = 20); no studies reported on legal aspects. In descending order of frequency, other contextual domains included political (e.g., policies, leadership) (n = 12), geographic (e.g., larger infrastructure) (n = 5), epidemiological (e.g., incidence and prevalence of disease) (n = 5), socio-economic (occupational aspects, living conditions) and ethical (ethical principles (n = 2), conflicts of interest (n = 2)). Seven studies described their assessment of inner or outer context or of facilitators and barriers, but did not further specify contextual factors in detail.
Setting factors. Regarding setting, most studies' assessments focused on the work environment (e.g., availability and accessibility of resources) (n = 15). Other setting aspects assessed included practice patterns (e.g., service delivery, care planning) (n = 11) as well as the setting's physical characteristics (e.g., study site, physical environment) (n = 7).
Use of contextual information for subsequent project phases
Eighteen study protocols described further uses of contextual information to develop (n = 17) and/or adapt interventions (n = 11), eight used contextual information to choose implementation strategies, and six used it to interpret study outcomes. Of these, ten described their processes for doing that. Both original study papers described the further use of contextual information; however, only one reported how it was used.
Influences of contextual factors on outcomes
Twelve study protocols and both original studies reported process evaluations. We identified nine studies that explicitly reported contextual factors' influences on implementation outcomes and/or effectiveness outcomes (Fig. 2). Various terms were used to signify similar implementation outcomes; and even where studies labeled these outcomes similarly, their definitions varied. In five protocol papers, as well as in both original studies, it was unclear whether any association had been considered between contextual factors and either implementation outcomes or effectiveness outcomes.
Evaluation of methodological approaches for contextual analysis
Our evaluation of the identified approaches to contextual analysis revealed that few studies addressed the key components of contextual analysis that are described in detail within BANANA (Fig. 3). The components that most studies clearly described were the use of quantitative and qualitative methods (n = 12) and the involvement of implementation agents (n = 9). The latter was also described partly within most of the remaining studies (n = 15). The two least addressed components were the use of contextual information to interpret outcomes (n = 7) and the use of empirical evidence (n = 7).
This study provides an overview of the current methodological approaches to contextual analysis in intervention implementation studies and indicates gaps. Using EGM methodology, we applied a novel approach for summarizing and evaluating available evidence on contextual analysis to develop an initial evidence gap map on contextual analysis methodology. Of the 110 intervention implementation studies in our random sample, fewer than one-quarter (21.8%) reported on analyses of their projects' contexts and settings. The studies that did report on contextual analyses showed high variability in the methodological approaches they used. This was true both of the analyses and of how they were reported.
Using the BANANA approach—one of the first frameworks for evaluating contextual analyses—we found widespread, significant methodological gaps. For example, few contextual analyses were theory based: only one study explicitly reported the use of a TMF for its contextual analysis; and fewer than half (8/22) provided information on how they used findings from their contextual analysis to inform their project's subsequent steps.
Lack of TMFs guiding contextual analysis
Building our understanding of context demands a stable theoretical basis. In addition to guiding our selection of multilevel contextual factors, this will enable operationalization both of context and of setting. Still, of the 24 studies we reviewed, only one provided both a specific description of its authors' use of a TMF to guide their contextual analysis and their rationale for using the one they did [59, 82]. Congruent with our findings, research shows that only 22.5–48% of implementation science studies use TMFs; and of those that do, few explicitly explain their choices [82,83,84,85].
The phenomenon of “underuse, superficial use or misuse” of TMFs has been described elsewhere in the implementation science literature [85, 87,88,89]. All of the identified TMFs consider context, but differ widely regarding their focus and conceptualization of context [18, 20]. Lacking clear theoretical underpinnings, the studies' assessments of contextual factors appear arbitrary. While limiting both the comparability and the generalizability of their results, this gives the impression of a lack of rigor concerning the contextual analysis. And as this analysis provides the data for further fine-tuning of the project, any such deficiencies will reduce the credibility of all subsequent study phases. This also applies to the emerging focus on differentiating setting from context, which was not reflected in the included studies and complicated our data analysis [2, 16].
Variability in conceptualization and assessment of context
Consistent with other reviews' findings, the assessed studies' conceptualizations of context tended to be vague. For example, while a diverse range of factors were assessed at numerous levels, no definitions accompanied them. The resulting vagueness (e.g., documentation of inner and outer context, local contextual determinants, environmental-level characteristics, facilitators and barriers) hampered our efforts to understand, summarize and compare those factors [13, 17, 18].
We noted considerable differences regarding which levels' and domains' contextual factors were appropriate targets for investigation. In contrast to Rogers et al.'s review  of studies from 2008–2018, which found that micro-level factors were most often assessed, our results regarding reports published over the last six years (2015–2020) showed a significant focus on the meso level, with socio-cultural contextual factors (e.g., social structure, community structure) most frequently captured. Macro-level factors (including political, legal and socio-economic aspects) were less commonly studied.
This scarcity might also reflect a shortage of tools and frameworks focusing on the macro level [20, 24, 90, 91]. However, evidence points to the importance of macro-level factors for the adoption and successful implementation of interventions. For example, policy dynamics—or rather, competing policy agendas—can create major macro-level barriers to implementation [90, 92, 93]. Further, when reviewing research on projects that resulted in mis-implementation of interventions, it quickly becomes clear that the most common causes of premature termination of effective interventions or programs are all funding-related (86–87.6%) [94, 95]. This observation drives home the point that, to maximize the chances of a project's success (e.g., by recognizing changes in funding priorities at an early stage and acquiring additional funding), its contextual analysis has to consider and closely monitor factors at every level.
However, the choice of which contextual factors to study and which stakeholders to involve at which phases depends largely on the type of intervention. This may also explain why the recorded contextual factors differed so widely between studies.
Furthermore, both the assessment of context and the reporting of contextual analysis might be biased by the analysts' level of pre-existing knowledge, i.e., researchers' inside knowledge may influence the quality or impartiality of their results. For example, researchers working in a specific setting may already be aware of certain contextual determinants (e.g., processes and practice patterns) or may gather important information informally (e.g., via chance meetings with implementation agents, observation of practice). While this information is not explicitly collected for the contextual analysis, it can lead to confirmation bias. That is, it can leave "blind spots" in contextual analysis, exerting subtle pressure on analysis or interpretation to favor factors that support pre-existing hypotheses or beliefs.
Limited involvement of various implementation agents
Both to enhance the quality of a project's research and to ensure the appropriateness of intervention and implementation strategies through co-design, it is crucial to involve implementation agents in diverse positions [97, 98]. This is true throughout the implementation project but especially so in the contextual analysis. In the reviewed studies, the most commonly considered implementation agents were implementers; persons affected by the intervention and decision makers often went unrepresented. Implementation science guidelines generally recommend including the most representative possible range of implementation agents' and other stakeholders' voices, the clear assumption being that this improves the likelihood of successful and sustainable implementation. To benefit fully from implementation agents' views, a stakeholder involvement strategy should be developed, specifying both the tasks performed by the involved implementation agents and the methods used to involve them.
Variability in methods used for contextual analysis
For contextual analysis, either a combination of quantitative and qualitative methods or, ideally, a mixed-methods approach is recommended. Merging, connecting or embedding data obtained via various means increases both the breadth and the depth of the analysis [101, 102]. It also improves our practical understanding of how interventions can work and of which implementation strategies are needed to implement them successfully [101, 103]. Congruent with Rogers et al.'s findings, we found that only a minority of studies used mixed-methods approaches: 37.5% in our sample [104, 105], compared with 19% in theirs. Like theirs, our sample also used more qualitative than quantitative methods (75% and 25%, respectively, versus Rogers et al.'s respective findings of 53% and 28%).
Likewise, surveys and interviews (with individuals or focus groups) were our sample's most common methods of capturing contextual details. However, recent studies increasingly emphasize the relevance of direct (e.g., ethnographic) observations in implementation research. Such observations afford insider perspectives, capturing, for example, contextual aspects that implementation agents take for granted and omit to mention, or tasks performed differently than generally reported [106,107,108,109].
Problematically, as contextual analysis in implementation science is primarily done within a postpositivist paradigm, researchers' understandings of context are often mechanistic and reductionistic. Therefore, we recommend that they also consider constructivist perspectives, particularly rapid ethnographic methods. In addition to probing more deeply into the context (e.g., to uncover hidden processes), these require fewer resources than traditional methods. This efficiency makes them particularly useful for contextual analyses, which are rarely well-resourced [108, 110, 111].
Gaps in reporting and use of contextual information
As noted above, the reviewed studies showed significant gaps in their descriptions of how contextual information was later used. The results mainly informed intervention development. However, reporting gaps may have resulted from the fact that we assessed study protocols almost exclusively.
Another factor influencing the reporting of contextual analyses in study protocols or journal articles is lack of space: a 5000-word article can adequately develop and describe its central topic, but very little more. Therefore, implementation scientists should consider publishing their contextual analyses as separate papers. This would allow detailed descriptions of their methods and results, as well as of how they used those results for further study phases. Detailed reporting guidelines for contextual analysis could help researchers to structure their findings and avoid the types of “blind spots” noted above.
Strengths and weaknesses
The current study's objective was to systematically review and map methodological approaches currently in use for contextual analysis, as well as to identify gaps in the identified approaches. In this regard, this paper's most notable strength is its empirically developed search string. Given the reported challenges of finding implementation science literature, the string provides both high sensitivity and high specificity [112,113,114].
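A search string's sensitivity and specificity can be quantified against a gold-standard validation set of known relevant records. The following sketch uses the standard definitions with entirely hypothetical record IDs and counts (it does not reproduce this study's actual validation data):

```python
# Sketch: evaluating a search string against a gold-standard validation set.
# sensitivity = relevant records retrieved / all relevant records
# specificity = irrelevant records excluded / all irrelevant records

def evaluate_search(retrieved: set, relevant: set, total_records: int):
    tp = len(retrieved & relevant)     # relevant and retrieved
    fn = len(relevant - retrieved)     # relevant but missed by the string
    fp = len(retrieved - relevant)     # retrieved but irrelevant
    tn = total_records - tp - fn - fp  # irrelevant and correctly excluded
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Hypothetical example: 100 records, 11 known relevant, string retrieves 12.
retrieved = set(range(12))
relevant = set(range(10)) | {95}  # one relevant record missed by the string
sens, spec = evaluate_search(retrieved, relevant, total_records=100)
```

Reporting both measures side by side makes the usual trade-off explicit: broadening the string raises sensitivity at the cost of screening workload (lower specificity).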
Furthermore, by applying the BANANA approach, we provide a novel framework for evaluating existing evidence on contextual analysis. This framework can be used as a monitoring system for literature on contextual analysis while providing quality criteria for its evaluation. Moreover, the developed EGM offers a concise and informative overview of the reviewed studies' results, thereby facilitating comparison between them. The map is a “living document” designed to be updatable by future researchers.
However, as we included primarily study protocols, the descriptions of contextual analysis lacked adequate detail in some cases. This affected our analysis of how contextual information informed the studied projects' later phases. Although we searched for study papers related to each protocol, we could not verify in every case the extent to which the planned approaches to contextual analysis were carried out in the project, or whether adaptations were made. We suspect that one major reason for the high number of identified study protocols was publication bias: considering that we only included articles reporting contextual analyses as part of intervention implementation studies, it is possible that many contextual analyses were reported in study protocols and then conducted as part of implementation projects, but never published.
Our random sampling of study papers provided an opportunity to gain an initial overview of current evidence and its gaps. However, this approach may have excluded other relevant study papers that could have offered further insights into approaches to contextual analysis. Another possible weakness is that our strict inclusion criteria might have influenced our results. We focused on contextual analysis as a foundation for further study phases, i.e., prospective assessment of context and setting factors. As studies that conducted their contextual analyses retrospectively (e.g., as part of their process evaluation) would not enhance our understanding of contextual analysis in implementation science, we excluded them. For further research, it would be useful to adapt BANANA by planning a more comprehensive analysis, one that differentiates between implementation project phases (e.g., exploration, preparation, implementation and sustainment). This would allow us to study differences in approaches to contextual analysis that might be related to the different phases of an implementation project (e.g., the contextual factors assessed might differ between the exploration and sustainment phases).
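The per-year random sampling design described in the methods (a 20% simple random sample within each publication year, 2015–2020) can be sketched as follows. The record structure and field names are illustrative, not the authors' actual screening pipeline:

```python
import random

def sample_per_year(records, fraction=0.20, seed=42):
    """Draw a simple random sample of `fraction` of records within each
    publication year (stratified 20%-per-year sampling). `records` is a
    list of dicts with a 'year' key; this structure is hypothetical."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    by_year = {}
    for rec in records:
        by_year.setdefault(rec["year"], []).append(rec)
    sample = []
    for year, recs in sorted(by_year.items()):
        k = max(1, round(len(recs) * fraction))  # at least one record per year
        sample.extend(rng.sample(recs, k))       # sampling without replacement
    return sample

# Hypothetical example: 50 records per year, 2015-2020.
records = [{"year": y, "id": i} for y in range(2015, 2021) for i in range(50)]
sampled = sample_per_year(records)  # 10 records per year, 60 in total
```

Stratifying by year keeps each publication year proportionally represented, so trends over time (such as the meso-level focus noted above) are not distorted by uneven yearly publication volumes.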
To the best of our knowledge, this is the first study to provide a novel framework for evaluating and mapping methodological approaches to contextual analysis. Our evidence map provides a broad overview of methodologies applied in contextual analysis and shows which aspects of those studies can serve as models for other implementation science projects. The map is dynamic and can be updated as the literature on contextual analysis evolves.
We found wide variation regarding which methods were used for contextual analysis, which contextual factors were assessed, and how the results were applied in later study phases. Such a high level of heterogeneity is a major barrier to inter-study comparison or to later scale-up efforts. To reduce it, we recommend conducting contextual analyses according to TMFs. In addition to providing clear, proven and repeatable methodologies, these both support stronger conceptualization of the assessed context and enhance the rigor of the entire analytical process. If the described gaps are left open, contextual analysis will become a "black box" in many cases, greatly reducing its contribution over the course of implementation projects. Therefore, the implementation science community needs to take concerted action to develop, test and improve straightforward, robust methodologies for contextual analysis and reporting.
Across health care, researchers need to embrace contextual analysis as an essential element of every implementation science project; funding agencies need to develop specific opportunities to improve it; and journals need to demand full reporting on it. And every implementation science research team needs not only practical guidance on how to carry out contextual analysis, but also special guidelines on how to report their findings. Above all, we need to understand that, to achieve the quality and success that implementation science research promises, we will first need to break open the “black box” of contextual analysis.
Availability of data and materials
All data generated or analyzed during this study are included in this published article as supplementary information files.
Implementation science is the scientific study of methods to promote “the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services and care”.
Hybrid Type 1: primary focus on testing intervention effectiveness; secondary focus on studying implementation. Hybrid Type 2: equal focus on testing intervention effectiveness and implementation strategies. Hybrid Type 3: primary focus on testing the effectiveness of implementation strategies; secondary focus on assessing the intervention.
Yarning is a highly structured qualitative research methodology for gaining knowledge from Indigenous people through storytelling.
Basel Approach for coNtextual ANAlysis
Context and Implementation of Complex Interventions (CICI) framework
Evidence gap map
Theory, model or framework
Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, Proctor EK, Kirchner JE. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.
Pfadenhauer LM, Gerhardus A, Mozygemba K, Lysdahl KB, Booth A, Hofmann B, Wahlster P, Polus S, Burns J, Brereton L, et al. Making sense of complexity in context and implementation: the Context and Implementation of Complex Interventions (CICI) framework. Implement Sci. 2017;12(1):21.
Glasgow RE, Emmons KM. How Can We Increase Translation of Research into Practice? Types of Evidence Needed. Annu Rev Public Health. 2007;28(1):413–33.
Neta G, Glasgow RE, Carpenter CR, Grimshaw JM, Rabin BA, Fernandez ME, Brownson RC. A Framework for Enhancing the Value of Research for Dissemination and Implementation. Am J Public Health. 2015;105(1):49–57.
Stange KC, Glasgow RE. Contextual factors: the importance of considering and reporting on context in research on the patient-centered medical home. Rockville, MD: Agency for Healthcare Research and Quality; 2013. AHRQ Publication No. 13-0045-EF.
Olswang LB, Prelock PA. Bridging the Gap Between Research and Practice: Implementation Science. J Speech Lang Hear Res. 2015;58(6):S1818–26.
Daivadanam M, Ingram M, Sidney Annerstedt K, Parker G, Bobrow K, Dolovich L, Gould G, Riddell M, Vedanthan R, Webster J, et al. The role of context in implementation research for non-communicable diseases: Answering the ‘how-to’ dilemma. PLoS One. 2019;14(4):e0214454.
De Geest S, Zúñiga F, Brunkert T, Deschodt M, Zullig LL, Wyss K, Utzinger J. Powering Swiss health care for the future: implementation science to bridge “the valley of death.” Swiss Med Wkly. 2020;150:w20323.
Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8(1):139.
Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14(1):42.
Craig P, Di Ruggiero E, Frolich KL, Mykhalovskiy E, White M, Campbell R, Cummins S, Edwards N, Hunt K, Kee F. Taking account of context in population health intervention research: guidance for producers, users and funders of research. 2018.
Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8(1):117.
Shoveller J, Viehbeck S, Di Ruggiero E, Greyson D, Thomson K, Knight R. A critical examination of representations of context within research on population health interventions. Crit Public Health. 2016;26(5):487–500.
Zullig L, Deschodt M, Liska J, Bosworth HB, De Geest S. Moving from the Trial to the Real World: Improving Medication Adherence Using Insights of Implementation Science. Annu Rev Pharmacol Toxicol. 2018.
Rogers L, De Brún A, McAuliffe E. Development of an integrative coding framework for evaluating context within implementation science. BMC Med Res Methodol. 2020;20(1):158.
Pfadenhauer LM, Mozygemba K, Gerhardus A, Hofmann B, Booth A, Lysdahl KB, Tummers M, Burns J, Rehfuess EA. Context and implementation: A concept analysis towards conceptual maturity. Z Evid Fortbild Qual Gesundhwes. 2015;109(2):103–14.
Squires JE, Graham I, Bashir K, Nadalin-Penno L, Lavis J, Francis J, Curran J, Grimshaw JM, Brehaut J, Ivers N, et al. Understanding context: A concept analysis. J Adv Nurs. 2019;0(0):1–23.
Rogers L, De Brún A, McAuliffe E. Defining and assessing context in healthcare implementation studies: a systematic review. BMC Health Serv Res. 2020;20(1):591.
Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
Nilsen P, Bernhardsson S. Context matters in implementation science: a scoping review of determinant frameworks that describe contextual determinants for implementation outcomes. BMC Health Serv Res. 2019;19(1):189.
Mettert K, Lewis C, Dorsey C, Halko H, Weiner B. Measuring implementation outcomes: An updated systematic review of measures’ psychometric properties. Implement Res Pract. 2020;1:2633489520936644.
Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, Rycroft-Malone J, Meissner P, Murray E, Patel A. Standards for reporting implementation studies (StaRI) statement. BMJ. 2017;356:i6795.
Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, Rycroft-Malone J, Meissner P, Murray E, Patel A, et al. Standards for Reporting Implementation Studies (StaRI): explanation and elaboration document. BMJ Open. 2017;7(4):e013318.
McHugh S, Dorsey CN, Mettert K, Purtle J, Bruns E, Lewis CC. Measures of outer setting constructs for implementation research: A systematic review and analysis of psychometric quality. Implement Res Pract. 2020;1:2633489520940022.
Leppla L, Mielke J, Kunze M, Mauthner O, Teynor A, Valenta S, Vanhoof J, Dobbels F, Berben L, Zeiser R, et al. Clinicians and patients perspectives on follow-up care and eHealth support after allogeneic hematopoietic stem cell transplantation: A mixed-methods contextual analysis as part of the SMILe study. Eur J Oncol Nurs. 2020;45:101723.
Yip O, Huber E, Stenz S, Zullig LL, Zeller A, De Geest SM, Deschodt M. A contextual analysis and logic model for integrated care for frail older adults living at home: the INSPIRE project. Int J Integr Care. 2021;21(2):9.
Mielke J, Leppla L, Valenta S, Zúñiga F, Zullig LL, Staudacher S, Teynor A, De Geest S. Unravelling implementation context: The Basel Approach for coNtextual ANAlysis (BANANA) in implementation science and its application in the SMILe project. Implement Sci Commun. In press.
Miake-Lye IM, Hempel S, Shanman R, Shekelle PG. What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products. Syst Rev. 2016;5(1):28.
Snilstveit B, Bhatia R, Rankin K, Leach B. 3ie evidence gap maps: a starting point for strategic evidence production and use. 3ie Working Paper 28. New Delhi: International Initiative for Impact Evaluation (3ie); 2017.
Saran A, White H. Evidence and gap maps: a comparison of different approaches. Campbell Syst Rev. 2018;14(1):1–38.
Snilstveit B, Vojtkova M, Bhavsar A, Stevenson J, Gaarder M. Evidence & Gap Maps: A tool for promoting evidence informed policy and strategic research agendas. J Clin Epidemiol. 2016;79:120–9.
Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, Moher D, Peters MDJ, Horsley T, Weeks L, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. The PRISMA-ScR Statement. Ann Intern Med. 2018;169(7):467–73.
Bragge P, Clavisi O, Turner T, Tavender E, Collie A, Gruen RL. The Global Evidence Mapping Initiative: Scoping research in broad topic areas. BMC Med Res Methodol. 2011;11(1):92.
Brown CH, Curran G, Palinkas LA, Aarons GA, Wells KB, Jones L, Collins LM, Duan N, Mittman BS, Wallace A, et al. An Overview of Research and Evaluation Designs for Dissemination and Implementation. Ann Rev Public Health. 2017;38(1):1–22.
Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76.
Hausner E, Waffenschmidt S, Kaiser T, Simon M. Routine development of objectively derived search strategies. Syst Rev. 2012;1(1):19.
Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210.
Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217.
Thomas J, Graziosi S, Brunton J, Ghouze Z, O’Driscoll P, Bond M. EPPI-Reviewer: advanced software for systematic reviews, maps and evidence synthesis. London: EPPI-Centre, UCL Social Research Institute; 2020.
Digital Solution Foundry and EPPI-Centre. EPPI-Mapper. London: EPPI-Centre, UCL Social Research Institute, University College London; 2020.
Apers H, Vuylsteke B, Loos J, Smekens T, Deblonde J, Van Beckhoven D, Nöstlinger C. Development and evaluation of an HIV-testing intervention for primary care: protocol for a mixed methods study. JMIR Res Protoc. 2020;9(8):e16486.
Berhanu D, Okwaraji YB, Belayneh AB, Lemango ET, Agonafer N, Birhanu BG, Abera K, Betemariam W, Medhanyie AA, Abera M. Protocol for the evaluation of a complex intervention aiming at increased utilisation of primary child health services in Ethiopia: a before and after study in intervention and comparison areas. BMC Health Serv Res. 2020;20(1):1–12.
Bidwell P, Thakar R, Sevdalis N, Silverton L, Novis V, Hellyer A, Kelsey M, van der Meulen J, Gurol-Urganci I. A multi-centre quality improvement project to reduce the incidence of obstetric anal sphincter injury (OASI): study protocol. BMC Pregnancy Childbirth. 2018;18(1):1–11.
D’Onofrio G, Edelman EJ, Hawk KF, Pantalo MV, Chawarski MC, Owens PH, Martel SH, VanVeldhuisen P, Ode N, Murphy SM. Implementation facilitation to promote emergency department-initiated buprenorphine for opioid use disorder: protocol for a hybrid type III effectiveness-implementation study (Project ED HEALTH). Implement Sci. 2019;14(1):48.
Hawk KF, D’Onofrio G, Chawarski MC, O’Connor PG, Cowan E, Lyons MS, Richardson L, Rothman RE, Whiteside LK, Owens PH, et al. Barriers and Facilitators to Clinician Readiness to Provide Emergency Department-Initiated Buprenorphine. JAMA Netw Open. 2020;3(5):e204561–e204561.
Grazioli VS, Moullin JC, Kasztura M, Canepa-Allen M, Hugli O, Griffin J, Vu F, Hudon C, Jackson Y, Wolff H, et al. Implementing a case management intervention for frequent users of the emergency department (I-CaM): an effectiveness-implementation hybrid trial study protocol. BMC Health Serv Res. 2019;19(1):28.
von Allmen M, Grazioli VS, Kasztura M, Chastonay O, Moullin JC, Hugli O, Daeppen J-B, Bodenmann P. Does Case Management Provide Support for Staff Facing Frequent Users of Emergency Departments? A Comparative Mixed-Method Evaluation of ED Staff Perception. BMC Emerg Med. 2021;21(1):92.
Chastonay OJ, Lemoine M, Grazioli VS, Canepa Allen M, Kasztura M, Moullin JC, Daeppen J-B, Hugli O, Bodenmann P. Health care providers’ perception of the frequent emergency department user issue and of targeted case management interventions: a cross-sectional national survey in Switzerland. BMC Emerg Med. 2021;21(1):4.
Bodenmann P, Kasztura M, Graells M, Schmutz E, Chastonay O, Canepa-Allen M, Moullin J, von Allmen M, Lemoine M, Hugli O, et al. Healthcare Providers’ Perceptions of Challenges with Frequent Users of Emergency Department Care in Switzerland: A Qualitative Study. Inquiry. 2021;58:00469580211028173.
Hartzler B, Lyon AR, Walker DD, Matthews L, King KM, McCollister KE. Implementing the teen marijuana check-up in schools—a study protocol. Implement Sci. 2017;12(1):1–14.
Johnson K, Gilbert L, Hunt T, Wu E, Metsch L, Goddard-Eckrich D, Richards S, Tibbetts R, Rowe JC, Wainberg ML. The effectiveness of a group-based computerized HIV/STI prevention intervention for black women who use drugs in the criminal justice system: study protocol for E-WORTH (Empowering African-American Women on the Road to Health), a Hybrid Type 1 randomized controlled trial. Trials. 2018;19(1):1–19.
Knight DK, Belenko S, Wiley T, Robertson AA, Arrigona N, Dennis M, Bartkowski JP, McReynolds LS, Becan JE, Knudsen HK. Juvenile Justice—Translational Research on Interventions for Adolescents in the Legal System (JJ-TRIALS): a cluster randomized trial targeting system-wide improvement in substance use services. Implement Sci. 2015;11(1):1–18.
Knight DK, Joe GW, Morse DT, Smith C, Knudsen H, Johnson I, Wasserman GA, Arrigona N, McReynolds LS, Becan JE, et al. Organizational Context and Individual Adaptability in Promoting Perceived Importance and Use of Best Practices for Substance Use. J Behav Health Serv Res. 2019;46(2):192–216.
Kwan BM, Dickinson LM, Glasgow RE, Sajatovic M, Gritz M, Holtrop JS, Nease DE, Ritchie N, Nederveld A, Gurfinkel D. The Invested in Diabetes Study Protocol: a cluster randomized pragmatic trial comparing standardized and patient-driven diabetes shared medical appointments. Trials. 2020;21(1):1–14.
Lakerveld J, Mackenbach JD, De Boer F, Brandhorst B, Broerse JE, De Bruijn G-J, Feunekes G, Gillebaart M, Harbers M, Hoenink J. Improving cardiometabolic health through nudging dietary behaviours and physical activity in low SES adults: design of the Supreme Nudge project. BMC Public Health. 2018;18(1):1–9.
Nahar P, van Marwijk H, Gibson L, Musinguzi G, Anthierens S, Ford E, Bremner SA, Bowyer M, Le Reste JY, Sodi T. A protocol paper: community engagement interventions for cardiovascular disease prevention in socially disadvantaged populations in the UK: an implementation research study. Glob Health Res Policy. 2020;5(1):1–9.
Osilla KC, Becker K, Ecola L, Hurley B, Manuel JK, Ober A, Paddock SM, Watkins KE. Study design to evaluate a group-based therapy for support persons of adults on buprenorphine/naloxone. Addict Sci Clin Pract. 2020;15(1):1–11.
Quintiliani LM, Russinova ZL, Bloch PP, Truong V, Xuan Z, Pbert L, Lasser KE. Patient navigation and financial incentives to promote smoking cessation in an underserved primary care population: A randomized controlled trial protocol. Contemp Clin Trials. 2015;45:449–57.
Rahm AK, Cragun D, Hunter JE, Epstein MM, Lowery J, Lu CY, Pawloski PA, Sharaf RN, Liang SY, Burnett-Hartman AN, et al. Implementing universal Lynch syndrome screening (IMPULSS): protocol for a multi-site study to identify strategies to implement, adapt, and sustain genomic medicine programs in different organizational contexts. BMC Health Serv Res. 2018;18(1):824.
Rotter T, Plishka C, Hansia MR, Goodridge D, Penz E, Kinsman L, Lawal A, O’Quinn S, Buchan N, Comfort P. The development, implementation and evaluation of clinical pathways for chronic obstructive pulmonary disease (COPD) in Saskatchewan: protocol for an interrupted times series evaluation. BMC Health Serv Res. 2017;17(1):1–7.
Sævareid TJL, Lillemoen L, Thoresen L, Førde R, Gjerberg E, Pedersen R. Implementing advance care planning in nursing homes–study protocol of a cluster-randomized clinical trial. BMC Geriatr. 2018;18(1):1–12.
Gjerberg E, Lillemoen L, Weaver K, Pedersen R, Førde R. Advance care planning in Norwegian nursing homes. Tidsskr Nor Laegeforen. 2017;137(6):447–50.
Thoresen L, Ahlzén R, Solbrække KN. Advance Care Planning in Norwegian nursing homes—Who is it for? J Aging Stud. 2016;38:16–26.
Thoresen L, Lillemoen L. “I just think that we should be informed” a qualitative study of family involvement in advance care planning in nursing homes. BMC Med Ethics. 2016;17(1):1–13.
Shanley DC, Hawkins E, Page M, Shelton D, Liu W, Webster H, Moritz KM, Barry L, Ziviani J, Morrissey S, et al. Protocol for the Yapatjarrathati project: a mixed-method implementation trial of a tiered assessment process for identifying fetal alcohol spectrum disorders in a remote Australian community. BMC Health Serv Res. 2019;19(1):649.
Smeltzer MP, Rugless FE, Jackson BM, Berryman CL, Faris NR, Ray MA, Meadows M, Patel AA, Roark KS, Kedia SK. Pragmatic trial of a multidisciplinary lung cancer care model in a community healthcare setting: study design, implementation evaluation, and baseline clinical results. Transl Lung Canc Res. 2018;7(1):88.
Kedia SK, Ward KD, Digney SA, Jackson BM, Nellum AL, McHugh L, Roark KS, Osborne OT, Crossley FJ, Faris N. ‘One-stop shop’: lung cancer patients’ and caregivers’ perceptions of multidisciplinary care in a community healthcare setting. Transl Lung Canc Res. 2015;4(4):456.
Gray CS, Wodchis WP, Upshur R, Cott C, McKinstry B, Mercer S, Palen TE, Ramsay T, Thavorn K. Supporting goal-oriented primary health care for seniors with complex care needs using mobile technology: evaluation and implementation of the health system performance research network, Bridgepoint electronic patient reported outcome tool. JMIR Res Protoc. 2016;5(2):e126.
Steele Gray C, Miller D, Kuluski K, Cott C. Tying eHealth tools to patient needs: exploring the use of eHealth for community-dwelling patients with complex chronic disease and disability. JMIR Res Protoc. 2014;3(4):e3500.
Gray CS, Khan AI, Kuluski K, McKillop I, Sharpe S, Bierman AS, Lyons RF, Cott C. Improving patient experience and primary care quality for patients with complex chronic disease using the electronic patient-reported outcomes tool: adopting qualitative methods into a user-centered design approach. JMIR Res Protoc. 2016;5(1):e28.
Sutherland R, Brown A, Nathan N, Janssen L, Reynolds R, Walton A, Hudson N, Chooi A, Yoong S, Wiggers J. Protocol for an effectiveness-implementation hybrid trial to assess the effectiveness and cost-effectiveness of an m-health intervention to decrease the consumption of discretionary foods packed in school lunchboxes: The ‘SWAP IT’trial. BMC Public Health. 2019;19(1):1–11.
Reynolds R, Sutherland R, Nathan N, Janssen L, Lecathelinais C, Reilly K, Walton A, Wolfenden L. Feasibility and principal acceptability of school-based mobile communication applications to disseminate healthy lunchbox messages to parents. Health Promot J Austr. 2019;30(1):108–13.
Sutherland R, Brown A, Nathan N, Yoong S, Janssen L, Chooi A, Hudson N, Wiggers J, Kerr N, Evans N, et al. A Multicomponent mHealth-Based Intervention (SWAP IT) to Decrease the Consumption of Discretionary Foods Packed in School Lunchboxes: Type I Effectiveness-Implementation Hybrid Cluster Randomized Controlled Trial. J Med Internet Res. 2021;23(6):e25256.
Taylor RS, Hayward C, Eyre V, Austin J, Davies R, Doherty P, Jolly K, Wingham J, Van Lingen R, Abraham C. Clinical effectiveness and cost-effectiveness of the Rehabilitation Enablement in Chronic Heart Failure (REACH-HF) facilitated self-care rehabilitation intervention in heart failure patients and caregivers: rationale and protocol for a multicentre randomised controlled trial. BMJ Open. 2015;5(12):e009994.
Greaves CJ, Wingham J, Deighan C, Doherty P, Elliott J, Armitage W, Clark M, Austin J, Abraham C, Frost J. Optimising self-care support for people with heart failure and their caregivers: development of the Rehabilitation Enablement in Chronic Heart Failure (REACH-HF) intervention using intervention mapping. Pilot Feasibility Stud. 2016;2(1):1–17.
Van Delft LMM, Bor P, Valkenet K, Veenhof C. Hospital in Motion, a multidimensional implementation project to improve patients’ physical behavior during hospitalization: protocol for a mixed-methods study. JMIR Res Protoc. 2019;8(4):e11341.
Van Dongen BM, Ridder MAM, Steenhuis IHM, Renders CM. Background and evaluation design of a community-based health-promoting school intervention: Fit Lifestyle at School and at Home (FLASH). BMC Public Health. 2019;19(1):1–11.
van Dongen BM, de Vries IM, Ridder MAM, Renders CM, Steenhuis IHM. Opportunities for Capacity Building to Create Healthy School Communities in the Netherlands: Focus Group Discussions With Dutch Pupils. Front Public Health. 2021;9.
Verjans-Janssen S, Van Kann DH, Gerards SM, Vos SB, Jansen MW, Kremers SP. Study protocol of the quasi-experimental evaluation of “KEIGAAF”: a context-based physical activity and nutrition intervention for primary school children. BMC Public Health. 2018;18(1):1–12.
Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10(1):53.
Holtrop JS, Estabrooks PA, Gaglio B, Harden SM, Kessler RS, King DK, Kwan BM, Ory MG, Rabin BA, Shelton RC, et al. Understanding and applying the RE-AIM framework: Clarifications and resources. J Clin Transl Sci. 2021;5(1):e126.
Davies P, Walker AE, Grimshaw JM. A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implement Sci. 2010;5(1):1–6.
Tinkle M, Kimball R, Haozous EA, Shuster G, Meize-Grochowski R. Dissemination and Implementation Research Funded by the US National Institutes of Health, 2005–2012. Nurs Res Pract. 2013;2013:15.
Birken SA, Bunger AC, Powell BJ, Turner K, Clary AS, Klaman SL, Yu Y, Whitaker DJ, Self SR, Rostad WL, et al. Organizational theory for dissemination and implementation research. Implement Sci. 2017;12(1):62.
Liang L, Bernhardsson S, Vernooij RW, Armstrong MJ, Bussières A, Brouwers MC, Gagliardi AR. Use of theory to plan or evaluate guideline implementation among physicians: a scoping review. Implement Sci. 2017;12(1):1–12.
Birken SA, Powell BJ, Shea CM, Haines ER, Alexis Kirk M, Leeman J, Rohweder C, Damschroder L, Presseau J. Criteria for selecting implementation science theories and frameworks: results from an international survey. Implement Sci. 2017;12(1):124.
Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging Research and Practice: Models for Dissemination and Implementation Research. Am J Prev Med. 2012;43(3):337–50.
Moullin JC, Sabater-Hernandez D, Fernandez-Llimos F, Benrimoj SI. A systematic review of implementation frameworks of innovations in healthcare and resulting generic implementation framework. Health Res Policy Syst. 2015;13:16.
Kirk MA, Kelley C, Yankey N, Birken SA, Abadie B, Damschroder L. A systematic review of the use of the consolidated framework for implementation research. Implement Sci. 2015;11(1):72.
Watson DP, Adams EL, Shue S, Coates H, McGuire A, Chesher J, Jackson J, Omenka OI. Defining the external implementation context: an integrative systematic literature review. BMC Health Serv Res. 2018;18(1):209.
Ziemann A, Brown L, Sadler E, Ocloo J, Boaz A, Sandall J. Influence of external contextual factors on the implementation of health and social care interventions into practice within or across countries—a protocol for a ‘best fit’ framework synthesis. Syst Rev. 2019;8(1):258.
Albers B, Shlonsky A. When Policy Hits Practice – Learning from the Failed Implementation of MST-EA in Australia. Hum Serv Organ Manag Leadersh Gov. 2020;44(4):381–405.
Bruns EJ, Parker EM, Hensley S, Pullmann MD, Benjamin PH, Lyon AR, Hoagwood KE. The role of the outer setting in implementation: associations between state demographic, fiscal, and policy factors and use of evidence-based treatments in mental healthcare. Implement Sci. 2019;14(1):96.
Allen P, Jacob RR, Parks RG, Mazzucca S, Hu H, Robinson M, Dobbins M, Dekker D, Padek M, Brownson RC. Perspectives on program mis-implementation among U.S. local public health departments. BMC Health Serv Res. 2020;20(1):258.
Padek MM, Mazzucca S, Allen P, Rodriguez Weno E, Tsai E, Luke DA, Brownson RC. Patterns and correlates of mis-implementation in state chronic disease public health practice in the United States. BMC Public Health. 2021;21(1):101.
Nickerson RS. Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Rev Gen Psychol. 1998;2(2):175–220.
Ramanadhan S, Davis MM, Armstrong R, Baquero B, Ko LK, Leng JC, Salloum RG, Vaughn NA, Brownson RC. Participatory implementation science to increase the impact of evidence-based cancer prevention and control. Cancer Causes Control. 2018;29(3):363–9.
Brett J, Staniszewska S, Mockford C, Herron-Marx S, Hughes J, Tysall C, Suleman R. Mapping the impact of patient and public involvement on health and social care research: a systematic review. Health Expect. 2014;17(5):637–50.
Bombard Y, Baker GR, Orlando E, Fancott C, Bhatia P, Casalino S, Onate K, Denis J-L, Pomey M-P. Engaging patients to improve quality of care: a systematic review. Implement Sci. 2018;13(1):98.
Mason RJ, Searle KM, Bombard Y, Rahmadian A, Chambers A, Mai H, Morrison M, Chan KKW, Jerzak KJ. Evaluation of the impact of patient involvement in health technology assessments: A scoping review. Int J Technol Assess Health Care. 2020;36(3):217–23.
Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed Method Designs in Implementation Research. Adm Policy Ment Health. 2011;38(1):44–53.
Beidas RS, Wolk CL, Walsh LM, Evans AC Jr, Hurford MO, Barg FK. A complementary marriage of perspectives: understanding organizational social context using mixed methods. Implement Sci. 2014;9:175.
Palinkas LA. Qualitative and Mixed Methods in Mental Health Services and Implementation Research. J Clin Child Psychol. 2014;43(6):851–61.
Green CA, Duan N, Gibbons RD, Hoagwood KE, Palinkas LA, Wisdom JP. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities. Adm Policy Ment Health. 2015;42(5):508–23.
Palinkas LA, Mendon SJ, Hamilton AB. Innovations in Mixed Methods Evaluations. Ann Rev Public Health. 2019;40(1):423–42.
Gertner AK, Franklin J, Roth I, Cruden GH, Haley AD, Finley EP, et al. A scoping review of the use of ethnographic approaches in implementation research and recommendations for reporting. Implement Res Pract. 2021;2:1–13.
Eldh AC, Rycroft-Malone J, van der Zijpp T, McMullan C, Hawkes C. Using Nonparticipant Observation as a Method to Understand Implementation Context in Evidence-Based Practice. Worldviews Evid Based Nurs. 2020;17(3):185–92.
Haines ER, Kirk MA, Lux L, Smitherman AB, Powell BJ, Dopp A, Stover AM, Birken SA. Ethnography and user-centered design to inform context-driven implementation. Transl Behav Med. 2021.
Mielke J, De Geest S, Zúñiga F, Brunkert T, Zullig LL, Pfadenhauer LM, Staudacher S. Understanding dynamic complexity in context—Enriching contextual analysis in implementation science from a constructivist perspective. Front Health Serv. 2022;2:953731.
Conte KP, Shahid A, Grøn S, Loblay V, Green A, Innes-Hughes C, Milat A, Persson L, Williams M, Thackway S, et al. Capturing implementation knowledge: applying focused ethnography to study how implementers generate and manage knowledge in the scale-up of obesity prevention programs. Implement Sci. 2019;14(1):91.
Haines ER, Dopp A, Lyon AR, Witteman HO, Bender M, Vaisson G, Hitch D, Birken S. Harmonizing evidence-based practice, implementation context, and implementation strategies with user-centered design: a case example in young adult cancer care. Implement Sci Commun. 2021;2(1):45.
Lokker C, McKibbon KA, Wilczynski NL, Haynes RB, Ciliska D, Dobbins M, Davis DA, Straus SE. Finding knowledge translation articles in CINAHL. Medinfo. 2010;160:1179–83.
McKibbon KA, Lokker C, Wilczynski NL, Haynes RB, Ciliska D, Dobbins M, Davis DA, Straus SE. Search filters can find some but not all knowledge translation articles in MEDLINE: An analytic survey. J Clin Epidemiol. 2012;65(6):651–9.
Mielke J, Brunkert T, Zullig LL, Bosworth HB, Deschodt M, Simon M, De Geest S. Relevant Journals for Identifying Implementation Science Articles: Results of an International Implementation Science Expert Survey. Front Public Health. 2021;9:639192.
Aarons GA, Hurlburt M, Horwitz SM. Advancing a Conceptual Model of Evidence-Based Practice Implementation in Public Service Sectors. Adm Policy Ment Health. 2011;38(1):4–23.
Eccles MP, Mittman BS. Welcome to Implementation Science. Implement Sci. 2006;1(1):1.
Wang C, Burris MA. Photovoice: Concept, Methodology, and Use for Participatory Needs Assessment. Health Educ Behav. 1997;24(3):369–87.
Quintiliani LM, Russinova ZL, Bloch PP, Truong V, Xuan Z, Pbert L, Lasser KE. Patient navigation and financial incentives to promote smoking cessation in an underserved primary care population: A randomized controlled trial protocol. Contemp Clin Trials. 2015;45(Pt B):449–57.
Barlo S, Boyd WE, Pelizzon A, Wilson S. Yarning as protected space: principles and protocols. AlterNative. 2020;16(2):90–8.
The authors would like to thank Sarah Musy for her support in developing the empirical search strategy and Mieke Deschodt for providing feedback on the study conceptualization. Further, we would like to thank Chris Shultis for language editing.
Ethics approval and consent to participate
Consent for publication
JM, TB, FZ and MS have no competing interests. LLZ reports research support from Sanofi, Proteus Digital Health, and the PhRMA Foundation, as well as consulting for Novartis. SDG consults for Sanofi and Novartis – all activities are unrelated to the current work.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) Checklist.
Research questions and screening tool inclusion-/exclusion criteria.
Empirical search string development.
STEP 1 - General characteristics of identified implementation intervention studies (n = 110).
STEP 2 - Study characteristics of implementation intervention studies that performed contextual analyses.
Evidence gap map.
Overview of contextual factors identified in implementation intervention studies (mapped according to the Context and Implementation of Complex Interventions (CICI) framework (1)).
About this article
Cite this article
Mielke, J., Brunkert, T., Zúñiga, F. et al. Methodological approaches to study context in intervention implementation studies: an evidence gap map. BMC Med Res Methodol 22, 320 (2022). https://doi.org/10.1186/s12874-022-01772-w