Implementation fidelity in a multifaceted program to foster rational antibiotics use in primary care: an observational study

Abstract

Background

The ARena study (Sustainable Reduction of Antimicrobial Resistance in German Ambulatory Care) is a three-arm, cluster randomized trial to evaluate a multifaceted implementation program in a German primary care setting. In the context of a prospective process evaluation conducted alongside ARena, this study aimed to document and explore fidelity of the implementation program.

Methods

This observational study is based on data generated in a three-wave survey of 312 participating physicians in the ARena program and attendance documentation. Measures concerned persistence of participation in the ARena program and adherence to intervention components (thematic quality circles, e-learning, basic expenditure reimbursements, additional bonus payments and a computerized decision support system). Participants’ views on five domains of the implementation were also measured. Binary logistic and multiple linear regression analyses were used to explore which views on the implementation were associated with participants’ adherence to quality circles and use of additional bonus compensation.

Results

The analysis of fidelity showed overall high persistence of participation in the intervention components across the three intervention arms (90.1%; 97.9%; 92.9%). 96.4% of planned quality circles were delivered to study participants and, across waves, 30.4% to 93% of practices participated; 56.1% of physicians attended the maximum of four quality circles. 84% of the practices (n = 158) with a minimum of one index patient received a performance-based additional bonus payment at least once. In total, bonus compensation was triggered for 51.8% of affected patients. The participation rate for e-learning (a prerequisite for reimbursement of project-related expenditure) covered 90.8% of practices across all intervention arms, with the highest rate in arm II (96.5%). Uptake of expenditure reimbursement was heterogeneous across study arms, with a mean rate of 86.5% (89.1% in arm I, 96.4% in arm II and 74.1% in arm III). Participants’ views regarding participant responsiveness (OR = 2.298, 95% CI [1.598, 3.305]) and context (OR = 2.146, 95% CI [1.135, 4.055]) affected additional bonus payment. Participants’ views on participant responsiveness (Beta = 0.718, 95% CI [0.479, 0.957]), context (Beta = 0.323, 95% CI [0.055, 0.590]) and culture of shared decision-making (Beta = -0.334, 95% CI [-0.614, -0.053]) affected quality circle attendance.

Conclusion

This study showed an overall high fidelity to the implementation program. Participants’ views on the implementation were associated with degree of intervention fidelity.

Trial registration

ISRCTN, ISRCTN58150046.

Introduction

Background

Pragmatic trials are used to inform health policy decision makers about the effectiveness of interventions in healthcare practice [1]. Formative evaluations of such trials can add information to primary and secondary study outcomes, since effect sizes alone do not provide sufficient information about the replicability of trial outcomes [2] or the level of implementation fidelity [3]. Findings need to be contextualized with respect to the feasibility of implementation programs [4]. Only if feasibility is high can observed effects be attributed to the respective program. To understand mechanisms affecting feasibility, investigations of implementation fidelity are indispensable [4]. Factors affecting implementation fidelity can potentially be explored in qualitative research approaches [5,6,7], yet such approaches allow neither statistical testing of associations nor critical consideration of program feasibility with regard to study outcomes. Hence, the present study reports findings of a quantitative fidelity analysis conducted alongside a multifaceted pragmatic trial.

This study is based on a three-armed cluster randomized trial (ARena) designed to sustainably reduce antimicrobial resistance in German ambulatory care [8]. In Germany, about 85% of antibiotics used in human medicine are prescribed in ambulatory care [9]. The most common indications are respiratory tract infections, which are predominantly of viral origin and therefore not susceptible to antibiotics [10]. Multiple reasons have been identified for such inappropriate prescribing patterns: physicians report facing diagnostic uncertainty, demanding patient expectations and a personal desire to be on the safe side in treatment decisions [11,12,13]. Since physicians are aware of this problem, 75% of surveyed resident physicians in Germany wish to receive training that addresses the rational use of antibiotics [13]. Previous approaches to foster this rationality included public awareness campaigns, financial incentivization of rational prescribing behaviour, reliable patient information sources, improvement of patient–provider communication and the provision of point-of-care testing [14,15,16,17,18,19]. Frequently, a combination of the listed interventions promised the highest effects [20]. Nevertheless, relevant data from German ambulatory practices are rare, and findings of implementation programs conducted in other healthcare systems are only partly transferable to primary care settings in Germany. Moreover, a sustained uptake of measures beyond intervention periods has not yet been demonstrated.

The ARena study addressed this gap by providing a standard set of implementation strategies across study arms, comprising e-learning on communication with patients, quality circles (QC) with data-based feedback for physicians, information campaigns for the public, patient information material and performance-based additional bonus compensation. QCs have been widely adopted, and participation rates of primary care physicians in Europe have increased substantially over the last decades [21,22,23,24]. Initially used to support continuing medical education, QCs are nowadays mainly applied for quality improvement purposes [23]. In this respect, QCs are intended to foster guideline-oriented prescribing patterns and to support desired change of outdated routines. Yet, effects meeting these targets are heterogeneous within and across studies and cannot yet be considered conclusive [25,26,27,28].

The performance-based additional compensation in ARena was designed as a bonus payment system based on antibiotic prescribing, similar to Heider & Mang [29]. It thus needs to be distinguished from pay-for-performance systems, in which additional reimbursements are paid for reaching predefined thresholds of quality indicators. Systematic reviews addressing bonus payments have been conducted in the context of smoking cessation [30] and of increasing the uptake of breast, cervical and colorectal cancer screening [31]. Both reviews included studies of moderate quality, and the inconsistency of results did not permit a conclusion about additional bonus compensation. No research investigating effects of additional bonus compensation on the rational use of antibiotics was identified. The fidelity analysis reported in this study explored the overall participation in the implementation program across all study arms. A particular focus was put on the two key program components of QCs and additional bonus compensation, since these were distinctive features in comparison with other research efforts regarding the rational use of antibiotics conducted at the same time in German primary care [32].

Objective

The aim of this study was to document and explore fidelity to an implementation program embedded in a multifaceted cluster randomized trial in a two-step approach: (1) description of participants’ engagement with intervention components and of the perceived domains affecting fidelity; (2) exploration of the associations between engagement with intervention components and perceptions of these domains.

Methods

Theoretical conceptualization

Fidelity describes the degree to which an intervention was delivered as intended [4]. The most widely used framework [33] distinguishes between adherence and moderator domains to provide a cause–effect principle explaining fidelity measures. Following this understanding, adherence is defined as a bottom-line measurement describing the dose and content of an implementation program. If an intervention completely adheres to the study protocol, fidelity can be rated high. To understand mechanisms affecting adherence measures, factors that affect the level of fidelity need to be identified. In this study, these factors originated from five self-reported domains describing participants’ views on the implementation. Figure 1 comprises the elements of adherence and the considered domains. Since the framework on implementation fidelity has continuously been extended, this analysis included the additional domain of ‘context’ introduced by Hasson [34]. The characterization of dose in quality improvement measurements provided by McHugh et al. [35] was also incorporated into the theoretical model for the present study.

Fig. 1

Modified conceptual framework of implementation fidelity

Study design of the ARena trial

The ARena implementation program was designed as a three-armed, non-blinded cluster randomized trial with an added cohort reflecting standard care. Randomization was performed by the Institute of Medical Biometry at the University Hospital Heidelberg. The implementation program was organized by the aQua Institut, Goettingen, and embedded into 14 primary care networks (PCN) in two federal states (Bavaria and North Rhine-Westphalia) in Germany. PCNs are regional associations of primary care practices that aim to facilitate quality improvement initiatives, represent member interests toward health insurance companies and reimburse additional activities for member practices [36]. In order to understand the role of PCNs in the dissemination of the implementation program, this level of randomization was chosen for the primary outcome analysis. The implementation program consisted of different components applied to each of the three study arms. Arm I received a standard set comprising a public information campaign, patient information material, e-learning addressing physician–patient communication, thematically relevant QCs (common respiratory tract infections (CRTI), urinary tract infections (UTI), community-acquired pneumonia (CAP), multi-resistant pathogens (MRP)) containing data-based feedback for physicians, and the performance-based bonus. Arm II received the standard set plus e-learning modules addressing patient communication and QCs targeting non-physician health professionals, as well as patient information material provided via tablet devices. Arm III received the standard set, a computerized decision support system integrated into existing practice management software, and multidisciplinary QCs in local groups. All participating practices could receive reimbursement for project-related expenditure. A detailed display of the study design is provided in the study protocol [8].

The intervention period encompassed 21 months. In total, 196 practices with 312 physicians and 99 medical assistants (MA) participated. The statutory health insurer AOK (a public organization of statutory health insurance) provided routinely collected claims data referring to consultations for non-complicated infections in the intervention arms and in the added cohort reflecting standard care in Bavaria and North Rhine-Westphalia. The study design and detailed sample size descriptions of the ARena trial are illustrated in Additional File 1, Supplementary Fig. 1. Detailed sociodemographic characteristics of included cases, the sample size calculation, information about relevant data protection as well as outcomes of the ARena trial regarding a sustainable reduction of antimicrobial resistance in German ambulatory care have been reported elsewhere [37, 38]. All routinely collected claims data relevant for the ARena trial were stored on secure servers at the aQua Institut, Goettingen, Germany, and were analyzed by a qualified statistician.

Study design of the process evaluation

The ARena trial was accompanied by a process evaluation (PE) intended to elucidate working mechanisms affecting primary and secondary outcomes as well as to determine the level of fidelity to the program [8]. The PE was designed as a prospective observational study and conducted with a mixed-methods approach comprising a longitudinal survey study and an interview study. The survey study consisted of written questionnaires targeting participating physicians of study arms I, II and III and participating MAs of study arm II. For each intervention arm, a tailored questionnaire was developed. Data collection took place at three points in time (T0-T2). The interview study targeted participating physicians, MAs and stakeholder representatives of PCN managements, health insurance providers, the association of statutory health insurance physicians and self-help organisations. Additionally, implementers documented overall participation over the course of the study, utilization of e-learning and the computerized decision support system (CDSS), attendance at QCs, and reimbursement for project- and patient-related expenditures. The present study was based on survey data collected during the PE and on the additional documentation (attendance data). Findings of the PE analyses have been reported elsewhere [39,40,41]. Figure 2 summarizes the study design and sample size of the PE.

Fig. 2

Study design and number of participants in the process evaluation

Study population

An extensive description of the study population of the ARena trial is provided in the protocol [8]. To be eligible for participation in the PE, practices needed to be enrolled in one of the 14 participating PCNs and had to be allocated to one of the three intervention arms. Physicians had to represent one of the medical specialist groups of general practitioners, internists, gynecologists, ear-nose-throat specialists, urologists, pulmonary specialists or pediatricians. MAs eligible for participation in the PE were employees of participating practices. Across participant groups, further inclusion criteria were written and spoken German language skills, 18 years of age or older and a written declaration of consent to participate in the study. No additional exclusion criteria were assigned.

Recruitment and sampling for the survey study

The PE followed a voluntary response sampling strategy. By signing the consent form of the ARena trial, participants also consented to participate in the PE. The Department of General Practice and Health Services Research at the University Hospital Heidelberg compiled a cover letter and written information material detailing the procedures and aim of the PE. The Department’s ARena study team developed the study-specific survey questionnaires based on the Theory of Planned Behaviour [42]. The questionnaire items were used to gain insights into the impact of the intervention components and contextual factors [8]. The aQua Institut (Goettingen) led the project and thus contacted the enrolled practices and sent the survey questionnaires by mail. After four weeks, e-mail reminders were sent to increase response rates.

Data collection and measures

Survey data

Participants’ views on the implementation and their engagement in the key components of ARena (QCs and additional bonus compensation) were explored in a self-reported questionnaire. Questionnaires were dispatched to participants in January 2018 (T0), October 2018 (T1) and July 2019 (T2). All questionnaires focused on adherence to intervention components and views on the implementation. The T1 and T2 questionnaires additionally asked for intermediate and final conclusions regarding the assessment of intervention components. Completed questionnaires were returned and registered by the ARena study team at the Department of General Practice and Health Services Research, University Hospital Heidelberg, between February and April 2018 (T0), November 2018 and January 2019 (T1) and July and September 2019 (T2). Received questionnaires were digitalized and transferred into IBM SPSS Statistics 24. The survey items included for each domain of participant views are listed in Additional File 2, Supplementary Table 1.

The participants’ views on the implementation were measured in five domains: 1) ‘Participant responsiveness’ contained items regarding the respondents’ perceptions about the usefulness of components and their potential to facilitate new impulses in the context of rational antibiotics use. 2) ‘Quality of delivery improvement’ referred to reflections about the extent to which the ARena participation supported guideline-oriented prescribing patterns and fostered security in therapeutic decisions. 3) ‘Contextual facilitators’ referred to the role of PCNs in optimizing patient care as they were seen as a major design element of the ARena study. 4) ‘Positive antibiotic attributions’ considered physicians’ perceptions about positive ancillary effects of antibiotic use such as reduced consultation time. 5) ‘The culture of shared decision-making’ (SDM) score reflected the respondents’ integration of patient- and peer-views into therapeutic decisions.

Attendance and use of financial bonus

Adherence to QCs, e-learning, the CDSS and basic expenditure reimbursements was identified from documented attendance data. Triggered additional bonus payments were identified from the claims data. Overall, data of 196 practices were collected by the aQua Institut over the intervention period of 21 months between October 2017 and June 2019. Variables regarding participant attendance at QC meetings were reported on practice level and were collected at the respective events. Attendance data were documented in Microsoft Excel 2019 and subsequently transferred to IBM SPSS Statistics 26. Variables providing information about the additional bonus compensation were collected using the claims data aggregated on practice level.

Adherence was subcategorized into the domains of content and dose. Indicators representing content were collected exclusively for the additional bonus compensation component. Indicators representing dose, which was further split into the domains of exposure and engagement, were collected for both the QC and the additional compensation components.

Statistical analyses

Intervention fidelity was explored based on the survey and attendance data. Indicators were developed to map participants’ engagement in five intervention components. The descriptive analysis explored absolute and relative frequencies on physician and practice level in the intervention arms. Sociodemographic factors, adherence data and participants’ views on the implementation were analyzed descriptively. For continuous variables, means, medians, minima/maxima and standard deviations were provided; for categorical and ordinal variables, absolute and relative frequencies were reported. Survey items were based on a 5-point Likert scale ranging from “Strongly Disagree” to “Strongly Agree”. Items representing one domain of participants’ views on the implementation were scored using mean values and tested for internal consistency using Cronbach’s alpha. To explore associations between variables of interest, bivariate correlations between dependent variables (engagement in additional bonus compensation; engagement in QC themes) and independent variables (participant responsiveness; quality of delivery; context; culture of SDM; positive AB attribution) were determined by calculating Pearson and Spearman correlation coefficients; these guided the variable selection for the subsequent regression analyses.
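The scoring steps above (mean-value domain scores, Cronbach’s alpha, Pearson/Spearman correlations) can be sketched as follows. This is an illustrative example on synthetic data with hypothetical item and variable names; the original analyses were run in IBM SPSS, so this is not the study’s code:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Synthetic 5-point Likert data: four hypothetical items of one view domain,
# plus a hypothetical engagement variable (number of QC themes attended, 0-4).
rng = np.random.default_rng(42)
latent = rng.normal(0.0, 1.0, 200)  # shared underlying "view" per respondent
items = pd.DataFrame({
    f"item{i}": np.clip(np.round(3 + latent + rng.normal(0, 0.7, 200)), 1, 5)
    for i in range(1, 5)
})
attendance = pd.Series(np.clip(np.round(2 + latent + rng.normal(0, 1.0, 200)), 0, 4))

domain_score = items.mean(axis=1)          # mean-value scoring per respondent
alpha = cronbach_alpha(items)              # internal consistency of the domain
r_pearson = domain_score.corr(attendance)  # bivariate Pearson correlation
r_spearman = domain_score.corr(attendance, method="spearman")
```

Because the four items here share a common latent factor, the domain score is internally consistent (alpha well above the lowest value of 0.423 reported in the Results) and correlates positively with the simulated engagement variable.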

A binary logistic regression model was used to identify directional associations between engagement in additional bonus compensation (outcome variable) and the five domains of participant views on the implementation (predictor variables). A multiple linear regression model was computed for the association between engagement in the four QC themes (outcome variable) and the five domains of participant views (predictor variables). Predictor variables were treated as metric and tested for multicollinearity with a threshold of r ≥ 0.7. Missing values were marked accordingly and excluded from the analyses. Effect sizes were reported as odds ratios (OR) and Beta coefficients with 95% confidence intervals. To provide information about data accuracy, confidence intervals, standard errors and the coefficients of determination R² were listed. All models were adjusted for age, sex and intervention arm affiliation. Additional multilevel analyses were conducted to account for the hierarchical data structure, with practices representing the random effect (MIXED and GENLIN estimations). Owing to a high loss of cases when linking attendance and survey data, outcome variables of the estimation models were based on self-reports from the T2 survey only. The level of significance was set at p ≤ 0.05. P-values were of an explorative nature, as a pre-determined statistical power calculation for this analysis was not feasible.

Results

Adherence to the ARena implementation program

In total, data from 196 participating practices (312 physicians; 78.2% GPs) were collected. 290 physicians (92.9%) continuously participated in ARena over the intervention period. The drop-out rate of included practices as observed in the process evaluation was 4.1% at the end of the intervention period. Indicators describing fidelity to the ARena implementation program are provided in Additional File 2, Supplementary Table 2.

The analysis showed continuous physician participation in the intervention components across all intervention arms, with 90.1% in arm I, 97.9% in arm II and 92.9% in arm III. 96.4% (n = 54) of the planned QCs could be delivered, reaching between 30.4% and 93% of participants. The maximum of four QCs was attended by 51.6% of the physicians. The participation rate for the e-learning component was 90.8% across all intervention arms on practice level, with the highest rate in arm II (96.5%). Completing the e-learning component qualified 177 practices for the basic expenditure reimbursement meant to cover project-related additional expenditure. Uptake of this reimbursement was fairly heterogeneous, with a mean rate of 86.5% (89.1% in arm I, 96.4% in arm II and 74.1% in arm III). In intervention arm III, 51% of practices utilized the offered computerized decision support system (CDSS). A total of 88.4% of physicians (84% of the practices, n = 158) with a minimum of one index patient received the additional bonus compensation at least once. The descriptive analysis indicated that fidelity was highest in intervention arm II. Table 1 describes findings regarding continuous participation, utilization of e-learning and the CDSS, reimbursement of project-related expenditure and participation in QCs calculated on physician level, and findings referring to the additional bonus compensation based on practice level.

Table 1 Adherence to intervention components

Adherence to key components of the ARena implementation program

At this stage, the descriptively reported adherence measures represent the entire study sample (physicians n = 290; practices n = 196). The indicators representing exposure yielded the highest scores. In Fig. 3, the number of attendees for each of the four QC themes is depicted on physician and practice level.

Fig. 3

Attendance at QC themes. *Physicians (N = 290); practices (N = 195)

Of 177 entitled practices, 158 (89.3%) received at least one additional bonus payout. Regarding the content of the additional compensation intervention, a mean of 51.8% of the maximum bonus size per index patient was achieved. Figure 4 illustrates the engagement in additional bonus compensation reimbursements on practice level and provides the number of practices in which the bonus was triggered per quarter of the intervention period. Over the first four quarters, an increase from 68 to 91 practices was observed, which slightly dropped to 85 practices in quarter seven. Additional bonus payment was triggered for 11.3% (N = 22) of the practices in each of the seven quarters; for 17.9% (N = 35) it was triggered once, and for 19% (N = 37) of practices it was never triggered. Additional File 2, Supplementary Table 3 provides the number of practices per number of quarters in which additional bonus payment was triggered.

Fig. 4

Engagement in additional bonus compensation over the intervention period. *Practice level (N = 195)

Participant views on implementation

Measures representing the five domains of participants’ views on the implementation arose from the T2 questionnaire of the ARena survey study. At this measurement point, 63% (N = 184) of invited physicians responded, of whom 30.9% were female. Participants had a mean age of 54.2 years (SD = 7.9) and 26.4 years of occupational experience (SD = 7.9). Each of the 14 PCNs included in the ARena study was represented by two to 32 physicians, and each practice was represented by one to four physicians. Sample characteristics by intervention arm affiliation are shown in Table 2.

Table 2 Sample characteristics of survey participants (T2)

The reliability of items within the scores varied between α = 0.423 and α = 0.914. The highest level of agreement was detected in the context domain, representing the role of PCNs in the ARena program (mean = 4.1, SD = 0.8). The lowest level of agreement was identified in the domain of positive attributions to AB prescribing (mean = 2.6, SD = 0.9). Regarding multicollinearity of the scores, correlations varied between r = 0.15 and r = 0.61. Comprehensive descriptive statistics of the domain scores representing participants’ views on the implementation are provided in Additional File 2, Supplementary Table 4.

Participant views affecting key component engagement

Bivariate correlations between engagement in the additional bonus compensation scheme and participant views were identified for participant responsiveness (r = 0.399), context (r = 0.261) and quality of delivery (r = 0.170). The regression analyses showed significant effects of participant responsiveness, context and intervention arm affiliation. The explained variance was Nagelkerke’s R² = 0.355. The correlation within clusters (subject = practices) was assessed by the intraclass correlation coefficient (ICC) and was ICC = 0.091. In the multilevel regression analysis, effect sizes stayed consistent, but the levels of significance for the context domain and intervention arm affiliation decreased slightly. Detailed parameter estimates are provided in Table 3 and Additional File 2, Supplementary Table 5. Regarding engagement in the four offered QC themes, bivariate correlations were identified for the domains of participant responsiveness (r = 0.508), context (r = 0.351) and quality of delivery (r = 0.187). The regression analyses showed significant effects in the domains of participant responsiveness, context and culture of SDM. The adjusted R² of the multiple linear regression model was R² = 0.299. The intracluster correlation was ICC = 0.16. In the additionally conducted multilevel model, changes in effect sizes and levels of significance regarding context and culture of SDM were marginal, while the effect sizes of participant responsiveness were marginally higher. Extensive reporting of estimates is provided in Table 4 and Additional File 2, Supplementary Table 6.

Table 3 Estimates regarding engagement in additional bonus compensation
Table 4 Estimates regarding engagement in QC themes

Discussion

This study investigated fidelity to the implementation program components QCs and additional bonus compensation, which were provided across all three ARena intervention arms. The overall fidelity to the quality improvement program in ARena was exceptionally high. This may be related to the particular setting of the ARena trial in PCNs, which was considered a supportive setting for the effort to promote rational antibiotic prescribing. For both QCs and the additional bonus compensation program, a positive attribution to PCN membership was a promoting force of intervention engagement. A previous qualitative study conducted within the ARena PE identified various factors that may have contributed to these impacts [39]. In particular, peer exchange opportunities, social support, promotion of self-reflection and consolidation of knowledge were accelerators of care improvement.

Focusing on participants’ engagement in QC themes, the highest attendance rates were measured for CRTI themes, followed by UTI and CAP subjects. QCs regarding MRP issues were least attended, as this issue may not be as relevant for the outpatient care setting. Although attendance rates in ARena can be considered high, they were noticeably lower than those observed in previously conducted QC initiatives targeting prescribing patterns of primary care physicians in Germany [28], which may be due to mechanisms of bundled interventions. Established QC movements have been shown to facilitate engagement in this intervention [23]. The German National Association of Statutory Health Insurance Physicians reported that 8,400 QCs were conducted in outpatient care in 2018 [43]. Since these observations indicate familiarity of German resident physicians with this delivery mode of quality improvement initiatives, it has presumably been one explanation for the high QC attendance rates observed in ARena. However, research points to unstandardized procedures of this complex intervention, which still impedes high-quality recommendations on the effectiveness of QCs [25]. Regarding the benchmarking procedures provided in QC sessions, results of a cluster RCT indicate that e-mails comparing antibiotic prescribing rates with those of local peers produced small declines in prescribing rates [44]. Therefore, providing benchmarking data via e-mail may be a convenient approach to reduce participation hurdles and to further tailor interventions. In contrast to our finding that a culture of SDM impeded engagement in QC themes, positive effects of SDM on rational antibiotic prescribing itself are reported in a systematic review [45].

Considering exposure to the additional bonus compensation, 88.4% of physicians received compensation at least once for appropriate prescribing. However, during the seven quarters in which the additional bonus was offered, only 11.3% of practices triggered the payment continuously, which is surprising in light of the view, dominant in some policy debates, that financial compensation is required for quality improvement. Explanations for the inconsistency of engagement in additional compensation over the intervention period did not emerge from the available data. Findings of the PE conducted alongside ARena indicate that interviewed physicians did interpret additional compensation as one key to generating behavior change. However, additional compensation might only be an incentive to participate in a study, and of lesser importance after this decision is taken [41]. Jan et al. [46] also identified increased administrative workloads and inadequate understanding of performance-based payment contents as principal reasons for family practitioners’ reluctance to engage. These aspects are not echoed in this study but can be considered as explanations. Moreover, the likelihood of additional bonus payment programs showing desirable effects is reported to be three times higher with larger incentives [47]. Since reimbursements in ARena were proportionately small, this may also explain the heterogeneity. Generally, research indicates that German physicians widely voice concerns about establishing additional bonus compensation, since they fear ethical conflicts between monetary interests and patient safety, constraints on the autonomy of the medical profession, and a loss of autonomy to statutory health insurance funds [48]. Therefore, it could be beneficial to develop additional bonus compensation programs for primary care with theory-driven framework approaches in which the needs of target groups are accounted for [49].

Previously conducted fidelity analyses of multifaceted effectiveness studies were heterogeneous in theory and design. For instance, a back pain prevention program among nursing aides, introduced in a stepped-wedge trial, was evaluated with a quantified, theory-driven approach using logbooks and one-time questionnaires, focusing on the domains of participation, exposure and responsiveness [50]. Notably, that study treated the evaluation of fidelity as a domain of separate interest rather than as a construct subordinate to the listed domains. In a health promotion study fostering physical activity of patients in Dutch rehabilitation centers, implementation fidelity was assessed with a longitudinal survey design in order to detect time trends in fidelity scores [51]. Accompanied by qualitative data collection, organizational and professional differences between clustered centers were explored. A further quantitative fidelity analysis utilized attendance lists, checklists, worksheets, exit surveys and expert observations to allow a sophisticated view of a team-based, behavioral intervention among care aides [52]. With a focus on rational antibiotic prescribing, research on antimicrobial stewardship (AMS) programs in British community healthcare organizations applied a web-based survey strategy to gain insight into engagement with stewardship toolkits that formed part of a five-year cross-governmental awareness strategy [53]. In a cluster randomized controlled trial that aimed to promote AMS among British community pharmacies to improve the management of respiratory tract infections, an accompanying process evaluation was conducted [54] with a cross-sectional survey design based on the COM-B (capability, opportunity, motivation and behavior) model [55].

This sketch of previously conducted quantitative fidelity studies highlights the need for tailored concepts that meet the specific conditions of the respective implementation programs. The programs portrayed were mostly limited to descriptive score analyses, in some cases extended to mixed model procedures that examined the variance of primary trial outcomes between clusters. Thematically, fidelity analyses of programs regarding rational antibiotic use were scarce. No investigations considering a dose-response relationship between engagement with specific interventions and primary study outcomes were identified. Nevertheless, statistical modelling of intervention dosage against primary outcome response appears beneficial, not only to detect fidelity to study protocols but also to adapt programs so that the best possible effect sizes can be achieved. To attain this goal, a discussion about standardized theoretical conceptualization and appropriate data sources may be useful. On the basis of this study and the referenced approaches, a mix of methods comprising attendance data, self-reports and qualitative investigations seems necessary to examine the theoretically driven concept of fidelity comprehensively and to offer practical recommendations for future implementation adjustments.
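In principle, such a dose-response analysis could take the form of a logistic regression of a binary practice-level outcome on intervention dosage, for example the number of QC sessions attended. The following minimal sketch illustrates this on synthetic data; the variable names, the assumed data-generating parameters and the plain gradient-descent fit are illustrative assumptions only and are not part of the ARena analysis.

```python
import math
import random

def fit_logistic(doses, outcomes, lr=0.3, epochs=4000):
    """Fit P(outcome = 1) = sigmoid(b0 + b1 * dose) by plain gradient descent."""
    b0, b1 = 0.0, 0.0
    n = len(doses)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(doses, outcomes):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y          # gradient w.r.t. intercept
            g1 += (p - y) * x    # gradient w.r.t. slope
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Synthetic example: dose = number of QC sessions attended (0-4),
# outcome = whether a practice reached a hypothetical prescribing target.
random.seed(42)
doses = [random.randint(0, 4) for _ in range(400)]
true_b0, true_b1 = -1.5, 0.6  # assumed data-generating parameters
outcomes = [
    1 if random.random() < 1.0 / (1.0 + math.exp(-(true_b0 + true_b1 * x))) else 0
    for x in doses
]

b0, b1 = fit_logistic(doses, outcomes)
odds_ratio = math.exp(b1)  # estimated odds ratio per additional QC session
print(f"intercept={b0:.2f}, slope={b1:.2f}, OR per session={odds_ratio:.2f}")
```

In an actual analysis, a mixed-effects model clustered by practice (as used elsewhere in this study) and confidence intervals for the odds ratio would be required; the sketch only shows the basic dose-response shape of such a model.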

Strengths and limitations

This study strengthens the appraisal of the ARena program with regard to intervention fidelity and feasibility of implementation. A profound theoretical conceptualization made it possible to quantify associations between participants’ views on the implementation and their engagement with the intervention. Such approaches guide the further development of programs and create opportunities for adaptation. Combining attendance data with survey data ensured data triangulation and supported a holistic view of fidelity to the core components of the ARena program.

Some limitations must be reported. Initially, it was intended to match data of the present fidelity analysis with data of the primary outcome analysis at the practice level, as this was expected to generate additional insights into favorable dosages of intervention components for achieving the best possible outcomes. However, German data protection law did not permit this type of data linkage. The engagement data used for regression modelling originated from self-reported survey data, which implies a risk of social desirability bias. Since this study was explorative in design, uncertainties remain regarding the construct validity of the calculated scores reflecting domains of participant views and the statistical power afforded by the sample sizes. Moreover, the absence of qualitative data integration prevents explanations of some results: the reasons for the low uptake of bonus payments could not be observed. Notably, the term fidelity implies the notion of a ‘gold standard’ in program implementation. This can be misleading, since adjustments must be considered reasonable for achieving the best possible results under real-life conditions and must therefore be respected in final assessments.

Conclusion

This study explored the intervention fidelity of a complex intervention in real-world healthcare practice. Linking reported engagement with intervention components to perceptions across domains of views on implementation facilitates the explanation of variation in effects and contributes to the development of tailored implementation programs in the future. The fidelity analysis indicates robust feasibility of the completed ARena implementation program and overall fidelity to it, in particular regarding the QC and bonus compensation components, which represented core elements of this complex intervention. Engagement with study components was facilitated by positive attributions towards PCN membership and by responsiveness to the interventions. In QCs, an emphasis on SDM was negatively associated with engagement. These insights support further tailoring of complex interventions in the context of rational antibiotic use in German ambulatory care. Future investigations should consider dose-response calculations to adapt the frequency of exposure to interventions. Going forward, efforts towards standardized quantitative fidelity analyses will support both a holistic view of this concept and the comparability of evaluations.

Availability of data and materials

Due to data protection regulations under German law and the requirements of the data provider, the original data sets cannot be made accessible. All analyses conducted in this study are provided in Additional File 1.

Abbreviations

AB: Antibiotic

abc: Additional bonus compensation

AMS: Antimicrobial stewardship

AOK: Public organization of statutory health insurance

ARena: Sustainable reduction of antibiotic-induced antimicrobial resistance

AUI: Acute uncomplicated infections

CAP: Community-acquired pneumonia

CDSS: Computerized decision support system

CRTI: Common respiratory tract infections

ICC: Intraclass correlation coefficient

IQR: Interquartile range

MA: Medical assistant

MCAR: Missing completely at random

MRP: Multi-resistant pathogens

OR: Odds ratio

PCN: Primary care network

PE: Process evaluation

SDM: Shared decision-making

UTI: Urinary tract infection

QC: Quality circle

Q1: Quarter one

Q2: Quarter two

References

  1. Zwarenstein M, Treweek S. What kind of randomized trials do we need? Can Med Assoc J. 2009;180(10):998–1000.

  2. Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, Moore L, O’Cathain A, Tinati T, Wight D, Baird J. Process evaluation of complex interventions: medical research council guidance. BMJ. 2015;19(350):h1258. https://doi.org/10.1136/bmj.h1258.

  3. Ware JH, Hamel MB. Pragmatic trials—guides to better patient care. N Engl J Med. 2011;364(18):1685–7.

  4. Dusenbury L, Brannigan R, Falco M, Hansen WB. A review of research on fidelity of implementation: implications for drug abuse prevention in school settings. Health Educ Res. 2003;18(2):237–56.

  5. Dillman Taylor D, Kottman T. Assessing the utility and fidelity of the adlerian play therapy skills checklist using qualitative content analysis. Int J Play Ther. 2019;28(1):13.

  6. Kimber M, Barac R, Barwick M. Monitoring fidelity to an evidence-based treatment: practitioner perspectives. Clin Soc Work J. 2019;47(2):207–21.

  7. Scantlebury A, Cockayne S, Fairhurst C, Rodgers S, Torgerson D, Hewitt C, et al. Qualitative research to inform hypothesis testing for fidelity-based sub-group analysis in clinical trials: lessons learnt from the process evaluation of a multifaceted podiatry intervention for falls prevention. Trials. 2020;21(1):348.

  8. Kamradt M, Kaufmann-Kolle P, Andres E, Brand T, Klingenberg A, Glassen K, et al. Sustainable reduction of antibiotic-induced antimicrobial resistance (ARena) in German ambulatory care: study protocol of a cluster randomised trial. Implement Sci. 2018;13(1):1–10.

  9. GERMAP 2015 Antibiotika-Resistenz und -Verbrauch. Rheinbach: Antiinfectives Intelligence; 2016. Available under: https://www.bvl.bund.de/SharedDocs/Downloads/05_Tierarzneimittel/germap2015.pdf?__blob=publicationFile&v=4. Retrieved on 11/06/2020.

  10. Antão E-M, Wagner-Ahlfs C. Antibiotikaresistenz. Bundesgesundheitsblatt - Gesundheitsforschung - Gesundheitsschutz. 2018;61(5):499–506.

  11. Altiner A, Bell J, Duerden M, Essack S, Kozlov R, Noonan L, et al. More action, less resistance: report of the 2014 summit of the Global Respiratory Infection Partnership. Int J Pharm Pract. 2015;23(5):370–7.

  12. Altiner A, Berner R, Diener A, Feldmeier G, Köchling A, Löffler C, et al. Converting habits of antibiotic prescribing for respiratory tract infections in German primary care–the cluster-randomized controlled CHANGE-2 trial. BMC Fam Pract. 2012;13(1):1–7.

  13. Kötter J. Viele wollen auf „sicherer Seite“ sein. Berlin: Ärztezeitung; 2016. Available under: https://www.aerztezeitung.de/Politik/Viele-wollen-auf-sicherer-Seite-sein-296738.html. Retrieved on 8/26/2020.

  14. Burstein VR, Trajano RP, Kravitz RL, Bell RA, Vora D, May LS. Communication interventions to promote the public’s awareness of antibiotics: a systematic review. BMC Public Health. 2019;19(1):899. https://doi.org/10.1186/s12889-019-7258-3.

  15. Cals JW, Schot MJ, de Jong SA, Dinant G-J, Hopstaken RM. Point-of-care C-reactive protein testing and antibiotic prescribing for respiratory tract infections: a randomized controlled trial. Ann Fam Med. 2010;8(2):124–33.

  16. de Bont EG, Alink M, Falkenberg FC, Dinant G-J, Cals JW. Patient information leaflets to reduce antibiotic use and reconsultation rates in general practice: a systematic review. BMJ Open. 2015;5(6):e007612. https://doi.org/10.1136/bmjopen-2015-007612.

  17. Ellegård LM, Dietrichson J, Anell A. Can pay-for-performance to primary care providers stimulate appropriate use of antibiotics? Health Econ. 2018;27(1):e39–54. https://doi.org/10.1002/hec.3535 Epub 2017 Jul 7.

  18. Huttner B, Goossens H, Verheij T, Harbarth S. Characteristics and outcomes of public campaigns aimed at improving the use of antibiotics in outpatients in high-income countries. Lancet Infect Dis. 2010;10(1):17–31. https://doi.org/10.1016/S1473-3099(09)70305-6.

  19. Schoenthaler A, Albright G, Hibbard J, Goldman R. Simulated conversations with virtual humans to improve patient-provider communication and reduce unnecessary prescriptions for antibiotics: a repeated measure pilot study. JMIR Med Educ. 2017;3(1):e7. https://doi.org/10.2196/mededu.6305.

  20. Kern WV. Rationale Antibiotikaverordnung in der Humanmedizin. Bundesgesundheitsblatt - Gesundheitsforschung - Gesundheitsschutz. 2018;61(5):580–8.

  21. Dowling S, Finnegan H, Collins C. Does participation in CME SLG (small group learning) influence medical practice? The experience of general practitioners attending CME SLG after the introduction of the Medical Practitioners Act. 2015.

  22. Kjaer N, Steenstrup A, Pedersen L, Halling A. Continuous professional development for GPs: experience from Denmark. Postgrad Med J. 2014;90(1065):383–7. https://doi.org/10.1136/postgradmedj-2012-131679 Epub 2014 May 26.

  23. Rohrbasser A, Kirk UB, Arvidsson E. Use of quality circles for primary care providers in 24 European countries: an online survey of European Society for Quality and Safety in family practice delegates. Scand J Prim Health Care. 2019;37(3):302–11. https://doi.org/10.1080/02813432.2019.1639902 Epub 2019 Jul 12.

  24. Verstappen WH, van der Weijden T, Dubois WI, Smeele I, Hermsen J, Tan FE, et al. Improving test ordering in primary care: the added value of a small-group quality improvement strategy compared with classic feedback only. Ann Fam Med. 2004;2(6):569–75.

  25. Rohrbasser A, Harris J, Mickan S, Tal K, Wong G. Quality circles for quality improvement in primary health care: their origins, spread, effectiveness and lacunae–a scoping review. PLoS One. 2018;13(12):e0202616. https://doi.org/10.1371/journal.pone.0202616.

  26. Spiegel W, Mlczoch-Czerny MT, Jens R, Dowrick C. Quality circles for pharmacotherapy to modify general practitioners’ prescribing behaviour for generic drugs. J Eval Clin Pract. 2012;18(4):828–34.

  27. van Driel ML, Coenen S, Dirven K, Lobbestael J, Janssens I, Van Royen P, et al. What is the role of quality circles in strategies to optimise antibiotic prescribing? A pragmatic cluster-randomised controlled trial in primary care. Qual Saf Health Care. 2007;16(3):197–202.

  28. Wensing M, Broge B, Riens B, Kaufmann-Kolle P, Akkermans R, Grol R, et al. Quality circles to improve prescribing of primary care physicians. Three comparative studies. Pharmacoepidemiol Drug Saf. 2009;18(9):763–9. https://doi.org/10.1002/pds.1778.

  29. Heider AK, Mang H. Effects of Monetary Incentives in Physician Groups: A Systematic Review of Reviews. Appl Health Econ Health Policy. 2020;18:655–667. https://doi.org/10.1007/s40258-020-00572-x.

  30. Hamilton FL, Greaves F, Majeed A, Millett C. Effectiveness of providing financial incentives to healthcare professionals for smoking cessation activities: systematic review. Tob Control. 2013;22(1):3–8.

  31. Sabatino SA, Lawrence B, Elder R, Mercer SL, Wilson KM, DeVinney B, et al. Effectiveness of interventions to increase screening for breast, cervical, and colorectal cancers: nine updated systematic reviews for the guide to community preventive services. Am J Prev Med. 2012;43(1):97–118.

  32. Andres E, Szecsenyi J, Garbe K, Hartmann J, Petruschke I, Schulz M, et al. Rationaler Antibiotikaeinsatz: Impulse für den hausärztlichen Versorgungsalltag (Symposium-Bericht)-Online ZFA. 2020;3(956):109-.

  33. Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implement Sci. 2007;2:40.

  34. Hasson H. Systematic evaluation of implementation fidelity of complex interventions in health and social care. Implement Sci. 2010;5(1):67.

  35. McHugh M, Harvey JB, Kang R, Shi Y, Scanlon DP. Measuring the dose of quality improvement initiatives. Med Care Res Rev. 2016;73(2):227–46.

  36. Gabriel J. Praxisnetze im Wandel–Chancen und Stärken eines Versorgungsmodells. Management von Gesundheitsregionen III: Springer; 2017. p. S.

  37. Poss-Doering R, Kronsteiner D, Kamradt M, Andres E, Kaufmann-Kolle P, Wensing M, Szecsenyi J. Antibiotic prescribing for acute, non-complicated infections in primary care in Germany: baseline assessment in the cluster randomized trial ARena. BMC Infect Dis. 2021;21(1):1–10.

  38. Poss-Doering R, Kronsteiner D, Kamradt M, Kaufmann-Kolle P, Andres E, Wambach V, Bleek J, Wensing M, ARena-Study Group, Szecsenyi J. Assessing reduction of antibiotic prescribing for acute, non-complicated infections in primary care in Germany: multi-step outcome evaluation in the cluster-randomized trial ARena. Antibiotics (Basel). 2021;10(10):1151. https://doi.org/10.3390/antibiotics10101151.

  39. Poss-Doering R, Kamradt M, Glassen K, Andres E, Kaufmann-Kolle P, Wensing M. Promoting rational antibiotic prescribing for non-complicated infections: understanding social influence in primary care networks in Germany. BMC Fam Pract. 2020;21(1):51. https://doi.org/10.1186/s12875-020-01119-8. PMID: 32171252; PMCID: PMC7073012.

  40. Poss-Doering R, Kamradt M, Stuermlinger A, Glassen K, Kaufmann-Kolle P, Andres E, et al. The complex phenomenon of dysrational antibiotics prescribing decisions in German primary healthcare: a qualitative interview study using dual process theory. Antimicrob Resist Infect Control. 2020;9(1):1–11.

  41. Poss-Doering R, Kühn L, Kamradt M, Stürmlinger A, Glassen K, Andres E, et al. Fostering appropriate antibiotic use in a complex intervention: mixed-methods process evaluation alongside the cluster-randomized trial ARena. Antibiotics. 2020;9(12):878.

  42. Ajzen I. The theory of planned behavior. Organ Behav Hum Dec. 1991;50(2):179–211.

  43. Qualitätsbericht 2019. Berlin: Kassenärztliche Bundesvereinigung; 2019. Available under: https://www.kbv.de/media/sp/KBV_Qualitaetsbericht_2019.pdf. Retrieved on 11/30/2020.

  44. Meeker D, Linder JA, Fox CR, Friedberg MW, Persell SD, Goldstein NJ, et al. Effect of behavioral interventions on inappropriate antibiotic prescribing among primary care practices: a randomized clinical trial. JAMA. 2016;315(6):562–70.

  45. Tonkin-Crine SK, Tan PS, van Hecke O, Wang K, Roberts NW, McCullough A, et al. Clinician-targeted interventions to influence antibiotic prescribing behaviour for acute respiratory infections in primary care: an overview of systematic reviews. Cochrane Database Syst Rev. 2017;9:CD012252.

  46. Jan CF, Lee MC, Chiu CM, Huang CK, Hwang SJ, Chang CJ, et al. Awareness of, attitude toward, and willingness to participate in pay for performance programs among family physicians: a cross-sectional study. BMC Fam Pract. 2020;21(1):60.

  47. Ogundeji YK, Bland JM, Sheldon TA. The effectiveness of payment for performance in health care: a meta-analysis and exploration of variation in outcomes. Health Policy. 2016;120(10):1141–50.

  48. Ammi M, Fortier G. The influence of welfare systems on pay-for-performance programs for general practitioners: a critical review. Soc Sci Med. 2017;178:157–66.

  49. Kirschner K, Braspenning J, Jacobs JA, Grol R. Design choices made by target users for a pay-for-performance program in primary care: an action research approach. BMC Fam Pract. 2012;13(1):25.

  50. Ferm L, Rasmussen CDN, Jørgensen MB. Operationalizing a model to quantify implementation of a multi-component intervention in a stepped-wedge trial. Implement Sci. 2018;13(1):26.

  51. Hoekstra F, van Offenbeek MAG, Dekker R, Hettinga FJ, Hoekstra T, van der Woude LHV, et al. Implementation fidelity trajectories of a health promotion program in multidisciplinary settings: managing tensions in rehabilitation care. Implement Sci. 2017;12(1):143.

  52. Ginsburg LR, Hoben M, Easterbrook A, Andersen E, Anderson RA, Cranley L, et al. Examining fidelity in the INFORM trial: a complex team-based behavioral intervention. Implement Sci. 2020;15(1):78.

  53. Ashiru-Oredope D, Doble A, Akpan MR, Hansraj S, Shebl NA, Ahmad R, et al. Antimicrobial stewardship programmes in community healthcare organisations in England: a cross-sectional survey to assess implementation of programmes and national toolkits. Antibiotics. 2018;7(4):97.

  54. Ashiru-Oredope D, Doble A, Thornley T, Saei A, Gold N, Sallis A, et al. Improving management of respiratory tract infections in community pharmacies and promoting antimicrobial stewardship: a cluster randomised control trial with a self-report behavioural questionnaire and process evaluation. Pharmacy. 2020;8(1):44.

  55. Michie S, Atkins L, West R. The behaviour change wheel. A guide to designing interventions. 1st ed. Great Britain: Silverback Publishing; 2014. p. 1003–10.

Acknowledgements

The authors would like to thank the staff at aQua Institut, Goettingen, for the overall support in conducting the ARena study, the process evaluation and this present study. Special thanks are extended to Dr. Dorothea Kronsteiner, Dr. Jan Koetsenruijter, Christine Arnold, Prof. Dr. Gunther Laux and Prof. Dr. Michel Wensing for methodological advice, and to the participating practices.

Funding

Open Access funding enabled and organized by Projekt DEAL. The ARena project was funded by the Innovation Committee at the Federal Joint Committee (G-BA), Berlin (01NVF16008). The funder had no role in the execution of the project.

Author information

Authors and Affiliations

Authors

Contributions

LK drafted and prepared the manuscript. LK, DK, MW, SZ, and RPD contributed to the concept and design of this study. PKK and EA collected and provided the data. LK and DK analyzed the data. LK, DK, MW, and RPD interpreted the data. SZ was the overall principal investigator of the ARena project. All authors provided substantial comments and approved the final version of the manuscript. LK: Lukas Kühn; DK: Dorothea Kronsteiner; EA: Edith Andres; PKK: Petra Kaufmann-Kolle; SZ: Joachim Szecsenyi; MW: Michel Wensing; RPD: Regina Poss-Doering.

Corresponding author

Correspondence to Regina Poss-Doering.

Ethics declarations

Ethical approval and consent to participate

The medical ethics committee of the Medical Faculty of Heidelberg University (S-353/2017) and the ethics committee of the Medical Association Baden-Wuerttemberg (B-F-2017–104) approved the ARena project. Written informed consent for participation and for publication of findings was provided by all participants in this study. Confidentiality and anonymity were ensured throughout all procedures. All research in this study was performed in accordance with the relevant guidelines and regulations and the Declaration of Helsinki. Trial registration: ISRCTN, ISRCTN58150046.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Supplementary Figure 1.

Study Design & participant numbers of the ARena trial.

Additional file 2: Supplementary Table 1.

Survey items (T2) included for scores of participant views (N = 184). Supplementary Table 2. Indicators reflecting fidelity to the ARena program (N = 290 physicians). Supplementary Table 3. Number of quarters with claimed P4P reimbursements (N = 195 practices). Supplementary Table 4. Scores of domains reflecting participant views (T2) (N = 184 physicians). Supplementary Table 5. Multilevel logistic regression model clustered by practice affiliation regarding use of the bonus payment component (N = 184 physicians). Supplementary Table 6. Multilevel multiple linear regression model clustered by practice affiliation regarding attendance to QC themes (N = 184).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Kühn, L., Kronsteiner, D., Kaufmann-Kolle, P. et al. Implementation fidelity in a multifaceted program to foster rational antibiotics use in primary care: an observational study. BMC Med Res Methodol 22, 243 (2022). https://doi.org/10.1186/s12874-022-01725-3

  • DOI: https://doi.org/10.1186/s12874-022-01725-3

Keywords

  • Complex intervention
  • Fidelity analysis
  • Pragmatic trial
  • Quality circles
  • Rational antibiotic use