
Describing the content of trial recruitment interventions using the TIDieR reporting checklist: a systematic methodology review

Abstract

Background

Recruiting participants to clinical trials is an ongoing challenge, and relatively little is known about what recruitment strategies lead to better recruitment. Recruitment interventions can be considered complex interventions, often involving multiple components, targeting a variety of groups, and tailoring to different groups. We used the Template for Intervention Description and Replication (TIDieR) reporting checklist (which comprises 12 items recommended for reporting complex interventions) to guide the assessment of how recruitment interventions are described. We aimed to (1) examine to what extent we could identify information about each TIDieR item within recruitment intervention studies, and (2) observe additional detail for each item to describe useful variation among these studies.

Methods

We identified randomized, nested recruitment intervention studies providing recruitment or willingness to participate rates from two sources: a Cochrane review of trials evaluating strategies to improve recruitment to randomized trials, and the Online Resource for Research in Clinical triAls database. First, we assessed to what extent authors reported information about each TIDieR item. Second, we developed descriptive categorical variables for 7 TIDieR items and extracted relevant quotes for the other 5 items.

Results

We assessed 122 recruitment intervention studies. We were able to extract information relevant to most TIDieR items (e.g., brief rationale, materials, procedure) with the exception of a few items that were only rarely reported (e.g., tailoring, modifications, planned/actual fidelity). The descriptive variables provided a useful overview of study characteristics, with most studies using various forms of informational interventions (55%) delivered at a single time point (90%), often by a member of the research team (59%) in a clinical care setting (41%).

Conclusions

Our TIDieR-based variables provide a useful description of the core elements of complex trial recruitment interventions. Recruitment intervention studies report core elements of complex interventions variably; some process elements (e.g., mode of delivery, location) are almost always described, while others (e.g., duration, fidelity) are reported infrequently, with little indication of a reason for their absence. Future research should explore whether these TIDieR-based variables can form the basis of an approach to better reporting of elements of successful recruitment interventions.


Background

Clinical trial recruitment is frequently challenging. Trial participation rates are consistently low across public and private research sectors and clinical specialties [1,2,3,4]. In one American study, 40% of National Cancer Institute-funded trials were discontinued, nearly half because of participation issues [5]. Similarly, 37% of trials funded by the UK National Institute for Health Research failed to meet participation targets [6]. There are substantial costs associated with low participation rates and trials failing to meet targets, including wasted resources, delayed innovation, potentially biased results, and ethical issues associated with exposing participants to risk without scientific gain [7].

Research to improve recruitment has yielded few generalizable lessons that can be widely employed to improve the success of trials. Research has focused on individual elements of an overall recruitment strategy; here we refer to these elements as ‘recruitment practices’. A Cochrane review on the topic reviewed 68 publications, including over 74,000 participants in total, to evaluate the effectiveness of many different recruitment practices. The authors found only two practices with clear evidence supporting their effectiveness to improve recruitment rates: open rather than blind trial designs result in greater participation, as does the use of telephone reminders (as opposed to postal reminders) for non-responders to an initial invitation [8]. Other recruitment practices have been tested in multiple studies (e.g. patient information developed using bespoke user-testing, shortened patient information leaflets, financial incentives), but clear conclusions about their effectiveness have been impeded by the variable quality of included studies, variable reporting, potential methodological biases, or limited sample sizes [8, 9].

Overall strategies to optimize recruitment can constitute complex interventions that include many different elemental recruitment practices, may be targeted to various groups (e.g., potential participants, study recruiters), and may include varying levels of tailoring [10]. For example, the recruitment strategy for one trial involved potential participants receiving church-based educational sessions around clinical trial participation, led by trained faith leaders, that included discussions on the importance of community participation, myths about clinical trials, information on different clinical conditions, and newsletters providing study updates and clinical trial opportunities [11]. Such complex interventions pose a challenge to determining which specific practices led to the trial’s recruitment success, and which might generalize to other settings. In part, this is because of the lack of a widely used, coherent framework to help guide reporting of the important aspects of recruitment strategies. Without such a framework to guide reporting of these interventions and to support knowledge synthesis, it will continue to be difficult or impossible to determine the specific recruitment practices that successfully generalize, and the accumulation of knowledge around how to improve trial recruitment will be slowed.

The Template for Intervention Description and Replication (TIDieR) checklist was designed to provide guidance on 12 core elements to report with complex health care interventions [12]. The checklist and accompanying guide are intended to ensure the reporting of intervention elements that are considered essential for reviewers and editors, and for researchers replicating and building on these interventions. In order to gain a better understanding of which elements of recruitment interventions are most effective, we first need to develop a consistent method to describe and categorize these interventions. We selected the TIDieR checklist as a guide for this work, because the notion of recruitment as a complex intervention may help describe why some recruitment interventions are more effective than others, and the checklist can inform discussion of which complex intervention elements are important to report. We aimed to use the TIDieR checklist as a guide to (1) examine to what extent we could identify/extract information about each of the 12 items within recruitment intervention studies, and (2) observe additional detail for each item to describe useful variation among recruitment intervention studies.

Methods

We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement to support complete reporting of this study (Additional file 1, Appendix A) [13].

Study selection

We sought to identify studies examining the effects of recruitment interventions on clinical trial participation. These are often randomized or quasi-randomized studies embedded within a clinical trial to observe actual effects on trial recruitment, or trials using hypothetical clinical trial scenarios to elicit participants’ willingness to participate.

Study sources

In order to reduce duplication of effort, we used previous work done by the Cochrane and Online Resource for Research in Clinical triAls (ORRCA) groups. The Cochrane review conducted a systematic search using multiple sources (the Cochrane Methodology Review Group Specialized Register in the Cochrane Library, MEDLINE, Embase, Science Citation Index & Social Science Citation Index) up to and including publications from 2015. In order to further benefit from this cumulative knowledge base we used identical inclusion criteria to the Cochrane review; therefore, we included the same 68 studies from this review (for details see Treweek et al., (2018) [8]). Second, we updated this sample by searching the ORRCA database, which collects and indexes publications relevant to the field of recruitment and retention research for clinical trials on an ongoing basis [14].

Inclusion criteria

In line with the Cochrane review, we included all published articles with the following PICOS [15] inclusion criteria: participants (P) included potential trial participants, both patients and representative community samples; interventions (I) of interest included any intervention aimed at improving recruitment to the host trial of the publication; the comparator (C) could be either study recruitment methods as usual or another intervention aimed at improving recruitment; outcomes (O) of interest included the proportion or number of potential participants recruited to the host trial, whether the decision was real or a hypothetical ‘willingness to participate’; and study designs (S) included both randomized and quasi-randomized trials of recruitment interventions. The host trial design also needed to be a randomized clinical trial.

Exclusion criteria

We excluded any articles where the host study being recruited to was a survey, observational cohort, or biobank study as these types of studies are considered lower risk for participants and may not present the same recruitment challenges as those recruiting to active trials.

To update the original set of articles included from Treweek et al. [8], we searched the ORRCA database on November 3rd, 2020 and again on August 11th, 2022 using the following search parameters:

Recruitment database only (excluded retention database).

Year: 2015 to 2022.

Evidence type: Randomized evaluation (including quasi-randomized trials).

Research methods: Nested randomized controlled trial.

Research outcome: number recruited or recruitment rate or willingness to participate or other or unknown.

Search results were de-duplicated and then screened at the abstract and full text level to ensure they met inclusion criteria. This screening was done by NH and reviewed by JCB or KC.

Data extraction

Two of three coders extracted data from each study (NH as primary and KC or SS as secondary). Coders met regularly to discuss discrepancies between items in order to reach consensus, with a fourth coder (JCB) resolving any disagreements. We extracted data into a Microsoft Excel 2010 [16] capture form developed by the authors. This form was pilot tested on an initial set of 6 articles and revised for completeness and functionality.

For each publication, we extracted data for up to three study arms (control/comparator, intervention 1, intervention 2). The control/comparator arm was considered to be the least intense recruitment effort, or standard recruitment effort where not otherwise specified by the study authors. For studies with more than one intervention arm, we defined intervention 1 as the less intensive and intervention 2 as the more intensive intervention. We determined intensity of the intervention using several factors, including financial and time costs to the researchers and burden of time and effort on participants (e.g. phone call (intervention 2) vs. email reminders (intervention 1) vs. no reminder (comparator/control)). For studies with more than three arms, we only extracted data for the two arms considered the most intensive and least intensive based on the above criteria. For publications reporting more than one study, studies were coded separately in the extraction form provided they were independent studies (i.e. used distinct samples, randomization procedures, and interventions). For studies where the same intervention was applied to different samples, we selected the study where the sample most closely resembled the target population of the host trial.

The data extraction form included 8 sections. The current manuscript reports on three of these (background information, intervention details, and risks of bias); we will report data on the other five sections separately (use of shared decision-making, participant-centered involvement, theory use, use of behavior change techniques, and recruitment outcomes).

Background information

Background information included the study’s first author, year of publication, title, source, and country in which the study took place. We also extracted a brief description of the host trial (i.e. the trial into which the participants are being recruited), whether the decisions participants made would result in actual trial participation (real decision) or not (hypothetical decision), the trial phase of the host trial, clinical specialty of the host trial, recruitment trial participant age (mean or median age for full sample), and proportion of reported male/female participants.

We recorded host trial phase (i.e. Phase I, II, III, IV) based on author report, trial registry if provided, or failing either of these, inference from the descriptions provided. Behavioural interventions (e.g. smoking cessation, falls prevention) were included with phase III studies because they were not being evaluated for safety (phase I), efficacy (phase II), or at the surveillance stage (phase IV) but rather evaluating intervention effectiveness, analogous to phase III. This classification strategy is comparable to that of other work focusing on behavioural interventions [17, 18]. Studies that were recruiting into more than one trial were coded as ‘multiple trials/phases’. Studies where a phase could not be determined based on what was reported in the article were coded as ‘other.’

Intervention details (TIDieR)

We initially sought to evaluate the contents of recruitment intervention reporting according to the TIDieR checklist items [12] by extracting information relevant to each item. From the extracted information we then created descriptive variables to describe the core aspects of recruitment intervention reporting. Below, we outline the development process for these variables.

TIDieR framework. The TIDieR framework outlines 12 checklist items recommended for reporting the nature of complex interventions: (1) a name or phrase that describes the intervention, (2) rationale/theory/goal of the elements essential to the intervention, (3) physical or informational materials used in the intervention, (4) procedures/processes used in the intervention, (5) intervention provider, (6) modes of delivery of the intervention (e.g. face-to-face, phone, internet), (7) location where the intervention occurred, (8) the number of times and length of time the intervention was delivered, (9) tailoring or personalization made for intervention recipients, (10) modifications made to the intervention throughout the study, (11) whether adherence/fidelity was planned, and (12) actual adherence/fidelity reported [12].

TIDieR reporting. We assessed how frequently we were able to extract information relevant to each of the 12 TIDieR items for each study. For each TIDieR item, we recorded whether information was extractable with enough detail to understand the methods relevant to that item.

TIDieR descriptive variables. Our second aim was to describe additional detail for each item to better understand useful variation among recruitment intervention studies. We characterized as many TIDieR items as possible as categorical variables that could be independently assessed by coders. Two coders (JCB, NH) went through 6 initial studies to develop an initial set of categories for each of the 12 TIDieR items. Subsequent consensus meetings (JCB, NH, and KC) centered on how TIDieR items should be defined in the context of recruitment trials, refining the codebook, and determining what should be extracted for the non-categorizable items.

The final set of descriptive variables included 7 items that could be categorized, and 5 that were collected as quotes. The categorical items were:

  • rationale – whether study authors provide a clear link between the mechanism they believe will improve recruitment and the selected intervention;

  • materials – two items: (1) categories for the ‘active’ ingredients of the intervention (e.g., video, modified documents, additional documents, incentives) and (2) an item indicating whether access to full materials was available;

  • procedure – categorized using ORRCA categories from Treweek et al. [8] (pre-trial planning, changes during trial, modifications to consent process, modification to information given to potential participants, intervention targeted at recruiter/site, incentives, other);

  • intervention provider – categorized as part of clinical care team, research team, or other;

  • modes of delivery – categorized using the ontology developed by Marques et al. (2020) [19] (e.g., informational – human interactional, informational – printed material, environmental change);

  • intervention location – categorized by where the intervention was ‘received’ (e.g., clinical setting, non-clinical setting); and

  • frequency and duration – two items: (1) frequency, categorized as once, twice, or 3 or more times, and (2) duration, defined as the amount of time participants/intervention targets spent receiving the intervention, in minutes.

We could not categorize the remaining five items in ways that were reliably codable and reasonably concise, and so we opted simply to extract relevant quotes for these items: intervention description, tailoring, modifications, planned fidelity/adherence, and actual fidelity/adherence.

Several items were modified from the TIDieR item definitions provided by Hoffmann et al. [12] in order to better fit the included studies. For example, item 2 recommends describing any rationale, theory, or goal of the intervention. We extracted details on theory use separately; those results will be reported elsewhere. For the current paper, we focused on the intervention rationale. Since all studies provided some form of rationale, this item was defined as whether authors provided sufficient information to discern a clear link for ‘why’ they believed the intervention would improve recruitment; for some studies, raters determined that improving recruitment was not the primary goal of the study (e.g. the goal was to improve participant understanding, with recruitment outcomes also assessed) and therefore rated this item as ‘not applicable.’ As well, procedure (item 4) was renamed ‘intervention type’ and defined using the categories provided by ORRCA as listed in Treweek et al. [8]. In addition to capturing the different types of intervention materials (item 3) used, we also rated whether full materials were included in the publication (or via online links), because guidance increasingly recommends the inclusion of full materials/data with publications. This variable was rated as ‘not applicable’ for studies with no materials relevant to the intervention.

For multi-arm trials, where two interventions were included in the extraction, the two intervention arms were combined when coding the TIDieR descriptive items. While this may lead to a perception that these studies had more complex interventions, most multi-arm studies were simply a greater ‘dose’ of the same intervention in each arm, and combining them would not affect the TIDieR-related categories selected.

Risk of bias

The 2018 Cochrane review by Treweek et al. [8] assessed risk of bias (RoB) of the recruitment trials using the original 5-item version of the Cochrane risk of bias tool. In our review, we assessed bias using the most recent 22-item Cochrane RoB 2 tool for parallel trials [20]. This tool assesses risk of bias in 5 domains: (1) risk of bias from randomization, (2) risk of bias due to deviations from the intended intervention, (3) risk of bias due to missing outcome data, (4) risk of bias in measurement of the outcome, and (5) risk of bias in selection of the reported results. Assessment within each domain results in a domain judgement of low bias, some concerns, or high bias. Domain judgements are then aggregated into an overall bias rating, where low risk indicates low bias in all five domains, some concerns indicates at least one domain with some concerns, and high bias indicates at least one domain with high bias or multiple domains with some concerns. The tool also requires users to select an effect of interest for their studies (effect of assignment to intervention vs. effect of adherence to intervention). We defined the effect of interest as the effect of assignment to the intervention, since trialists implementing recruitment interventions may have little control over intervention adherence (e.g. whether someone opens an email or letter, or watches a video). The study team developed an Excel spreadsheet to record responses and justifications for each signaling question and automated the algorithms for domain and overall bias judgements. Risk of bias was assessed by two of three possible coders (NH, KC, SV), with all discrepancies resolved through consensus.
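The aggregation rule described above can be sketched as follows. This is our own illustration of the rule as stated in the text, applied mechanically; RoB 2 itself leaves the ‘multiple domains with some concerns’ elevation partly to reviewer judgement, and the function and label names below are ours, not the authors' actual spreadsheet.

```python
# Sketch of the overall RoB 2 judgement rule described above.
# Domain list and aggregation logic follow the text; names are illustrative.

DOMAINS = [
    "randomization",
    "deviations from intended intervention",
    "missing outcome data",
    "measurement of the outcome",
    "selection of the reported results",
]

def overall_rob(domain_judgements):
    """Aggregate five domain judgements ('low', 'some concerns', 'high')
    into an overall risk-of-bias rating."""
    if any(j == "high" for j in domain_judgements):
        return "high"                      # any high-bias domain -> high
    concerns = sum(j == "some concerns" for j in domain_judgements)
    if concerns >= 2:
        return "high"                      # multiple domains with some concerns
    if concerns == 1:
        return "some concerns"             # at least one domain with some concerns
    return "low"                           # low bias in all five domains
```

A spreadsheet formula automating the same rule, as the study team did, would simply encode these three branches.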

Data analysis

We imported data into SPSS (version 28) for analysis. We calculated means, standard deviations, or frequencies for the demographic and TIDieR-related variables.
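The authors used SPSS (version 28); as a minimal sketch, the same descriptive statistics can be produced with Python's standard library. The values below are invented for illustration only.

```python
# Descriptive statistics analogous to those described above (means, standard
# deviations, frequencies). All example values are hypothetical.
from collections import Counter
from statistics import mean, stdev

ages = [45.2, 61.0, 33.5, 52.8]                 # hypothetical mean ages per study
providers = ["research team", "clinical care team",
             "research team", "other"]          # hypothetical provider codes

age_mean = round(mean(ages), 1)                 # sample mean
age_sd = round(stdev(ages), 1)                  # sample standard deviation

freqs = Counter(providers)                      # counts per category
percents = {k: round(100 * v / len(providers)) for k, v in freqs.items()}
```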

Results

Figure 1 outlines our PRISMA diagram for study identification and inclusion. The 68 papers from Treweek et al. (2018) were automatically included as they met all inclusion criteria [8]. The ORRCA database searches resulted in 129 additional records. After duplicates were removed (n = 24), we had 105 records to review for inclusion. A further 51 studies were deemed ineligible because the host trial design did not meet criteria (e.g. survey study, non-randomized trial), 5 studies had no usable recruitment outcomes, and one had no comparator group. The remaining 48 studies identified from the ORRCA database searches were added to the Treweek et al. [8] review studies, for a total sample of 116 papers (see Additional file 1, Appendix B) [11, 21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135]. Five papers reported results from multiple recruitment intervention trials; therefore, we extracted data from 122 individual studies within the included publications.
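As a quick arithmetic check, the study flow reported above is internally consistent. The variable names below are our own illustration of the counts from the text:

```python
# Study-flow arithmetic from the PRISMA description above (counts from the text).
cochrane_included = 68                 # Treweek et al. (2018) studies, included automatically
orrca_records = 129                    # records from the two ORRCA database searches
duplicates = 24

screened = orrca_records - duplicates  # records screened after de-duplication
excluded = {
    "ineligible host trial design": 51,    # e.g. survey study, non-randomized trial
    "no usable recruitment outcome": 5,
    "no comparator group": 1,
}
orrca_included = screened - sum(excluded.values())  # studies added from ORRCA
total_papers = cochrane_included + orrca_included   # total included papers
```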

Fig. 1

The PRISMA flow diagram for the review detailing the source of publications, number of abstracts and full texts screened, and number of publications included

The final sample of included recruitment studies is described in Table 1. Over half (64%) were published between 2010 and 2020. Most were conducted in the USA (43%), UK (33%), or Australia (9%). The majority (64%) asked potential participants to consider participation in a real trial, rather than a hypothetical trial (36%). The largest proportion (30%) focused on oncology trials. Trials were most frequently categorized as Phase III (41%), while many others reported recruitment to multiple trials that varied in phase (19%), and one third (34%) of studies could not be coded into a trial phase because of a lack of detail in describing the host trial or not fitting into the phase categories used (e.g., screening trials, supplement use). Approximately half of included studies reported mean/median participant age (n = 65), which ranged from 14.2 to 77.7 years. Approximately two thirds (n = 81) reported participant gender (mean percent female = 62%), with most studies including both males and females (n = 62), while others included only females (n = 17) or only males (n = 2).

Table 1 Descriptive characteristics of included studies (n = 122)

TIDieR reporting. The frequency with which we were able to extract information relevant to each of the 12 TIDieR items for each study is presented in italics in Table 2. We identified several items as present for all studies: name/description, rationale, materials, procedure, and mode of delivery. Other items were reported with high frequency: provider (83%), location (94%), and intervention frequency (99%), while duration was less frequently reported (26%). The final four TIDieR items were rarely reported. These included tailoring (8%), modifications (0%), planned fidelity (5%), and actual fidelity (14%).

Table 2 Proportion of studies reporting information relevant to the 12 TIDieR checklist items and item-specific intervention details (n = 122)

TIDieR descriptive items. Results detailing the descriptive items based on TIDieR are summarized in Table 2. About half of studies (56%) demonstrated a clear link between the rationale and selected intervention, for example, “The rationale is that senior investigators would have better clinical judgment with which to assess study eligibility. Another common belief is that they exude an aura of expertise that might encourage wavering prospective subjects to participate.” [41]; and 31% did not demonstrate a clear link, for example, “We hypothesized that patients randomized to telephone-based follow-up would be more likely to attend for eligibility screening and be enrolled into the SCOPE trial than those randomized to mail based follow-up” [26]. A smaller portion (13%) did not state recruitment as a primary goal of the intervention (e.g., aim was to improve patient understanding), but did report on recruitment outcomes and was therefore rated as ‘not applicable’ for this item. Materials often amounted to informational documents that were either modified consent documents (34%) or documents in addition to standard consent documents (13%); less frequently, materials involved videos (12%) or computer programs/websites (9%). Authors provided access to full materials for only 25% of studies. The most common ORRCA category for intervention type was modified information presented to potential participants (55%). The intervention provider was most commonly a member of the research team (59%). The modes of delivery (from Marques et al. 2020 [19] ontology) were overwhelmingly informational, often in the form of printed (47%) or electronic (36%) materials. Only three studies used changes to the environment via electronic data capture systems (5%). 
While the modes of delivery could be captured by a single mode for most studies (n = 92, 75%), other more complex interventions required the selection of two (n = 26, 21%) or three (n = 4, 3%) modes to describe them accurately.

Intervention location was primarily clinical settings (41%) such as hospitals or primary care clinics, but non-clinical settings (32%), such as universities, churches, and community centers, were also frequent. Interventions were most commonly administered at a single time point (90%), while others were conducted over 3 or more sessions/time points (9%; e.g., reminders, multiple informational sessions). When reported, the length of time recipients received the intervention ranged from as little as 5 min up to 13 h for multi-day informational sessions. Ten studies (8%) indicated interventions were tailored to participants in some way, such as audio-taping the recruitment session for participants to take home, sending emails with site-specific information to recruiters, or assigning educational videos based on trial knowledge or attitude scores from questionnaires (see Table 2 for example quotes). No studies reported any intervention modifications during the study. While six studies (5%) appeared to report plans to assess fidelity/adherence to the interventions, 14% (n = 17) reported the actual fidelity/adherence observed during the study, whether noting that interventions were delivered as planned or describing minor issues in delivery, such as technical errors or site investigator non-compliance. We found fidelity reporting to be more detailed in some studies than others (see Table 2 for example quotes).

Risk of bias. Risk of bias ratings were distributed across the low risk (n = 40, 33%), some concerns (n = 57, 47%), and high risk (n = 25, 21%) categories. Figure 2 presents a summary of the ratings by domain. Bias arising from the randomization process was the biggest source of potential bias in the included intervention studies, while the other four domains were often rated as a low source of potential bias.

Fig. 2

Summary of risk of bias ratings by domain and overall across studies (n = 122) [136]

Discussion

Strategies designed to improve clinical trial recruitment are typically not conceptualized as complex interventions, despite the fact that they often have many of the defining characteristics of complex interventions [10], such as multiple components, varying groups/individuals targeted by the interventions (e.g., potential participants, recruiters, health care providers), and levels of flexibility of tailoring involved. In response to numerous calls for clarity around understanding and reporting of important aspects of trial recruitment [137,138,139], we sought to use the TIDieR checklist as a guide to describe the reporting of trial recruitment intervention studies. Our findings suggest that TIDieR can be used to provide a useful description of the main components of complex trial recruitment interventions; they also highlight variability in the reporting of these interventions and suggest ways in which reporting can be improved.

Our first aim was to understand to what extent we could identify/extract information about each of the 12 items within recruitment intervention studies. While the framework has been applied to a variety of health service interventions [140], it is new to the discussion of recruitment interventions, perhaps because such interventions often focus on simple outcomes (e.g., trial enrolment). However, recruitment interventions are often complex in other respects, such as the number of components and the variety of individuals targeted by the intervention [10]. While many items were reported with relatively high consistency across studies, the last four items were very rarely reported. Although tailoring and modifications would only be reported when present, they were present for surprisingly few studies considering they may be important components of complex interventions. It was unclear whether the lack of evidence for these two items was due to poor reporting or because they were not relevant to the intervention. Perhaps TIDieR guidance should recommend explicit statements when these items are absent, so that readers can better understand all intervention components, whether used or not. Recruitment interventions are unlikely to be delivered with 100% fidelity, which may affect the associated recruitment outcomes. However, very few studies reported on fidelity or lack thereof, and even fewer provided evidence of a pre-specified fidelity plan. It is therefore unclear whether any lack of effect for these complex interventions is due to the interventions themselves or to problems with intervention delivery. It appears that despite meeting many criteria for complex interventions, recruitment interventions are not being considered as such, as reflected by the brevity and simplicity with which they are currently reported.
Trialists should seek to measure and report these details of their interventions with increased detail and consistency, particularly around fidelity and tailoring/modification, in order to advance the literature in this area and allow for more rigorous evaluation across studies.

Insights can be drawn from the items reported with high frequency as well. Materials, procedure, modes of delivery, and frequency were identified as present for almost every study, indicating these items tend to be well reported in the literature; this may reflect a general belief that these are the most important components of recruitment interventions. It is unclear what level of detail is optimal for assessing the most effective intervention components across studies. Finally, despite extensive piloting, decisions around the reporting of two items (provider, location) often required extensive discussion between raters before consensus could be achieved, suggesting that a more detailed understanding of how these constructs manifest themselves in recruitment strategies and studies is needed. Perhaps a more systematic approach is needed in order to ensure more consistency in reporting all TIDieR items in a way that readers can understand and replicate when appropriate. Methods such as those used in Cochrane reviews, with rigorous consensus processes and standardized extraction tools, may apply well to recruitment intervention studies to improve the reporting of intervention details in a way that facilitates replication by future trialists.

Our second aim was to use TIDieR as a guide to describe useful variation among recruitment intervention studies. Our work shows that the TIDieR checklist can serve as a guiding framework for describing important elements of recruitment strategies from a diverse set of recruitment intervention studies spanning many clinical domains, countries of conduct, and intervention types. Some items provided more useful descriptive information than others; although an intervention name/description was present for all studies, its inclusion in TIDieR may add little value in reporting relevant intervention details, since it can be considered a brief summary of the other, more specific TIDieR items. This may also explain why we were unable to explore this item in more detail by developing descriptive categories.

Our approach also allows for combination with other descriptive frameworks that provide more detail on TIDieR-inspired domains. We employed the ORRCA intervention design categories [14] to detail the TIDieR procedure domain, as they captured the diverse nature of recruitment intervention procedures in a way that highlighted the ‘active ingredients’ of these types of interventions. We also found that using the Marques et al. [19] ontology to categorize modes of delivery highlighted that most of these interventions have thus far focused primarily on various informational modes rather than environmental (e.g., material incentives and reminders) or somatic (e.g., physical stimuli such as light or temperature) modes. The ontology did not capture instances where the intervention was a change to the parent trial design (e.g., Zelen design, removing control groups) unless participants were explicitly informed about the trial design (an informational mode of delivery). This suggests there may be benefit in identifying frameworks beyond Marques et al. to describe more thoroughly the range of modes of delivery used in recruitment interventions.

Trialists may use recruitment interventions in clinical trials across a wide range of clinical specialties, patient populations, and trial phases. We found that the detail with which authors described host trials varied across studies: in as many as one third, the host trial was described so briefly that the trial phase was neither reported nor determinable. The motivations to participate in a phase I trial testing the safety of a new, experimental drug may differ from the motivations to participate in a phase III trial testing treatment efficacy against other similarly effective drugs. Understanding the trial phase, and how the findings of a recruitment intervention that is successful in one setting translate (or not) to another, is therefore critical. For example, if an intervention proves effective in multiple studies across a range of phase III host trials, we cannot assume it will be effective when implemented in a phase I trial; further development and evaluation would therefore be warranted. Consideration should be given to which host trial details should be reported with greater consistency and clarity when reporting on trials within a trial.

Limitations

Our study had four main limitations. First, we were unable to contact authors of included studies for missing information. While this may have affected risk of bias ratings regarding the reporting of results, we attempted to compensate by rating leniently on the reporting of pre-specified analysis plans, since their presence or absence would have little impact on the reporting of recruitment outcome numbers or rates. Second, due to time and resource limitations, we were not able to search multiple databases for eligible publications and may therefore have missed some relevant studies. However, the review and the database from which we collected publications both used a systematic approach to searching and screening, and we are confident that the vast majority of relevant studies have been included. Third, we elected not to perform meta-analyses on the included papers to explore whether specific TIDieR items or characteristics of items related to recruitment effect sizes. Given the heterogeneity of the included studies, combined with the finding that many studies were rated as high risk of bias or as raising some concerns, we would have had relatively low power to detect meaningful differences in effect sizes. Finally, we were not able to operationalize all TIDieR items into descriptive categorical variables; more work is needed to specify how these items can be assessed to facilitate better reporting in the recruitment intervention literature.

Future directions

Future work should test the methods used here, developing descriptive extraction variables based on TIDieR items, with other intervention types and in other reviews, to establish whether the categories and methods also apply to other settings or whether further development of each item is needed. In addition, further development of the items collected as quotations would aid in describing and evaluating these interventions more accurately.

There may be other key characteristics of recruitment interventions, not captured by the TIDieR checklist or by the modifications we made in the current study, that warrant consideration. Future research should explore the utility of additional reporting items for recruitment interventions, such as the adaptability of the intervention to other trial settings or the intervention target (e.g., potential participants, trial recruiters), in addition to who delivers the intervention [141]. Recent research has also focused on the carbon footprint of clinical trials [140,143,144]. Recruitment interventions may have a direct impact on the carbon footprint of the trials in which they are embedded (e.g., minimizing study materials, speeding up recruitment, implementing virtual trial visits [142]). Including potential environmental impact as a TIDieR item may help assess the longevity of specific recruitment practices and other interventions when deciding which strategies are most appropriate for the trials in question.

It may also be worth exploring whether other frameworks might complement or prove superior to the methods used here in describing the important components of recruitment interventions. Other ontologies, similar to the mode of delivery ontology developed by Marques et al. [19], are currently being developed to explore other TIDieR elements in greater detail [145, 146]. These ontologies may provide the structure and detail needed to better describe and understand the components of recruitment interventions. This study is part of a larger review examining other factors that may relate to recruitment intervention effectiveness. For example, exploring the use of behavior change techniques in recruitment interventions may provide further insight into which components are most effective in improving recruitment outcomes. Also, while theory use is included as part of TIDieR item 2, we chose to focus the current evaluation on intervention rationale to reduce duplication of work. We will explore in detail elsewhere whether and how these studies use theory in selecting, developing, and implementing recruitment interventions. We will also evaluate the inclusion of participant-centered involvement in these interventions and how these methods differ across studies.

Once a more detailed and comprehensive set of variables is developed, and with the continually growing body of recruitment intervention studies, future research should use meta-analyses to examine whether these elements can help identify the features most predictive of effective recruitment interventions, and in what settings. This in turn could lead to more efficient trials, reducing costs and bringing new treatments and innovations to patients faster.

Conclusions

We extracted recruitment intervention details and information about reporting from a large, diverse sample of 122 studies evaluating these interventions in randomized and quasi-randomized trials, using the TIDieR checklist as a guide. We were able to extract relevant details on important elements of these interventions using a mix of categorical variables and quotations, indicating that the TIDieR checklist items fit reasonably well to recruitment interventions. We found that these key components were variably described across studies, with some items reported more consistently and clearly than others, highlighting areas in which reporting could be improved to facilitate the accumulation of knowledge around recruitment practices. Our operationalizations of TIDieR descriptive intervention details were a first-draft attempt to characterize recruitment practices systematically; future research should explore the benefit of additional items (e.g., intervention targets, carbon footprints) or frameworks to improve description of these interventions, and evaluate which components are most related to improved recruitment outcomes. The current findings provide an initial template by which trialists can conceptualize their recruitment efforts as complex interventions for which planning and optimization guidance already exists.

Data availability

All data generated or analysed during this study are included in this published article as Additional file 2.

Abbreviations

ORRCA: Online Resource for Research in Clinical triAls

PICOS: Participants, interventions, comparator, outcomes, study designs

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

RoB: Risk of bias

TIDieR: Template for Intervention Description and Replication

References

  1. Galea S, Tracy M. Participation rates in epidemiologic studies. Ann Epidemiol. 2007;17(9):643–53.

  2. Feldman WB, Kim AS, Chiong W. Trends in recruitment rates for acute stroke trials, 1990–2014. Stroke. 2017;48(3):799–801.

  3. Field KM, Drummond KJ, Yilmaz M, Tacey M, Compston D, Gibbs P, et al. Clinical trial participation and outcome for patients with glioblastoma: multivariate analysis from a comprehensive dataset. J Clin Neurosci. 2013;20(6):783–9.

  4. Curtin R, Presser S, Singer E. Changes in telephone survey nonresponse over the past quarter century. Pub Opin Q. 2005;69(1):87–98.

  5. Scoggins JF, Ramsey SD. A national cancer clinical trials system for the 21st century: reinvigorating the NCI Cooperative Group Program. J Natl Cancer Inst. 2010;102(17):1371.

  6. Jacques RM, Ahmed R, Harper J, Ranjan A, Saeed I, Simpson RM, et al. Recruitment, consent and retention of participants in randomised controlled trials: a review of trials published in the National Institute for Health Research (NIHR) Journals Library (1997–2020). BMJ Open. 2022;12(2):e059230.

  7. Williams RJ, Tse T, DiPiazza K, Zarin DA. Terminated trials in the ClinicalTrials.gov results database: evaluation of availability of primary outcome data and reasons for termination. PLoS ONE. 2015;10(5):e0127242.

  8. Treweek S, Pitkethly M, Cook J, Fraser C, Mitchell E, Sullivan F, et al. Strategies to improve recruitment to randomised trials. Cochrane Database Syst Rev. 2018;2:MR000013.

  9. Delaney H, Devane D, Hunter A, Hennessy M, Parker A, Murphy L, et al. Limited evidence exists on the effectiveness of education and training interventions on trial recruitment; a systematic review. J Clin Epidemiol. 2019;113:75–82.

  10. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions. Medical Research Council; 2019.

  11. Frew PM, Omer SB, Parker K, Bolton M, Schamel J, Shapiro E, et al. Delivering a dose of hope: a faith-based program to increase older African Americans’ participation in clinical trials. JMIR Res Protoc. 2015;4(2):e64.

  12. Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.

  13. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.

  14. Kearney A, Harman NL, Rosala-Hallas A, Beecher C, Blazeby JM, Bower P, et al. Development of an online resource for recruitment research in clinical trials to organise and map current literature. Clin Trials. 2018;15(6):533–42.

  15. Centre for Reviews and Dissemination. Systematic reviews: CRD’s guidance for undertaking reviews in healthcare. York: CRD, University of York; 2009.

  16. Microsoft Corporation. Microsoft Excel. 2010.

  17. Onken LS, Carroll KM, Shoham V, Cuthbert BN, Riddle M. Reenvisioning clinical science: unifying the discipline to improve the public health. Clin Psychol Sci. 2014;2(1):22–34.

  18. Gitlin LN. Introducing a new intervention: an overview of research phases and common challenges. Am J Occup Ther. 2013;67(2):177–84.

  19. Marques MM, Carey RN, Norris E, Evans F, Finnerty AN, Hastings J, et al. Delivering behaviour change interventions: development of a mode of delivery ontology. Wellcome Open Res. 2020;5.

  20. Sterne J, Savović J, Page M, Elbers R, Blencowe N, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366:l4898.

  21. Abd-Elsayed AA, Sessler DI, Mendoza-Cuartas M, Dalton JE, Said T, Meinert J, et al. A randomized controlled study to assess patients’ understanding of and consenting for clinical trials using two different consent form presentations. Minerva Anestesiol. 2012;78(5):564–73.

  22. Abhyankar P, Bekker HL, Summers BA, Velikova G. Why values elicitation techniques enable people to make informed decisions about cancer trial participation. Health Expect. 2011;14(Suppl 1):20–32.

  23. Avenell A, Grant AM, McGee M, McPherson G, Campbell MK, McGee MA, for the RECORD Trial Management Group. The effects of an open design on trial participant recruitment, compliance and retention - a randomized controlled trial comparison with a blinded, placebo-controlled design. Clin Trials. 2004;1:490–8.

  24. Welton AJ, Vickers MR, Cooper JA, Meade TW, Marteau TM. Is recruitment more difficult with a placebo arm in randomised controlled trials? A quasirandomised, interview based study. BMJ. 1999;318:1114–7.

  25. Weston J, Hannah M, Downes J. Evaluating the benefits of a patient information video during the informed consent process. Patient Educ Couns. 1997;30:239–45.

  26. Wong AD, Kirby J, Guyatt GH, Moayyedi P, Vora P, You JJ. Randomized controlled trial comparing telephone and mail follow-up for recruitment of participants into a clinical trial of colorectal cancer screening. Trials. 2013;14:40.

  27. Freer Y, McIntosh N, Teunisse S, Anand KJ, Boyle EM. More information, less understanding: a randomized study on consent issues in neonatal research. Pediatrics. 2009;123(5):1301–5.

  28. Weinfurt KP, Hall MA, Friedman JY, Hardy NC, Fortune-Greeley AK, Lawlor JS, et al. Effects of disclosing financial interests on participation in medical research: a randomized vignette trial. Am Heart J. 2008;156(4):689–97.

  29. Nystuen P, Hagen KB. Telephone reminders are effective in recruiting nonresponding patients to randomized controlled trials. J Clin Epidemiol. 2004;57(8):773–6.

  30. Liénard J-L, Quinaux E, Fabre-Guillevin E, Piedbois P, Jouhaud A, Decoster G, et al. Impact of on-site initiation visits on patient recruitment and data quality in a randomized trial of adjuvant chemotherapy for breast cancer. Clin Trials. 2006;3:486–92.

  31. Bergenmar M, Johansson H, Wilking N, Hatschek T, Brandberg Y. Audio-recorded information to patients considering participation in cancer clinical trials - a randomized study. Acta Oncol. 2014;53(9):1197–204.

  32. Monaghan H, Richens A, Colman S, Currie R, Girgis S, Jayne K, et al. A randomised trial of the effects of an additional communication strategy on recruitment into a large-scale, multi-centre trial. Contemp Clin Trials. 2007;28(1):1–5.

  33. Ellis PM, Butow PN, Tattersall MH. Informing breast cancer patients about clinical trials: a randomized clinical trial of an educational booklet. Ann Oncol. 2002;13(9):1414–23.

  34. Ford ME, Havstad SL, Davis SD. A randomized trial of recruitment methods for older African American men in the prostate, lung, colorectal and ovarian (PLCO) cancer screening trial. Clin Trials. 2004;1:343–51.

  35. Jennings CG, MacDonald TM, Wei L, Brown MJ, McConnachie L, Mackenzie IS. Does offering an incentive payment improve recruitment to clinical trials and increase the proportion of socially deprived and elderly participants? Trials. 2015;16:80.

  36. Halpern SD, Karlawish JHT, Casarett D, Berlin JA, Asch DA. Empirical assessment of whether moderate payments are undue or unjust inducements for participation in clinical trials. Arch Intern Med. 2004;164:801–3.

  37. DiGuiseppi C, Goss C, Xu S, Magid D, Graham A. Telephone screening for hazardous drinking among injured patients seen in acute care clinics: feasibility study. Alcohol Alcohol. 2006;41(4):438–45.

  38. Jeste DV, Palmer BW, Golshan S, Eyler LT, Dunn LB, Meeks T, et al. Multimedia consent for research in people with schizophrenia and normal subjects: a randomized controlled trial. Schizophr Bull. 2009;35(4):719–29.

  39. Cockayne S, Fairhurst C, Adamson J, Hewitt C, Hull R, Hicks K, et al. An optimised patient information sheet did not significantly increase recruitment or retention in a falls prevention study: an embedded randomised recruitment trial. Trials. 2017;18(1):144.

  40. Wells KJ, McIntyre J, Gonzalez LE, Lee JH, Fisher KJ, Jacobsen PB, et al. Feasibility trial of a Spanish-language multimedia educational intervention. Clin Trials. 2013;10(5):767–74.

  41. Miller NL, Markowitz JC, Kocsis JH, Leon AC, Brisco ST, Garno JL. Cost effectiveness of screening for clinical trials by research assistants versus senior investigators. J Psychiatr Res. 1999;33:81–5.

  42. Litchfield J, Freeman J, Schou H, Elsley M, Fuller R, Chubb B. Is the future for clinical trials internet-based? A cluster randomized clinical trial. Clin Trials. 2005;2:72–9.

  43. Mandelblatt J, Kaufman E, Sheppard VB, Pomeroy J, Kavanaugh J, Canar J, et al. Breast cancer prevention in community clinics: will low-income Latina patients participate in clinical trials? Prev Med. 2005;40(6):611–8.

  44. Treschan TA, Scheck T, Kober A, Fleischmann E, Birkenberg B, Petschnigg B, et al. The influence of protocol pain and risk on patients’ willingness to consent for clinical studies: a randomized trial. Anesth Analg. 2003;96:498–506.

  45. Trevena L, Irwig L, Barratt A. Impact of privacy legislation on the number and characteristics of people who are recruited for research: a randomised controlled trial. J Med Ethics. 2006;32(8):473–7.

  46. Paul J, Iveson T, Midgley R, Harkin A, Masterton M, Alexander L, et al. Choice of randomisation time-point in non-inferiority studies of reduced treatment duration: experience from the SCOT study. Trials. 2011;12:S1.

  47. Du W, Mood D, Gadgeel S, Simon MS. An educational video to increase clinical trials enrollment among breast cancer patients. Breast Cancer Res Treat. 2009;117(2):339–47.

  48. Kendrick D, Watson M, Dewey M, Woods AJ. Does sending a home safety questionnaire increase recruitment to an injury prevention trial? A randomised controlled trial. J Epidemiol Community Health. 2001;55:845–6.

  49. Pighills A, Torgerson DJ, Sheldon T. Publicity does not increase recruitment to falls prevention trials: the results of two quasi-randomized trials. J Clin Epidemiol. 2009;62(12):1332–5.

  50. Treweek S, Barnett K, Maclennan G, Bonetti D, Eccles MP, Francis JJ, et al. E-mail invitations to general practitioners were as effective as postal invitations and were more efficient. J Clin Epidemiol. 2012;65(7):793–7.

  51. Bentley JP, Thacker PG. The influence of risk and monetary payment on the research participation decision making process. J Med Ethics. 2004;30(3):293–8.

  52. Simel DL, Feussner JR. A randomized controlled trial comparing quantitative informed consent formats. J Clin Epidemiol. 1991;44(8):771–7.

  53. MacQueen KM, Chen M, Ramirez C, Nnko SE, Earp KM. Comparison of closed-ended, open-ended, and perceived informed consent comprehension measures for a mock HIV prevention trial among women in Tanzania. PLoS ONE. 2014;9(8):e105720.

  54. Kerr CEP, Robinson EJ, Lilford RJ, Edwards SJL, Braunholtz DA, Stevens AJ. The impact of describing clinical trial treatments as new or standard. Patient Educ Couns. 2004;53(1):107–13.

  55. Foss KT, Kjaergaard J, Stensballe LG, Greisen G. Recruiting to clinical trials on the telephone - a randomized controlled trial. Trials. 2016;17(1):552.

  56. Wadland WC, Hughes JR, Secker-Walker RH, Bronson DL, Fenwick J. Recruitment in a primary care trial on smoking cessation. Fam Med. 1990;22:201–4.

  57. Cooper KG, Grant AM, Garratt AM. The impact of using a partially randomised patient preference design when evaluating alternative managements for heavy menstrual bleeding. Br J Obstet Gynaecol. 1997;104:1367–73.

  58. Larkey LK, Staten LK, Ritenbaugh C, Hall RA, Buller DB, Bassford T, et al. Recruitment of Hispanic women to the Women’s Health Initiative: the case of Embajadoras in Arizona. Control Clin Trials. 2002;23:289–98.

  59. Free C, Hoile E, Robertson S, Knight R. Three controlled trials of interventions to increase recruitment to a randomized controlled trial of mobile phone based smoking cessation support. Clin Trials. 2010;7:265–73.

  60. Lee H, Hubscher M, Moseley GL, Kamper SJ, Traeger AC, Skinner IW, et al. An embedded randomised controlled trial of a teaser campaign to optimise recruitment in primary care. Clin Trials. 2017;14(2):162–9.

  61. Kimmick GG, Peterson BL, Kornblith AB, Mandelblatt J, Johnson JL, Wheeler J, et al. Improving accrual of older persons to cancer treatment trials: a randomized trial comparing an educational intervention with standard information: CALGB 360001. J Clin Oncol. 2005;23(10):2201–7.

  62. Coyne C, Xu R, Raich P, Plomer K, Dignan M, Wenzel L, et al. Randomized, controlled trial of an easy-to-read informed consent statement for clinical trial participation: a study of the Eastern Cooperative Oncology Group. J Clin Oncol. 2003;21(5):836–42.

  63. Mudano AS, Gary LC, Oliveira AL, Melton M, Wright NC, Curtis JR, et al. Using tablet computers compared to interactive voice response to improve subject recruitment in osteoporosis pragmatic clinical trials: feasibility, satisfaction, and sample size. Patient Prefer Adherence. 2013;7:517–23.

  64. Man MS, Rick J, Bower P, on behalf of the Healthlines Study Group and the MRC-START Group. Improving recruitment to a study of telehealth management for long-term conditions in primary care: two embedded, randomised controlled trials of optimised patient information materials. Trials. 2015;16:309.

  65. Du W, Mood D, Gadgeel S, Simon MS. An educational video to increase clinical trials enrollment among lung cancer patients. J Thorac Oncol. 2008;3:23–9.

    Article  CAS  PubMed  Google Scholar 

  66. Fowell A, Johnstone R, Finlay I, Russell D, Russell I. Design of trials with dying patients: a feasibility study of cluster randomisation versus randomised consent. Palliat Med. 2006;20:799–804.

    Article  CAS  PubMed  Google Scholar 

  67. Ives N, Troop M, Waters A, Davies S, Higgs C, Easterbrook P. Does an HIV clinical trial information booklet improve patient knowledge and understanding of HIV clinical trials? HIV Med. 2001;2:241–9.

    Article  CAS  PubMed  Google Scholar 

  68. Weinfurt KP, Hall MA, Dinan MA, DePuy V, Friedman JY, Allsbrook JS, et al. Effects of disclosing financial interests on attitudes toward clinical research. J Gen Intern Med. 2008;23(6):860–6.

    Article  PubMed  PubMed Central  Google Scholar 

  69. Brierley G, Richardson R, Torgerson DJ. Using short information leaflets as recruitment tools did not improve recruitment: a randomized controlled trial. J Clin Epidemiol. 2012;65(2):147–54.

    Article  PubMed  Google Scholar 

  70. Perrone F, De Placido S, Giusti C, Gallo C. Looking for consent in RCTs: a randomised trial with surrogate patients [La Richiesta Del consenso nella Ricerca Clinica: uno studio randomizzato in soggetti sani]. Epidemiol Prev. 1995;19:282–90.

    CAS  PubMed  Google Scholar 

  71. Fleissig A, Jenkins V, Fallowfield L. Results of an intervention study to improve communication about randomised clinical trials of cancer therapy. Eur J Cancer. 2001;37:322–31.

    Article  CAS  PubMed  Google Scholar 

  72. Simes RJ, Tattersall MH, Coates AS, Raghaven D, Solomon HJ, Smartt H. Randomised comparison of procedures for obtaining informed consent in clinical trials of treatment for cancer. BMJ. 1986;293:1065–8.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  73. Fracasso PM, Goodner SA, Creekmore AN, Morgan HP, Foster DM, Hardmon AA, et al. Coaching intervention as a strategy for minority recruitment to cancer clinical trials. J Oncol Pract. 2013;9(6):294–9.

    Article  PubMed  PubMed Central  Google Scholar 

  74. Myles PS, Fletcher HE, Cairo S, Madder H, McRae R, Cooper J, et al. Randomized trial of informed consent and recruitment for clinical trials in the immediate preoperative period. Anesthesiology. 1999;91:969–78.

    Article  CAS  PubMed  Google Scholar 

  75. Graham A, Goss C, Xu S, Magid DJ, DiGuiseppi C. Effect of using different modes to administer the AUDIT-C on identification of hazardous drinking and acquiescence to trial participation among injured patients. Alcohol Alcohol. 2007;42(5):423–9.

    Article  PubMed  Google Scholar 

  76. Paul C, Courtney R, Sanson-Fisher R, Carey M, Hill D, Simmons J, et al. A randomized controlled trial of the effectiveness of a pre-recruitment primer letter to increase participation in a study of colorectal screening and surveillance. BMC Med Res Methodol. 2014;14:44.

    Article  PubMed  PubMed Central  Google Scholar 

  77. Free CJ, Hoile E, Knight R, Robertson S, Devries KM. Do messages of scarcity increase trial recruitment? Contemp Clin Trials. 2011;32(1):36–9.

    Article  PubMed  Google Scholar 

  78. Hemminki E, Hovi SL, Veerus P, Sevon T, Tuimala R, Rahu M, et al. Blinding decreased recruitment in a prevention trial of postmenopausal hormone therapy. J Clin Epidemiol. 2004;57(12):1237–43.

    Article  PubMed  Google Scholar 

  79. Tehranisa JS, Meurer WJ. Can response-adaptive randomization increase participation in acute stroke trials? Stroke. 2014;45(7):2131–3.

    Article  PubMed  PubMed Central  Google Scholar 

  80. Hutchison C, Cowan C, McMahon T, Paul J. A randomised controlled study of an audiovisual patient information intervention on informed consent and recruitment to cancer clinical trials. Br J Cancer. 2007;97(6):705–11.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  81. Jacobsen PB, Wells KJ, Meade CD, Quinn GP, Lee JH, Fulp WJ, et al. Effects of a brief multimedia psychoeducational intervention on the attitudes and interest of patients with cancer regarding clinical trial participation: a multicenter randomized controlled trial. J Clin Oncol. 2012;30(20):2516–21.

    Article  PubMed  PubMed Central  Google Scholar 

  82. Karunaratne AS, Korenman SG, Thomas SL, Myles PS, Komesaroff PA. Improving communication when seeking informed consent: a randomised controlled study of a computer-based method for providing information to prospective clinical trial participants. MJA. 2010;192:388–92.

    PubMed  Google Scholar 

  83. Llewellyn-Thomas HA, McGreal MJ, Thiel EC. Cancer patients’ decision making and trial-entry preferences: the effects of framing information about short-term toxicity and long-term survival. Med Decis Mak. 1995;15:4–12.

    Article  CAS  Google Scholar 

  84. Chen F, Rahimi K, Haynes R, Naessens K, Taylor-Clarke M, Murray C et al. Investigating strategies to improve attendance at screening visits in a randomized trial. Trials. 2011;12(Suppl 1).

  85. Dear RF, Barratt AL, Askie LM, Butow PN, McGeechan K, Crossing S, et al. Impact of a cancer clinical trials web site on discussions about trial participation: a cluster randomized trial. Ann Oncol. 2012;23(7):1912–8.

    Article  CAS  PubMed  Google Scholar 

  86. Fureman I, Meyers K, McLellan AT, Metzger D, Woody G. Evaluation of a video supplement to informed consent: injection drug users and preventative HIV vaccine efficacy trials. AIDS Educ Prev. 1997;9(4):330–41.

    CAS  PubMed  Google Scholar 

  87. Tilley BC, Mainous AG 3rd, Elm JJ, Pickelsimer E, Soderstrom LH, Ford ME, et al. A randomized recruitment intervention trial in Parkinson’s disease to increase participant diversity: early stopping for lack of efficacy. Clin Trials. 2012;9(2):188–97.

    Article  PubMed  PubMed Central  Google Scholar 

  88. Llewellyn-Thomas HA, Thiel EC, Sem FWC, Harrison Woermke DE. Presenting clinical trial information: a comparison of methods. Patient Educ Couns. 1995;25:97–107.

    Article  CAS  PubMed  Google Scholar 

  89. Ethier JF, Curcin V, McGilchrist MM, Choi Keung SNL, Zhao L, Andreasson A, et al. eSource for clinical trials: implementation and evaluation of a standards-based approach in a real world trial. Int J Med Inf. 2017;106:17–24.

    Article  Google Scholar 

  90. Maxwell AE, Parker RA, Drever J, Rudd A, Dennis MS, Weir CJ, et al. Promoting recruitment using Information Management efficiently (PRIME): a stepped-wedge, cluster randomised trial of a complex recruitment intervention embedded within the REstart or stop Antithrombotics Randomised Trial. Trials. 2017;18(1):623.

  91. Meropol NJ, Wong YN, Albrecht T, Manne S, Miller SM, Flamm AL, et al. Randomized trial of a web-based intervention to address barriers to clinical trials. J Clin Oncol. 2016;34(5):469–78.

  92. Bickmore TW, Utami D, Matsuyama R, Paasche-Orlow MK. Improving access to online health information with conversational agents: a randomized controlled experiment. J Med Internet Res. 2016;18(1):e1.

  93. Annett RD, Brody JL, Scherer DG, Turner CW, Dalen J, Raissy H. A randomized study of a method for optimizing adolescent assent to biomedical research. AJOB Empir Bioeth. 2017;8(3):189–97.

  94. Brown SD, Partee PN, Feng J, Quesenberry CP, Hedderson MM, Ehrlich SF, et al. Outreach to diversify clinical trial participation: a randomized recruitment study. Clin Trials. 2015;12(3):205–11.

  95. Bobb MR, Van Heukelom PG, Faine BA, Ahmed A, Messerly JT, Bell G, et al. Telemedicine provides noninferior research informed consent for remote study enrollment: a randomized controlled trial. Acad Emerg Med. 2016;23(7):759–65.

  96. Parker A, Knapp P, Treweek S, Madhurasinghe V, Littleford R, Gallant S, et al. The effect of optimised patient information materials on recruitment in a lung cancer screening trial: an embedded randomised recruitment trial. Trials. 2018;19(1):503.

  97. Arundel C, Jefferson L, Bailey M, Cockayne S, Hicks K, Loughrey L, et al. A randomized, embedded trial of pre-notification of trial participation did not increase recruitment rates to a falls prevention trial. J Eval Clin Pract. 2017;23(1):73–8.

  98. Crane MM, LaRose JG, Espeland MA, Wing RR, Tate DF. Recruitment of young adults for weight gain prevention: randomized comparison of direct mail strategies. Trials. 2016;17(1):282.

  99. Parker C, Snyder R, Jefford M, Dilts D, Wolfe R, Millar J. A randomized controlled trial of an additional funding intervention to improve clinical trial enrollment. J Natl Compr Canc Netw. 2017;15(9):1104–10.

  100. Felicitas-Perkins JQ, Palalay MP, Cuaresma C, Ho RCS, Chen MS Jr., Dang J, et al. A pilot study to determine the effect of an educational DVD in Philippine languages on cancer clinical trial participation among Filipinos in Hawai‘i. Hawai’i J Med Public Health. 2017;76(7):171–7.

  101. Witham MD, Band MM, Price RJG, Fulton RL, Clarke CL, Donnan PT, et al. Effect of two different participant information sheets on recruitment to a falls trial: an embedded randomised recruitment trial. Clin Trials. 2018;15(6):551–6.

  102. Veerus P, Fischer K, Hemminki E, Hovi SL, Hakama M. Effect of characteristics of women on attendance in blind and non-blind randomised trials: analysis of recruitment data from the EPHT Trial. BMJ Open. 2016;6(10):e011099.

  103. Hughes-Morley A, Hann M, Fraser C, Meade O, Lovell K, Young B, et al. The impact of advertising patient and public involvement on trial recruitment: embedded cluster randomised recruitment trial. Trials. 2016;17(1):586.

  104. Paris A, Deygas B, Cornu C, Thalamas C, Maison P, Duale C, et al. Improved informed consent documents for biomedical research do not increase patients’ understanding but reduce enrolment: a study in real settings. Br J Clin Pharmacol. 2015;80(5):1010–20.

  105. Bracken K, Keech A, Hague W, Kirby A, Robledo KP, Allan C, et al. Telephone call reminders did not increase screening uptake more than SMS reminders: a recruitment study within a trial. J Clin Epidemiol. 2019;112:45–52.

  106. Garvelink MM, Freitas A, Menear M, Briere N, Stacey D, Legare F. In for a penny, in for a pound: the effect of pre-engaging healthcare organizations on their subsequent participation in trials. BMC Res Notes. 2015;8:751.

  107. Bishop FL, Greville-Harris M, Bostock J, Din A, Graham CA, Lewith G, et al. Informing adults with back pain about placebo effects: randomized controlled evaluation of a new website with potential to improve informed consent in clinical research. J Med Internet Res. 2019;21:1–15.

  108. Brubaker L, Jelovsek JE, Lukacz ES, Balgobin S, Ballard A, Weidner AC, et al. Recruitment and retention: a randomized controlled trial of video-enhanced versus standard consent processes within the E-OPTIMAL study. Clin Trials. 2019;16:481–489.

  109. Casey SL. The impact of help-self and help-others appeals upon participation in clinical research trials (Accession No. 10264575) [Doctoral dissertation, Old Dominion University, Norfolk]. ProQuest Dissertations Publishing; 2017.

  110. Chow EJ, Baldwin LM, Hagen AM, Hudson MM, Gibson TM, Kochar K, et al. Communicating health information and improving coordination with primary care (CHIIP): rationale and design of a randomized cardiovascular health promotion trial for adult survivors of childhood cancer. Contemp Clin Trials. 2020;89:105915. https://doi.org/10.1016/j.cct.2019.105915.

  111. Christopher PP, Appelbaum PS, Truong D, Albert K, Maranda L, Lidz C. Reducing therapeutic misconception: a randomized intervention trial in hypothetical clinical trials. PLoS One. 2017;12:1–11.

  112. Cottler LB, Striley CW, Elliott AL, Zulich AE, Kwiatkowski E, Nelson D. Pragmatic trial of a Study Navigator Model (NAU) vs. Ambassador Model (N+) to increase enrollment to health research among community members who use illicit drugs. Drug Alcohol Depend. 2017;175:146–150.

  113. Courtright KR, Halpern SD, Joffe S, Ellenberg SS, Karlawish J, Madden V, et al. Willingness to participate in pragmatic dialysis trials: the importance of physician decisional autonomy and consent approach. Trials. 2017;18:1–10.

  114. Godinho A, Schell C, Cunningham JA. How one small text change in a study document can impact recruitment rates and follow-up completions. Internet Interv. 2019;18:100284. https://doi.org/10.1016/j.invent.2019.100284.

  115. Haynes R, Chen F, Wincott E, Dayanandan R, Lay MJ, Parish S, et al. Investigating modifications to participant information materials to improve recruitment into a large randomized trial. Trials. 2019;20:1–6.

  116. Jefferson L, Fairhurst C, Brealey S, Coleman E, Cook L, Hewitt C, et al. Remote or on-site visits were feasible for the initial setup meetings with hospitals in a multicenter surgical trial: an embedded randomized trial. J Clin Epidemiol. 2018;100:13–21.

  117. Jolly K, Sidhu M, Bower P, Madurasinghe V. Improving recruitment to a study of telehealth management for COPD: a cluster randomised controlled “study within a trial” (SWAT) of a multimedia information resource. Trials. 2019;20:453.

  118. Kamen CS, Quinn GP, Asare M, Heckler CE, Guido JJ, Giguere JK, et al. Multimedia psychoeducation for patients with cancer who are eligible for clinical trials: a randomized clinical trial. Cancer. 2018;124:4504–4511.

  119. Kenerson D, Fadeyi S, Liu J, Weriwoh M, Beard K, Hargreaves MK. Processes in increasing participation of African American women in cancer prevention trials: development and pretesting of an audio-card. J Health Commun. 2017;22:933–941.

  120. Kern-Goldberger AS, Hill-Ricciuti AC, Zhou JJ, Savant AP, Rugg L, Dozor AJ, et al. Perceptions of safety monitoring in CF clinical studies and potential impact on future study participation. J Cyst Fibros. 2019;18:530–535.

  121. Kim SC, Cappella JN, Price V. Online discussion effects on intention to participate in genetic research: a longitudinal experimental study. Psychol Health. 2016;31:1025–1046.

  122. Krishnamurti T, Argo N. A patient-centered approach to informed consent: results from a survey and randomized trial. Med Decis Mak. 2016;36:726–740.

  123. Langford AT, Larkin K, Resnicow K, Zikmund-Fisher BJ, Fagerlin A. Understanding the role of message frames on African-American willingness to participate in a hypothetical diabetes prevention study. J Health Commun. 2017;22:647–656.

  124. Massett HA, Hiser M, Atkinson NL, Brittle C, Bailey R, Adler J, et al. A randomized controlled study comparing the National Cancer Institute’s original and revised consent form templates. IRB Ethics and Human Research. 2017;39:1–7.

  125. McCaffery J, Mitchell A, Fairhurst C, Cockayne S, Rodgers S, Relton C, et al. Does handwriting the name of a potential trial participant on an invitation letter improve recruitment rates? A randomised controlled study within a trial [version 1; peer review: 2 approved]. F1000Res. 2019;8:1–11.

  126. McCormack LA, Wylie A, Moultrie R, Furberg RD, Wheeler AC, Treiman K, et al. Supporting informed clinical trial decisions: Results from a randomized controlled trial evaluating a digital decision support tool for those with intellectual disability. PLoS One. 2019;14:1–21.

  127. Neighbors C, Rodriguez LM, Garey L, Tomkins MM. Testing a motivational model of delivery modality and incentives on participation in a brief alcohol intervention. Addict Behav. 2018;84:131–138.

  128. Nickell A, Stewart SL, Burke NJ, Guerra C, Cohen E, Lawlor C, et al. Engaging limited English proficient and ethnically diverse low-income women in health research: a randomized trial of a patient navigator intervention. Patient Educ Couns. 2019;102:1313–1323.

  129. O’Hare F, Flanagan Z, Nelson M, Curtis A, Heritier S, Spark S, et al. Comparing two methods for delivering clinical trial informed consent information to older adults: singular versus stepped approach. Clin Trials. 2018;15:610–615.

  130. Ortiz AP, Machin M, Soto-Salgado M, Centeno-Girona H, Rivera-Collazo D, González D, et al. Effect of an educational video to increase calls and screening into an anal cancer clinical trial among HIV+ Hispanics in PR: results from a randomized controlled behavioral trial. AIDS Behav. 2019;23:1135–1146.

  131. Peng W, Morgan SE, Mao B, McFarlane SJ, Occa A, Grinfeder G, et al. Ready to make a decision: a model of informational aids to improve informed participation in clinical trial research. J Health Commun. 2019;24:865–877.

  132. Perry B, Geoghegan C, Lin L, McGuire FH, Nido V, Grabert B, et al. Patient preferences for using mobile technologies in clinical trials. Contemp Clin Trials Commun. 2019;15:100399.

  133. Rogers A, Flynn RWV, Mackenzie IS, MacDonald TM. Does the provision of a DVD-based audio-visual presentation improve recruitment in a clinical trial? A randomised trial of DVD trial invitations. BMC Med Res Methodol. 2019;19:1–6.

  134. Skinner JS, Fair AM, Holman AS, Boyer AP, Wilkins CH, et al. The impact of an educational video on clinical trial enrollment and knowledge in ethnic minorities: a randomized control trial. Front Public Health. 2019;7:1–7.

  135. Whiteside K, Flett L, Mitchell A, Fairhurst C, Cockayne S, Rodgers S, et al. Using pens as an incentive for trial recruitment of older adults: an embedded randomised controlled trial. F1000Res. 2019;8:1–10.

  136. McGuinness L, Higgins J. Risk-of-bias VISualization (robvis): an R package and Shiny web app for visualizing risk-of-bias assessments. Res Syn Meth. 2020:1–7.

  137. Gardner HR, Treweek S, Gillies K. Using evidence when planning for trial recruitment: an international perspective from time-poor trialists. PLoS ONE. 2019;14(12):e0226081.

  138. O’Sullivan Greene E, Shiely F. Recording and reporting of recruitment strategies in trial protocols, registries, and publications was nonexistent. J Clin Epidemiol. 2022;152:248–56.

  139. Gates A, Caldwell P, Curtis S, Dans L, Fernandes RM, Hartling L, et al. Consent and recruitment: the reporting of paediatric trials published in 2012. BMJ Paediatr Open. 2018;2(1):e000369.

  140. Dijkers MP. Overview of Reviews using the template for intervention description and replication (TIDieR) as a measure of trial intervention reporting quality. Arch Phys Med Rehabil. 2021;102(8):1623–32.

  141. Cotterill S, Knowles S, Martindale AM, Elvey R, Howard S, Coupe N, et al. Getting messier with TIDieR: embracing context and complexity in intervention reporting. BMC Med Res Methodol. 2018;18(1):12.

  142. Sustainable Trials Study Group. Towards sustainable clinical trials. BMJ. 2007;334(7595):671–3.

  143. Lyle K, Dent L, Bailey S, Kerridge L, Roberts I, Milne R. Carbon cost of pragmatic randomised controlled trials: retrospective analysis of sample of trials. BMJ. 2009;339:b4187.

  144. Adshead F, Al-Shahi Salman R, Aumonier S, Collins M, Hood K, McNamara C, et al. A strategy to reduce the carbon footprint of clinical trials. Lancet. 2021;398(10297):281–2.

  145. Norris E, Marques MM, Finnerty AN, Wright AJ, West R, Hastings J, et al. Development of an intervention setting ontology for behaviour change: specifying where interventions take place. Wellcome Open Res. 2020;5:124.

  146. Wright AJ, Norris E, Finnerty AN, Marques MM, Johnston M, Kelly MP, et al. Ontologies relevant to behaviour change interventions: a method for their development. Wellcome Open Res. 2020;5:126.

Acknowledgements

Not applicable.

Funding

This study was funded by the Canadian Institutes of Health Research (CIHR; Grant # PJT – 169055).

Author information

Contributions

JCB, JP, JG, DAF, KG, IDG, and MT were responsible for the conception of this project and provided guidance and expertise throughout the project. NH, KC, and JCB completed study screening. NH, KC, SS, and SV completed data extraction. JCB and NH completed data analysis. JCB and NH drafted the manuscript. All authors reviewed and approved the final version of the manuscript.

Corresponding author

Correspondence to Jamie C. Brehaut.

Ethics declarations

Ethics approval and consent to participate

Ethics approval and consent to participate were not required for this review.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Hudek, N., Carroll, K., Semchishen, S. et al. Describing the content of trial recruitment interventions using the TIDieR reporting checklist: a systematic methodology review. BMC Med Res Methodol 24, 85 (2024). https://doi.org/10.1186/s12874-024-02195-5


Keywords