This scoping review found ten publicly available checklists, issued by universities, national research organizations, or health care facilities, for assessing study-level feasibility of clinical trials. We identified 48 distinct items for trial feasibility assessment. The most frequently mentioned individual items were “The target population is available”, “Access to professional support and required facilities is available”, “Equipment is appropriate and sufficient”, “Current standard practice at trial site(s) is compatible with trial protocol”, and “Adequate staffing is identified and available within the trial period”. The number of items differed considerably across feasibility checklists: only four of the ten checklists contained about half of the 48 identified items, and the remaining checklists fewer than that. For only four of the identified checklists was the documentary basis (e.g. trial protocol) for the assessment specified, and for none was the choice of items justified or the way of compiling items explained. None of the available checklists appeared to be user-tested or validated. Thus, the validity and practicability of the available trial feasibility checklists remain uncertain, as does whether the implementation of such checklists indeed leads to more successful trial conduct. No single checklist is likely to cover all the items required to assess feasibility for every trial, and any checklist relies on the user completing it as intended. Furthermore, checking for feasibility during trial planning has to be seen in the context of a comprehensive framework of clinical research that covers all stages of a clinical trial, i.e. concept, planning and feasibility, conduct, analysis and interpretation, and reporting and knowledge translation. Trial success may also depend on these other phases. Thus, equivalent tools are conceivable for the other phases as well, for example a risk-based monitoring tool implemented during trial conduct.
Comparison to other literature
Although there is substantial literature on feasibility studies and reporting guidelines, and since 2015 even an online journal fully dedicated to pilot and feasibility studies [6, 12, 28], the actual assessment of whether there is sufficient capacity and capability for the successful conduct and delivery of a clinical trial seems to be a neglected topic in the literature. Our systematic literature search identified only a single article presenting a publicly available feasibility checklist. There are viewpoint, commentary, or perspective articles discussing different aspects of clinical trial feasibility without providing a practical checklist or describing scientific work towards systematic tool development [3, 11, 15]. The key factors for trial feasibility assessment mentioned in these articles largely overlap with the domains from our content analysis. Butryn et al., for instance, considered optimal resource allocation, operational efficiency, financial viability, and enrolment success as essential components of trial feasibility, with the success of each component best achieved through close collaboration between the principal investigator, the research team, information technology specialists, and ancillary departments (e.g. radiology). In reaction to another RCT prematurely discontinued due to poor recruitment, an editorial by Maas raised the overdue question of criteria for pre-study feasibility assessment and suggested that clinical trial registries such as clinicaltrials.gov should consider requiring information about trial feasibility assessments. Given the high prevalence of premature trial discontinuations due to recruitment or organisational problems [7, 8], and the associated huge amount of wasted resources, it is surprising that the clinical trial community has not yet adequately responded to the obvious need for more effective trial feasibility assessment.
Our scoping review has the following limitations. First, we might have missed available trial feasibility checklists despite our comprehensive search strategy, which included an internet search in addition to a literature search of two large electronic databases. We chose this approach because we assumed that we had to rely on websites and online publications of research institutions. Inherent risks of searching the internet are selection bias (bubble effect) and limited reproducibility due to the non-transparent and inconsistent search algorithm of Google.com. Second, we focused only on publicly available checklists. Searching for unpublished checklists or tools would have required a different approach (e.g. a survey of clinical trial stakeholders). However, in our opinion this is a minor limitation, as we aimed to provide an overview of publicly available tools that can be accessed by any stakeholder. Third, we could not assess the quality of the identified checklists since we did not find any information on how they were developed. A detailed description of the advantages and disadvantages of the different checklists would require comprehensive user testing, ideally using a sample of RCTs that are currently in the planning phase. Fourth, the provided overview of suggested feasibility assessment items is neither a recommendation for what an ideal checklist should look like (e.g. not all items might be relevant to trial success, and some items may be missing) nor ready to implement; rather, it is a first step towards systematic and transparent tool development (see Future directions).
These limitations, however, are hardly relevant for our conclusion that user-tested and validated clinical trial feasibility assessment checklists or tools are lacking. We think that our search allowed us to identify the available checklists that an investigator, who would probably conduct less extensive searches of the internet or literature databases, would find.
Our overview of suggested items for trial feasibility assessment may serve as a starting point for the systematic and transparent development of a reliable, valid, and user-friendly feasibility assessment tool involving relevant stakeholders such as trial investigators, trial support organizations, research ethics committees, and funding agencies. A large international group of stakeholders could first examine whether any items are missing, grade the relative importance of items, and bring forward feasibility checklists or tools that are not publicly available. The resultant item list could then undergo a consensus process across stakeholders using the Delphi technique to determine which items need to be considered in an effective trial feasibility checklist and how assessment results should be applied. Subsequently, empirical user testing and validation work would be important. A similar tool development process has recently been successfully completed for an assessment of subgroup effect credibility. Finally, evidence would need to be generated (e.g. from a cohort of trials randomised to using a feasibility checklist or not) in order to investigate whether the implementation of a feasibility checklist indeed leads to more successful trial conduct (e.g. measured by enrolment success). Furthermore, empirical research needs to establish how trial success is associated with individual items that appear relevant for study-level feasibility. It might well be that some of these items act as gatekeepers and are thus more important than others for trial success (e.g. whether or not a pilot trial was conducted).