
Table 2 Definitions and criteria used to appraise the quality of trial methods and conduct

From: Test-treatment RCTs are susceptible to bias: a review of the methodological quality of randomized trials that evaluate diagnostic tests

1. Did methods of sequence generation adequately protect against selection bias?

Clear description of method for allocating participants to study groups. Quality judged as Adequate, Inadequate or Unclear using criteria recommended by the Cochrane Collaboration [31].

2. Did methods of allocation concealment adequately protect against selection bias?

Clear description of method for preventing knowledge or prediction of group allocation amongst patients and care-providers. Quality judged as Adequate, Inadequate or Unclear using criteria recommended by the Cochrane Collaboration [31].

3. Were participants, care-providers and outcome assessors blinded to test-treatment interventions?

Clear reports of whether participants, care-providers (those responsible for patient management) and outcome assessors were masked to the identity of tests used for decision-making, and a description of methods used to mask.

4. Were primary outcomes comprehensively reported?

Reports considered adequate when the primary outcome was clearly defined and the method and timing of its measurement were described. When the primary outcome was not clearly defined, the outcome used in the power calculation was taken as primary, or failing that, the outcome stated in the primary study objective. The primary outcome was recorded as ‘not defined’ in the absence of this information [56]. Outcomes were classified as patient-based or process.

Method of measurement considered adequate if a validated tool was used, if a non-validated but fully described tool was used, or if rigorous criteria to assess the outcome were provided (e.g. the operational definition of a target condition and the test methods used to arrive at a diagnosis). Documentation considered complete when the time at which the primary assessment should be conducted was also made explicit.

5. For each group, is it clear whether some participants did not receive the allocated intervention, were lost to follow-up, or were not analyzed?

Clear and complete accounting of participant flow as detailed in CONSORT [30], including use of a CONSORT diagram. Reporting considered adequate if all five elements (Eligibility, Allocation, Receiving intervention, Followed-up, Analyzed) were reported for all study groups, and if these numbers agreed (e.g. if the number analyzed tallied with the numbers randomized and lost to follow-up).

Analysis according to allocated group – considered inadequate if patients were not analyzed according to allocated study group, regardless of test(s) actually received.

Use of intention-to-treat (ITT) – clear statement that the ITT principle was used. Considered adequate if all study patients were analyzed as randomized and analyses were complete.

Exclusions and missing data – clear description of numbers of, and reasons for, missing data due to missing outcome responses, exclusion of participants, and loss to follow-up; description of methods used to deal with missing data.

Complete analysis – analyses considered complete when no data were missing due to exclusions, missing responses or loss to follow-up for the primary outcome measured at the primary time-point. Magnitude of attrition was calculated per group for each trial by comparing numbers randomized to numbers analyzed. Differential attrition was considered as a ≥5% and ≥20% difference between arms, following the approach advocated by the Centre for Evidence Based Medicine when judging the quality of comparative evidence of effectiveness [57].
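The attrition calculation described above is simple arithmetic; a minimal sketch (not taken from the review itself, with hypothetical trial numbers) is:

```python
# Sketch of the attrition calculation: attrition per arm is the
# shortfall between numbers randomized and numbers analyzed, and
# differential attrition is the absolute difference between arms,
# judged against the 5% and 20% thresholds [57].

def attrition(randomized: int, analyzed: int) -> float:
    """Proportion of randomized participants not analyzed."""
    return (randomized - analyzed) / randomized

def differential_attrition(arm_a: tuple[int, int], arm_b: tuple[int, int]) -> float:
    """Absolute difference in attrition between two (randomized, analyzed) arms."""
    return abs(attrition(*arm_a) - attrition(*arm_b))

# Hypothetical trial: 200 randomized / 180 analyzed vs 200 randomized / 150 analyzed
diff = differential_attrition((200, 180), (200, 150))
print(f"{diff:.0%}")               # 10% vs 25% attrition -> 15% difference
print(diff >= 0.05, diff >= 0.20)  # exceeds the 5% but not the 20% threshold
```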

6. Were the primary analyses conducted appropriately?

Whole group analysis – primary outcomes measured in subgroups of the randomized population were considered Inadequate due to risk of selection bias [58].

Inconsistency – use of different outcome assessment methods in each group was considered inadequate unless the outcome was a measure of test performance (e.g. diagnostic yield or therapeutic yield).

7. How did studies determine sample size?

Reporting of a power calculation and the outcome variable on which it was based; extraction of the target sample size and comparison with the achieved sample size.