Archived Comments for: Systematic reviews, systematic error and the acquisition of clinical knowledge

  1. RCTs and Meta-analysis as Knowledge Sources

    Vance W Berger, National Cancer Institute

    3 August 2010

    Mickenautsch’s article [1] provides an interesting endorsement of systematic reviews as a method to unite analytic and synthetic knowledge acquisition. A few issues merit further discussion: 1) clarifying the basis for the criticism of evidence-based medicine (EBM), 2) considering whether all trials should be grouped together as providing equally compelling information, and 3) questioning the merits of meta-analysis itself.

    After quoting sources asserting that EBM through randomized controlled trials (RCTs) provides the best evidence, Mickenautsch raises the criticism that EBM claims to have “unique access to absolute scientific truth” which “devalues and replaces knowledge sources of other types.” This criticism does not challenge the fundamental methods by which RCTs arrive at their conclusions; it simply contends that they are not the only good source of information, which is irrelevant to the assertion that RCTs are the best. As far as RCTs are concerned, it is as if the parties were in a beer commercial arguing about whether the beer is best because it is “less filling” or because it “tastes great” – either way, both agree that it is the best beer. The real disagreement pertains to other kinds of research, such as qualitative and observational studies, and whether they can contribute valuable information above and beyond what we learn from RCTs. But this issue should not be construed as criticism of RCTs.

    Another potential criticism of RCTs that the article does not raise is that they are commonly all grouped together as the gold standard of evidence, which ignores the significant differences in quality that exist within that category [2]. Randomized studies might be guaranteed to be unbiased in theory, but in practice many complications can threaten internal validity, so that a study’s results are not credible. The article does touch on this issue when discussing trial selection and weighting for meta-analysis, and the importance of relying only on high-quality trials with high internal validity, so as to avoid bias.

    We also question whether meta-analysis categorically produces useful results. The article cites an example in which 31 inconclusive studies are combined to find a meaningful treatment effect. There is no question that mathematical formulas exist to allow one, in effect, to draw blood from a stone, but if we look beyond the mathematics of the situation, we may stop to ask whether this is real progress. It is true that the power of such a study would be high owing to the large (combined) sample size, but perhaps it has entered the realm of being too high (pseudo-power). Such an overpowered, or pseudo-powered, study can make small and meaningless effects, or even no effects at all if the biases are large enough, appear significant in the statistical sense without being meaningful in the clinical sense. It seems paradoxical that a meta-analysis is able to take 31 negatives and find a positive, and we wonder whether the revealed effects would necessarily be relevant to treatment decisions in clinical practice.
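    The arithmetic behind this “31 negatives make a positive” paradox is easy to demonstrate. Under standard inverse-variance (fixed-effect) pooling, the combined standard error shrinks roughly by the square root of the number of studies, so an effect too small for any single study to detect can still cross the significance threshold in aggregate. The sketch below uses 31 hypothetical study estimates (illustrative numbers, not data from the article), each individually non-significant, and shows the pooled result reaching statistical significance even though the underlying effect (about 0.15 standard-deviation units) may be clinically trivial:

```python
import math

# Hypothetical illustration: 31 small studies of the same treatment effect.
# Each study has the same standard error (0.18) and an effect estimate
# spread around 0.15 standard-deviation units. These numbers are invented
# for illustration; none comes from the article under discussion.
estimates = [0.15 + 0.01 * k for k in range(-15, 16)]  # 31 estimates, 0.00..0.30
se = 0.18                                              # per-study standard error

def two_sided_p(z):
    """Two-sided p-value for a z statistic under the standard normal."""
    return math.erfc(abs(z) / math.sqrt(2))

# Individually, no study reaches p < 0.05 (the largest |z| is 0.30/0.18 ~ 1.67).
individual_p = [two_sided_p(est / se) for est in estimates]
n_sig = sum(p < 0.05 for p in individual_p)

# Fixed-effect (inverse-variance) pooling: each study is weighted by 1/SE^2,
# and the pooled standard error is 1/sqrt(sum of weights) = se/sqrt(31).
weights = [1 / se**2] * len(estimates)
pooled_est = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
pooled_se = 1 / math.sqrt(sum(weights))
pooled_p = two_sided_p(pooled_est / pooled_se)

print(f"individually significant studies: {n_sig} of {len(estimates)}")
print(f"pooled estimate: {pooled_est:.3f}, SE: {pooled_se:.3f}, p = {pooled_p:.1e}")
```

    The pooled p-value is tiny while every individual study is a “negative” result, which is exactly the pseudo-power concern: statistical significance here says only that the sample is large, not that a 0.15-standard-deviation effect matters clinically, and any shared bias across the 31 studies would be pooled right along with the signal.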

    Sincerely,

    Vance W. Berger, PhD
    Biometry Research Group, National Cancer Institute
    Executive Plaza North, Suite 3131
    6130 Executive Boulevard, MSC 7354
    Bethesda, MD 20892-7354
    (301) 435-5303 (voice), (301) 402-0816 (fax), vb78c (at) nih (dot) gov

    Sarah A. Schoenfeldt
    Clinical Trial Training Intern, National Cancer Institute
    Bethesda, MD, USA
    Undergraduate Student, Brown University
    Providence, RI, USA
    sarah_schoenfeldt (at) brown (dot) edu


    References:
    [1] Mickenautsch S: “Systematic reviews, systematic error and the acquisition of clinical knowledge”, BMC Med Res Methodol 2010, 10:53.
    [2] Berger VW, Matthews JR, Grosch EN: “On Improving Research Methodology in Medical Studies”, Stat Methods Med Res 2008, 17:231-242.

    Competing interests

    None declared.
