Methods to support evidence-informed decision-making in the midst of COVID-19: creation and evolution of a rapid review service from the National Collaborating Centre for Methods and Tools

Abstract

Background

The COVID-19 public health crisis has produced an immense and quickly evolving body of evidence. This research speed and volume, along with variability in quality, could overwhelm public health decision-makers striving to make timely decisions based on the best available evidence. In response to this challenge, the National Collaborating Centre for Methods and Tools developed a Rapid Evidence Service, building on internationally accepted rapid review methodologies, to address priority COVID-19 public health questions.

Results

Each week, the Rapid Evidence Service team receives requests from public health decision-makers, prioritizes the questions received, and frames the prioritized topics into searchable questions. We develop and conduct a comprehensive search strategy and critically appraise all relevant evidence using validated tools. We synthesize the findings into a final report that includes key messages, a rating of the certainty of the evidence using GRADE, an overview of the evidence, and remaining knowledge gaps. Rapid reviews are typically completed and disseminated within two weeks. From May 2020 to July 21, 2021, we answered more than 31 distinct questions and completed 32 updates as new evidence emerged. Reviews receive an average of 213 downloads per week, with some accessed more than 7700 times. To date, reviews have been accessed and cited around the world, and a more comprehensive evaluation of impact on decision-making is planned.

Conclusions

The development, evolution, and lessons learned from our process, presented here, provide a real-world example of how review-level evidence can be made available – rapidly and rigorously, and in response to decision-makers’ needs – during an unprecedented public health crisis.

Background

Coronavirus disease 2019 (COVID-19) is an urgent public health crisis requiring prompt decision-making due to rapidly evolving policy and practice needs. Public health decision-makers are always challenged with integrating research into decision-making [1]. This has been further exacerbated by an explosion of COVID-19 evidence, driven by the increased availability of pre-prints that have not yet undergone peer review and by publishers expediting steps in the peer-review process to make evidence available in a timely manner [2, 3]. An analysis of Web of Science and Scopus found 23,634 COVID-19-related documents from January to June 2020 [4]; by comparison, a PubMed search reveals 28,300 cardiovascular disease-related publications for all of 2019.

Knowledge syntheses (e.g. best practice guidelines, systematic reviews) represent the highest levels of research evidence [5], summarizing and interpreting the results of individual studies and contextualizing them within a larger body of knowledge [6]. An up-to-date guideline based on high-quality systematic reviews is considered the best source of evidence for decision-making [7]. The time to conduct a full systematic review and guideline (> 1–2 years [8]) vastly exceeds the time available to make urgent decisions during public health crises. As a result, several global evidence synthesis organizations [9,10,11] pivoted to producing COVID-19-related rapid reviews (RR). RRs can be defined as “a form of knowledge synthesis that accelerates the process of conducting a traditional systematic review through streamlining or omitting a variety of methods to produce evidence in a resource-efficient manner” [12]. A number of methodological approaches to ‘streamlining’ a RR exist, including limiting the number of databases searched or the search timeframe, using a single reviewer for screening, data extraction, and/or critical appraisal, or omitting steps such as critical appraisal, meta-analysis, and a full write-up [13]. As RRs may have a greater likelihood of bias due to expedited processes, transparency in methods is important, with explicit identification of departures from systematic review methods [14,15,16]. A systematic and rigorous process should be maintained with respect to searching, study selection, data extraction, and quality assessment [17]. The production of high-quality syntheses, including critical appraisal of included studies, is particularly important in the current COVID-19 “infodemic” [18, 19].

The six National Collaborating Centres for Public Health were created by the Public Health Agency of Canada (PHAC) in 2005 to strengthen public health in response to the 2003 Severe Acute Respiratory Syndrome (SARS) epidemic [20]. They exist to support the timely use of scientific evidence and other knowledge in public health practice, programs, and policies. The National Collaborating Centre for Methods and Tools’ (NCCMT) vision is for stronger public health, driven by the best-available evidence, to improve the health and well-being of Canadians [21]. The NCCMT acts as an evidence intermediary, curating trustworthy scientific evidence and building capacity for individuals and organizations in public health to find, interpret, adapt, and implement evidence. Ongoing collaboration with a broad network ensures NCCMT’s agility and responsiveness to evolving public health needs, which have been vital in supporting the pandemic response.

As COVID-19 unfolded, the NCCMT heard from public health decision-makers at all levels of government (local, regional, provincial/territorial, federal) about the lack of time and human resources to find answers to key questions. These decision-makers ranged from managers responsible for front line public health staff at local health departments, to members of federal government advisory committees responsible for key recommendations and policies related to various aspects of the pandemic response. To address this need – and with encouragement from our funder to focus and reallocate resources – the NCCMT pivoted to completing RRs within 5–10 business days based on priority questions from public health decision-makers.

In reviewing NCCMT’s established RR protocol [22], the team realized modifications were required given the emergence of a unique evidence ecosystem for COVID-19. The expanded evidence-base [23], new COVID-19-dedicated databases, and increased use of preprint servers [24] complicated established searching, screening, and quality appraisal processes. This presented new challenges for conducting reviews that were timely, efficient, and rigorous.

Here we describe in detail the methods the NCCMT has used to conduct RRs as part of our Rapid Evidence Service (RES), including how these have evolved to ensure feasibility, accuracy, and efficiency as the evidence landscape changed. Our process (Fig. 1) may be used as a guide for other organizations conducting RRs, in response to COVID-19 and other emerging public health issues, now and in the future.

Fig. 1
Overview of NCCMT’s Rapid Evidence Service process

Evolution of the RES

The initial protocol built upon the five steps in the NCCMT’s RR Guidebook [22]: defining the research question, searching for, critically appraising, and synthesizing evidence, and assessing applicability and transferability. In the early phases of the pandemic, our team answered questions in as few as five business days. As needs evolved and evidence volume grew, the time to complete reviews was approximately 7–10 business days, and up to three or four weeks for complex, multi-question topics with a large amount of available evidence.

The RES requires a team with methodological and organizational expertise (Table 1) in: conducting rigorous systematic reviews; articulating answerable research questions; creating a search strategy and searching databases, including COVID-19 specific databases; critically appraising different study designs; synthesizing evidence for key findings and actionable messages; and using Grading of Recommendations, Assessment, Development and Evaluation (GRADE) methodology to rate certainty of evidence [25].

Table 1 Overview of NCCMT Rapid Evidence Service team and key responsibilities

Defining the research question

Limited resources were initially allocated to the RES; questions were restricted to two per week. Questions were prioritized from a list of urgent topics by the NCCMT Scientific Director, Operational Lead, and RES Scientific Lead, primarily based on team expertise and capacity. As demand grew, 60 % of NCCMT staff resources were re-allocated to the RES. Questions initially came from PHAC. As awareness of the service grew, requests came from decision-makers at all levels across Canada and indirectly through involvement in the COVID-19 Evidence Network to support Decision-Makers (COVID-END) [26]. As requests increased, the RES coordination role was formalized to exclusively plan workflow, assign staff tasks, and monitor completion. Requests were formally received through a direct email to the NCCMT’s central email account; through a key contact at our funder, the PHAC, who coordinates evidence synthesis requests from a number of decision-makers and committees within the agency; and through the COVID-END evidence network. Requests were also received informally from NCCMT colleagues, and sometimes suggested internally by NCCMT staff.

A weekly team meeting was implemented to assess progress on current reviews and capacity for new ones, review the week’s schedule for review completion, assign staff tasks, and prioritize new questions and updates to previously completed reviews. Decisions about which questions to accept were made through discussion at the weekly team meeting, based on several factors: urgency and relevance to Canada; in-house content expertise and capacity; and availability of evidence, determined by a preliminary scan of databases. Once a question is accepted, an RR Lead is assigned. The Search Lead first checks whether a recently completed review on the topic exists by scanning COVID-19 RR repositories and websites of organizations known to conduct rigorous syntheses. This step was introduced after we identified duplication of efforts between our team and others (Additional File 1). At first, we considered a synthesis with a search completed within the last week to be ‘up-to-date’. If another synthesis meeting this criterion was identified, we did not proceed with the question and informed the requestor of the other review. As the volume of evidence has grown and the urgency of decision-making has eased, as of May 2021 a search completed within the previous two months is considered ‘up-to-date’.

If a recent review is not available, the RR Lead and RES Coordinator meet to refine the question using PICO/PECO (Population, Intervention/Exposure, Comparison, Outcome) or PS (Population, Situation) format and identify inclusion/exclusion criteria and preliminary search terms. Unless the question is very clear or is an update to a previously completed review, the team will confirm the refined question and proposed criteria with the requestor. This usually involves narrowing the scope of the review to a question that is feasible to answer in the given time frame.

Searching for evidence

Searching for evidence involves developing a search strategy and screening results for inclusion. The RR Lead, RES Coordinator, and RR Search Lead/Staff collaboratively develop a strategy for each question, including which databases to search, search terms and parameters for each database, and whether grey literature will be included. Our initial search strategy included 14 databases or websites, ten of which were developed specifically for COVID-19 (Table 2). The search may involve an advanced keyword string (e.g., the World Health Organization’s COVID-19 Global Literature on Coronavirus Disease) or a manual site scan (e.g., Public Health England’s completed reviews). Over time, the list of databases evolved: some collections were duplicative of others or were no longer updated frequently (e.g., Cochrane COVID Review bank), others were newly developed (e.g., L·OVE), and some substitutions were made to take search sophistication into account (e.g., switching from LitCovid to Medline) (Additional File 1). When a search returns an infeasibly large number of results, very few results, or results that appear largely irrelevant, a sample of the results is sent to the RR Lead so the search strategy can be revised; a health sciences librarian may also be contacted for guidance.
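As an illustration of what an advanced keyword string might look like, here is a hypothetical string of our own construction (not an actual RES search) for a question about transmission in schools:

```
("COVID-19" OR "SARS-CoV-2" OR coronavirus) AND
(school* OR daycare* OR "child care") AND
(transmission OR outbreak* OR "attack rate")
```

Exact syntax varies by database; the truncation (`*`) and quoted-phrase operators shown here follow common conventions and would need to be adapted to each interface.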

Table 2 NCCMT Rapid Evidence Service search strategy: COVID-19-relevant databases

Searches are conducted in English; peer-reviewed papers, preprints, and non-peer-reviewed reports are included. When titles and abstracts for non-English publications are available in English and are sufficient to determine eligibility, we use in-house expertise (French, Portuguese) or Google Translate. Depending on the question, we may search for data from various public health jurisdictions (e.g., policy documents, regional surveillance data) to supplement scientific evidence or to enhance our ability to answer a question. For example, in a review on COVID-19’s impact on substance use, overdoses, and substance-related deaths, we supplemented the search of scientific literature with available surveillance data across Canada [27].

Initially, most COVID-19-specific databases and repositories did not have functions to export all references into reference management software. To accommodate this, RR Search Staff entered potentially eligible studies, based on title and abstract screening, into an Excel spreadsheet for full text screening by the RR Lead. As the functionality for many databases evolved, we began to export references into Rayyan – an open access systematic review screening software [28] – for screening. Rayyan enhances efficiency by facilitating removal of duplicates and allowing simultaneous screening by multiple team members. We now use DistillerSR software (which requires a paid subscription) to facilitate deduplication of references, which is particularly helpful for review updates.
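The deduplication these tools perform can be illustrated with a minimal sketch. The following Python example is ours, not the Rayyan or DistillerSR implementation, and the record fields (`doi`, `title`) are assumptions for illustration; it keeps the first occurrence of each reference, matching on DOI where available and on a normalized title otherwise:

```python
import re

def normalize_title(title):
    """Lowercase and collapse punctuation/whitespace so minor formatting
    differences between database exports don't prevent a match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    """Keep the first occurrence of each reference.

    Records are dicts with a required 'title' and an optional 'doi';
    a DOI match or a normalized-title match marks a duplicate."""
    seen_dois, seen_titles, unique = set(), set(), []
    for rec in records:
        doi = (rec.get("doi") or "").lower()
        title_key = normalize_title(rec["title"])
        if (doi and doi in seen_dois) or title_key in seen_titles:
            continue  # duplicate of an earlier record, skip it
        if doi:
            seen_dois.add(doi)
        seen_titles.add(title_key)
        unique.append(rec)
    return unique
```

Title normalization catches duplicates that differ only in capitalization or punctuation, which is common when the same record is exported from multiple databases.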

Title and abstract screening is done by a single reviewer, as per other RR guidelines [23, 29]. Full texts of potentially relevant articles are screened by the RR Lead to determine final inclusion. The DAISY AI feature in DistillerSR is used to ‘double check’ single-reviewer screening, as it suggests references that may have been wrongly excluded.

For feasibility and timeliness, we prioritize guidelines and/or high-quality syntheses, when available. If a recent high-quality synthesis is available, we will consider excluding single studies or only including single studies after the last search date. To gauge quality, we look for whether a comprehensive search strategy is described and included evidence is critically appraised. If both criteria are met, we appraise the synthesis using AMSTAR 1 (A MeaSurement Tool to Assess systematic Reviews) [30]. A review that scores six or higher is deemed sufficient.

For questions where few systematic reviews, rapid reviews, or single studies are identified, we may consider expert opinion or opinion-based guidance documents. These may include interim guidance documents from reputable organizations (such as the World Health Organization) that provide policy recommendations for a given aspect of the pandemic response, created, for example, by an expert panel. Although these are typically considered the bottom of the hierarchy of evidence, given the novelty of the SARS-CoV-2 virus and the speed at which a response was required, they were sometimes the best available, or only available, evidence. These documents also underwent data extraction and critical appraisal as detailed below.

The RES first focused exclusively on new questions. As evidence continued to evolve, there was a need to update completed reviews, which required a strategy for identifying evidence that had moved from preprint to publication. While some updated preprints were easily identified during searches, there were instances where substantial changes to a paper (e.g., authors, titles) made it difficult to identify when a preprint had been modified. While many preprint server entries are updated within four weeks of publication of the peer-reviewed version, this occurs inconsistently. When completing an update, we now systematically review previously included evidence to determine whether it has been updated and whether the results have changed. This includes a targeted search for: (1) updated versions of other included RRs (checking where the RR was published, searching via Google); (2) publication of preprint manuscripts and associated changes to data or interpretation (checking for duplicate first authors and/or titles in our search, checking the preprint server entry, searching via Google); and (3) updates to surveillance data or grey literature sources (checking original webpages). While this adds a step to the search process, it ensures we are including the most current evidence.

Extracting data

We created a standard template to build the RR document for data extraction. Key information (question, search strategy, table of eligible studies) is added, and the document is sent to staff for data extraction. A single team member extracts data and summarizes key findings relevant to the specific research question; this is double-checked by the RR Lead. The specific information depends on the research question but typically includes study design, quality of included single studies (syntheses only), setting, population characteristics, interventions/exposure, and key outcomes. Results that are not relevant to the research question are not extracted. Study limitations are noted to inform key findings and recommendations.

Critically appraising the evidence

We critically appraise evidence using AMSTAR 1 for systematic reviews and the Joanna Briggs Institute critical appraisal tools for other study designs [30, 31]. Some of our first RRs used the Health Evidence Quality Assessment tool [32]; we changed to AMSTAR 1 to contribute to a repository of critically appraised COVID-19 syntheses [33]. Critical appraisal is completed by one reviewer (internal staff or external contractor) and verified by a second. Conflicts are resolved through discussion or by the RES Coordinator. We assign an overall quality rating (low, moderate, high) based on the total score. For example, the Joanna Briggs Institute tool for prevalence studies has a total of 9 items: scores of 1–3 are rated low quality, 4–6 moderate quality, and 7–9 high quality. Only the overall ratings are included in the RR; the full critical appraisal is available upon request.
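The thirds-based mapping described above can be written as a small helper. This is an illustrative sketch of ours, not an NCCMT tool; extending the same cut-points to checklists of other lengths is our assumption, generalized from the 9-item example:

```python
def overall_quality(score, total_items=9):
    """Map a checklist score to an overall quality rating.

    Uses the equal-thirds cut-points described for the 9-item
    Joanna Briggs Institute prevalence tool: 1-3 low, 4-6 moderate,
    7-9 high. Applying the same thirds to other tool lengths is an
    assumption for illustration only.
    """
    if not 0 <= score <= total_items:
        raise ValueError("score out of range")
    third = total_items / 3
    if score <= third:           # 1-3 on a 9-item tool
        return "low"
    elif score <= 2 * third:     # 4-6 on a 9-item tool
        return "moderate"
    return "high"                # 7-9 on a 9-item tool
```

For a study scoring 5 of 9 items, this returns "moderate", matching the ranges given in the text.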

“GRADE-ing” the evidence

In initial RRs, we reported the number of low-, moderate-, and high-quality studies to convey the overall quality of the evidence. However, we were concerned that studies appraised as methodologically strong could still be based on designs with an inherently high risk of bias (e.g., case reports, cross-sectional studies), so overall confidence in the evidence was low. In response, we adapted the GRADE approach [25, 34]: an assessment of the certainty in findings based on eight domains. In the GRADE approach, observational studies, for example, provide low-quality evidence. This assessment can be further downgraded based on risk of bias, inconsistency in effects, indirectness of interventions/outcomes, imprecision in the effect estimate, and publication bias [25]. The assessment can be upgraded based on a large effect, evidence of a dose-response relationship, and proper accounting for confounding. The overall certainty of the evidence (high, moderate, low, very low) for each outcome is then determined [25]. GRADE is completed by the RR Lead after reviewing the data extraction and results summaries from all included studies, and is reviewed by the NCCMT Scientific Director and RES Scientific Lead.
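As a simplified illustration of GRADE’s step-down/step-up structure (in practice these are expert judgments, not arithmetic; this sketch of ours only mirrors the bookkeeping):

```python
# Ordered GRADE certainty levels, lowest to highest.
LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(start_level, downgrades=0, upgrades=0):
    """Adjust a design-based starting certainty.

    Start from the level implied by study design (e.g., 'low' for
    observational studies, 'high' for randomized trials), move down one
    step per serious concern (risk of bias, inconsistency, indirectness,
    imprecision, publication bias) and up one step per upgrading factor
    (large effect, dose-response relationship, confounding accounted for).
    The result is clamped to the defined range.
    """
    idx = LEVELS.index(start_level) - downgrades + upgrades
    return LEVELS[max(0, min(idx, len(LEVELS) - 1))]
```

For example, a body of observational evidence with one serious risk-of-bias concern would start at "low" and end at "very low".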

Synthesizing the evidence

Results are synthesized narratively due to variation in methodology and outcomes across included studies. Following data extraction, critical appraisal, and GRADE, the RR Lead completes the final synthesis for the Executive Summary. Early RR versions did not include an overall synthesis; this was added in response to requests from decision-makers for a high-level summary of key points, overview of evidence, and knowledge gaps, to be presented first. This revised layout more closely aligns with recommendations for communicating evidence to policymakers, including using a “graded entry” approach (1:3:25 page format), which allows users to access their preferred level of detail (e.g., from key points to full data) [35,36,37].

Formatting and approving the final review

RRs are reviewed internally by the RR Lead, RES Scientific Lead, and NCCMT Scientific Director. For partnered RRs (e.g. a RR related to Indigenous health, partnered with the National Collaborating Centre for Indigenous Health [38]), partner organizations review the Executive Summary and results tables. Final formatting then ensures included evidence sources are appropriately cited and the document’s appearance conforms to the NCCMT’s style guide (Table 3).

Table 3 Structure of an NCCMT COVID-19 rapid review

Disseminating the review

A tailored knowledge translation plan is developed for each RR depending on the topic and target audiences. When the review is requested by an organization, it is shared immediately upon completion. All RRs are freely available to download from the NCCMT website [39]. In September 2020, we created an RES email subscription, which notifies subscribers each time a new RR is posted. We alert our larger NCCMT subscriber-base (> 15,900 as of May 2021) by including new reviews and updates in our monthly newsletter. Reviews are included in monthly spotlights through the McMaster Health Forum and COVID-END [40]. We conduct targeted outreach via email to senior Canadian public health decision-makers and content-specific experts, as appropriate. We may reach out to media outlets via our institution’s public relations and communications department. Finally, we notify our social media followers via Twitter.

Evaluating impact

From May 2020 to July 2021, the NCCMT’s RES team answered more than 31 distinct questions and completed 32 updates to previously completed reviews (ranging from 1 to 16 updates per question) as new evidence emerged. Preliminary web analytics show our RRs are accessed in all Canadian provinces and territories and 99 countries worldwide. Metrics collected from September 2020 to July 2021 indicate that each review is typically accessed 216 times within the first week, with wide variation across reviews. For example, our review on the role of schools and daycares in COVID-19 has been our most highly accessed [41]. Between May 2020 and April 2021, this living review was updated 16 times and viewed on our website over 7,700 times across 58 countries. The review has been cited and indexed in at least 79 sources, including key governmental and non-governmental reports and guidelines, picked up by over 40 local, national, and international media outlets, and cited in other guidance documents [42]. Further evaluation of its dissemination indicates the international reach of the NCCMT’s reviews. Anecdotal feedback from both local and senior decision-makers reinforces that the RRs are helpful and informative to Canadian decision-makers. A more comprehensive evaluation of the impact of the NCCMT’s RES on decision-making in Canada is planned, including a survey of senior decision-makers, key informant interviews, and a comprehensive analysis of web analytics and citation tracking.

Challenges, lessons learned, and limitations

A primary challenge for any RR is the balance between speed and rigor. This tension is even more pronounced in the context of COVID-19, given the massive amount of data and the urgency with which evidence is needed to inform decisions. The streamlined approach of RRs (e.g., single-reviewer screening) will always introduce some degree of bias, so it is important to establish and follow a transparent process. The evolving evidence landscape has necessitated many changes to our typical RR methodology, and we anticipate further changes may be needed. For example, in addition to accepting and addressing new and emerging public health questions, we have committed to maintaining a living RR on the role of schools and daycares in COVID-19 transmission [41], given the ongoing importance of this question.

We are aware of other Canadian and international organizations conducting RRs on a number of topics related to COVID-19; many report adaptations to the RR process that are similar to ours [17, 23, 43,44,45,46]. For example, many include preprints and grey literature, when previously, only published sources were included [23]. Cochrane emphasizes involving stakeholders throughout the process to better tailor the RR for decision-making [17] and established a question identification and prioritization approach [45]. The Usher Network for COV(id) Evidence Reviews (UNCOVER) recommends against restricting to English-language, as a large volume of COVID-19 research has emerged from non-English-speaking countries [46]. All groups reiterate the importance of speed and critical appraisal, some adopting new software to achieve this [17, 23, 45, 46].

Although wide variations in RR methodology have been reported, both in response to the COVID-19 pandemic and in general [16], groups conducting COVID-19-specific RRs appear to share distinct methodological features. In a 2015 scoping review of RR methods, among reviews that reported their duration, most were completed in 1–6 months [16], while many COVID-19 RRs are produced more quickly. Within general rapid reviews, the most common streamlining approaches include: presenting results as a narrative summary (78 %), limiting criteria by date (68 %), limiting the search to published literature (24 %), and having one person extract data and a second person verify (23 %) [16].

From January 1 to April 30, 2020, more than 6,000 COVID-19-related manuscripts were posted across preprint servers, with more than 250 preprints posted weekly [47]. While this rapid response is impressive, quickly produced, non-peer-reviewed evidence of poor quality may later be retracted or have its findings substantially altered before publication [48]. Critical appraisal is therefore imperative, but also challenging, as our team noted that methods sections often included minimal detail. Like the NCCMT, many groups have recognized the need to assess the certainty of findings and have incorporated GRADE [17].

Our organization is fortunate to have been well-prepared, in terms of staff expertise and funder support, to rapidly respond to public health needs in this time of crisis. Given specialized skills and a dedicated, nimble team, it was possible to pivot from previous workplans and use rigorous methods to develop RRs in a much faster timeframe than has been reported pre-pandemic (average 3.2 months [15], range 1–12 months [16]). However, we faced a number of challenges from both a human and financial resources perspective. While some reviews fell neatly within our planned time frames, others required greater resource allocation due to the number of eligible studies (e.g. a RR on transmission risk in acute care settings [49]). Although our team has expertise in searching, critical appraisal, and synthesis, we do not have content area expertise in all fields related to infectious diseases. For questions focused on basic science, laboratory, or mathematical modelling studies, we connected with modelling and infectious disease experts at McMaster University and the National Collaborating Centres for Indigenous Health, Infectious Disease, and Environmental Health. Over time, we have shifted our response to these questions by either partnering with other organizations with specific content expertise to support completion or recommending other organizations that could complete the review.

While many international RR groups focus specifically on clinical or treatment-related questions [50,51,52], few focus exclusively on public health-relevant topics. In our topic selection process, we regularly scan relevant websites and repositories to decrease the chance of duplication, but given the time lag between agreeing to take on a question and having the final product available online, we are aware of instances where duplication of efforts has occurred. RRs may be considered outdated soon after completion due to the speed at which new evidence becomes available. While we have integrated RR updates into our workflow, it is not possible to update all topics. An ongoing challenge is how to handle reviews that may no longer reflect the most recent and highest quality evidence. There is a need to combine forces and identify mechanisms for effective communication and resource sharing so that timely and rigorous reviews can be completed and shared amongst organizations contributing to the global pandemic response. We are actively developing strategies to collaborate with provincial, national, and international organizations conducting public health-relevant reviews to avoid duplication. Participation in COVID-END [26], funded by the Canadian Institutes of Health Research (CIHR) [53], is one such strategy, as is the NCCMT’s repository of ongoing and recently completed COVID-19-related RRs [54].

Future directions for the RES

Most of our efforts have been to conduct knowledge syntheses on priority public health questions and to broadly share findings with decision-makers. As the urgency to complete reviews has diminished somewhat, there is opportunity to expand our knowledge translation to diverse audiences and to identify new ways to support implementation of evidence into policy and program decisions. We also now seek to formally evaluate our process and its impact on public health decision-making.

Conclusions

This overview provides a real-world example of how internationally accepted RR methods can be modified to meet the emergent needs of public health decision-makers in the unprecedented context of the COVID-19 pandemic. As countries around the world continue to grapple with ongoing issues including vaccine rollout, variants of concern, public distrust, fatigue with pandemic-related restrictions, and social and economic inequalities, there has never been a more important time to work collaboratively and in partnership with decision-makers to ensure the best available evidence is available to inform policy decisions and program planning.

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Abbreviations

AMSTAR: A MeaSurement Tool to Assess systematic Reviews

CDC: Centers for Disease Control and Prevention

CIHR: Canadian Institutes of Health Research

COVID-19: Coronavirus Disease 2019

COVID-END: COVID-19 Evidence Network to support Decision-Makers

GRADE: Grading of Recommendations, Assessment, Development and Evaluation

NCCMT: National Collaborating Centre for Methods and Tools

NIHR: National Institute for Health Research

PHAC: Public Health Agency of Canada

PICO/PECO: Population, Intervention/Exposure, Comparison, Outcome

PS: Population, Situation

RES: Rapid Evidence Service

RR: Rapid Review

SARS: Severe Acute Respiratory Syndrome

UNCOVER: Usher Network for COV(id) Evidence Reviews

WHO: World Health Organization

References

  1. Bowen S, Erickson T, Martens PJ, Crockett S. More than “using research”: the real challenges in promoting evidence-informed decision-making. Healthc Policy. 2009;4(3):87–102.

  2. Majumder MS, Mandl KD. Early in the epidemic: impact of preprints on global discourse about COVID-19 transmissibility. Lancet Glob Health. 2020;8(5):e627–30.

  3. Palayew A, Norgaard O, Safreed-Harmon K, Andersen TH, Rasmussen LN, Lazarus JV. Pandemic publishing poses a new COVID-19 challenge. Nat Hum Behav. 2020;4(7):666–9.

  4. Teixeira da Silva JA, Tsigaris P, Erfanmanesh M. Publishing volumes in major databases related to COVID-19. Scientometrics. 2020.

  5. Djulbegovic B, Guyatt GH. Progress in evidence-based medicine: a quarter century on. Lancet. 2017;390(10092):415–23.

  6. Tricco AC, Tetzlaff J, Moher D. The art and science of knowledge synthesis. J Clin Epidemiol. 2011;64(1):11–20.

  7. Dicenso A, Bayley L, Haynes RB. Accessing pre-appraised evidence: fine-tuning the 5S model into a 6S model. Evid Based Nurs. 2009;12(4):99–101.

  8. Borah R, Brown AW, Capers PL, Kaiser KA. Analysis of the time and workers needed to conduct systematic reviews of medical interventions using data from the PROSPERO registry. BMJ Open. 2017;7(2):e012545.

  9. Cochrane COVID Reviews [https://covidreviews.cochrane.org/]

  10. Oxford COVID-19 Evidence Service [https://www.cebm.net/covid-19/current-questions-under-review/]

  11. COVID-19 Knowledge Translation [https://knowledgetranslation.net/expertise/covid-19/]

  12. Hamel C, Michaud A, Thuku M, Skidmore B, Stevens A, Nussbaumer-Streit B, Garritty C. Defining rapid reviews: a systematic scoping review and thematic analysis of definitions and defining characteristics of rapid reviews. J Clin Epidemiol. 2020;129:74–85.

  13. Tricco AC, Zarin W, Antony J, Hutton B, Moher D, Sherifali D, Straus SE. An international survey and modified Delphi approach revealed numerous rapid review methods. J Clin Epidemiol. 2016;70:61–7.

  14. Reviews: Rapid! Rapid! Rapid! … and systematic. Syst Rev. 2015;4(1):4.

  15. Abou-Setta AM, Jeyaraman M, Attia A, Al-Inany HG, Ferri M, Ansari MT, Garritty CM, Bond K, Norris SL. Methods for developing evidence reviews in short periods of time: a scoping review. PLoS One. 2016;11(12):e0165903.

  16. Tricco AC, Antony J, Zarin W, Strifler L, Ghassemi M, Ivory J, Perrier L, Hutton B, Moher D, Straus SE. A scoping review of rapid review methods. BMC Med. 2015;13:224.

  17. Garritty C, Gartlehner G, Nussbaumer-Streit B, King VJ, Hamel C, Kamel C, Affengruber L, Stevens A. Cochrane Rapid Reviews Methods Group offers evidence-informed guidance to conduct rapid reviews. J Clin Epidemiol. 2020;130:13–22.

  18. The Lancet Infectious Diseases. The COVID-19 infodemic. Lancet Infect Dis. 2020;20(8):875.

  19. Managing the COVID-19 infodemic: Promoting healthy behaviours and mitigating the harm from misinformation and disinformation [https://www.who.int/news/item/23-09-2020-managing-the-covid-19-infodemic-promoting-healthy-behaviours-and-mitigating-the-harm-from-misinformation-and-disinformation]

  20. Medlar B, Mowat D, Di Ruggiero E, Frank J. Introducing the National Collaborating Centres for Public Health. CMAJ. 2006;175(5):493–4.

  21. About the National Collaborating Centre for Methods and Tools [https://www.nccmt.ca/about/vision-mission-goals]

  22. Rapid Review Guidebook [https://www.nccmt.ca/tools/rapid-review-guidebook]

  23. Tricco AC, Garritty CM, Boulos L, Lockwood C, Wilson M, McGowan J, McCaul M, Hutton B, Clement F, Mittmann N, et al. Rapid review methods more challenging during COVID-19: commentary with a focus on 8 knowledge synthesis steps. J Clin Epidemiol. 2020;126:177–83.

  24. Preprints and Rapid Communication of COVID-19 research [https://asapbio.org/preprints-and-covid-19]

  25. Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, Norris S, Falck-Ytter Y, Glasziou P, deBeer H, et al. GRADE guidelines: 1. Introduction—GRADE evidence profiles and summary of findings tables. J Clin Epidemiol. 2011;64(4):383–94.

  26. COVID-19 Evidence Network to support Decision-making (COVID-END) [https://www.mcmasterforum.org/networks/covid-end]

  27. National Collaborating Centre for Methods and Tools. What is the effect of the COVID-19 pandemic on opioid and substance use and related harms? 2020.

  28. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan: a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210.

  29. Tricco AC, Langlois EV, Straus SE. Rapid reviews to strengthen health policy and systems: a practical guide. Geneva: World Health Organization; 2017.

  30. Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, Porter AC, Tugwell P, Moher D, Bouter LM. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7(1):10.

  31. Critical Appraisal Tools [https://joannabriggs.org/critical-appraisal-tools]

  32. Our Appraisal Tools [https://www.healthevidence.org/our-appraisal-tools.aspx]

  33. COVID-19 evidence from HSE and SSE [https://www.mcmasterforum.org/find-evidence/covid-19-evidence/covid-19-evidence-from-hse-and-sse]

  34. Schünemann HJ, Santesso N, Vist GE, Cuello C, Lotfi T, Flottorp S, Davoli M, Mustafa R, Meerpohl JJ, Alonso-Coello P, et al. Using GRADE in situations of emergencies and urgencies: certainty in evidence and recommendations matters during the COVID-19 pandemic, now more than ever and no matter what. J Clin Epidemiol. 2020;127:202–7.

  35. Lavis JN, Permanand G, Oxman AD, Lewin S, Fretheim A. SUPPORT Tools for evidence-informed health Policymaking (STP) 13: Preparing and using policy briefs to support evidence-informed policymaking. Health Res Policy Syst. 2009;7(Suppl 1):S13.

  36. Wallace J, Byrne C, Clarke M. Making evidence more wanted: a systematic review of facilitators to enhance the uptake of evidence from systematic reviews and meta-analyses. Int J Evid Based Healthc. 2012;10(4):338–46.

  37. Petkovic J, Welch V, Jacob MH, Yoganathan M, Ayala AP, Cunningham H, Tugwell P. The effectiveness of evidence summaries on health policymakers and health system managers use of evidence from systematic reviews: a systematic review. Implement Sci. 2016;11(1):162.

  38. National Collaborating Centre for Methods and Tools, National Collaborating Centre for Indigenous Health. Rapid review: what factors may help protect Indigenous peoples and communities in Canada and internationally from the COVID-19 pandemic and its impacts? 2020.

  39. COVID-19 Rapid Evidence Service [https://www.nccmt.ca/covid-19/covid-19-rapid-evidence-service]

  40. Canadian spotlights [https://www.mcmasterforum.org/networks/covid-end/resources-specific-to-canada/keep-current/canadian-spotlights]

  41. National Collaborating Centre for Methods and Tools. Living Rapid Review Update 12: What is the specific role of daycares and schools in COVID-19 transmission? 2021.

  42. Science M, Bitnun S, et al. COVID-19: Guidance for School Operation during the Pandemic. Toronto, Canada: SickKids; 2021.

  43. Rapid evidence profiles addressing challenges related to COVID-19 [https://www.mcmasterforum.org/stay-connected/new-at-the-forum/news-item/2020/05/21/rapid-evidence-profiles-addressing-challenges-related-to-covid-19]

  44. About the Rapid Response Service [https://www.cadth.ca/about-cadth/what-we-do/products-services/rapid-response-service]

  45. Biesty L, Meskell P, Glenton C, Delaney H, Smalle M, Booth A, Chan XHS, Devane D, Houghton C. A QuESt for speed: rapid qualitative evidence syntheses as a response to the COVID-19 pandemic. Syst Rev. 2020;9(1):256.

  46. McQuillan R, Dozier M, Theodoratou E, Nair H, McSwiggan E, Fowkes G, Campbell H. UNCOVER Rapid Review Group - What methodology should we use? 2020.

  47. Fraser N, Brierley L, Dey G, Polka JK, Pálfy M, Nanni F, Coates JA. Preprinting the COVID-19 pandemic. bioRxiv. 2020:2020.05.22.111294.

  48. Ioannidis JPA. Coronavirus disease 2019: the harms of exaggerated information and non-evidence-based measures. Eur J Clin Invest. 2020;50(4):e13222.

  49. National Collaborating Centre for Methods and Tools. Rapid Review: What is the evidence for COVID-19 transmission in acute care settings? 2020.

  50. Siemieniuk RA, Bartoszko JJ, Ge L, Zeraatkar D, Izcovich A, Kum E, Pardo-Hernandez H, Rochwerg B, Lamontagne F, Han MA, et al. Drug treatments for covid-19: living systematic review and network meta-analysis. BMJ. 2020;370:m2980.

  51. The COVID-NMA initiative: A living mapping and living systematic review of COVID-19 trials [https://covid-nma.com/]

  52. Copenhagen Trial Unit: Centre for Clinical Intervention Research [https://ctu.dk/]

  53. Government of Canada invests $1 M in a COVID-19 evidence network to support decision-making [https://www.canada.ca/en/institutes-health-research/news/2021/01/government-of-canada-invests-1m-in-a-covid-19-evidence-network-to-support-decision-making.html]

  54. COVID-19 Rapid Evidence Reviews [https://www.nccmt.ca/covid-19/covid-19-evidence-reviews]

Acknowledgements

The authors wish to acknowledge the efforts of all current and past members of the NCCMT Rapid Evidence Service team: Becky Blair, Donna Ciliska, Taylor Colangeli, Stephanie Hopkins, Heather Husson, Rachel Jansen, Izabelle Siqueira, Susan Snelling, Heidi Turon, and Alison van der Wal.

Funding

The National Collaborating Centre for Methods and Tools is funded by the Public Health Agency of Canada. The views expressed herein do not necessarily represent the views of the Public Health Agency of Canada.

Author information

Contributions

SENS: Conceptualization, Methodology, Writing - original draft, Writing - review & editing. EB: Writing - original draft, Writing - review & editing. RLT: Writing - original draft, Writing - review & editing. EC: Writing - review & editing, Project administration. LH: Writing - review & editing. MD: Conceptualization, Writing - review & editing, Supervision. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Maureen Dobbins.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Neil-Sztramko, S.E., Belita, E., Traynor, R.L. et al. Methods to support evidence-informed decision-making in the midst of COVID-19: creation and evolution of a rapid review service from the National Collaborating Centre for Methods and Tools. BMC Med Res Methodol 21, 231 (2021). https://doi.org/10.1186/s12874-021-01436-1
