
Advancing methodology for scoping reviews: recommendations arising from a scoping literature review (SLR) to inform transformation of Children and Adolescent Mental Health Services

Abstract

Background

There is consensus that health services commissioning and clinical practice should be driven by scientific evidence. However, workload pressures, accessibility of peer reviewed publications, and the skills needed to find, appraise, and synthesise relevant evidence are often cited as barriers to uptake of research evidence by practitioners and commissioners alike. In recent years, a growing requirement for rapid evidence synthesis to inform commissioning decisions about healthcare service delivery and provision of care has contributed to the increasing popularity of scoping literature reviews (SLRs). Yet, comprehensive guidelines for conducting and reporting SLRs are still relatively scarce.

Methods

The exemplar review used as a worked example aimed to provide a readily available, comprehensive, and user-friendly repository of research evidence for local commissioners to help them make evidence-informed decisions about redesigning East of England Children and Adolescent Mental Health Services. In conducting the review, we were broadly guided by Arksey and O’Malley’s framework; however, some modifications were made at different stages to better reflect the largely pragmatic objective of this review. This paper compares the methodology used with existing methodological frameworks for scoping studies, to add to the existing knowledge base.

Results

We proposed the following advancements to the existing SLR frameworks: (i) assemble a research team with complementary skills and expertise; (ii) draw on the expertise of external partners, particularly practitioners, decision-makers and commissioners who will be translating findings into practice; (iii) pre-register the review protocol, keep a detailed record of all steps and decisions, and consider how they would impact on the generalisability and utility of review findings; (iv) use systematic procedures for literature searches, selection of studies, data extraction and analysis; (v) if feasible, appraise the quality of included evidence; (vi) be transparent about the limitations of findings.

Conclusions

Despite some methodological limitations, scoping literature reviews are a useful method of rapidly synthesising a large body of evidence to inform commissioning and transformation of CAMHS. SLRs allow researchers to start with broader questions, to explore the issue from different perspectives, and perhaps to find more comprehensive solutions that are not only effective but also account for feasibility and acceptability to key stakeholders.


Background

Evidence-based policy requires evidence, yet the policy and research cycles operate on vastly different timescales. There is consensus that health services commissioning and clinical practice should be driven by scientific evidence, but decision-makers often feel that gathering evidence may slow down innovation and overwhelm frontline staff [1]. Workload pressures and accessibility of peer reviewed publications are often cited as barriers to uptake of research evidence by commissioners [1, 2]. In addition to time constraints and accessibility, practitioners may lack the skills to find, appraise and synthesise relevant evidence [3]. The role of evidence-briefing services and information professionals is to find evidence relevant to the local context and needs, and present it to commissioners as concise, actionable points [4, 5]. However, it is crucial that evidence guiding services and care provision comes from reliable sources, is critically appraised, and is interpreted in the light of the overall body of available evidence [6].

Traditionally, systematic reviews are considered the most robust and reliable evidence synthesis method to inform evidence-based healthcare, answer clinically meaningful questions about treatments [7, 8], and particularly guide the development of trustworthy clinical guidelines [9]. Systematic reviews generally include evidence from randomised controlled trials and observational epidemiological studies; however, the importance of including qualitative evidence has recently become increasingly recognised [7, 10]. Systematic reviews, although thorough, are also very resource-intensive and may take a long time to complete [11]. A growing requirement for rapid evidence synthesis in response to the needs of decision-makers has increased the popularity of rapid literature reviews [11]. Rapid reviews are designed to expedite the review [12, 13] through a number of compromises, including reducing the scope, reducing the number of sources searched (often by including only systematic reviews or economic evaluations), excluding grey literature, and foregoing double screening and data extraction [11, 12]. Systematic and rapid reviews are usually very narrow in focus and aim to answer a small number of narrowly defined questions. In recent years Scoping Literature Reviews (SLRs) have become an increasingly popular method of knowledge synthesis. SLRs offer a rapid method of mapping key concepts in a research area and identifying the main sources and types of evidence available [14]. One of the aims of SLRs is to synthesise and disseminate research results to audiences that otherwise would not have the time or resources to conduct a review themselves [15,16,17]. Peters et al. (2015) stress the utility of SLRs in informing evidence-based practice through “( …) examination of a broader area to identify key gaps in the research knowledge base, clarify key concepts and report on types of evidence that address and inform practice in the field”.
Table 1 below outlines the key differences between systematic, rapid and scoping literature reviews.

Table 1 A comparison of systematic, rapid and scoping literature review

Despite the growing popularity of SLRs [18], our searches identified only three complete guidelines for conducting and reporting these studies [15,16,17]. The first comprehensive SLR framework was published by Arksey and O’Malley (2005) [15] (see Table 2), and has since been advanced [16, 17, 19,20,21] (see Table 3). However, only two enhanced frameworks, proposed by Levac et al. [17] and The Joanna Briggs Institute [16], outline a step-by-step process of conducting SLRs. Other advancements comment and expand on particular stages of the process [19,20,21, 24] without offering a comprehensive guide. As a result, researchers, particularly those with little experience of reviewing evidence, must collate existing guidelines and decide which modifications to include. This is likely to result in methodological inconsistencies across scoping studies, and varied quality of outputs. This paper presents a comprehensive guideline for conducting SLRs based on previous frameworks and their advancements, as well as our team’s additions.

Table 2 Arksey and O’Malley SLR framework
Table 3 SLR frameworks advancing Arksey and O’Malley’s methodology

First, we describe and discuss the methodology used to conduct an SLR. To this end we provide an exemplar SLR and describe how the specific purpose of the exemplar SLR defined and shaped its subsequent stages. We describe the steps taken from defining the research questions, charting the evidence, and data analysis to the final presentation of the findings. We highlight additional steps we have taken and describe how our methodology deviated from the original Arksey and O’Malley framework [15] to aid the pragmatic purpose of our SLR. This paper focuses only on methodological aspects of the SLR; findings from the review are published elsewhere [25,26,27].

Method

The exemplar

The SLR used as a worked example in this paper aimed to inform the redesign of East of England Children and Adolescent Mental Health Services (CAMHS) to create a more accessible, user-friendly and efficient service for young people and their families. It was conducted as part of a multifaceted research initiative that was undertaken to support local operationalisation of the recommendations outlined in “Future in Mind” – a taskforce report outlining the improvements needed in the way CAMHS are organised, commissioned and provided, to optimise children’s and young people’s mental wellbeing in the UK [28]. The pragmatic objective was to examine how, considering the local context, the services offered in the region could be redesigned to best support children and young people who require mental health provision at different treatment intensity levels. The outputs from this SLR informed the development of the Local Transformation Plan [29] that guided the subsequent local CAMHS redesign.

In conducting the review, we were broadly guided by Arksey and O’Malley’s framework [15]; however, some modifications were made at different stages to better reflect the largely pragmatic objective of this review. These included:

  1. Predefining research questions, breadth and coverage, and inclusion/exclusion criteria to reflect the key priorities for local CAMHS redesign selected by stakeholders. Rather than being informed by research objectives [15], this process was guided by the outcomes of consultations with stakeholders, and our understanding of the local context as described by expert practitioners.

  2. Keeping a detailed record of every decision made about the breadth and coverage of the SLR, inclusion/exclusion criteria and data analysis, noting the rationale for each decision and considering how it impacts on the generalisability and practical applications of SLR findings.

  3. Consulting a broad range of experts at every stage of the review. We primarily relied on guidance from practitioners, commissioners and other decision-makers to ensure that the review was ‘fit for purpose’, i.e. able to inform decisions about redesigning existing services and commissioning new ones, and to provide a practical guide on how to implement and evaluate them.

  4. Applying rigorous methods used for systematic reviews to maximise robustness and ensure replicability of our methodology. These included using a predefined search strategy and inclusion/exclusion criteria, and having two reviewers perform study selection and data extraction.

  5. Although we did not undertake it in the exemplar review, we strongly recommend appraising included studies for quality.

  6. Including a detailed description of the limitations of findings and of their practical applications.

Results

SLR stage 1: defining research questions, coverage and breadth, and inclusion/exclusion criteria

Defining aims and research questions

Aims and research questions were formulated to reflect the pragmatic aim of the SLR, i.e. guiding the operationalisation of key priorities for local CAMHS transformation. These priorities were identified through Delphi-method consultations [30] with stakeholders including children and young people (CYP), parents/carers, clinicians and other professionals working with CYP, and local healthcare commissioners [30]. The SLR aimed to: (i) provide a readily available repository of evidence for the effectiveness, acceptability and feasibility of implementing the service improvements identified by stakeholders as key priorities; (ii) enable commissioners to make evidence-informed decisions about redesigning currently provided services and implementing new ones; (iii) guide change processes; and (iv) guide the measurement of outcomes. These objectives led to the following broad research questions:

  1. What are the effective ways of delivering services that were prioritised for CAMHS transformation?

  2. What ways of delivering services that were prioritised for CAMHS transformation are feasible and acceptable for CAMHS users?

  3. What are the effective procedures for implementing services that were prioritised for CAMHS transformation?

  4. What are the effective ways of measuring transformation outcomes?

Defining coverage, breadth and inclusion/exclusion criteria

The pragmatic purpose of the SLR determined its coverage, which corresponded with the key priorities for local CAMHS transformation identified through the three-stage Delphi study [30]. The process is outlined in Fig. 1 below.

Fig. 1 Process of defining coverage of the SLR

Delphi round 1 (qualitative): Following completion of the first round of the Delphi, JKA conducted an “evidence charting” search to understand the amount and type of evidence available for each key priority identified. A systematic search strategy was then developed to reflect the key priorities identified through the first round of the Delphi (see Table 4).

Table 4 Coverage and breadth of the SLR based on the areas identified through the Delphi study

Relevant Medical Subject Headings (MeSH) terms and key words in titles and abstracts were searched (see Additional file 1, Table 1). Search results for each key priority were sorted according to relevance (using EBSCO Host results sorting tool) and the first 2000 most relevant abstracts were retrieved and screened using preliminary inclusion/exclusion criteria [15] (See Additional file 1, Table 2). Full texts of included abstracts were retrieved and screened, and full text publications that met inclusion criteria were assigned to categories corresponding with key priorities for CAMHS transformation identified through the first round of the Delphi (See Additional file 1, Table 3).

Delphi round 2: Outcomes of the evidence charting searches, together with findings from the second round of the Delphi study, guided the research team’s discussion about the breadth and depth of the definitive literature searches, the inclusion of any additional areas, and the final inclusion/exclusion criteria. The research team compared the Delphi statements that achieved the highest consensus regarding their particular importance for local CAMHS transformation against areas identified through the evidence charting searches, to define the coverage and depth of the definitive literature search. Additionally, the team consulted external experts about the completeness of the list of key priorities for CAMHS transformation, and about areas important for designing and delivering effective CAMHS that were not featured in the Delphi results but in their opinion should be included (see Table 4).

Delphi round 3: Following the third and final round of the Delphi, the research team selected the themes within each prioritised area for data extraction and reporting (see Table 5).

Table 5 Themes identified within each key priority area

Defining the final inclusion/exclusion criteria

The methodological framework for conducting scoping studies recommends that initial inclusion/exclusion criteria are developed at the onset of the study, but reviewed and, if necessary, revised post hoc in light of search results and researchers’ increased familiarity with the evidence [15, 17]. We predefined inclusion/exclusion criteria to exclude papers that were not empirical studies, reviews of other studies, policy documents or guidelines (e.g. commentaries, letters, book reviews). To further define inclusion/exclusion criteria we applied the Population, Concept and Context (PCC) criteria suggested by the Joanna Briggs Institute guidelines [16]. Once again, our inclusion/exclusion criteria reflected the primarily pragmatic purpose of the review. For example, we only included studies conducted in developed countries, where the contexts and settings are likely to be similar to the UK. We decided to exclude studies that did not report CYP mental health (MH) or wellbeing outcomes, and to include studies in which the intervention was delivered to adults, as long as the aim was to influence CYP’s MH outcomes. Due to time restrictions and the large number of retrieved publications, we excluded studies focusing on emotional or behavioural symptoms associated with non-mental health disorders (e.g. autism spectrum disorders) and studies focusing solely on single therapeutic approaches (e.g. cognitive behavioural therapy (CBT) for depression). Given the purpose of the review and the large number of published reviews of the effectiveness of different therapeutic approaches in addressing various MH problems (e.g. CBT for depression), we decided that the focus must include service delivery and be relevant to community care, regardless of the persons or organisation delivering the intervention, and regardless of the severity or duration of the mental health condition. We included studies of any design. See Table 6.

Table 6 Scoping literature review inclusion/exclusion criteria

SLR stage 2: identifying relevant studies

To develop search terms for the definitive database searches, JKA re-read the papers included after screening publications identified through the evidence charting searches (see Additional file 1, Table 2), and listed the key words and index terms used in each priority area. This list guided the development of terms for the electronic bibliographic database searches (see Additional file 1, Table 4). Search strategies for all areas were discussed with a subject librarian and agreed by the research team, and each strategy was trialled to check that it produced accurate results. Databases and other sources searched are listed elsewhere [26]. To ensure that the search strategy was replicable, each search was stored exactly as run, together with search set numbers and the number of records retrieved. For each search we separately recorded the search date, the period searched, and any language or publication status restrictions.
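As an illustration only (not part of the original review), the audit trail just described, with each search stored exactly as run alongside its set number, date, coverage and restrictions, could be kept as a simple structured log. The field names and the example record below are hypothetical:

```python
import csv
import io
from dataclasses import dataclass, asdict

# Hypothetical record mirroring the details stored for each search:
# the exact strategy as run, set numbers, dates, coverage and restrictions.
@dataclass
class SearchRecord:
    database: str           # e.g. "MEDLINE (EBSCOhost)"
    search_date: str        # date the search was run (ISO format)
    period_searched: str    # e.g. "1990-2016"
    strategy_as_run: str    # exact search string, stored verbatim
    set_number: int         # search set number within the session
    records_retrieved: int  # number of hits returned by this set
    restrictions: str = ""  # language / publication-status limits, if any

def write_search_log(records, fh):
    """Write the audit trail as CSV so any search can later be re-run."""
    writer = csv.DictWriter(fh, fieldnames=list(asdict(records[0]).keys()))
    writer.writeheader()
    for rec in records:
        writer.writerow(asdict(rec))

# Example entry with made-up values.
log = [SearchRecord("MEDLINE (EBSCOhost)", "2016-03-01", "1990-2016",
                    "(mental health) AND (adolescen*)", 1, 1842,
                    "English only")]
buf = io.StringIO()
write_search_log(log, buf)
print(buf.getvalue().splitlines()[0])  # prints the CSV header row
```

Storing such a log alongside the review protocol means any search can be re-run later and its yield compared against the recorded count.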

SLR stage 3: study selection

As recommended for systematic reviews, prior to commencing the selection process the initial inclusion/exclusion criteria were pilot tested by JKA on a sample of 20 papers from each identified area. Results of pilot screening were recorded in a table including reference, key priority area, theme, inclusion/exclusion decision, reason for exclusion (if excluded), and comments on a paper’s relevance if there were doubts about whether it should be included or excluded. Results of pilot screening were discussed within the research team, with particular focus on applying the refined inclusion/exclusion criteria (Table 6) and on relevance. The study selection process comprised the following stages:

  1. All search results from electronic database searches and hand searches were merged using reference management software (EndNote), and duplicate records of the same report were removed. Documents retrieved through grey literature searches were filed separately in an Excel database and screened separately for relevance.

  2. Titles and abstracts of all papers and grey literature documents were examined to remove obviously irrelevant papers (e.g. not in English, not concerning MH services).

  3. Abstracts/executive summaries of the remaining papers were screened against inclusion/exclusion criteria.

  4. Full texts of the remaining, potentially relevant papers were retrieved, and multiple reports of the same study were linked.

  5. Full texts were examined for compliance with inclusion/exclusion criteria. After abstract and full text screening were completed, another member of the team double screened 10% of papers in each review area. Disagreements and boundary papers were resolved by consensus through team discussion. We kept a detailed record of the outcomes of each stage, including rejected papers and reasons for rejection. Additionally, each screening researcher identified and recorded the main themes of each paper in its key priority area (see Table 5).
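The staged selection and record-keeping described above can be sketched as a small pipeline. The records, stages and exclusion reasons below are illustrative placeholders, not data from the review:

```python
from collections import OrderedDict

# Illustrative records: each has a stable id, a title, a flag for whether the
# title/abstract looked relevant, and whether it met the full criteria.
records = [
    {"id": "r1", "title": "CAMHS service redesign",    "relevant": True,  "meets_criteria": True},
    {"id": "r1", "title": "CAMHS service redesign",    "relevant": True,  "meets_criteria": True},  # duplicate report
    {"id": "r2", "title": "Adult cardiology audit",    "relevant": False, "meets_criteria": False},
    {"id": "r3", "title": "School-based MH programme", "relevant": True,  "meets_criteria": False},
]

audit = []  # detailed record of each decision, mirroring the review's log

# Stage 1: merge sources and remove duplicate records of the same report.
unique = list(OrderedDict((r["id"], r) for r in records).values())

# Stage 2: title/abstract screen removes obviously irrelevant papers.
after_titles = []
for r in unique:
    if r["relevant"]:
        after_titles.append(r)
    else:
        audit.append((r["id"], "excluded", "obviously irrelevant"))

# Stages 3-5: screen against inclusion/exclusion criteria (in the review this
# was done on abstracts and then full texts, with 10% double screening).
included = []
for r in after_titles:
    if r["meets_criteria"]:
        included.append(r)
        audit.append((r["id"], "included", ""))
    else:
        audit.append((r["id"], "excluded", "failed inclusion criteria"))

print([r["id"] for r in included])  # → ['r1']
print(audit)
```

The point of the `audit` list is that every exclusion carries its reason, which is what makes the selection process verifiable after the fact.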

SLR stage 4: data extraction

After completing the last round of the Delphi study, we collated a final list of key priorities for local CAMHS redesign [30]. This list informed the final selection of themes within each identified area for detailed extraction and reporting (see Table 5).

Before data extraction, the lead researcher developed an extraction table to be trialled by members of the research team. Because the review included a variety of evidence types (papers describing qualitative and quantitative studies, process evaluation studies, theory and framework papers, discussion and opinion papers, and policy documents), we developed different forms to extract different types of evidence. This aimed to maximise the amount of meaningful information extracted, and to facilitate the subsequent narrative synthesis. Data for each key priority area identified through the Delphi study were organised in separate tables. Extracted data were organised under the key themes identified in the previous stage (see Table 5). Identifying key themes was an iterative process, with some themes added, deleted, or replaced by new, more relevant ones as the analysis progressed.
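Type-specific extraction forms of this kind can be represented as a simple mapping from evidence type to the fields extracted. The field lists below are illustrative assumptions, not the forms actually used in the review:

```python
# Illustrative extraction templates keyed by evidence type; each lists the
# fields a reviewer fills in, so each evidence type yields comparable rows.
EXTRACTION_FORMS = {
    "quantitative": ["citation", "design", "sample", "intervention",
                     "comparator", "outcomes", "effect_estimates"],
    "qualitative":  ["citation", "design", "sample", "setting",
                     "themes", "illustrative_quotes"],
    "policy":       ["citation", "issuing_body", "scope",
                     "recommendations", "implementation_guidance"],
}

def blank_form(evidence_type):
    """Return an empty extraction row for the given evidence type."""
    return {field: None for field in EXTRACTION_FORMS[evidence_type]}

row = blank_form("policy")
row["issuing_body"] = "Department of Health"  # example entry only
print(sorted(k for k, v in row.items() if v is None))  # fields still to fill
```

Keeping the templates in one place makes it easy to add or retire fields as themes evolve during the iterative analysis, while every extracted row stays comparable within its evidence type.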

SLR stage 5: collating, summarizing and reporting results

The pragmatic purpose of the SLR guided the selection of frameworks to summarise and report results. We needed a ‘fit for purpose’ method of analysing and synthesising findings that would be robust, systematic and scientifically driven, yet presented in a way that is useful and accessible to commissioners and other decision-makers.

Firstly, we provided a numerical account of findings for each key priority area. We reported years of publication, countries where studies were conducted, and study designs or types of publication in both numerical and graphical form. This enabled readers to establish whether evidence sources were up to date and relevant in the UK context; for example, in some areas only a small fraction of reported studies were conducted in the UK, or most studies were dated before the year 2000. Although we were not assessing the quality of evidence, we noted that in some areas the majority of publications were opinion and discussion papers, rather than primary research or systematic reviews. This was important to highlight, as any conclusions or recommendations made in regard to these priority areas reflected experts’ opinions rather than research evidence.
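A numerical account of this kind can be produced directly from the extraction tables. The sketch below, using made-up study metadata, tallies designs, countries and publication years in the way described:

```python
from collections import Counter

# Hypothetical metadata extracted for one priority area.
studies = [
    {"year": 1998, "country": "USA", "design": "opinion/discussion"},
    {"year": 2005, "country": "UK",  "design": "RCT"},
    {"year": 2012, "country": "UK",  "design": "systematic review"},
    {"year": 2014, "country": "AUS", "design": "opinion/discussion"},
]

by_country = Counter(s["country"] for s in studies)
by_design = Counter(s["design"] for s in studies)
uk_fraction = by_country["UK"] / len(studies)
pre_2000 = sum(1 for s in studies if s["year"] < 2000)

# These tallies flag, for example, how much of the evidence is UK-based,
# how current it is, and whether opinion papers dominate an area.
print(by_design.most_common(1))  # → [('opinion/discussion', 2)]
print(f"UK share: {uk_fraction:.0%}, pre-2000 studies: {pre_2000}")
```

The same counters feed directly into the graphical summaries (bar charts of designs, countries and years) that let readers judge the currency and relevance of the evidence at a glance.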

Before commencing the narrative data analysis, we created a logic map to represent all themes in each key priority area, to illustrate how some themes from the same or different areas overlap or complement each other, and to help us organise the synthesis to provide the most comprehensive picture (see Additional file 2). To summarise available evidence, draw conclusions, and make clear recommendations regarding the most appropriate, feasible and acceptable ways of delivering services, effective procedures for implementing these services, and ways of measuring outcomes, we carried out a narrative synthesis of evidence for each key priority for local service transformation. The narrative synthesis process was broadly guided by the framework proposed by Popay et al. [31]. This framework, applicable to both effectiveness and implementation reviews, comprises four main phases that are iterative rather than linear. The framework was developed mainly to synthesise data gathered through systematic literature reviews; we therefore adjusted it to synthesise broader and more varied evidence.

Phase I: Developing a theory of change. Explaining a theoretical model of how, why and for whom an intervention works is a recommended rather than mandatory stage of narrative synthesis [31]. When conducting a systematic review, a theory of change is usually developed in the initial stages to understand the theory behind the intervention and to inform decisions about review questions and what types of studies to include. The theory of change can guide the interpretation of findings and an assessment of their applicability. However, in systematic reviews, review questions are well defined and usually narrow, regarding only interventions with a particular focus or design. In contrast, the described SLR had a much broader objective, with research questions that reflected different stages of service transformation including the development of new services or the redesign of existing ones, service implementation, and the measurement of transformation outcomes. The review focused on a number of areas of service delivery identified as key priorities for local CAMHS transformation. In some priority areas (mainly regarding the service delivery model, service redesign and implementation), the majority of publications were discussion and opinion papers and case studies, while in other areas (e.g. technology-delivered MH interventions, education and raising awareness) we identified a large number of empirical evaluations of the effectiveness of interventions targeting various outcomes. For the latter, we attempted to outline a theory of change post hoc to describe the mechanisms and factors that determined an intervention’s effectiveness in achieving intended outcomes and contributed to its successful or failed implementation.

Phase II: Developing a preliminary synthesis of evidence. Evidence synthesis was conducted separately for each key priority area; however, we identified some overlaps between themes in different areas (see Fig. 2). Where themes overlapped, data from different areas were merged to provide a comprehensive summary. To develop a preliminary synthesis of evidence for each priority area, we organised evidence on a number of levels to identify patterns in the factors that contributed to the effectiveness of services or interventions. Initially, we grouped papers in each area based on the key themes we had assigned them to (see Table 5). In some instances this was sufficient for a preliminary data synthesis (e.g. development and psychometric properties of outcome measures). Where further organisation of evidence was required, we grouped the papers as follows, adjusting the grouping depending on the nature of the evidence available in each area reviewed:

Fig. 2 Grouping of papers for preliminary data synthesis

From the analysis of information included in the extraction tables, we identified some initial, broad themes and patterns to explore further in the next stage. For each priority area we also considered potential limitations resulting from both the nature of the available evidence (e.g. lack of UK studies) and the review process (e.g. decisions about search strategy), to be taken into account when translating findings from the review into practice.

Phase III: Exploring relationships in the data. The aim of this phase is to identify factors explaining differences in the direction and size of effects across included studies, and factors that might explain differences in facilitators and barriers to successful implementation. The SLR aimed to provide an evidence base to guide the transformation of local CAMHS in each identified priority area. In practice, we were aiming to develop a robust, credible, yet concise and user-friendly summary of evidence (see Additional file 3) readily available to commissioners, to support them in making decisions about developing, implementing and evaluating the services most appropriate in different contexts and for different user groups. We explored relationships between data in each key priority area to determine which ways of delivering services are the most feasible, clinically and cost-effective, and acceptable to users in different contexts, and to identify common factors contributing to effectiveness and acceptability. Additionally, we described factors that may serve as barriers or facilitators to effective implementation. Finally, we explored optimal ways of measuring potential transformation outcomes that would provide rich, reliable data while minimising the burden on service users and clinicians.

Phase IV: Assessing the robustness and strength of evidence for drawing and generalising conclusions. SLR guidelines explicitly state that scoping studies do not aim to assess the quality of available evidence, but only report the extent, range and nature of research activity undertaken in a field of interest [13, 16]. We accordingly provided a narrative description of the types of available evidence, identified gaps for each key priority area, and elucidated what impact this may have on the robustness of any recommendations made and on translating findings into practice (see Additional file 3). We did not assess the quality of evidence, since our SLR included grey literature. However, where feasible, we would recommend assessing the quality of included studies to increase the reliability of findings and recommendations for practice.

Discussion

Only in recent years have scoping literature reviews become an increasingly popular method of evidence synthesis, particularly in relation to healthcare service delivery and provision of care [22]. Earlier SLR frameworks described evidence mapping, identifying key concepts, and identifying research gaps in the field as the primary objectives of scoping studies [14, 16, 19, 20, 24]. More recently, however, authors have placed greater emphasis on conducting SLRs to inform evidence-based provision of services and to disseminate findings to audiences who could benefit from access to research evidence but lack the capacity or skills necessary to collate it themselves [7, 19, 20, 23, 32]. Scoping reviews generally aim to answer broader questions than systematic and rapid reviews, and leave room for iterations introduced during the review process. This may be an advantage when answering questions about the provision of services, as healthcare systems are dynamic and often have to respond rapidly to changing circumstances. The most recent advancements in SLR methodology have introduced more rigour to the review process [19, 24], increasing the replicability and reliability of findings and recommendations for practice. These advancements bridge the gap between SLRs and systematic and rapid reviews, allowing researchers to rapidly review a large body of diverse evidence without sacrificing methodological rigour or scope.

The main aim of the exemplar SLR was to develop an evidence base to inform and guide the transformation of local CAMHS. Our objective was to provide a readily available, comprehensive, and user-friendly repository of research evidence for local commissioners to help them make evidence-informed decisions about redesigning currently available CAMHS services and commissioning new ones. Our decision to conduct a scoping rather than a systematic or rapid literature review was dictated by a number of factors. Firstly, we started with a broad scope that we then gradually narrowed down based on the outcomes of the Delphi study. Secondly, our focus was not on the effectiveness of certain therapeutic approaches or interventions delivered in CAMHS; had that been the objective, a systematic review would have been far more appropriate [9]. Rather, our focus was on effective, feasible, and acceptable ways of delivering the services prioritised by stakeholders, on implementation strategies that would facilitate the adoption and spread of new services, and on measuring outcomes. Our initial charting searches showed that if we had only included peer-reviewed publications (as recommended for a systematic review), we would not have been able to present a full picture and make comprehensive recommendations, since a large body of relevant evidence was published in grey literature (i.e. policy documents and reports from government bodies, third sector organisations and think-tanks). Moreover, the majority of these documents did not adhere to research reporting guidelines and would not pass quality appraisal. Nevertheless, they offered valuable information that could support commissioners in making decisions about redesigning existing services and commissioning new ones. Finally, unlike in systematic reviews, our questions and methods were driven by practice rather than research objectives.
Although we identified knowledge gaps, our primary focus was to collate evidence to support CAMHS transformation, i.e. evidence that could be rapidly translated into practice. We only reported on areas that were relevant from the practice point of view and the manner of reporting was driven by practical requirements, thus the review was unlikely to be exhaustive.

Our methodology was broadly guided by Arksey and O’Malley’s framework [15]; however, some modifications were made at different stages to better reflect the largely pragmatic objective of this review. We needed a ‘fit for purpose’ method of analysing and synthesising findings that would be robust, systematic, and scientifically driven, yet presented in a way that is useful and accessible to commissioners and other decision-makers.

The most complex aspect of the review was defining its coverage and breadth. Unlike in most reviews, this was an iterative process primarily driven by practical requirements rather than research purpose, and was informed by the outcomes of each round of the Delphi consultations to determine key priorities for local CAMHS redesign. We relied not only on the expertise of the research team members, but also drew on the expertise of other mental health researchers, CAMHS clinicians, service managers, commissioners and other decision-makers. We kept a detailed record of the process, noting all sources of information and the factors that guided our decisions about the review’s coverage and breadth.

Inclusion and exclusion criteria were also defined with practical purpose in mind; however, it is important to note that decisions about which studies to include created some limitations to the generalisability and utility of our findings. For example, we decided to include only studies conducted in developed countries. The rationale for this decision was that we needed to focus on contexts and settings similar to those in the UK, as the findings were meant to inform local practice. We assumed that service implementation and sustainability in developing countries are likely to be influenced by factors that may not be so prominent in the UK (e.g. very limited resources, remote geographical locations).

We applied rigorous processes used in systematic reviews to define the search strategy, select papers for inclusion, and extract and analyse data. We kept detailed records documenting every step to ensure our methods were systematic, verifiable and replicable. Initially, we extracted data and synthesised evidence separately for each key priority area. Subsequently, we explored relationships between the data in each area to determine which ways of delivering services were the most feasible, clinically and cost-effective, and acceptable to users in different contexts. We also identified common factors contributing to effectiveness, feasibility and acceptability. Finding these commonalities allowed us to synthesise a very large amount of data relatively quickly and in a succinct manner. Our outputs were concise summaries of findings and recommendations, focusing primarily on practical implications and foregoing methodological considerations (see Additional file 3). We did not conduct a critical appraisal of evidence, since we included grey literature. We noted limitations to our findings, which mainly resulted from the lack of UK studies, the lack of empirical evidence in some areas (i.e. retrieved publications were predominantly discussion and opinion papers), and the methodological decisions we made.

Limitations of proposed methodology

It is important to highlight that the described methodology has a number of limitations. An SLR informed by practical requirements is unlikely to be exhaustive. Research-driven SLRs are usually comprehensive and cover a distinct topic, while practical questions are likely to be focused on a narrow area without necessarily considering the broader context. Practical purpose will also dictate the coverage and breadth of the review, the inclusion/exclusion criteria, and to some extent the data analysis methods. In most cases this will mean focusing on evidence relevant to a specific, usually narrow context, resulting in significant constraints on the generalisability and application of the findings. Focusing on a narrow, specific context also restricts conclusions about knowledge gaps; caution is therefore needed when drawing conclusions about the need for a full systematic review, or when recommending directions for future research.

An SLR aiming to inform practice often includes varied types of evidence, some of which do not meet quality standards for research and reporting. While findings from such reviews can inform broadly defined service provision, implementation or redesign, they are not a trustworthy evidence base for determining the effectiveness of a particular service or intervention; in other words, they are not designed to support decisions about using specific interventions or treatments. Such decisions are more reliably guided by findings from systematic reviews.

Although conducting an SLR is certainly quicker than a full systematic review, it is still a time- and resource-intensive process. Particularly for SLRs aiming to inform practice, the initial stages (i.e. defining research questions, coverage and inclusion criteria) can be lengthy, since they require coordinated input from different groups of experts and stakeholders. If resources are an issue, it might be more feasible to conduct a rapid review; however, given the restrictions on literature searches and the stringent inclusion/exclusion criteria, the findings are likely to reflect only part of the available evidence.

Finally, although involving commissioners and decision-makers in defining the research questions and scope of a review is beneficial when it comes to informing practice, there are also downsides to consider. Different decision-makers may have different, often competing priorities that do not always align with those of other stakeholders (e.g. service users and clinicians). Designing a review that addresses all these priorities may be very challenging, and foregoing some to give precedence to others may limit the practical applications and usefulness of the findings.

Conclusions

Despite some methodological limitations, SLRs are a useful method for rapidly synthesising a large body of evidence to inform commissioning and transformation of CAMHS. Unlike systematic reviews, SLRs allow researchers to start with broader questions, to explore an issue from different perspectives, and perhaps to find more comprehensive solutions that are not only effective but also feasible and acceptable to key stakeholders. Unlike for systematic reviews, there is no strict requirement to rigorously adhere to a prescribed SLR process, and reviewers can skip or modify certain steps as they see fit. However, non-adherence to guidelines is likely to introduce significant bias and compromise the reliability of findings. The framework we propose brings together numerous advancements made to SLR methodology and introduces additional protocols to enhance methodological robustness, minimise bias, and ensure reliability of findings.

Recommendations

▪ Assemble a research team with a range of skills and expertise.

▪ Draw on expertise of external partners, particularly practitioners, decision-makers and commissioners who will be translating findings into practice.

▪ Document the review process in detail. Note the rationale for each decision and consider how it would impact the generalisability and utility of findings.

▪ Use systematic procedures for literature searches, selection of studies, data extraction and reporting. Keep detailed records to ensure replicability.

▪ If feasible, appraise the quality of included evidence.

▪ Be transparent about limitations of findings.

▪ Commit to and embrace open science principles.

Availability of data and materials

All data and materials relating to the exemplar scoping review can be requested from Dr. Joanna Anderson.

Abbreviations

SLR:

Scoping literature review

CAMHS:

Children and Adolescent Mental Health Services

MeSH:

Medical subject headings

CYP:

Children and young people

PCC:

Population, concept and context criteria

MH:

Mental health

CBT:

Cognitive-behavioural therapy

References

  1. Curtis K, Fulton E, Brown K. Factors influencing application of behavioural science evidence by public health decision-makers and practitioners, and implications for practice. Prev Med Rep. 2018;12:106–15.

  2. Bach-Mortensen AM, Lange BC, Montgomery P. Barriers and facilitators to implementing evidence-based interventions among third sector organisations: a systematic review. Implement Sci. 2018;13(1):103.

  3. Barr-Walker J. Evidence-based information needs of public health workers: a systematized review. J Med Libr Assoc. 2017;105(1):69.

  4. Van Der Graaf P, Cheetham M, McCabe K, Rushmer R. Localising and tailoring research evidence helps public health decision making. Health Inf Libr J. 2018;35(3):202–12.

  5. Wilson P, Farley K, Bickerdike L, Booth A, Chambers D, Thompson C, et al., editors. Effects of a demand-led evidence briefing service on the uptake and use of research evidence by commissioners of health services. 9th Annual Conference on the Science of Dissemination and Implementation; 2016: AcademyHealth.

  6. Andermann A, Pang T, Newton JN, Davis A, Panisset U. Evidence for health II: overcoming barriers to using evidence in policy and practice. Health Res Policy Syst. 2016;14(1):17.

  7. Barker J, Ortega M. Systematic reviews and evidence-based practice. In: Evidence-based practice for nurses and healthcare professionals. 2016. p. 126.

  8. Pearson A. Balancing the evidence: incorporating the synthesis of qualitative data into systematic reviews. JBI Reports. 2004;2(2):45–64.

  9. Institute of Medicine Committee on Standards for Developing Trustworthy Clinical Practice Guidelines; Graham R, Mancher M, editors. Clinical practice guidelines we can trust. Washington, DC: National Academies Press; 2011.

  10. Munn Z, Peters MD, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018;18(1):143.

  11. Hamel C, Michaud A, Thuku M, Affengruber L, Skidmore B, Nussbaumer-Streit B, et al. Few evaluative studies exist examining rapid review methodology across stages of conduct: a systematic scoping review. J Clin Epidemiol. 2020;126:131–40.

  12. Haby MM, Chapman E, Clark R, Barreto J, Reveiz L, Lavis JN. What are the best methodologies for rapid reviews of the research evidence for evidence-informed decision making in health policy and practice: a rapid review. Health Res Policy Syst. 2016;14(1):83.

  13. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5(1):56.

  14. Mays N, Roberts E, Popay J. Synthesising research evidence. In: Studying the organisation and delivery of health services: research methods. 2001. p. 220.

  15. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

  16. The Joanna Briggs Institute. Joanna Briggs Institute Reviewers’ Manual 2015 edition: methodology for JBI scoping reviews. The Joanna Briggs Institute; 2015.

  17. Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5(1):69.

  18. Pham MT, Rajić A, Greig JD, Sargeant JM, Papadopoulos A, McEwen SA. A scoping review of scoping reviews: advancing the approach and enhancing the consistency. Res Synth Methods. 2014;5(4):371–85.

  19. Colquhoun HL, Levac D, O'Brien KK, Straus S, Tricco AC, Perrier L, et al. Scoping reviews: time for clarity in definition, methods, and reporting. J Clin Epidemiol. 2014;67(12):1291–4.

  20. Daudt HM, van Mossel C, Scott SJ. Enhancing the scoping study methodology: a large, inter-professional team’s experience with Arksey and O’Malley’s framework. BMC Med Res Methodol. 2013;13(1):48.

  21. Khalil H, Peters M, Godfrey CM, McInerney P, Soares CB, Parker D. An evidence-based approach to scoping reviews. Worldviews Evid-Based Nurs. 2016;13(2):118–23.

  22. Anderson S, Allen P, Peckham S, Goodwin N. Asking the right questions: scoping studies in the commissioning of research on the organisation and delivery of health services. Health Res Policy Syst. 2008;6(1):7.

  23. Peters MD, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. Int J Evid Based Healthc. 2015;13(3):141–6.

  24. Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

  25. Anderson JK, Ford T, Soneson E, Coon JT, Humphrey A, Rogers M, et al. A systematic review of effectiveness and cost-effectiveness of school-based identification of children and young people at risk of, or currently experiencing mental health difficulties. Psychol Med. 2019;49(1):9–19.

  26. Anderson JK, Howarth E, Vainre M, Jones P, Humphrey A. A scoping literature review of service-level barriers for access and engagement with mental health services for children and young people. 2017.

  27. Soneson E, Howarth E, Ford T, Humphrey A, Jones PB, Coon JT, et al. Feasibility of school-based identification of children and adolescents experiencing, or at-risk of developing, mental health difficulties: a systematic review. Prev Sci. 2020;15:1–23.

  28. Department of Health. Future in mind: promoting, protecting and improving our children and young people’s mental health and wellbeing. gov.uk; 2015.

  29. NHS England. Local transformation plans for children and young people's mental health and wellbeing: guidance and support for local areas. 2015.

  30. Howarth E, Vainre M, Anderson Kosmala J, Humphrey A, Lombardo C, Hanafiah A, Jones PB. A Delphi study to identify key features of community-based child and adolescent mental health services in the East of England. BMJ Open. 2019.

  31. Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, et al. Guidance on the conduct of narrative synthesis in systematic reviews. Prod ESRC Methods Programme Version. 2006;1:b92.

  32. Peterson J, Pearce PF, Ferguson LA, Langford CA. Understanding scoping reviews: definition, purpose, and process. J Am Assoc Nurse Pract. 2017;29(1):12–6.


Acknowledgements

The authors would like to thank Ms. Jessica Fritz from Cambridge University Department of Psychiatry for reviewing the manuscript and providing valuable feedback.

Funding

NIHR ARC East of England funded the exemplar scoping review and Delphi study described. The funding covered salaries of JKA, EH and MV.

Author information

Affiliations

Authors

Contributions

JKA wrote the manuscript. She led the scoping review described in the paper. EH led the Delphi study, contributed to the described scoping review (screening), and reviewed and commented on the manuscript. MV and AH contributed to the Delphi study and scoping review (MV conducted screening and data extraction) and reviewed and commented on the manuscript. PBJ and TJF reviewed and commented on subsequent manuscript versions. All authors read and approved the final manuscript and supplemental materials.

Corresponding author

Correspondence to Joanna K. Anderson.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1: Appendix 1.

Table 1: Search terms used in evidence charting searches. Table 2. Initial inclusion/exclusion criteria for evidence charting searches. Table 3: Results of evidence charting search. Table 4: Search terms used in final SLR searches.

Additional file 2: Appendix 2.

Figure 1: CAMHS delivery models – logic map.

Additional file 3: Appendix 3.

Executive summaries of findings for the exemplar review.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


Cite this article

Anderson, J.K., Howarth, E., Vainre, M. et al. Advancing methodology for scoping reviews: recommendations arising from a scoping literature review (SLR) to inform transformation of Children and Adolescent Mental Health Services. BMC Med Res Methodol 20, 242 (2020). https://doi.org/10.1186/s12874-020-01127-3
