  • Research article
  • Open access

Confirmatory factor analysis of the Evidence-Based Practice Attitude Scale (EBPAS) in a large and representative Swedish sample: is the use of the total scale and subscale scores justified?



There is a call for valid and reliable instruments to evaluate the implementation of evidence-based practices (EBP). The 15-item Evidence-Based Practice Attitude Scale (EBPAS) measures attitudes toward EBP, incorporating four lower-order factor subscales (Appeal, Requirements, Openness, and Divergence) and a Total scale (General Attitudes). It is one of the few measures of EBP attitudes that has been evaluated for its psychometric properties. The reliability of the Total scale has been repeatedly supported, as has the multidimensionality of the inventory. However, whether all of the items contribute to the EBPAS Total beyond their subscales has yet to be demonstrated. In addition, the Divergence subscale has been questioned because of its low correlation with the other subscales and low inter-item correlations. The EBPAS is widely used to tailor and evaluate implementation efforts, but a Swedish version has not yet been validated. This study aimed to contribute to the development and cross-validation of the EBPAS by examining the factor structure of a Swedish-language version in a large sample of mental health professionals.


The EBPAS was translated into Swedish and completed by 570 mental health professionals working in child and adolescent psychiatry settings spread across Sweden. The factor structure was examined using first-order, second-order and bifactor confirmatory factor analytic (CFA) models.


Results suggested adequate fit for all CFA models. The EBPAS Total was strongly supported in the Swedish version. Support for the hierarchical second-order model was also strong, while the bifactor model gave mixed support for the subscales. The Openness and Requirements subscales came out best, while there were problems with both the Appeal (e.g. not different from the General Attitudes factor) and the Divergence subscales (e.g. low reliability).


Overall, the psychometric properties were on par with the English version and the total score appears to be a valid measure of general attitudes towards EBP. This is the first study supporting this General Attitudes factor based on a bifactor model. Although comparatively better supported in this Swedish sample, we conclude that the use of the EBPAS subscale scores may result in misleading conclusions. Practical implications and future directions are discussed.



Health practitioners’ attitudes and values play an important role in implementing evidence-based practice (EBP) in community care settings [1]. Positive attitudes along with subjective norms and a person’s self-efficacy may influence the practitioner’s decision whether or not to implement a new practice [2, 3]. According to Rogers, the innovation-decision process starts with knowledge and persuasion when the provider [practitioner] forms a favorable or unfavorable attitude toward the innovation [4]. Both general and specific constructs play an important role in understanding and predicting behavior. General predictor variables are suggested as the best to predict general outcomes and specific predictor variables the best to predict specific outcomes [5, 6]. Assessing both specific and general attitudes towards EBP may help in the tailoring and evaluation of an implementation program [7].

Psychometrically sound and theory-based measurements, including those measuring attitude towards EBP, are essential for the field of implementation science [8, 9]. A good scale consists of a heterogeneous set of items that capture the entire breadth of a given construct while providing acceptable reliability. A scale’s dimensionality can have important consequences for scale scoring and interpretation [6]. Although only valid and reliable measures can confidently and consistently measure what they are intended to measure, psychometric information is absent in about half of all articles using various scales as part of innovation implementation programs in healthcare settings [8, 10]. Other problems are the lack of short and pragmatic instruments and instruments with a broad application used across studies [11].

The Evidence-Based Practice Attitude Scale (EBPAS) is an instrument of high overall psychometric quality [12]. The EBPAS was developed by Aarons on the basis of a comprehensive literature review and consultation with mental health service providers and researchers. Fifteen items generate a total score and subscale scores covering four important domains of attitudes toward EBP: the intuitive Appeal of EBP to the provider; response to organizational Requirements; Openness to new/manualized interventions; and the perceived Divergence of usual practice from research-based interventions [1, 13, 14]. Subscale scores are used to obtain information about the specific domains of attitude toward EBP, and the total score (where the Divergence items are reversed) is used to estimate a common dimension of global attitude to EBP. Previous studies have suggested good internal consistency for the total scale and the subscale scores, except for the Divergence subscale [1, 13, 14].

The convergent validity has been examined by correlation with theoretically related constructs, supporting the total scale and, to some extent, the subscales [15,16,17,18]. Consistent with expectation, EBPAS total and subscale scores positively correlate with provider education level, leadership quality and attitudes towards change, and the implementation climate [14, 19,20,21,22]. With respect to concurrent validity, attitudes toward EBP, measured by the EBPAS, are expected to be related to service delivery [22]. Previous studies find that scores on the Openness subscale are positively correlated with self-reported use of manuals and cognitive-behavioral therapy [23, 24], while higher scores on the Divergence subscale are associated with the use of non-evidence-based therapy strategies [24]. However, no such associations were found between usage and scores on the Appeal and Requirements subscales, and correlations between practice indicators and the EBPAS total score were not tested [23, 24]. In further support of the concurrent validity of the EBPAS, one study found that psychotherapists who scored higher on the Openness scale prior to training in EBP reported more fidelity-consistent modifications at follow-up, and those who scored higher on the Appeal subscale reported more fidelity-inconsistent modifications [25]. Subsequent studies have also found that higher Requirements scores, but not scores on the other subscales, are associated with non-adherence, non-skillful usage, and non-usage of EBP techniques after training [26, 27]. Staff turnover is a major problem in mental health settings and may jeopardize implementation and sustainment of EBP [28]. Higher scores on the Openness subscale predicted greater workplace retention [28]. Practitioners who scored higher on the Divergence subscale had a higher likelihood of EBP discontinuation, another threat to EBP sustainment [29].
The latter study used only the Divergence and Openness subscales and no association between Openness subscale scores and EBP discontinuation was found.

The construct validity of the EBPAS has been investigated in several studies using confirmatory factor analysis (CFA) [1, 13, 14] (Additional file 1). The fit of first-order models supports the placement of the EBPAS items in four separate subscales, with moderate to strong factor loadings for all subscales except Divergence [1, 13, 14]. Correlations between the factors have generally been moderate, suggesting that they in part measure the same overall construct, but here too the Divergence scale has shown weak correlations with some of the other scales [1, 13]. To remedy the lack of support for the Divergence scale, a more complicated five-factor model has been tested wherein the Divergence subscale is split into two factors [30], but subsequent research did not support this model [31]. The hypothesis of a general attitude factor using all 15 items (i.e. a total score) from the EBPAS has been supported by acceptable model fit for a second-order model [14]. The first-order factor loadings were strong and loadings on the general factor moderate to strong. Again, the Divergence scale was the least supported: items from this subscale had inconsistent loadings on the Divergence factor itself, and the Divergence factor had a weak loading on the general factor [14]. Results from one bifactor model study provided preliminary support for the hypothesis that the variance in the individual EBPAS items can be attributed to a general factor and uniquely to the four domain-based factors (Appeal, Requirements, Openness, Divergence) [32] (Additional file 1). In that study, the bifactor model had a slightly better fit than the second-order model, with significant first-order factor loadings in the moderate to strong range (with an exception for the Appeal subscale). Factor loadings on the general attitudes (total) scale were moderate, except for items from the Divergence subscale, which were weak or non-significant.

In sum, the available research provides preliminary support for the proposed factor structure (four subscales and a total scale) for the EBPAS, but several issues remain. The model has been revised such that two items in the Appeal subscale were allowed to correlate, reflecting the possibility that these two items have more in common than can be accounted for by the subscale itself (e.g. they may suggest an additional specific factor) [13, 14]. Furthermore, across studies, there are clear difficulties with the factor loadings for the Divergence subscale [1, 13, 14, 30, 32].

The EBPAS has been translated into different languages and cross-validated in various settings; however, a Swedish version has not yet been validated [30,31,32,33,34,35,36]. The present study aimed to help fill this gap in the literature. Studies assessing the factorial validity of translated scales are important because they provide further evidence of the cross-cultural validity of the constructs assessed by that scale. New problems can result from a translation, and it is important to show that the translation does not weaken support for the scale’s validity.

To summarize, the EBPAS, or parts of it, is widely used in implementation research. Previous studies give some support for its construct validity, especially for the general factor. There is less support for the EBPAS as a multidimensional scale with four subscales contributing uniquely to the general attitude construct. In other words, is it meaningful to include all of the sub-factors in the EBPAS general factor? Is it wise to add items/subscales together into a total scale, or to use the subscales independently as indicators of specific attitudes towards EBP or as predictors of other implementation outcomes?

The present study aimed to address these gaps in the literature. The construct validity of a Swedish version of the EBPAS was examined in a large and representative sample of practitioners working in child mental health settings across Sweden, thus following previous studies by the scale developer [1, 13, 14]. In addition, we conducted a confirmatory bifactor analysis to evaluate the plausibility of scoring and using the subscales [32]. Specifically, this study aimed to investigate: 1) the reliability of a Swedish translation of the EBPAS; 2) its factor structure relative to previously tested models (first-order, second-order and bifactor) of the English and non-English versions of this scale; and 3) whether the sub-factors are uniquely supported in the Swedish version of the inventory, in other words, whether the subscale domains are supported once the general attitude domain has been accounted for.


Design and setting

Data from the current cross-sectional study were obtained as part of a large prospective, multi-center implementation study of evidence-based interventions for depressed youth. All publicly (state) funded child and adolescent mental health services (CAMHS) in Sweden were invited to participate in the Swedish Association of Child and Adolescent Psychiatry “Deplyftet” implementation program for youth depression. On average, an individual Swedish CAMHS serves about 64,000 children annually (range = 29,000–450,000). The current study uses a subsample of data drawn from providers working at 11 of 31 eligible CAMHS, which collectively serve about 712,000 youth (36% of all Swedish children). The individual CAMHS from which the current data are drawn represent all types of publicly owned and funded CAMHS, serving similar-sized catchment areas to the remaining CAMHS (i.e., average = 65,000 youth, range = 41,000–125,000).


The validation of the Swedish version of the EBPAS was conducted in two stages. First, the EBPAS was translated from English into Swedish by an expert group of mental health professionals following recommendations for the translation of measures (see below) [36]. Second, the EBPAS was administered to 925 professionals working in Swedish CAMHS via a web-based survey. Data were collected as a baseline assessment from October 2014 to February 2017 with 2–5 reminders sent (if necessary) and the resulting data used to examine the psychometric properties of the EBPAS.


The web-based survey included questions about the respondent’s age, gender, and professional background, followed by the EBPAS.


The EBPAS consists of 15 items rated on a Likert scale (0 = not at all to 4 = to a very great extent) and is comprised of four subscales: 1) Appeal (four items) measures the intuitive appeal of the EBPs; 2) Requirements (three items) measures the extent to which the provider would adopt a new practice if it were required; 3) Openness (four items) measures the extent to which the provider is generally willing to try new interventions; and 4) Divergence (four items) measures the extent to which the provider perceives research-based treatments as not clinically useful and/or less important than their own clinical experience [1]. Requirements differs from Openness in that the former assesses how employees respond to organizational rules and regulations, while the latter measures the extent to which the provider is generally willing to try new interventions. Previous studies report Cronbach alphas ranging from .76 to .91, except for the Divergence scale (.59 to .66) [1, 13, 14].
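The scoring described above (subscale means plus a total in which Divergence is reverse-scored) can be sketched as follows. This is a minimal illustration, not official scoring code: the item-to-subscale index mapping is an assumption pieced together from the item numbers mentioned elsewhere in this article and should be checked against the EBPAS documentation.

```python
import numpy as np

# Assumed 0-based item groupings (items 1-15 in the text become indices 0-14).
# Verify against the official EBPAS scoring key before real use.
SUBSCALES = {
    "appeal":       [8, 9, 13, 14],
    "requirements": [10, 11, 12],
    "openness":     [0, 1, 3, 7],
    "divergence":   [2, 4, 5, 6],
}
MAX_RATING = 4  # items rated 0 ("not at all") to 4 ("to a very great extent")

def score_ebpas(responses):
    """Return subscale mean scores and the total mean score.

    Divergence items are reverse-scored (MAX_RATING minus the rating)
    before entering the total, as described in the text.
    """
    r = np.asarray(responses, dtype=float)
    scores = {name: float(r[idx].mean()) for name, idx in SUBSCALES.items()}
    rev = r.copy()
    rev[SUBSCALES["divergence"]] = MAX_RATING - rev[SUBSCALES["divergence"]]
    scores["total"] = float(rev.mean())
    return scores
```

For example, a respondent who answers 4 on every item gets a Divergence subscale mean of 4.0, but a total of 44/15 ≈ 2.93 once the four Divergence items are reversed to 0.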

Cross-cultural adaptation and translation

Permission to translate the EBPAS was obtained from the scale’s developer [1]. Item 13 (Requirement subscale) was adapted for the Swedish context while preserving the integrity of the original item. The word “State” in item 13 was replaced with “National Board of Health and Welfare.” All other items were simple translations of the original. A step-wise forward-backward translation approach was utilized [37]. The EBPAS was translated (separately) into Swedish by the first and last authors (AS, HJ), who are native Swedish speakers and fluent in English. The two translations were compared, discrepancies identified, and any discrepancies or deviations from the original item were resolved and a final Swedish version produced. This Swedish version was then back-translated into English by a professional translator. The first two authors compared the back-translated version to the English language original and the final Swedish version. No further changes were necessary. The final back-translated version was reviewed and approved by the scale developer.

Psychometric testing

A series of CFAs was conducted to test the factor structure, as the EBPAS has been thoroughly examined in previous studies using both exploratory and confirmatory methods [13, 14]. We tested three models, all specified a priori, using the entire sample for each model: 1) a four-factor model based on the suggested subscales of the inventory; 2) a higher-order model, with one General Attitudes factor on the level above the four subscales; and 3) a bifactor model measuring a General Attitudes factor defined to be unrelated to the sub-factors (this model was used to test for the unique contribution of the four scales to the general attitudes to EBP construct). The problem with first-order models is that they do not explicitly support the general factor, e.g. a sum score of all included scales [6]. In second-order models, each item loads on its specific factor, and all sub-factors load on a higher-order construct that accounts for the commonality between sub-factors. Bifactor models are an alternative to second-order models, with the advantage that it is possible to test for the unique contribution of the sub-factors. In bifactor models, individual items load on both a general factor and a specific factor. In other words, bifactor models test whether there is support for a specific factor after accounting for the general factor [38, 39].
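The three a priori models can be written compactly in lavaan-style model syntax (understood by R’s lavaan and Python’s semopy). The analyses in this study were run in MPLUS, so the following is an illustrative re-expression only, and the item-to-subscale assignments are assumptions inferred from the item numbers cited in the text:

```python
# First-order model: four correlated subscale factors (item assignments assumed).
first_order = """
Appeal       =~ i9 + i10 + i14 + i15
Requirements =~ i11 + i12 + i13
Openness     =~ i1 + i2 + i4 + i8
Divergence   =~ i3 + i5 + i6 + i7
"""

# Second-order model: the four subscale factors load on a higher-order
# General Attitudes factor.
second_order = first_order + """
General =~ Appeal + Requirements + Openness + Divergence
"""

# Bifactor model: every item loads on General AND on its specific factor,
# with General constrained to be orthogonal to the specific factors.
all_items = " + ".join(f"i{k}" for k in range(1, 16))
bifactor = first_order + f"""
General =~ {all_items}
General ~~ 0*Appeal
General ~~ 0*Requirements
General ~~ 0*Openness
General ~~ 0*Divergence
"""
```

In semopy, each string would be passed to `semopy.Model(...)` and fitted to the item-level data; the orthogonality constraints (`~~ 0*`) are what distinguish the bifactor specification from the second-order one.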


A total of 570 (62%) of the 925 outpatient practitioners working in the 11 CAMHS responded to the online survey. Of these, five were excluded; three because of missing data for all items, one because all responses were the same, and one for being a multivariate outlier (Mahalanobis distance with p < 0.001), leaving 565 participants for analysis. Two univariate outliers (extremely high z scores > 3.0) were replaced with the same value as their closest neighbors [40]. The typical participant was female (84%), 35–45 years old (28%) and a psychologist (38%).

Statistical analyses

Descriptive statistics, item-total correlations and internal consistency reliability (Cronbach’s α) were analyzed using SPSS (Version 24) [41]. CFA models were estimated using MPLUS 8 [42]. The weighted least squares mean- and variance-adjusted (WLSMV) estimator was used since the items were ordinal (Likert scale). Cases with missing data were included in the CFA because the WLSMV estimator permits their inclusion. Several model fit indices were used: the chi-squared index (χ2), the comparative fit index (CFI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR). As an alternative, we also estimated the models with robust maximum likelihood (MLR); under this estimation the RMSEA indicated much better fit. The RMSEA has been shown to be problematic in combination with WLSMV estimation [43]. For categorical models, Yu recommends values of at least .95 for the CFI, at most .05 for the RMSEA, and below .08 for the SRMR [44]. In addition, we evaluated the explained common variance (ECV) for the bifactor model to gauge the importance of the General Attitudes factor relative to the four subscale factors [6]. We also estimated the variance in the EBPAS scales attributable to the services using the intraclass correlation coefficient (ICC). All ICCs were low; the highest was .021 for the Requirements subscale, and the ICC for the total scale was .020. A sample of 300 is considered sufficient for conducting CFAs [40].


Descriptive item analysis

Table 1 presents the descriptive statistics for the individual items, subscales and total scale. Mean values were high for most of the positively phrased EBPAS items and generally lower for negatively worded items. Skewness was generally negative (J-shaped) for positively phrased items, but above 1.0 only for one item. Three items had a kurtosis value > 1. Item 15 (enough training) was extremely kurtotic (2.6) with a positive skew > 1.0. This item also had a very obvious ceiling effect with 54% of the ratings in the highest response category.

Table 1 EBPAS subscale and item means, standard deviations, Cronbach’s alpha and item-total correlation
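The item-level screening reported above (skewness, kurtosis, ceiling effects) can be reproduced with simple moment-based formulas. Note that SPSS reports small-sample-adjusted versions of skewness and kurtosis, so values may differ slightly from this unadjusted sketch (the helper names are illustrative):

```python
import numpy as np

def skew_kurtosis(x):
    """Moment-based sample skewness and excess kurtosis (unadjusted)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    m2, m3, m4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
    return m3 / m2**1.5, m4 / m2**2 - 3.0

def ceiling_share(x, top=4):
    """Share of ratings in the highest response category of a 0-4 Likert item."""
    return float((np.asarray(x) == top).mean())
```

A uniform spread of ratings across 0–4 gives a skewness of 0 and a negative excess kurtosis (flatter than normal); a value like the 54% ceiling share reported for item 15 would be flagged by `ceiling_share`.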


Internal consistency was .81 for the total scale and ranged from α = .60 to .88 for the subscales (Table 1). Item-total correlations ranged from .30 to .58 for the total scale and from .27 to .88 for the subscales. No improvements in Cronbach’s alphas occurred with removal of individual items.

Construct validity

The model fit indices are presented in Table 2, the factor inter-correlations in Table 3 and the factor loadings in Table 4. For the second-order and bifactor models, the factor loadings are also presented in Figs. 1 and 2, respectively.

Table 2 Model fit information for five alternative models of the EBPAS (N = 565)
Table 3 EBPAS domain correlations
Table 4 Standardized factor loadings from model results
Fig. 1

Second-Order Confirmatory factor analysis model. Standardized factor loadings for model 2b, n = 565, χ2 (86) =558.5, CFI = .973, RMSEA = .098, SRMR = 0.006. Estimation of residuals between Appeal subscale items is indicated by a double-headed arrow. All factor loadings are significant at the p < .001 level

Fig. 2

Bifactor Model Standardized factor loadings for model 3, n = 565, χ2 (75) =450.5, CFI = .978, RMSEA = .094, SRMR = 0.058. Estimation of residuals between Appeal subscale items is indicated by a double-headed arrow. All factor loadings are significant at the p < .001 level

First-order models

We tested the four-factor model first and found adequate fit (see Model 1a, Table 2), with the CFI and SRMR suggesting that most of the covariance was represented in this model. To further improve model fit, we fixed the loadings of items 11 (“Supervisor required”) and 12 (“Agency required”) on the Requirements factor to 1. The strongest inter-item correlation (r = .65) was between items #9 (“Intuitively appealing”) and #10 (“Making sense”) from the Appeal subscale, and adding this error correlation improved fit significantly (see Model 1b, Table 2). Factor inter-correlations were in the moderate range (.44). Loadings were generally high (above .5), with the exception of the loading of item #3 (“Know better than researcher”) on Divergence (.38) (see Tables 3 and 4).

Second-order models

A second-order model with a general factor above the four factors was tested next (see Table 2 and Fig. 1), both without (Model 2a) and with the correlated items in the Appeal subscale (Model 2b). Model 2b had two more degrees of freedom than Model 1b and is therefore more parsimonious (Table 2). The CFI and SRMR suggested a good fit for Model 2b, even if the fit for this model was not as good as for Model 1b. The standardized factor loadings for the subscales were similar to the factor loadings in the first-order model (see Table 4). The average loading on the General Attitudes factor was also strong (.68), ranging from .48 to .86 (see Table 4 and Fig. 1).

Overall, the modification indices were difficult to interpret. For example, they suggested additional correlation between Requirements and Appeal, but also between Openness and Divergence. Adding one of these correlations increased the model fit to a level comparable to Model 1b. However, as this was not an expected correlation, we refrained from adding it to the final model.

Bifactor model

The bifactor model converged when the error correlation between items 9 and 10 was added and an additional item (12), from the Requirements factor, was fixed to 1 (Model 3, Table 2). The item loadings on the General Attitudes factor and the four subscales were significant (p < 0.001) and generally moderate (~.5), ranging from .24 to .72 for the General Attitudes scale and from .18 to .87 for the subscales (see Table 4 and Fig. 2).

The explained common variance (ECV) for the General Attitudes factor was .46. The ECV for the sub-factors were: Requirement = .22; Appeal = .07; Openness = .13; and Divergence = .11. These ECVs suggest that a substantial proportion of the overall variance in the model was explained by the General Attitudes factor and the four sub-scale factors [45].
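The ECV values above follow the standard bifactor definition: each factor’s share of the total common variance, computed from squared standardized loadings. A minimal sketch (the loading values in a real computation would be taken from Table 4; the numbers below are placeholders):

```python
import numpy as np

def ecv(general_loadings, specific_loadings):
    """Explained common variance for a bifactor solution.

    general_loadings: each item's standardized loading on the general factor.
    specific_loadings: dict mapping subscale name -> that subscale's items'
    loadings on their specific factor.
    Returns each factor's share of the total common variance (shares sum to 1).
    """
    g2 = float(np.sum(np.square(general_loadings)))
    s2 = {name: float(np.sum(np.square(l)))
          for name, l in specific_loadings.items()}
    common = g2 + sum(s2.values())
    shares = {"general": g2 / common}
    shares.update({name: v / common for name, v in s2.items()})
    return shares
```

With this definition, a general factor that absorbs most of the squared loadings yields a high general ECV, which is the sense in which the .46 reported above supports interpreting the total score.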


The present study aimed to evaluate the psychometric properties of a Swedish translation of the Evidence-Based Practice Attitude Scale (EBPAS). Overall, the Swedish version showed acceptable levels of internal consistency for the scale overall (General Attitudes) and three of the four subscales. The factor structure of the Swedish version of the EBPAS closely mimicked the structure shown for the English language original, thereby suggesting that the translation from English to Swedish did not alter how the items were interpreted and rated by the participants. The present findings provide additional support for the validity of the EBPAS in child and adolescent mental health contexts broadly and in a Swedish context specifically. Confirmatory factor analysis in this large and representative sample, utilizing first-order, second-order and bifactor models, provided preliminary support for the proposed structure of a higher-order factor and at least three of the four domain-specific factors. The rather strong general factor supports the use of the EBPAS as a single measure of attitudes toward evidence-based practice. Most items contributed to this general factor as well as to their domain-specific factor. However, the Divergence subscale showed a somewhat weaker correlation with the General Attitudes factor.

With respect to reliability, the Swedish version was on par with the English original, with good internal consistency for the total scale and subscales, with the exception of Divergence [1, 13, 14]. One of the obvious problems with the Divergence scale is its low homogeneity and the implications for its use. Correlations between a scale with low reliability and some objective criterion (or outcome variable) will be attenuated, and statistical power will be decreased [46]. Another problem is that the items of the Divergence subscale are phrased in the opposite direction to the items of the other subscales, and items #5 (“Research-based not useful”) and #7 (“Won’t use manuals”) are negatively phrased. It has been suggested that self-rating scales should include items phrased in both positive and negative ways, but this may be difficult with the EBPAS. It is well known that such differences in phrasing are associated with lower correlations between items and subscales [47]. It is important to note that the Requirements scale has consistently demonstrated strong internal consistency but consists of only three items with similar phrasing, differing by a single word [1, 14]. Similarities in item wording can affect participant responses and the subsequent factor structure, and may inflate a scale’s reliability. Overall, the EBPAS is a reliable scale, and there are obvious benefits of briefer scales when carrying out research with already over-burdened health professionals. Nevertheless, we suggest that additional items be added to all of the subscales to improve their reliability.
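The attenuation problem noted above is captured by Spearman’s classic correction formula: the observed correlation equals the true correlation multiplied by the square root of the product of the two measures’ reliabilities. A one-line sketch:

```python
import math

def attenuated_r(true_r, rel_x, rel_y):
    """Expected observed correlation given the true correlation and the
    reliabilities of the two measures (Spearman's attenuation formula)."""
    return true_r * math.sqrt(rel_x * rel_y)
```

For example, with a Divergence reliability of about .60, even a true correlation of .50 with a perfectly reliable criterion would be observed as only about .39 (.50 × √.60), illustrating why low subscale reliability undermines predictive use.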

With respect to the factor structure, the factor loadings from the first-order model were highly similar to those found for the English language original, providing support for the validity for this Swedish version and for the four-factor model and subscale scoring more broadly [1, 13]. The Requirements subscale had the strongest loadings, followed by Openness, Appeal and Divergence.

The factor loadings in the second-order model were also similar to the English language original, but loadings on the General Attitudes factor were somewhat stronger, especially for the Divergence scale [14]. Item #3 (“Know better than researchers”) had the lowest loading in both this Swedish version and the English language original.

One of the problems with hierarchical models is that even if they are supported, they do not address whether there is unique systematic variance in the subscales [38]. Bifactor models allow a direct exploration of the extent to which subscales reflect a common target domain and the extent to which they reflect a primary subdomain. Comparing the bifactor models of the Dutch and Swedish versions of the EBPAS reveals similar loadings, but also some notable differences [32]. In line with the Dutch bifactor CFA model, the Requirements, Openness and Divergence subscales were clearly supported, but not the Appeal subscale [14]. In contrast to the Dutch study, all EBPAS items contributed to the General Attitudes factor, and the unique information in the Divergence subscale was better supported. This is the first study investigating the factor structure of the EBPAS to find support for the General Attitudes factor based on a bifactor model of the scale.

Aarons has suggested that the EBPAS can be used prior to implementation efforts that aim to target specific organizational and leadership activities to enhance buy-in, subsequent uptake and sustainment of an EBP program [14]. Our findings clearly support the use of the EBPAS total score for assessing general attitudes toward EBP, and the presence of multidimensionality does not handicap the ability to interpret the EBPAS as one scale. The total score includes more items than each of the four domain-specific subscales and therefore provides greater breadth in terms of content validity and increased reliability. Previous studies find stronger evidence of convergent validity for the EBPAS total score than for the subscales, but somewhat weaker evidence in relation to the total score’s predictive validity [15,16,17,18]. It is important to point out that in any study, a drawback of using a total score is that information is lost about the relation between the individual facets measured by the scale and a criterion variable [46]. In addition, if only some of the facets predict an expected outcome, the total scale will result in weaker predictions than the subscales themselves. If the different facets are related to an implementation outcome in opposite directions, this can also result in a misleading null finding. Our results suggest that the Requirements and Openness subscales, but possibly not the Appeal and Divergence subscales, can be scored and used as indicators of specific facets of attitudes towards EBP when a more precise and specific analysis is needed. However, implementation project planners and researchers should be aware that a strong relationship between the subscales and any implementation outcome may be due to the common variance (i.e., general attitude) measured by the subscale and not its unique contribution to the implementation outcome.
More importantly, until unique contribution is better supported by adding items to increase the reliability of the EBPAS subscales, we cannot recommend that the subscales be used as predictors of implementation outcomes independently of the total scale.

Additional studies are needed that examine the factor structure of the Swedish version of the EBPAS in different healthcare contexts (e.g., with adult mental health professionals or healthcare professionals more broadly). Likewise, further studies are needed that examine the relationship between the EBPAS (subscales and total scale) and different outcome variables, including the EBPAS’s ability to predict the adoption of, fidelity to, and sustainment of EBP in a Swedish context.

Nevertheless, the present results should be taken into consideration in any future revisions of the EBPAS. The original (English language) EBPAS has previously been modified, retaining the 15 original items and subscales and adding 35 items covering eight new domains of attitudes towards EBP [47]. This 50-item version has been shortened to 36 items, keeping the 12 domain-specific subscales but with one item removed from each of the four-item subscales, including items #9 (Appeal), #8 (Openness), and #3 (Divergence) [48]. The subscales in these revised versions have received preliminary support, but use of the total score as a single scale requires further validation [47,48,49]. Based on the results from the present and previous research with the 15-item EBPAS, we suggest a somewhat different approach to further revision, namely the development and evaluation of a briefer version: replacing the problematic item #3 from the Divergence scale, rewording items in the Requirements subscale to make them less similar, replacing items from the Appeal subscale that are too similar, and adding a few items to the four original domain-specific subscales to increase their reliability. In this way it may be possible to retain a relatively brief scale that is valid and reliable, and practical for use in routine care settings where clinicians often complain of being overburdened with forms to complete.

An issue of relevance to the further development of measures of EBP broadly, and the EBPAS specifically, concerns the content validity of the individual items. Conceptions of EBP have developed over time in the literature, and so what practitioners understand as the “behavioral” components of EBP, as well as attitudes towards it, is likely to change over time and vary between different healthcare contexts. In their effort to develop a theory- and data-driven approach to item development for a measure of EBP that could be used in an implementation context, Burgess et al. suggested that inclusion of items measuring the importance of clinical experience over EBPs, clinician openness to change, and problems with EBPs would increase the pragmatic utility of future measures [15].

Findings from the present study should be viewed within the context of certain methodological strengths and limitations. The sample represents front-line practitioners from a geographically diverse area covering more than a third of Sweden’s child and adolescent mental health services and more than half of the Swedish regions (counties). The sample’s characteristics were similar to available national data describing the child mental health service workforce, and the sample was sufficiently large for estimating internal reliability and carrying out confirmatory factor analyses [50]. We were unable to obtain data from non-respondents to the survey, which would have allowed us to examine potential bias. However, non-response is less likely to have influenced the inter-item correlations investigated here than the item mean levels.

The data originated from 11 different services, but the differences between them were rather small and are unlikely to have had any decisive influence on the estimates. Finally, the sample size was insufficient to permit creation of meaningful subgroups of participants for testing measurement invariance between groups. Invariance tests should be conducted in future validation studies of the Swedish EBPAS.


Conclusions

The present study provides support for the reliability and construct validity of the Swedish version of the EBPAS. The internal consistency coefficients for the subscales and the overall scale, and the observed factor structure of the Swedish version, were comparable to those reported for the English-language original and other language translations. The EBPAS total score can be used as a measure of global attitudes toward EBP in implementation project planning and research, but the subscales should only be used in conjunction with the total score. Further revision of the EBPAS is warranted in order to improve the reliability and validity of the subscales.

Availability of data and materials

The dataset supporting the conclusions of this article is available in the Halland Hospital Halmstad repository. The dataset used and/or analyzed during the current study is available from the corresponding author on reasonable request.



Abbreviations

EBP: Evidence-based practice

EBPAS: Evidence-Based Practice Attitude Scale

CAMHS: Child and adolescent mental health services

CFA: Confirmatory factor analysis

WLSMV: Weighted least squares, robust mean- and variance-adjusted

RMSEA: Root mean square error of approximation

CFI: Comparative fit index

TLI: Tucker-Lewis index

SRMR: Standardized root mean square residual

WRMR: Weighted root mean square residual


  1. Aarons GA. Mental health provider attitudes toward adoption of evidence-based practice: the evidence-based practice attitude scale (EBPAS). Ment Health Serv Res. 2004;6(2):61–74.


  2. Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50(2):179–211.


  3. Bandura A. Self-efficacy: toward a unifying theory of behavioral change. Psychol Rev. 1977;84(2):191–215.


  4. Rogers E. Diffusion of innovation. New York: Free Press; 1995.


  5. Swann WB Jr, Chang-Schneider C, Larsen McClarty K. Do people's self-views matter? Self-concept and self-esteem in everyday life. Am Psychol. 2007;62(2):84.


  6. Reise SP, Bonifay WE, Haviland MG. Scoring and modeling psychological measures in the presence of multidimensionality. J Pers Assess. 2013;95(2):129–40.


  7. Aarons GA, Glisson C, Hoagwood K, Kelleher K, Landsverk J, Cafri G. Psychometric properties and U.S. National norms of the Evidence-Based Practice Attitude Scale (EBPAS). Psychol Assess. 2010;22(2):356–65.

  8. Chaudoir SR, Dugan AG, Barr CHI. Measuring factors affecting implementation of health innovations: a systematic review of structural, organizational, provider, patient, and innovation level measures. Implement Sci. 2013;8:22.


  9. Rabin BA, Lewis CC, Norton WE, Neta G, Chambers D, Tobin JN, Brownson RC, Glasgow RE. Measurement resources for dissemination and implementation research in health. Implement Sci. 2016;11:42.


  10. Lewis CC, Fischer S, Weiner BJ, Stanick C, Kim M, Martinez RG. Outcomes for implementation science: an enhanced systematic review of instruments using evidence-based rating criteria. Implement Sci. 2015;10(1):155.


  11. Martinez RG, Lewis CC, Weiner BJ. Instrumentation issues in implementation science. Implement Sci. 2014;9(1):118.


  12. Lewis CC, Stanick CF, Martinez RG, Weiner BJ, Kim M, Barwick M, Comtois KA. The Society for Implementation Research Collaboration Instrument Review Project: a methodology to promote rigorous evaluation. Implement Sci. 2015;10:2.


  13. Aarons GA, McDonald EJ, Sheehan AK, Walrath-Greene CM. Confirmatory factor analysis of the evidence-based practice attitude scale in a geographically diverse sample of community mental health providers. Admin Pol Ment Health. 2007;34(5):465–9.


  14. Aarons GA, Glisson C, Hoagwood K, Kelleher K, Landsverk J, Cafri G. Psychometric properties and U.S. national norms of the evidence-based practice attitude scale (EBPAS). Psychol Assess. 2010;22(2):356–65.


  15. Burgess AM, Okamura KH, Izmirian SC, Higa-McMillan CK, Shimabukuro S, Nakamura BJ. Therapist attitudes towards evidence-based practice: a joint factor analysis. J Behav Health Serv Res. 2017;44(3):414–27.


  16. Nakamura BJ, Higa-McMillan CK, Okamura KH, Shimabukuro S. Knowledge of and attitudes towards evidence-based practices in community child mental health practitioners. Adm Policy Ment Health Ment Health Serv Res. 2011;38(4):287–300.


  17. Mah AC, Hill KA, Cicero DC, Nakamura BJ. A Psychometric Evaluation of the Intention Scale for Providers-Direct Items. J Behav Health Serv Res. 2020;47(2):245–63.

  18. Ashcraft RG, Foster SL, Lowery AE, Henggeler SW, Chapman JE, Rowland MD. Measuring practitioner attitudes toward evidence-based treatments: a validation study. J Child Adolesc Subst Abuse. 2011;20(2):166–83.


  19. Aarons GA, Glisson C, Green PD, Hoagwood K, Kelleher KJ, Landsverk JA, Weisz JR, Chorpita B, Gibbons R, Glisson C, et al. The organizational social context of mental health services and clinician attitudes toward evidence-based practice: a United States national study. Implement Sci. 2012;7:56.


  20. Powell BJ, Mandell DS, Hadley TR, Rubin RM, Evans AC, Hurford MO, Beidas RS. Are general and strategic measures of organizational context and leadership associated with knowledge and attitudes toward evidence-based practices in public behavioral health settings? A cross-sectional observational study. Implement Sci. 2017;12(1):64.


  21. Saldana L, Chapman JE, Henggeler SW, Rowland MD. The organizational readiness for change scale in adolescent programs: criterion validity. J Subst Abus Treat. 2007;33(2):159–69.


  22. Aarons GA, Green AE, Miller E. Researching readiness for implementation of evidence-based practice: a comprehensive review of the evidence-based practice attitude scale (EBPAS). In: Kelly B, Perkins DF, editors. Handbook of implementation science for psychology in education. Cambridge and New York: Cambridge University Press; 2012. p. 150.


  23. Becker EM, Smith AM, Jensen-Doss A. Who's using treatment manuals? A national survey of practicing therapists. Behav Res Ther. 2013;51(10):706–10.


  24. Beidas R, Skriner L, Adams D, Wolk CB, Stewart RE, Becker-Haimes E, Williams N, Maddox B, Rubin R, Weaver S, et al. The relationship between consumer, clinician, and organizational characteristics and use of evidence-based and non-evidence-based therapy strategies in a public mental health system. Behav Res Ther. 2017;99:1–10.


  25. Wiltsey Stirman S, Gutner CA, Crits-Christoph P, Edmunds J, Evans AC, Beidas RS. Relationships between clinician-level attributes and fidelity-consistent and fidelity-inconsistent modifications to an evidence-based psychotherapy. Implement Sci. 2015;10:115.


  26. Becker-Haimes EM, Okamura KH, Wolk CB, Rubin R, Evans AC, Beidas RS. Predictors of clinician use of exposure therapy in community mental health settings. J Anxiety Disord. 2017;49:88–94.


  27. Beidas RS, Edmunds J, Ditty M, Watkins J, Walsh L, Marcus S, Kendall P. Are inner context factors related to implementation outcomes in cognitive-behavioral therapy for youth anxiety? Admin Pol Ment Health. 2014;41(6):788–99.


  28. Beidas RS, Marcus S, Wolk CB, Powell B, Aarons GA, Evans AC, Hurford MO, Hadley T, Adams DR, Walsh LM, et al. A prospective examination of clinician and supervisor turnover within the context of implementation of evidence-based practices in a publicly-funded mental health system. Admin Pol Ment Health. 2016;43(5):640–9.


  29. Lau AS, Lind T, Crawley M, Rodriguez A, Smith A, Brookman-Frazee L. When do therapists stop using evidence-based practices? Findings from a mixed method study on system-driven implementation of multiple EBPs for children. Admin Pol Ment Health. 2020;47(2):323–37.


  30. Wolf DAPS, Dulmus CN, Maguin E, Fava N. Refining the evidence-based practice attitude scale: an alternative confirmatory factor analysis. Soc Work Res. 2014;38(1):47–58.


  31. Keyser D, Harrington D, Ahn H. A confirmatory factor analysis of the evidence-based practice attitudes scale in child welfare. Child Youth Serv Rev. 2016;69:158–65.


  32. van Sonsbeek MA, Hutschemaekers GJ, Veerman JW, Kleinjan M, Aarons GA, Tiemens BG. Psychometric properties of the Dutch version of the evidence-based practice attitude scale (EBPAS). Health Res Policy Syst. 2015;13:69.


  33. Melas CD, Zampetakis LA, Dimopoulou A, Moustakis V. Evaluating the properties of the evidence-based practice attitude scale (EBPAS) in health care. Psychol Assess. 2012;24(4):867–76.


  34. Cook CR, Davis C, Brown EC, Locke J, Ehrhart MG, Aarons GA, Larson M, Lyon AR. Confirmatory factor analysis of the evidence-based practice attitudes scale with school-based behavioral health consultants. Implement Sci. 2018;13(1):116.


  35. Ringle JL, James S, Ross JR, Thompson RW. Measuring youth residential care provider attitudes: a confirmatory factor analysis of the evidence-based practice attitude scale. 2017.

  36. Egeland KM, Ruud T, Ogden T, Lindstrom JC, Heiervang KS. Psychometric properties of the Norwegian version of the evidence-based practice attitude scale (EBPAS): to measure implementation readiness. Health Res Policy Syst. 2016;14(1):47.


  37. Sousa VD, Rojjanasrirat W. Translation, adaptation and validation of instruments or scales for use in cross-cultural health care research: a clear and user-friendly guideline. J Eval Clin Pract. 2011;17(2):268–74.


  38. Reise SP. The rediscovery of bifactor measurement models. Multivar Behav Res. 2012;47(5):667–96.


  39. Chen FF, West SG, Sousa KH. A comparison of bifactor and second-order models of quality of life. Multivar Behav Res. 2006;41(2):189–225.


  40. Tabachnick BG, Fidell LS, Ullman JB. Using multivariate statistics. 5th ed. Boston, MA: Pearson; 2007.

  41. IBM Corp. Released 2016. IBM SPSS Statistics for Windows. Armonk, NY: IBM Corp; 2016.


  42. Muthén LK, Muthén BO. Mplus user’s guide: the comprehensive modeling program for applied researchers. 7th ed. Los Angeles, CA: Muthén & Muthén; 1998–2015.

  43. Sass DA, Schmitt TA, Marsh HW. Evaluating model fit with ordered categorical data within a measurement invariance framework: a comparison of estimators. Struct Equ Model Multidiscip J. 2014;21(2):167–80.


  44. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model Multidiscip J. 1999;6(1):1–55.


  45. Rodriguez A, Reise SP, Haviland MG. Evaluating bifactor models: calculating and interpreting statistical indices. Psychol Methods. 2016;21(2):137.


  46. Chen FF, Hayes A, Carver CS, Laurenceau JP, Zhang Z. Modeling general and specific variance in multifaceted constructs: a comparison of the bifactor model to other approaches. J Pers. 2012;80(1):219–51.


  47. Irwing P, Booth T, Hughes DJ, editors. The Wiley handbook of psychometric testing: a multidisciplinary reference on survey, scale, and test development. 1st ed. Hoboken, NJ; Chichester, West Sussex: John Wiley & Sons; 2018.


  48. Rye M, Torres EM, Friborg O, Skre I, Aarons GA. The Evidence-based Practice Attitude Scale-36 (EBPAS-36): a brief and pragmatic measure of attitudes to evidence-based practice validated in US and Norwegian samples. Implement Sci. 2017;12(1):44.

  49. Rye M, Friborg O, Skre I. Attitudes of mental health providers towards adoption of evidence-based interventions: relationship to workplace, staff roles and social and psychological factors at work. BMC Health Serv Res. 2019;19(1):110.

  50. Uppdrag Psykisk Hälsa. Kartläggning: Barn- och ungdomspsykiatrin 2016 [Survey of child and adolescent psychiatry 2016]. SALAR, Swedish Association of Local Authorities and Regions (Sveriges Kommuner och Landsting); 2017.



Acknowledgements

Not applicable.


Funding

This study was funded by grants from the Halland County Council. Open Access funding provided by Lund University.

Author information

Authors and Affiliations



Contributions

HJ is the principal investigator, responsible for the design of the overall project, including this validation study of the Swedish version of the EBPAS. AS and MB conceptualized the specific research questions and the analytic approach for this manuscript. AS, HJ, RH and SP were responsible for the translation of the EBPAS into Swedish. Analyses were conducted by AS and MB. AS wrote the first draft of the manuscript; all authors (AS, HJ, RH, SP and MB) edited and revised the manuscript and provided critical commentary. All authors (AS, MB, RH, SP and HJ) approved the final manuscript.

Corresponding author

Correspondence to Anna Helena Elisabeth Santesson.

Ethics declarations

Ethics approval and consent to participate

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. The study was approved by the Regional Ethical Review Board in Umeå, Department for Medical Research (Regionala etikprövningsnämnden i Umeå, avdelningen för medicinsk forskning); EPN 2015/186–31 and EPN 2016/502–32. In Sweden, the regional ethical review boards were part of the medical faculty of the regional university until 2004, and between 2004 and 2019 they were independent authorities. Since 1 January 2019, applications for ethical review of research are examined by the new Swedish Ethical Review Authority.

Informed consent was obtained from all individual participants included in the study. The respondents were informed about the research project, and completion of the web-based survey was accepted as consent.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Results of prior confirmatory factor analyses: Aarons’ previous factor analytic results and van Sonsbeek’s bifactor model results (sample size, model fit and factor loadings).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Santesson, A.H.E., Bäckström, M., Holmberg, R. et al. Confirmatory factor analysis of the Evidence-Based Practice Attitude Scale (EBPAS) in a large and representative Swedish sample: is the use of the total scale and subscale scores justified? BMC Med Res Methodol 20, 254 (2020).
