Advancing the argument for validity of the Alberta Context Tool with healthcare aides in residential long-term care
© Estabrooks et al; licensee BioMed Central Ltd. 2011
Received: 31 January 2011
Accepted: 18 July 2011
Published: 18 July 2011
Organizational context has the potential to influence the use of new knowledge. However, despite advances in understanding the theoretical base of organizational context, its measurement has not been adequately addressed, limiting our ability to quantify and assess context in healthcare settings and thus to advance the development of contextual interventions to improve patient care. We developed the Alberta Context Tool (the ACT) to address this concern. It consists of 58 items representing 10 modifiable contextual concepts. We reported the initial validation of the ACT in 2009. This paper presents the second stage of the psychometric validation of the ACT.
We used the Standards for Educational and Psychological Testing to frame our validity assessment. Data from 645 English-speaking healthcare aides from 25 urban residential long-term care facilities (nursing homes) in the three Canadian Prairie Provinces were used for this stage of validation. In this stage we focused on: (1) advanced aspects of internal structure (e.g., confirmatory factor analysis) and (2) relations to other variables validity evidence. To assess the reliability and validity of scores obtained using the ACT, we computed Cronbach's alpha and conducted confirmatory factor analysis, analysis of variance, and tests of association. We also assessed the performance of the ACT when individual responses were aggregated to the care unit level, because the instrument was developed to obtain unit-level scores of context.
Item-total correlations exceeded acceptable standards (> 0.3) for the majority of items (51 of 58). We ran three confirmatory factor models. Model 1 (all ACT items) displayed unacceptable fit overall and for five specific items (1 item on adequate space for resident care in the Organizational Slack-Space ACT concept and 4 items on use of electronic resources in the Structural and Electronic Resources ACT concept). This prompted specification of two additional models. Model 2 used the 7 scaled ACT concepts while Model 3 used the 3 count-based ACT concepts. Both models displayed substantially improved fit in comparison to Model 1. Cronbach's alpha for the 10 ACT concepts ranged from 0.37 to 0.92 with 2 concepts performing below the commonly accepted standard of 0.70. Bivariate associations between the ACT concepts and instrumental research utilization levels (which the ACT should predict) were statistically significant at the 5% level for 8 of the 10 ACT concepts. The majority (8/10) of the ACT concepts also showed a statistically significant trend of increasing mean scores when arrayed across the lowest to the highest levels of instrumental research use.
The validation process in this study demonstrated additional empirical support for construct validity of the ACT, when completed by healthcare aides in nursing homes. The overall pattern of the data was consistent with the structure hypothesized in the development of the ACT and supports the ACT as an appropriate measure for assessing organizational context in nursing homes. Caution should be applied in using the one space and four electronic resource items that displayed misfit in this study with healthcare aides until further assessments are made.
Organizational context refers to "...the environment or setting in which people receive healthcare services, or in the context of getting research evidence into practice, the environment or setting in which the proposed change is to be implemented" (page 299). Health services researchers are increasingly aware of the central role that organizational context plays in knowledge translation (the uptake of research evidence) by healthcare providers, and the potential role of context in improving patient, staff, and system outcomes. As a result, a growing body of knowledge on organizational context that crosses multiple disciplines and sectors is emerging [2–9]. Despite the advances in understanding the theoretical base of organizational context, its measurement has not been adequately addressed. This limits our ability to quantify and assess context in healthcare settings and thereby hinders the development and assessment of context-based interventions designed to improve patient care, and staff and system outcomes. The Alberta Context Tool (the ACT) was developed in 2006 to address this concern.
The ACT measures organizational context in complex healthcare settings by assessing care providers' and/or care managers' perceptions of context related to a specific patient/resident care unit or organization (e.g., hospital or nursing home). The instrument is premised on knowledge translation theory, specifically: (1) the Promoting Action on Research Implementation in Health Services (PARiHS) framework of research implementation, which asserts that successful implementation of research evidence is a function of the interplay between three factors: context, facilitation, and evidence [11, 12]; and (2) related literature in the fields of organizational science, research implementation, and knowledge translation [4, 5, 13]. Principles that informed the development of the ACT included brevity (it could be completed in 10 minutes or less) and a focus on potentially modifiable elements of context. Further details on the development of the ACT are published elsewhere.
Concepts in the ACT Survey
The actions of formal leaders in an organization (unit) to influence change and excellence in practice; items generally reflect emotionally intelligent leadership
H1: Care providers who perceive more positive (emotionally intelligent) unit leadership report higher research use
The leader calmly handles stressful situations
The way that "we do things" in our organizations and work units; items generally reflect a supportive work culture
H2: Care providers who perceive a more positive unit culture report higher research use
My organization effectively balances best practice and productivity
The process of using data to assess group/team performance and to achieve outcomes in organizations or units (i.e., evaluation)
H3: Care providers who perceive a larger number of unit feedback mechanisms report higher research use
Our team routinely monitors our performance with respect to the action plans
Social Capital
The stock of active connections among people. These connections are of three types: bonding, bridging, and linking
H4: Care providers who perceive more positive unit social capital activities report higher research use
People in the group share information with others in the group
Informal Interactions
Informal exchanges that occur between individuals working within an organization (unit) that can promote the transfer of knowledge
H5: Care providers who perceive a larger number of informal unit interactions report higher research use
How often do you interact with people in the following roles or positions?
- Someone who champions research and its use in practice
Formal exchanges that occur between individuals working within an organization (unit) through scheduled activities that can promote the transfer of knowledge
H6: Care providers who perceive a larger number of formal unit interactions report higher research use
How often do these activities occur?
Structural/Electronic Resources
The structural and electronic elements of an organization (unit) that facilitate the ability to assess and use knowledge
H7: Care providers who perceive a larger number of unit structural and electronic resources report higher research use
How often do you use/attend the following?
- Notice Boards
The cushion of actual or potential resources which allows an organization (unit) to adapt successfully to internal pressures for adjustments or to external pressures for changes
H8: Care providers who perceive sufficient unit staffing levels report higher research use
Enough staff to deliver quality care
H9: Care providers who perceive having sufficient space on their unit report higher research use
Use of designated space
H10: Care providers who perceive having sufficient time on their unit report higher research use
Time to do something extra for patients
Initial validation of the ACT was conducted on scores obtained using the 56-item instrument in a national, multi-site study of pediatric nurse professionals (N = 752 responses). In that study, a principal components analysis indicated a 13-factor solution (accounting for 59.26% of the variance and covariance in 'organizational context'). Initial construct validity was further supported by statistically significant correlations between the ACT factors and instrumental research utilization (i.e., the concrete application of research findings in practice, for example, use of guidelines). Adequate internal consistency reliability, with Cronbach's alpha coefficients ranging from 0.54 to 0.91 for the 13 factors, was also reported. The purpose of the present study is to advance a validity argument for the ACT by assessing the reliability, acceptability, and validity of scores obtained with the instrument when completed by a somewhat different population, namely healthcare aides in residential long-term care settings (nursing homes).
The data analyzed in this paper are from the Translating Research in Elder Care (TREC) study. TREC is a multi-level longitudinal descriptive study aimed at identifying modifiable characteristics of organizational context in nursing homes that are associated with the uptake of research evidence by care providers and care managers, and the subsequent impact of this uptake on resident health (e.g., number of falls) and staff outcomes (e.g., burnout). TREC is situated in 36 nursing homes in the three Canadian Prairie Provinces of Alberta, Saskatchewan, and Manitoba, and is comprised of two interrelated projects and a series of pilot studies. The two major projects are: (1) TREC Project One - Building context, an organizational monitoring system in long-term care, and (2) TREC Project Two - Building context, a case study program in long-term care. Analyses in this paper utilize data from TREC Project One.
We drew two nursing home samples. The first sample consisted of 30 urban nursing homes, selected using stratified random sampling, and the second of six rural nursing homes. All urban nursing homes meeting the TREC inclusion criteria (see Additional File 1) were stratified according to three factors: (1) healthcare region (within province), (2) operational model (public, private, voluntary), and (3) size (small: 35 to 149 beds; large: ≥ 150 beds), producing six lists of eligible nursing homes per region, from which the 30 homes were randomly drawn. The analyses presented here use data from 25 of the 30 urban nursing homes. We excluded the six rural nursing homes in the sample (which were a convenience sample) because of urban-rural differences in context (as assessed by the ACT) and smaller facility size. In addition, the rural nursing homes tended to have only one unit. We also excluded five urban nursing homes that had only one unit, as more than one unit is required to run the aggregation statistics reported here. The team used a volunteer, census-like sampling technique to recruit individual participants within the nursing homes.
We collected data in TREC Project One at three levels: (1) facility (nursing home), (2) unit, and (3) individual (care providers, care managers, and residents). Facility- and unit-level structural data were collected from facility administrators and care managers respectively using standardized profile forms developed for the TREC study. Individual resident-level data came from the Resident Assessment Instrument-Minimum Data Set Version 2.0 (RAI-MDS 2.0) administrative databases. We collected individual data from healthcare aides, nurses, physicians, allied health providers, practice specialists, and care managers, using the TREC survey which contains the ACT instrument as its first component. The TREC survey also contains components that measure: organizational context, knowledge translation (defined as uptake of research evidence or best practices), and staff outcomes (e.g., burnout, job satisfaction). We invited all individuals in the identified respondent groups who met the TREC study inclusion criteria (see Additional File 1) and who could be contacted to participate by completing the TREC survey. Research assistants administered the survey to healthcare aides (the dominant direct care provider group in Canadian nursing homes) using computer-assisted, structured personal interviews. The remaining staff groups completed the survey online. The core of the survey is the Alberta Context Tool (ACT); we used data from individual healthcare aides in the analyses reported here.
Ethical approvals for the TREC study were obtained from the appropriate universities in the respective Canadian Prairie Provinces (University of Alberta Health Research Ethics Board, University of Calgary Conjoint Health Research Ethics Board, University of Saskatchewan Behavioural Research Ethics Board, University of Manitoba Fort Garry Campus Research Ethics Board). Operational approvals were obtained from all relevant healthcare organizations.
To assess the reliability of individual scores obtained from the healthcare aides, we calculated Cronbach's alpha for each concept contained in the ACT. Coefficients can range from 0 to 1; a coefficient of 0.70 is considered acceptable for newly developed scales, and 0.80 or higher is preferred [17, 18].
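The alpha computation itself is straightforward to reproduce. A minimal sketch follows; the response matrix is illustrative only, not TREC data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the concept
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative 5-point Likert responses (rows = respondents, cols = items)
scores = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [4, 5, 4, 4],
])
alpha = cronbach_alpha(scores)  # -> 0.925 for this illustrative matrix
```

Because alpha rises with the number of items and their intercorrelations, short, deliberately non-redundant scales (such as the count-based ACT concepts discussed later) can be expected to yield lower coefficients.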
We assessed acceptability of the ACT with the healthcare aides in our sample by evaluating: (1) missing response rates for all ACT items combined, and (2) the average length of time it took to complete the ACT portion of the TREC survey.
Our approach to assessing validity builds on the perspective of construct validity outlined by Cronbach and Meehl, which has been incorporated into the Standards for Educational and Psychological Testing (the Standards). Its use is considered best practice in psychometrics. Using this approach, validation is a process that involves accumulating evidence to provide a strong scientific basis for proposed score interpretations. Evidence for validity in the Standards comes from four sources: (1) content-the extent to which items represent the content domain of the concept of interest; (2) response processes-how respondents interpret, process, and elaborate on item content and whether this is in accordance with the concept; (3) internal structure-associations among items and whether the data support the relevant dimensionality; and (4) relations to other variables-the nature and extent of the relationships between scores obtained for the concept and other variables to which it is/is not expected to relate. In previous research we established: (1) content validity of the ACT [10, 22], (2) response processes evidence [10, 22, 23], and (3) early internal structure (principal components analysis) evidence in different sectors [10, 22, 23], including the nursing home sector. In this paper we focused on: validity evidence type 3 - advanced aspects of internal structure, and validity evidence type 4 - relations to other variables, when the ACT is completed by healthcare aides in nursing homes.
We examined the internal structure of the ACT concepts using: (1) item-total statistics (using PASW Version 18.0) and (2) confirmatory factor analysis (CFA) (using LISREL). From the item-total statistics, we considered items for further assessment if: (1) they correlated with the total scale (concept) score below 0.3, and (2) they caused a substantial rise or fall in Cronbach's alpha for the concept if removed [17, 26]. We used a confirmatory approach to factor analysis to validate the latent structure of the ACT, which was refined in our previous work conducted in the pediatric setting. The items included under each ACT conceptual dimension were designed to tap similar yet explicitly non-redundant contextual features; hence, the factor-structured models traditionally employed to assess internal structure are not precisely correct, though the similarity of items within the ACT conceptual dimensions renders the factor structure the most appropriate of the available model structures. We ran three factor models. Model 1 comprised all ACT items, the structure of which had been refined in our previous work in the pediatric setting. When Model 1 failed to function as anticipated, we investigated in more detail by setting up separate factor-structured models for the 7 scaled ACT concepts (Model 2) and the 3 non-scaled, count-based ACT concepts (Model 3).
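The item-screening rule above (flagging items whose corrected item-total correlation falls below 0.3) can be sketched as follows. "Corrected" means each item is correlated with the total of the remaining items rather than a total that includes itself; the response data are illustrative only:

```python
import numpy as np

def corrected_item_total(items):
    """Corrected item-total correlation for each column of an
    (n_respondents x n_items) matrix: each item is correlated with the
    total score of the *remaining* items (the item itself excluded)."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    out = []
    for j in range(items.shape[1]):
        rest = total - items[:, j]   # total score without item j
        out.append(np.corrcoef(items[:, j], rest)[0, 1])
    return np.array(out)

# Illustrative data: the fifth item is unrelated to the other four
scores = np.array([
    [4, 4, 5, 4, 2],
    [2, 3, 2, 2, 5],
    [5, 5, 4, 5, 3],
    [3, 3, 3, 4, 2],
    [4, 4, 4, 5, 5],
    [3, 2, 3, 3, 3],
])
r = corrected_item_total(scores)
flagged = np.where(r < 0.3)[0]   # items considered for further assessment
```

For this illustrative matrix only the fifth item is flagged; in practice a flagged item is then checked against the alpha-after-deletion criterion before any decision is made.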
Recent discussions on structural equation model testing [27, 28] argue that the χ² statistic is the only reliable test of model fit, and question the use of commonly accepted fit indices such as the root mean square error of approximation (RMSEA), the standardized root mean squared residual (SRMSR), and the comparative fit index (CFI). While we tend to agree with the critiques of the fit indices, we are hesitant to entirely disregard them due to their previous common use (e.g., [29–31]). Consequently, we report the χ² test of model-data fit and the fit indices indicated above, though we are mindful that none of these are definitive for our current analyses given the intentional inclusion of non-redundant items within each ACT concept.
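Of the indices named above, the RMSEA can be recovered directly from the χ² statistic, its degrees of freedom, and the sample size. A minimal sketch with illustrative numbers (not values from this study):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation from the model chi-square.

    RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    Values near 0 indicate close fit; <= 0.05-0.08 is a common rule of thumb.
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Illustrative inputs: chi-square of 120 on 60 df with n = 645 respondents
value = rmsea(120.0, 60, 645)
```

Because the index is driven by the excess of χ² over its degrees of freedom, a model whose χ² does not exceed df returns an RMSEA of exactly zero.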
Relations to Other Variables
We assessed relations to other variables validity by providing the bivariate associations (Pearson's correlation coefficient) between the 10 ACT concepts and instrumental research utilization (which the ACT should predict). To permit correlation of the 10 ACT concepts with instrumental research use, we created a single score for each ACT dimension: for the scaled concepts (leadership, culture, evaluation, social capital, organizational slack-staff, organizational slack-time, organizational slack-space) we averaged the relevant items; for the count-based concepts (informal interactions, formal interactions, structural and electronic resources) we recoded the items as existing or non-existing and then summed the number existing. As a second (related but more detailed) test of relations to other variables validity, we examined whether the mean values for each ACT concept increased with increasing levels of instrumental research utilization, and we assessed the mean differences for statistical significance using one-way analysis of variance (ANOVA).
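The two scoring rules and the trend test can be sketched as follows; the scores, group sizes, and the existence threshold used to recode count-based items are illustrative assumptions, not the TREC coding:

```python
import numpy as np

def scaled_concept_score(item_scores):
    """Scaled ACT concepts (e.g., leadership, culture): average the items."""
    return np.mean(item_scores, axis=1)

def count_concept_score(item_scores, exists_threshold=2):
    """Count-based ACT concepts (e.g., formal interactions): recode each
    item as existing/non-existing, then count the existing ones.
    The recoding threshold here is an illustrative assumption."""
    return np.sum(np.asarray(item_scores) >= exists_threshold, axis=1)

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of 1-D score arrays."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)  # between
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)       # within
    df_b = len(groups) - 1
    df_w = len(all_vals) - len(groups)
    return (ssb / df_b) / (ssw / df_w)

# Illustrative: concept scores grouped by reported research-use level
groups = [np.array([2.0, 2.5, 2.2]),
          np.array([3.0, 3.4, 2.9]),
          np.array([3.8, 4.1, 4.0])]
f = one_way_anova_f(groups)
```

A large F here corresponds to the pattern reported below: concept means that rise systematically across increasing levels of instrumental research use.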
Instrumental research utilization refers to a direct and concrete use of research evidence in practice (e.g., use of guidelines). In the TREC survey we defined instrumental research use as 'use of best practices' for the healthcare aides, and measured it with a single item scored on a 5-point frequency scale from 1 (never use) to 5 (use almost always). In a recent systematic review of the psychometric properties of self-report research utilization instruments, Squires et al reported that this specific measure of instrumental research utilization has been used in eight published studies (reported in 10 articles) with professional nurses (n = 8 articles, [33–40]), healthcare aides (n = 1 article), and allied professionals (n = 1 article) across a variety of healthcare settings. Validity evidence from all three applicable sources of validity (content, response processes, and relations to other variables) outlined in the Standards for Educational and Psychological Testing was reported in one or more of these 10 articles. In addition to this validity evidence from past studies, we also pre-tested the instrumental research utilization item alongside the ACT before using it in the larger TREC study reported in this paper. The sample for the pre-test included 73 healthcare aides and 18 licensed practical nurses from two nursing home units in one Canadian province.
Intraclass correlation ICC(1) is a measure of agreement about the group mean. It is calculated as follows: (BMS - WMS)/(BMS + [K - 1] WMS), where BMS is the between-group mean square, WMS is the within-group mean square, and K is the number of subjects per group. The average K for unequal group sizes was calculated as K = (1/[N - 1]) (∑K - [∑K²/∑K]), where N is the number of groups. Values greater than 0.00 indicate some degree of agreement among group members; values greater than 0.10 indicate strong agreement.
Intraclass correlation ICC(2) is a measure of reliability. It is calculated as follows: (BMS - WMS)/BMS. Aggregated data are considered reliable when the ICC(2) is greater than 0.60 and/or the F value from the ANOVA is significant.
η² is a measure of validity; it is an indicator of effect size and refers to the proportion of variation in the concept accounted for by group membership. It is calculated as follows: SSB/SST, where SSB is the sum of squares between groups and SST is the sum of squares total.
ω² is a measure of validity; it measures the relative strength of the aggregated data (or score) at the group level and indicates how much information is carried up from the individual level to the group level when the data (or scores) are aggregated. It is calculated as follows: (SSB - [N - 1] WMS)/(SST + WMS), where N is the number of groups.
Larger values of η² and ω² indicate stronger validity of the aggregated data.
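All four aggregation statistics defined above derive from the same one-way ANOVA decomposition of individual scores nested in units. A minimal sketch (the unit data are illustrative, not TREC data):

```python
import numpy as np

def aggregation_statistics(groups):
    """ICC(1), ICC(2), eta-squared, and omega-squared for individual scores
    nested in groups (units), following the formulas given above.
    `groups` is a list of 1-D arrays, one array per unit."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_groups = len(groups)
    sizes = np.array([len(g) for g in groups], dtype=float)
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()

    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    sst = ssb + ssw
    bms = ssb / (n_groups - 1)              # between-group mean square
    wms = ssw / (len(all_vals) - n_groups)  # within-group mean square

    # Average group size for unequal groups:
    # K = (1/(N-1)) * (sum(K_j) - sum(K_j^2)/sum(K_j)), N = number of groups
    k = (1.0 / (n_groups - 1)) * (sizes.sum() - (sizes ** 2).sum() / sizes.sum())

    icc1 = (bms - wms) / (bms + (k - 1) * wms)
    icc2 = (bms - wms) / bms
    eta2 = ssb / sst
    omega2 = (ssb - (n_groups - 1) * wms) / (sst + wms)
    return icc1, icc2, eta2, omega2

# Illustrative unit-level data (three units of unequal size)
units = [np.array([4.0, 4.2, 3.9]),
         np.array([2.1, 2.4, 2.0, 2.3]),
         np.array([3.0, 3.3, 3.1])]
icc1, icc2, eta2, omega2 = aggregation_statistics(units)
```

In this deliberately well-separated example all four statistics are large, the pattern that would justify aggregating individual responses to the unit level.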
Characteristics of the Healthcare Aide Sample (n = 645): age (categories from < 20 years to > 70 years), shift worked most of the time, number of years worked as a healthcare aide (HCA), number of years worked on the unit, and hours worked in a 2-week period.
Item Characteristics (n = 645): ranges of corrected item-total correlations and item-total statistics (alpha after an item deleted), by ACT concept
- Leadership: 0.565 - 0.700; 0.82 - 0.85
- Culture: 0.370 - 0.516; 0.66 - 0.71
- Evaluation: 0.213 - 0.641; 0.66 - 0.78
- Social Capital: 0.357 - 0.520; 0.66 - 0.71
- Informal Interactions: 0.260 - 0.554; 0.68 - 0.74
- Formal Interactions: 0.092 - 0.308; 0.23 - 0.43
- Structural and Electronic Resources: 0.126 - 0.469; 0.65 - 0.72
- Organizational Slack - Staff: 0.788 - 0.877; 0.85 - 0.92
- Organizational Slack - Space: 0.134 - 0.680; 0.21 - 0.87
- Organizational Slack - Time: 0.581 - 0.696; 0.77 - 0.82
We determined acceptability by assessments of: (1) missing values on the ACT items, and (2) time to complete the survey. The percentage of healthcare aides providing complete data on all 58 ACT items (i.e., with no missing data) was high at 93.5% (n = 603 of 645 healthcare aides). The mean time for completion of the ACT section of the TREC survey in the sample reported in this paper was 11.08 minutes (standard deviation: 2.93 minutes), close to our goal of 10 minutes. Combined, these findings support the ACT as an acceptable instrument for health services researchers wishing to obtain quantitative measurement of organizational context in nursing homes.
Item Total Correlations and Statistics
The ranges of corrected item-total correlations and item-total statistics, along with the means (and standard deviations) for each ACT concept, are displayed in Table 3. Most (51 of 58) corrected item-total correlations were greater than the predetermined cut-off of 0.3, indicating that, in general, item scores within each concept were related to the overall score for that concept. The seven items that did not meet this minimal cut-off represented five ACT concepts: (1) evaluation (item discuss data informally, 0.213); (2) informal interactions (item hallway talk, 0.260); (3) formal interactions (item change of shift report, 0.092; item team meetings, 0.257); (4) structural and electronic resources (item use of a computer hooked to the internet, 0.264; item attending in-services, 0.126); and (5) organizational slack-space (item adequate space for resident care, 0.134). Item-total statistics (alpha after item deletion) for each concept remained relatively unchanged, with one exception: if the item adequate space for resident care (organizational slack-space concept) was deleted, alpha for that concept increased substantially from 0.64 to 0.87. Based on the item analysis summarized above, we retained all 58 ACT items for entry into the initial factor model (Model 1).
Confirmatory Factor Analysis
Model Fit Statistics (Degrees of Freedom) for Model 1 (All ACT Concepts), Model 2 (ACT Scale Concepts), and Model 3 (ACT Non-Scale Concepts)
Completely Standardized Factor Loadings for ACT Concepts (Models 2 and 3)
ACT Scaled Concepts
Looks for feedback
Focuses on successes
Calmly handles stress
Listens, acknowledges, responds
Actively mentors and coaches
Control over work
Clear on what patients want
Supportive work group
Routinely receive information
Discusses data informally
Formulates action plans
Monitors our performance
Compares our performance
Share information with others
Group participation is valued
Information is shared
Aim is to help others
Observations are taken seriously
Comfortable talking to those in authority
Organizational Slack - Staff
Get the necessary work done
Deliver best possible care
Have best day
Organizational Slack - Space
Use of private space
Organizational Slack - Time
Do something extra for patients
Look something up
Talk about best practices
Talk to someone about care plan
ACT Non-Scaled (Count-Based) Concepts
Other healthcare aides
Other healthcare providers
Someone who brings new ideas
Informal bedside teaching
Policies and procedures
Clinical practice guidelines
Relations to Other Variables
The correlations among the latent factors in Models 2 and 3 provide evidence that the variables corresponding to the various ACT concepts are functioning appropriately. The 10 ACT concepts are supposed to be distinct or non-redundant and hence should not correlate too highly with one another, though it is reasonable to presume that these dimensions might be somewhat coordinated due to real (but currently unresearched) causal forces operating in nursing home settings. Thus, appropriately functioning items should result in factor correlations that may vary substantially between the ACT concepts but that should not be extremely high. In Model 2 the latent (concept-level) correlations ranged from 0.082 to 0.735, and in Model 3 from 0.398 to 0.615, providing evidence that the items appropriately differentiated between the intended conceptual dimensions.
Validity Assessment for Relations with Other Variables: Correlation of ACT Concepts with Instrumental Research Utilization (IRU) and Increasing Mean Values of the ACT Concepts by Increasing Levels of IRU
Mean value (and relative % change) of ACT concepts by level of instrumental research utilization
n = 5
n = 11
n = 59
n = 263
n = 332
Organizational Slack - Staff
Organizational Slack - Space
Organizational Slack - Time
Table 6 also presents the means of each ACT dimension for respondents reporting various levels of instrumental research use. Too few respondents reported the lowest levels of instrumental research use for the corresponding means to be statistically stable, but the 97.6% of responses with stable means (columns labeled 3, 4, and 5 in Table 6) displayed clear and systematic increases for all 10 ACT concepts. These trends are most easily seen when expressed as the relative percent difference in mean scores (from the sample average); one-way ANOVAs showed these differences were significant for the same 8 of 10 ACT concepts that displayed significant correlations. This analysis shows a positive incremental coordination between the ACT dimensions and one important likely consequence of superior ACT context scores, namely increasing levels of instrumental research utilization.
Unit-Level Aggregation† of the ACT Concepts
Organizational Slack - Staff
Organizational Slack - Space
Organizational Slack - Time
Facility-Level Aggregation† of the ACT Concepts
Organizational Slack - Staff
Organizational Slack - Space
Organizational Slack - Time
This study represents the first reported assessment of the ACT in either residential long-term care settings or with data provided by healthcare aides. We assessed reliability, acceptability, and validity of the ACT when completed by healthcare aides in nursing homes. To frame our validity assessment, we used the Standards for Educational and Psychological Testing, which builds on Cronbach and Meehl's perspective on construct validity. We focused on evidence from two of the Standards' four sources of validity evidence: internal structure and relations to other variables. In addition, we assessed the performance of the ACT concepts with individual responses aggregated to the level of the resident care unit; we did this because we developed the ACT as a unit-focused measure.
English as a First Language
In line with previous studies [47, 48], a substantial proportion (48%) of the healthcare aides who participated in the TREC study did not speak English as their first language. This poses challenges from a psychometric perspective because a homogenous sample is preferred for psychometric assessments such as confirmatory factor analysis. There is evidence to suggest that healthcare aides differ by ethnicity (of which first language spoken is a component) on several psychological concepts; for example, conceptual research utilization, and job satisfaction and burnout [50, 51]. We therefore limited this initial assessment of the ACT with healthcare aides in nursing homes to individuals who spoke English as their first language. In future research we will conduct additional psychometric assessments with healthcare aides who do not speak English as their first language.
Reliability and Acceptability
The internal consistency of the ACT, in terms of Cronbach's alpha coefficients, was for the most part consistent with usual practice for measures intended to be used at the level of the group, or in our case, the resident care unit [46, 52]. Only two concepts had unacceptably low reliabilities: organizational slack-space and formal interactions. Both of these ACT concepts have few items (3 and 4, respectively). Within the organizational slack-space concept, 1 of the 3 items showed substantial misfit in the item-total statistics and the CFA; when this item was removed from the scale, however, alpha increased substantially from 0.64 to 0.87. The low alpha found with the formal interactions concept can be explained by the fact that the items contained within this concept represent a 'list' of items. The items were purposefully selected to be non-redundant with each other, and we therefore expected lower reliability, as the item set was not developed as a 'true factor model'.
At just over 10 minutes to complete and with little missing data, the ACT met our criteria of acceptability. The low rate of missing data may also be attributed to our administration method (computer-assisted structured personal interview). Pilot testing conducted prior to the study demonstrated that missing data would have been much higher had we used traditional paper and pencil survey administration. Currently, we are conducting a study to further compare computer-assisted structured personal interviews to paper and pencil administration of the survey in nursing homes.
We originally selected the items comprising the ACT to cluster within basic conceptual domains. We also intentionally designed the items to be non-redundant, so that each item focused on a slightly different feature of the respondent's work environment. The clustering of items within conceptual domains renders the factor model appropriate for assessing the ACT, but the purposefully non-redundant nature of items within conceptual domains guaranteed that the ACT would not function perfectly as a factor model. In fact, the factor models we estimated proved unexpectedly informative. We employed three factor models: Model 1 with the entire set of items, and Models 2 and 3 with just the scaled and non-scaled (or count-based) items, respectively. Model 1 pointed to four electronic resource items as being inconsistent with the other resource items. Electronic resources and structural resources may reflect two separate concepts in the nursing home environment. Alternatively, the electronic resource items may have performed poorly due to the uniformly low availability of, and access to, electronic resources for healthcare aides in nursing homes in general, and in the sampled nursing homes in particular.
Model 1 also clearly indicated that one organizational slack-space item (adequate space for resident care) did not function consistently with the other items of organizational slack-space (availability of private space to discuss care and knowledge, and use of private space to discuss care and knowledge). It correlated negatively with the other items, had a low item-total correlation, increased alpha when deleted, and displayed substantial misfit in the standardized residuals in Model 1. This suggests that this particular item may not be appropriate for use with healthcare aides, possibly because of the nature of their daily tasks. In our first report on the ACT, in which we used data from pediatric acute care facilities and registered nurses, this item performed much better. As predicted, Model 2 for the scaled concepts (with the item on 'adequate space for resident care' removed from the organizational slack-space concept) performed better than either Model 1 (all items) or Model 3 (count-based concepts with the 4 electronic resource items removed from the structural and electronic resources concept).
A model appropriately acknowledging the non-redundancy of the items would require the use of single-item indicated latent concepts, but such a model does not provide the kind of evidence required by the Standards. A better approach would be to simultaneously assess both the measurement and latent structures using structural equation modeling. We are, however, missing some elements that our theoretical framework stipulates would be required to undertake a full assessment in this manner. The PARiHS framework developers argue that optimal implementation of research is achieved when optimal levels of context, facilitation, and evidence are present. A full assessment of construct validity would therefore include measures of evidence and facilitation, in addition to context. In this study, we focus on organizational context and its direct and indirect effects on research uptake and resident and staff outcomes, and we do not have the measures of facilitation or evidence needed to test the full PARiHS model. While an assessment of the influence of context on research uptake is the next planned analysis, the PARiHS framework a priori suggests that we will have low explained variance and fit problems with a structural equation model because we have only a partial set of the framework's essential components. A confirmatory factor analysis was therefore our next best choice at this stage for assessing the internal structure of the ACT.
Relations to Other Variables
To test relations to other variables, we conducted two correlational analyses. First, we examined the correlation coefficients between the 10 ACT latent concepts produced in the confirmatory factor analyses. Model 2 (scaled ACT concepts with the item on 'adequate space for resident care' removed from the organizational slack-space concept) and Model 3 (count-based concepts with the 4 electronic resource items removed from the structural and electronic resources concept) were used in this assessment. The latent (concept-level) correlations between the ACT concepts were low to moderate in magnitude, providing evidence that the variables corresponding to the 10 concepts functioned appropriately; that is, as distinct (non-redundant) concepts.
As a second test of relations to other variables, we examined bivariate correlations between the 10 ACT concepts and instrumental research use (which the ACT was designed to predict). The five items (one organizational slack-space item and four electronic resource items) showing misfit in confirmatory factor Model 1 and removed from Models 2 and 3 were also removed from this analysis. We found statistically significant relationships between 8 of the 10 ACT concepts and instrumental research use. That is, higher levels of research utilization were associated with more positive contextual conditions. Further analyses also showed, for each of the 10 ACT concepts, a trend of increasing mean values from low to high levels of instrumental research use, commencing at scale point 3. These findings are consistent with the PARiHS framework's assertions about the role of a positive context in promoting greater uptake of research findings and provide additional empirical support for the construct validity of the ACT.
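This kind of bivariate and trend analysis can be sketched as follows, on simulated data rather than the study's, and with Pearson's r standing in for whatever exact statistic the analysis used: a concept score is correlated with a 1-5 instrumental research use scale, and mean concept scores are compared across use levels.

```python
import numpy as np

# Hypothetical data (not the study's): one ACT concept score per aide and
# that aide's instrumental research use on a 1-5 frequency scale.
rng = np.random.default_rng(1)
research_use = rng.integers(1, 6, size=1000)
concept = 2.5 + 0.3 * research_use + rng.normal(scale=0.6, size=1000)

# Bivariate association between context and research use.
r = np.corrcoef(concept, research_use)[0, 1]

# Trend check: mean concept score at each research-use level should rise
# with higher use, mirroring the pattern reported for 8 of 10 concepts.
level_means = [concept[research_use == lvl].mean() for lvl in range(1, 6)]
print(round(r, 2), [round(m, 2) for m in level_means])
```

A positive r together with monotonically increasing level means is the pattern the text describes; the effect size here (0.3 per scale point) is an arbitrary choice for illustration.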
Our aggregation statistics indicate that, in nursing homes, healthcare aide responses on the ACT can be reliably aggregated to obtain a unit-level assessment of organizational context. This is consistent with our previous report on pediatric nurses' scores. As with the registered nurses in our pediatric sample, healthcare aides perform most of their work on a single unit, are aligned with that unit, and are therefore able to assess and report on the common practices and experiences of the unit, causing them to respond similarly on items within the ACT (i.e., items asking about their unit). Support for aggregating healthcare aide responses on the ACT to the nursing home level was, as expected, weaker than at the care unit level. This is consistent with healthcare aides' work practices and experiences being aligned more with the unit than with the larger facility. These statistics were also to be expected, given that larger aggregates of people are expected to vary less than smaller aggregates, and much less than individuals' responses.
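Aggregation arguments of this kind are commonly supported with intraclass correlations derived from a one-way ANOVA of individual scores by unit. A minimal sketch, on simulated data with 25 units of 25 aides each (numbers chosen only to echo the sample's scale, not its actual values):

```python
import numpy as np

def icc_1_and_2(scores, units):
    """ICC(1) and ICC(2) from a one-way ANOVA of individual scores by unit."""
    units = np.asarray(units)
    scores = np.asarray(scores, dtype=float)
    groups = [scores[units == u] for u in np.unique(units)]
    k = len(groups)
    n_bar = np.mean([len(g) for g in groups])  # average unit size
    grand = scores.mean()
    msb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (len(scores) - k)
    icc1 = (msb - msw) / (msb + (n_bar - 1) * msw)  # single-rater reliability
    icc2 = (msb - msw) / msb                        # reliability of unit mean
    return icc1, icc2

# Hypothetical data: a real unit-level signal plus individual-level noise.
rng = np.random.default_rng(2)
unit_effect = rng.normal(scale=0.4, size=25)
units = np.repeat(np.arange(25), 25)
scores = 3.5 + unit_effect[units] + rng.normal(scale=0.6, size=625)

icc1, icc2 = icc_1_and_2(scores, units)
print(round(icc1, 2), round(icc2, 2))
```

ICC(2) well above ICC(1) is the expected signature when unit means are reliable even though individual responses vary, which is the pattern that justifies unit-level aggregation.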
The ACT scores can be used individually, or they can be aggregated to at least the care unit level. Healthcare aides constitute the majority of direct care providers in nursing homes and, as such, are the individuals who spend the most direct care time with residents. Thus, if our intent is to develop and implement interventions that influence resident care, the healthcare aide perspective is the most germane. We are collecting assessments from other providers (e.g., registered nurses, licensed practical nurses, and managers closely aligned with the resident care unit), but we are aware that these will provide differing perspectives; work remains to describe these differences and to hypothesize why they exist.
Validation of a newly developed instrument such as the ACT is a longitudinal, multi-step process requiring numerous positive findings across a variety of applications and settings. The report here represents only the second stage of our validation efforts; additional validation studies are needed to establish the reliability and validity of the ACT in other samples and settings. A stronger assessment of construct validity will be possible when future studies, implementing measures of evidence and facilitation, enable us to simultaneously assess the measurement and latent structure of the ACT using structural equation modeling; such studies are planned.
We developed the ACT to have three characteristics: (1) a theoretical basis, namely the PARiHS framework; (2) parsimony, using the fewest items possible to reduce completion time; and (3) items that reflect modifiable features of context. The characteristic of parsimony has an impact on performance against traditional psychometric criteria. The validation process in this study demonstrated additional empirical support for the construct validity of the ACT. This is the first assessment of the ACT in residential long-term care settings or with healthcare aides, and our findings support the ACT as an acceptable measure of context in this sector. The overall pattern of the data was consistent with the structure hypothesized in the development of the ACT. Our findings add to early evidence for its generalizability, but should still be interpreted with caution. These results support the ACT as an appropriate measure for assessing context in nursing homes at the individual healthcare provider (healthcare aide) level, as well as at the unit level, by aggregating healthcare aide responses to the level of the care unit. Caution should be used in including the five items showing misfit (i.e., the space item in the organizational slack-space ACT concept and the four electronic resource items in the structural and electronic resources ACT concept) with healthcare aides until further assessments are made.
Within the Standards approach, validity is not derived from any one source at a single point in time; rather, it is accumulated over time and across studies. In this study, we offer internal structure and relations to other variables validity evidence, adding to the existing validity evidence from content (the extent to which items represent the content domain) and response processes (how respondents interpret, process, and elaborate on item content, and whether this accords with the construct) reported previously. Follow-up studies are in progress in which we are assessing the ACT with a wide array of healthcare workers: nurses, allied healthcare providers and professionals, physicians, specialists (e.g., educators), and care managers in long-term care (nursing home) settings. Additional information on the ACT is available from the lead author of this paper.
The authors also acknowledge the Translating Research in Elder Care (TREC) study, which was funded by the Canadian Institutes of Health Research (CIHR) (MOP #53107). The TREC Team (at the time of this study) included: Carole A Estabrooks (PI). Investigators: Greta G Cummings, Lesley Degner, Sue Dopson, Heather Laschinger, Kathy McGilton, Verena Menec, Debra Morgan, Peter Norton, Joanne Profetto-McGrath, Jo Rycroft-Malone, Malcolm Smith, Norma Stewart, Gary Teare. Decision-makers: Caroline Clarke, Gretta Lynn Ell, Belle Gowriluk, Sue Neville, Corinne Schalm, Donna Stelmachovich, Gina Trinidad, Juanita Tremeer, Luana Whitbread. Collaborators: David Hogan, Chuck Humphrey, Michael Leiter, Charles Mather. Special advisors: Judy Birdsell, Phyllis Hempel (deceased), Jack Williams, and Dorothy Pringle (Chair, Scientific Advisory Committee).
CAE is supported by a Canadian Institutes for Health Research (CIHR) Canada Research Chair in Knowledge Translation. JES is a postdoctoral fellow at the Ottawa Hospital Research Institute supported by CIHR postdoctoral and Bisby fellowships; at the time of this study she was a PhD student in the TREC program and Faculty of Nursing at the University of Alberta supported by Killam, CIHR, and Alberta Heritage Foundation for Medical Research (AHFMR) training awards. GGC holds CIHR New Investigator and AHFMR Population Health Investigator awards.
We would like to thank Sung Hyun Kang, MSc (University of Alberta, Canada) for his assistance with the statistical analysis.
- Rycroft-Malone J: The PARiHS framework - a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004, 19: 297-304. 10.1097/00001786-200410000-00002.
- Beyer JM, Trice HM: The utilization process: A conceptual framework and synthesis of empirical findings. Adm Sci Q. 1982, 27: 591-622. 10.2307/2392533.
- Damanpour F: Organizational innovation: A meta-analysis of effects of determinants and moderators. Acad Manage J. 1991, 34: 555-590.
- Fleuren M, Wiefferink K, Paulussen T: Determinants of innovation within health care organizations: Literature review and Delphi study. Int J Qual Health Care. 2004, 16: 107-123. 10.1093/intqhc/mzh030.
- Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O: Diffusion of innovation in service organizations: Systematic review and recommendations. Milbank Q. 2004, 82: 581-629. 10.1111/j.0887-378X.2004.00325.x.
- Dopson S, Fitzgerald L, Eds.: Knowledge to action: Evidence-based health care in context. 2005, Oxford: Oxford University Press.
- Meijers JMM, Janssen MAP, Cummings GG, Wallin L, Estabrooks CA, Halfens RYG: Assessing the relationships between contextual factors and research utilization in nursing: Systematic literature review. J Adv Nurs. 2006, 55: 622-635. 10.1111/j.1365-2648.2006.03954.x.
- Glisson C: The organizational context of children's mental health services. Clin Child Fam Psychol Rev. 2002, 5: 233-253. 10.1023/A:1020972906177.
- Glisson C, Dukes D, Green P: The effects of the ARC organizational intervention on caseworker turnover, climate, and culture in children's service systems. Child Abuse Negl. 2006, 30: 855-880. 10.1016/j.chiabu.2005.12.010.
- Estabrooks C, Squires J, Cummings G, Birdsell J, Norton P: Development and assessment of the Alberta Context Tool. BMC Health Serv Res. 2009, 9: 234. 10.1186/1472-6963-9-234.
- Kitson A, Harvey G, McCormack B: Enabling the implementation of evidence based practice: A conceptual framework. Qual Health Care. 1998, 7: 149-158. 10.1136/qshc.7.3.149.
- Kitson A, Rycroft-Malone J, Harvey G, McCormack BS, Titchen A: Evaluating the successful implementation of evidence into practice using the PARiHS framework: Theoretical and practical challenges. Implement Sci. 2008, 3: 1. 10.1186/1748-5908-3-1.
- Grol R, Berwick D, Wensing M: On the trail of quality and safety in health care. BMJ. 2008, 336: 74-76. 10.1136/bmj.39413.486944.AD.
- Estabrooks CA, Hutchinson AM, Squires JE, Birdsell JM, Degner L, Sales AE, Norton PG: Translating Research in Elder Care: An introduction to a study protocol series. Implement Sci. 2009, 51.
- Estabrooks C, Squires J, Cummings G, Teare G, Norton P: Study protocol for the Translating Research in Elder Care (TREC): Building context - an organizational monitoring program in long-term care project (Project One). Implement Sci. 2009, 4: 52. 10.1186/1748-5908-4-52.
- Rycroft-Malone J, Dopson S, Degner L, Hutchinson A, Morgan D, Stewart N, Estabrooks C: Study protocol for the Translating Research in Elder Care (TREC): Building context through case studies in long-term care project (Project Two). Implement Sci. 2009, 4: 1. 10.1186/1748-5908-4-1.
- Nunnally J, Bernstein I: Psychometric Theory. 1994, New York: McGraw-Hill, 3.
- Waltz C, Strickland OL, Lenz ER: Measurement in nursing and health research. 2005, New York: Springer Publishing Company, 3.
- Cronbach LJ, Meehl PE: Construct validity in psychological tests. Psychol Bull. 1955, 52: 281-302.
- American Educational Research Association, American Psychological Association, National Council on Measurement in Education: Standards for educational and psychological testing. 1999, Washington, D.C.: American Educational Research Association.
- Streiner D, Norman G: Health measurement scales: A practical guide to their development and use. 2008, Oxford: Oxford University Press, 4.
- Estabrooks CA, Squires JE, Adachi AM, Kong L, Norton PG: Utilization of health research in acute care settings in Alberta technical report. 2008, Edmonton: Faculty of Nursing, University of Alberta.
- Squires JE, Kong L, Brooker S, Mitchell A, Sales AE, Estabrooks CA: Examining the role of context in Alzheimer care centers: A pilot study technical report. 2009, Edmonton: Faculty of Nursing, University of Alberta.
- SPSS: PASW Statistics. 2009, Chicago, IL: SPSS Inc., 18.
- Joreskog K, Sorbom D: LISREL. 2004, Chicago, IL: Scientific Software International, Inc., 8.71.
- Betz N: Test construction. The Psychology Research Handbook: A Guide for Graduate Students and Research Assistants. Edited by: Leong F, Austin J. 2000, Thousand Oaks, CA: Sage Publications, 239-250.
- Barrett P: Structural equation modelling: Adjudging model fit. Pers Individ Dif. 2007, 42: 815-824. 10.1016/j.paid.2006.09.018.
- Hayduk L, Cummings G, Boadu K, Pazderka-Robinson H, Boulianne S: Testing! Testing! One, two, three - Testing theory in structural equation models!. Pers Individ Dif. 2007, 42: 841-850. 10.1016/j.paid.2006.10.001.
- Kalisch B, Hyunhwa L, Salas E: The development and testing of the nursing teamwork survey. Nurs Res. 2010, 59: 42-50. 10.1097/NNR.0b013e3181c3bd42.
- Hu L, Bentler P: Cut-off criteria for fit indices in covariance structure analyses: Conventional criteria versus new alternatives. Structural Equation Modelling. 1999, 6: 1-55.
- Byrne B: Structural equation modeling. 1994, Thousand Oaks, CA: Sage.
- Squires JE, Estabrooks CA, O'Rourke HM, Gustavsson P, Newburn-Cook C, Wallin L: A systematic review of the psychometric properties of self-report research utilization measures used in healthcare. Implement Sci.
- Estabrooks CA: The conceptual structure of research utilization. Res Nurs Health. 1999, 22: 203-216. 10.1002/(SICI)1098-240X(199906)22:3<203::AID-NUR3>3.0.CO;2-9.
- Estabrooks CA, Scott S, Squires JE, Stevens B, O'Brien-Pallas L, Watt-Watson J, Profetto-Mcgrath J, McGilton K, Golden-Biddle K, Lander J, Donner G, Boschma G, Humphrey CK, Williams J: Patterns of research utilization on patient care units. Implement Sci. 2008, 3: 31. 10.1186/1748-5908-3-31.
- Estabrooks CA: Modeling the individual determinants of research utilization. West J Nurs Res. 1999, 21: 758-772. 10.1177/01939459922044171.
- Estabrooks CA, Kenny DJ, Cummings GG, Adewale AJ, Mallidou AA: A comparison of research utilization among nurses working in Canadian civilian and United States Army healthcare settings. Res Nurs Health. 2007, 30: 282-296. 10.1002/nur.20218.
- Profetto-McGrath J, Hesketh KL, Lang S, Estabrooks CA: A study of critical thinking and research utilization among nurses. West J Nurs Res. 2003, 25: 322-337. 10.1177/0193945902250421.
- Kenny DJ: Nurses' use of research in practice at three US Army hospitals. Can J Nurs Leadersh. 2005, 18: 45-67.
- Milner FM, Estabrooks CA, Humphrey C: Clinical nurse educators as agents for change: Increasing research utilization. Int J Nurs Stud. 2005, 42: 899-914. 10.1016/j.ijnurstu.2004.11.006.
- Profetto-McGrath J, Smith KB, Hugo K, Patel A, Dussault B: Nurse educators' critical thinking dispositions and research utilization. Nurse Educ Pract. 2009, 9: 199-208. 10.1016/j.nepr.2008.06.003.
- Connor N: The relationship between organizational culture and research utilization practices among nursing home departmental staff. 2007, M.N. Dalhousie University (Canada).
- Cobban SJ, Profetto-McGrath J: A pilot study of research utilization practices and critical thinking dispositions of Alberta dental hygienists. Int J Dent Hyg. 2008, 6: 229-237. 10.1111/j.1601-5037.2008.00299.x.
- Glick WH: Conceptualizing and measuring organizational and psychological climate: Pitfalls in multilevel research. Acad Manage Rev. 1985, 10: 601-616.
- Rosenthal R, Rosnow RL: Essentials of behavioral research: Methods and data analysis. 1991, New York: McGraw-Hill.
- Keppel G: Design and analysis: A researcher's handbook. 1991, Prentice-Hall.
- Altman DG, Bland JM: Statistics notes: Units of analysis. BMJ. 1997, 314: 1874.
- Foner N: Nursing home aides: Saints or monsters?. Gerontologist. 1994, 34: 245-250. 10.1093/geront/34.2.245.
- McGilton KS, McGillis Hall L, Wodchis WP, Petroz U: Supervisory support, job stress, and job satisfaction among long-term care nursing staff. J Nurs Adm. 2007, 37: 366-372. 10.1097/01.NNA.0000285115.60689.4b.
- Squires JE, Estabrooks CA, Newburn-Cook C, Gierl M: Validation of the Conceptual Research Utilization Scale: An application of the Standards for Educational and Psychological Testing in healthcare. BMC Health Serv Res. 2011, 11: 107. 10.1186/1472-6963-11-107.
- Coward RT, Hogan TL, Duncan RP, Horne CH, Hilker MA, Felsen LM: Job satisfaction of nurses employed in rural and urban long-term care facilities. Res Nurs Health. 1995, 18: 271-284. 10.1002/nur.4770180310.
- Chappell NL, Novak M: The role of support in alleviating stress among nursing assistants. Gerontologist. 1992, 32: 351-359. 10.1093/geront/32.3.351.
- Young TL, Kirchdoerfer LJ, Osterhaus JT: A development and validation process for a disease-specific quality of life instrument. Drug Inf J. 1996, 30: 185-193.
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/11/107/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.