Development and initial psychometric evaluation of the computer-based prostate cancer screening decision aid acceptance scale for African-American men

Abstract

Background

To reliably evaluate the acceptance and use of computer-based prostate cancer decision aids (CBDAs) for African-American men, culturally relevant measures are needed. This study describes the development and initial psychometric evaluation of the 24-item Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale among 354 African-American men.

Methods

Exploratory factor analysis (EFA) was conducted with maximum likelihood estimation and polychoric correlations, followed by Promax and Varimax rotations.

Results

EFA yielded three factors: Technology Use Expectancy and Intention (16 items), Technology Use Anxiety (5 items), and Technology Use Self-Efficacy (3 items) with good to excellent internal consistency reliability at .95, .90, and .85, respectively. The standardized root mean square residual (0.035) indicated the factor structure explained most of the correlations.

Conclusions

Findings suggest the three-factor, 24-item Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale has utility in determining the acceptance and use of CBDAs among African-American men at risk for prostate cancer. Future research is needed to confirm this factor structure among socio-demographically diverse African-Americans.

Background

Prostate cancer (PrCA) incidence and mortality rates are higher among African-American men than any other racial group [1]. Many socio-economic [2, 3], environmental [3, 4], and epigenetic [5,6,7] factors are hypothesized to be key contributors to PrCA disparities among African-Americans and other racial groups, but no definitive causal links have been identified between PrCA and these factors. Fraught with controversy [8], the public health response to PrCA has potentially contributed to PrCA disparities. Whereas clear screening recommendations are available for many other major cancers (e.g., breast, colon, lung, cervical), recommendations for PrCA are not clear-cut and have evolved over the past two decades [9, 10]. Until recently, the U.S. Preventive Services Task Force [11] recommended against routine prostate specific antigen (PSA) screening, a position that was counter to other organizations such as the American Cancer Society [12] and American Urological Association [13], which recommend that men make an informed decision with a healthcare provider about whether to receive PrCA screening. In 2017, the U.S. Preventive Services Task Force released draft recommendations that are more consistent with agencies that support informed decision making [14], which involves a man understanding the risks, benefits, uncertainties, and alternatives to PrCA screening and participating in the decision at the level that he desires [15].

In order to engage in informed decision making, African-American men need plain-language PrCA information and adequate decision self-efficacy [16]. PrCA knowledge refers to the information necessary for an individual to understand PrCA, including the prostate’s anatomy and function; PrCA risk factors; the types of PrCA screening and the risks, benefits, uncertainties, and alternatives to each type; and PrCA warning signs [17]. Self-efficacy refers to the level of confidence an individual possesses to actively involve himself, to the extent that he desires, in the PrCA screening decision-making process [18]. Prior studies indicate that African-American men who participate in informed decision-making interventions for PrCA screening often experience increased PrCA knowledge and decision self-efficacy [19], which may equip them to actively participate in informed decision making. Increasingly, PrCA decision-making interventions are being offered through digital mediums such as computers and mobile phones [20,21,22]. Technology-based dissemination strategies may, in part, be driven by stark increases in technology ownership [23] across all racial, ethnic, and age groups. However, older African-Americans with low incomes and low educational attainment are the least likely to have access to computer or mobile technologies [23, 24].

To determine the quality of a user’s experience with a technology-based PrCA intervention, some researchers conduct feasibility or usability testing [20,21,22]. However, most studies evaluating the efficacy of technology-based interventions have assessed the target population’s technology acceptance, which encompasses the conditions under which an individual will adopt a technology for regular use [25]. Technology acceptance is therefore one key determinant of the sustainability of a technology-based intervention beyond its research use. Sustainability is especially important for PrCA screening interventions because informed decisions about screening will occur many times over a man’s life course, and the effects of a single exposure to an intervention could diminish over time [26].

Technology acceptance and use models

While some technology acceptance factors are highly correlated with usability (e.g., ease of use; [27]), most technology acceptance models also include constructs that are external to the user (e.g., social influences such as subjective norms; [25]). Based on socio-ecological theory, technology acceptance models posit that an individual’s decision to adopt a specific technology is not based solely on self-identified benefits, but is the result of complex interactions between the individual and their social and physical environment [25]. One of the most widely accepted models informing the measurement of technology acceptance is the Technology Acceptance Model [28], which posits that users’ adoption of a technology for normal use depends on its perceived usefulness (i.e., the extent to which it enhances task performance) and perceived ease of use (i.e., the extent to which using it requires effort). Developed in 1989, the Technology Acceptance Model has undergone several modifications to enhance its utility (e.g., Technology Acceptance Model 2; [29, 30]); modified models have integrated a plethora of other socio-ecologic factors that influence technology use [30]. Since its introduction, the Technology Acceptance Model has been tested with over 25 external variables posited to influence the relationships among its three major constructs: perceived usefulness, perceived ease of use, and technology acceptance [30]. The most commonly tested variables include computer anxiety, self-efficacy, enjoyment, computer support, and computer-use experience [30]. To enhance the model’s performance, Venkatesh and Davis, the model’s creators, also expanded it by integrating five additional variables that help explain users’ perceived usefulness: job relevance (an individual’s perception of the degree to which the target system is applicable to his or her job), subjective norms (an individual’s perception that most people who are important to them think they should or should not perform the behavior in question), image (the degree to which use of an innovation is perceived to enhance one’s status in one’s social group), output quality (how well the system performs specific tasks), and result demonstrability (the tangibility of the results of using the innovation) [29]. Two additional variables, experience and voluntariness of use, were hypothesized to moderate the effects of subjective norms on perceived usefulness and/or an individual’s intention to use a system. Like the original model, this modified version (i.e., Technology Acceptance Model 2) has been widely adopted [25] and used in a variety of contexts, including the assessment of technology use in healthcare environments [31, 32].

Synthesizing the Technology Acceptance Model with other models of technology acceptance (e.g., Diffusion of Innovations; [33]), Venkatesh, Davis, and colleagues [34] developed the Unified Theory of Acceptance and Use of Technology in 2003. This model postulates that four factors (performance expectancy, effort expectancy, social influence, and facilitating conditions) and four moderators (i.e., age, gender, experience, and voluntariness) determine an individual’s intention to use a technology and, ultimately, whether they adopt it for regular use [34].
In developing the Unified Theory of Acceptance and Use of Technology, Venkatesh et al. [34] also examined individuals’ self-efficacy, anxiety, and attitudes and hypothesized that these factors were not causally related to technology use intention, but were fully mediated by other factors in the model (e.g., effort expectancy). Based on the theory, Venkatesh et al. [34] created a 24-item measure, the Unified Theory of Acceptance and Use of Technology Scale, that assesses acceptance and use of a technology based on each of the aforementioned factors and intention to use technology. In their seminal article, Venkatesh et al. [34] found that factor loadings for each item were acceptable, with most loadings at .70 or higher. In addition, internal consistency reliability for the full scale and subscales ranged between .77 and .94. As hypothesized, self-efficacy, anxiety, and attitudes towards technology did not have a direct causal relationship with intention to use technology. However, a subsequent exploratory factor analysis of the Unified Theory of Acceptance and Use of Technology Scale found that self-efficacy and anxiety were significant predictors of technology use intention [35]. Despite these differences, the scale has been validated for use across several sectors including, but not limited to, mobile banking, social networking, web-based learning environments, decision support systems, digital learning environments, and retail [36, 37].

Related to decision support systems and web-based learning environments, a growing number of studies have employed the Unified Theory of Acceptance and Use of Technology Scale to assess technology acceptance in health environments [38]. These studies most often investigate the acceptance of clinical support technology, such as electronic medical records, by healthcare providers or other clinical staff [39,40,41,42,43]. However, a number of studies have assessed the acceptance of web-based telecare systems among patients [44,45,46,47]. Each of these studies demonstrates the applicability of the Unified Theory of Acceptance and Use of Technology to healthcare settings, though the strength of the hypothesized relationships among the variables has varied. According to Holden and Karsh [31], these inconsistencies can be largely explained by differences in the contextual operationalization of the constructs within the Unified Theory of Acceptance and Use of Technology. More specifically, some studies implement the scale with general wording, while others alter the wording to be specific to the type of technology being tested [31]. Such contextual relevancy is necessary to ensure that a scale originally designed for non-healthcare settings will meaningfully translate to a healthcare setting [31]. Some studies have therefore also added environment-related constructs to the model to make it even more contextually relevant [31, 44, 46,47,48,49]. For example, Cimperman et al. [46] posited that a doctor’s opinion regarding the use of telehealth technology influences performance expectancy, that computer anxiety influences effort expectancy, and that perceived security influences performance and effort expectancy and has a direct influence on behavioral intention. Their findings showed that computer anxiety had a significant negative influence on effort expectancy, while doctor’s opinion and performance expectancy had significant positive influences on performance expectancy and behavioral intention, respectively. Despite methodological differences, the aforementioned studies report acceptable to high internal consistency reliability (α ≥ .70), with some researchers conducting factor analyses [46, 50]. A few studies also found that the Unified Theory of Acceptance and Use of Technology Scale had strong convergent and divergent validity for assessing technology acceptance for health-related purposes [46, 50, 51].

Only one recent study [52] has investigated the Unified Theory of Acceptance and Use of Technology in relation to cancer care or decision making. Among 300 cancer survivors, Senft et al. [52] examined whether facilitating conditions, social influence, ease of use, perceived usefulness, and security/trustworthiness were associated with eHealth use and whether attitudes about the security and trustworthiness of online health services were associated with eHealth use more strongly among African-American than White cancer survivors. They found that facilitating conditions and perceived usefulness were associated with increased eHealth use among both African-Americans and Whites, and that social influence did not influence eHealth use in either group. Perceived ease of use was associated with decreased eHealth activity for Whites only, whereas security/trustworthiness was associated with increased eHealth activity for African-Americans only. The authors did not report the internal consistency reliability of the scale in this study. A second cancer-related study [53] proposes to use the Unified Theory of Acceptance and Use of Technology Scale to understand the acceptance and use of an mHealth app for PrCA survivorship by patients, caregivers, and clinicians in the United Kingdom, but only a study protocol is currently available.

Although a number of studies report acceptable reliability and validity of the Unified Theory of Acceptance and Use of Technology Scale for assessing technology use and acceptance across sectors, including healthcare, the psychometric properties of this scale have not been tested among African-Americans. A culturally relevant measure of technology use and acceptance is needed to evaluate the performance of technology-based interventions that seek to enhance decision making about PrCA screening among African-American men, who experience the highest mortality from the disease [1]. Given the need for a reliable and valid measure of CBDA acceptance for PrCA screening decision making, this study describes the development and initial psychometric testing of the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale among African-American men.

Methods

This psychometric study used cross-sectional data from a pilot study conducted to assess the efficacy of iDecide, a computer-based decision aid (CBDA) designed to prepare African-American men for informed decision making about PrCA screening. During the pilot study, participants completed a self-administered paper survey at post-intervention to measure their acceptance of iDecide. Detailed recruitment strategies and a description of iDecide are published in prior manuscripts [54]. Human subjects approval was received from the Institutional Review Board at the University of South Carolina.

Participants

This study included a purposive sample of 354 African-American men aged 40 and older. Eligible participants (a) self-identified as African-American; (b) spoke and comprehended English; (c) had no personal history of PrCA; and (d) had no self-reported history of cognitive decline. Participants were not required to have prior technology-use experience. They were recruited from several social and faith-based venues in South Carolina between July 2015 and February 2016. Before participating in any study activities, each participant received a written consent form in person, which a member of the research team explained in detail before requesting a signature.

Scale development

The Unified Theory of Acceptance and Use of Technology Scale was adapted to examine the usability and acceptance of iDecide, a CBDA for African-American men [55]. Specifically, each question was modified to refer to a CBDA rather than generally to a “system”. For example, question Q28 (Table 1) was adapted from “The system is somewhat intimidating to me” to “The CBDA is somewhat intimidating to me.” This revision is similar to prior studies that have adapted the Unified Theory of Acceptance and Use of Technology Scale to be contextually relevant to their study environment [31]. Scale modification was conducted by the first author (O.O.), an African-American male with expertise in health communications, health technology, and PrCA within the target population. Items were adapted for contextual relevancy while maintaining content equivalence, and items that were not contextually related to the CBDA were eliminated (e.g., “The senior management of this business has been helpful in the use of the system”). Subscales (performance expectancy, effort expectancy, social influence, facilitating conditions, self-efficacy, attitudes towards technology, and anxiety) and two of the four original moderators (i.e., age, experience) hypothesized to be directly related to technology acceptance were retained. Because the scale is specific to CBDAs, the modified scale was titled the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale.

Table 1 Computer-based prostate cancer screening decision aid acceptance scale

The Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale comprises 24 items with Likert-type response categories ranging from 1 (strongly agree) to 5 (strongly disagree), plus 6 (does not apply). Scoring involves averaging all responses, excluding “does not apply” responses. A higher score indicates greater acceptance and use of the CBDA for PrCA informed decision making. Table 1 compares items of the Unified Theory of Acceptance and Use of Technology Scale with items of the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale.
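
To make the scoring rule concrete, the following is a minimal sketch in Python (an assumption on our part; the study’s analyses were conducted in SAS, and the column names are hypothetical):

```python
# A minimal scoring sketch, assuming items are stored as columns in a pandas
# DataFrame coded 1-5, with 6 = "does not apply" (hypothetical column names).
import numpy as np
import pandas as pd

def score_scale(responses: pd.DataFrame) -> pd.Series:
    """Mean of each respondent's 1-5 answers, ignoring 'does not apply' (6)."""
    rated = responses.replace(6, np.nan)    # exclude "does not apply" from the mean
    return rated.mean(axis=1, skipna=True)  # per-respondent scale score

# Example with two hypothetical respondents on a four-item excerpt.
df = pd.DataFrame({"q1": [5, 2], "q2": [4, 6], "q3": [5, 3], "q4": [4, 2]})
print(score_scale(df))  # 4.50 and approximately 2.33
```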

Conceptual framework of the computer-based prostate cancer screening decision aid acceptance scale

Based on the Unified Theory of Acceptance and Use of Technology [34], the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale emphasizes that whether an individual adopts a CBDA results from the dynamic interplay between the individual and their social and physical environment. Eight factors are posited to influence the acceptance and use of a CBDA for PrCA informed decision making: (a) performance expectancy (the degree to which an individual believes that the CBDA will lead to personal gains such as increases in PrCA knowledge and decision self-efficacy), (b) effort expectancy (the amount of effort associated with using the CBDA to retrieve PrCA information), (c) social influence (the degree to which an individual perceives that his or her social network will endorse the use of the CBDA), (d) facilitating conditions (the degree to which an individual believes that infrastructure exists to support their use of a CBDA for informed decision making about PrCA screening), (e) computer anxiety (the level of emotional fear or apprehension an individual feels when thinking about having to use a CBDA to find PrCA information), (f) self-efficacy (an individual’s belief in their ability to effectively use the CBDA to find PrCA information), (g) attitudes towards technology (an individual’s positive or negative evaluations of using the CBDA), and (h) behavioral intention (an individual’s intention to use a technology). Each of these factors is moderated by an individual’s age and technology-use experience [34]. Counter to the original Unified Theory of Acceptance and Use of Technology, gender and voluntariness are not moderators in our conceptual framework because the CBDA is designed for an African-American male population and use of this technology is voluntary.

Pre-testing

To assess face validity, the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale was pre-tested with a convenience sample of two African-American men with high school or higher education. During the pre-test, the men were provided with a survey containing the full battery of 65 items used during the testing of the iDecide PrCA screening CBDA. The men completed a hard-copy survey and noted any questions, words, or concepts they found difficult to interpret or that might be difficult to interpret for men with low reading levels. After completing the survey, they were interviewed by the first author (O.O.) about difficulties completing the survey and about survey formatting (e.g., clarity of instructions, response option formatting). Neither participant suggested changes to the survey.

Data analysis

Descriptive statistics summarized the sociodemographic characteristics of the African-American men in the sample. Polychoric correlations assessed the associations between factors and subscale items. Cronbach’s alpha assessed internal consistency reliability for the total scale and each subscale.
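
For illustration, here is a minimal sketch of Cronbach’s alpha in Python (an assumption on our part; the study’s analyses were conducted in SAS, and the `items` array is hypothetical):

```python
# A minimal sketch of Cronbach's alpha, assuming `items` is an
# (n_respondents x n_items) NumPy array of complete 1-5 responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```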

Exploratory factor analysis (EFA) is a data-driven technique that does not require a priori specification of the relationships between latent and observed variables [56,57,58]; model specification is unnecessary because the factor structure and factor loadings are assumed to be unknown. In this study, EFA was conducted to identify the number of latent constructs (factors) and the underlying factor structure of the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale. Performance expectancy, effort expectancy, social influence, facilitating conditions, computer anxiety, self-efficacy, and attitudes towards technology were exogenous latent variables hypothesized to be correlated with each other. Subscale items loading on each factor were also hypothesized to be correlated with each other. The participant-to-item ratio was 14:1, above the 10:1 ratio often used to determine a priori sample size for EFA [59].
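
As an illustration of this step, here is a minimal EFA sketch in Python using the third-party factor_analyzer package (an assumption on our part; the study used SAS 9.4, and Pearson correlations stand in for the polychoric correlations used in the study). The settings mirror the principal factor extraction and Varimax rotation described in the next paragraph.

```python
# A minimal EFA sketch using the third-party factor_analyzer package
# (pip install factor-analyzer). `data` is a hypothetical (n x 24) DataFrame
# of item responses; Pearson correlations stand in for polychoric ones.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

def run_efa(data: pd.DataFrame, n_factors: int = 3) -> pd.DataFrame:
    _, kmo_total = calculate_kmo(data)   # sampling adequacy (study reported 0.93)
    print(f"KMO = {kmo_total:.2f}")
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax",
                        method="principal")  # principal factoring + Varimax
    fa.fit(data)
    return pd.DataFrame(fa.loadings_, index=data.columns,
                        columns=[f"F{i + 1}" for i in range(n_factors)])
```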

EFA was conducted using preliminary estimates of communalities obtained from the squared multiple correlation of each variable. Iterated principal factor extraction with prior communalities set to 1 was used for extraction, followed by Varimax rotation. Factor retention was assessed through parallel analysis, which has been shown to be more accurate than other factor retention methods [58]. In particular, the eigenvalue-greater-than-one (K1) rule is prone to sampling error, which can overestimate the number of factors [60]. Parallel analysis produces correlation matrices from randomly simulated datasets with the same number of observations as the original dataset [60]; the simulated observations are therefore subject to the same potential sampling error as the original observations. Eigenvalues were then computed for both the simulated and original data and compared to determine the point at which the eigenvalue in the simulated data first exceeded the eigenvalue in the original data [61]. The number of factors before this transition point denoted the number of factors retained [61]. A scree plot was also produced to visually compare eigenvalues from the simulated and original data and corroborate the number of factors to retain [59].
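
The following is a minimal sketch of Horn’s parallel analysis in Python (an assumption on our part; the study used SAS, and this sketch uses Pearson rather than polychoric correlations):

```python
# A minimal sketch of Horn's parallel analysis. `data` is a hypothetical
# (n x p) response matrix; Pearson correlations are used for illustration.
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 1000, seed: int = 0) -> int:
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, sorted descending.
    real_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

    # Eigenvalues from random-normal datasets of the same shape.
    sim_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        sim = rng.standard_normal((n, p))
        sim_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    mean_sim = sim_eigs.mean(axis=0)

    # Retain factors whose observed eigenvalue exceeds the simulated mean,
    # stopping at the first crossover (the transition point in the text).
    n_factors = 0
    for real, sim in zip(real_eigs, mean_sim):
        if real > sim:
            n_factors += 1
        else:
            break
    return n_factors
```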

Factor loadings were assessed using item communalities, cross-loadings, and item statistics. A factor with fewer than three salient loadings was considered weak and unstable [59] and was deleted from the factor structure; factors with three or more loadings were retained. An item was determined to load on a factor if its loading was 0.40 or greater on that factor and less than 0.40 on all other factors [62]. An item was considered cross-loaded if it loaded on more than one factor at 0.40 or above [59]. The standardized root mean square residual measured the difference between the observed and predicted correlations; estimates less than .05 are acceptable [63]. The Kaiser-Meyer-Olkin statistic measured sampling adequacy; estimates between 0.8 and 1 were considered adequate [64].
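
These item-retention rules can be expressed compactly; the sketch below is a hypothetical Python helper, not part of the study’s SAS workflow:

```python
# A minimal sketch of the item-retention rules described above, assuming
# `loadings` is an (items x factors) array from a rotated EFA solution.
import numpy as np

def classify_items(loadings: np.ndarray, cutoff: float = 0.40):
    """Return (assigned_factor, cross_loaded) per item using a 0.40 cutoff."""
    salient = np.abs(loadings) >= cutoff            # loadings at/above the cutoff
    assigned = np.where(salient.any(axis=1),
                        np.abs(loadings).argmax(axis=1),  # strongest factor wins
                        -1)                               # -1 = no salient loading
    cross_loaded = salient.sum(axis=1) > 1          # salient on more than one factor
    return assigned, cross_loaded
```

Assigning each cross-loaded item to its strongest factor matches the retention decision reported in the Results, where cross-loaded items were kept on the factor with the highest loading.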

Missing values ranged from 0.56% (n = 2) to 2.54% (n = 9) across scale items. Single and multiple imputation were used to impute missing values for the 24 items. Item means were compared across the non-imputed, single-imputation, and multiple-imputation (n = 1000) datasets and were similar. Descriptive statistics were analyzed using the original data (N = 354). Because the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale is a major adaptation of the Unified Theory of Acceptance and Use of Technology Scale, EFA was conducted using the original, single-imputation, and multiple-imputation datasets. All data analyses were performed using SAS/STAT® statistical software, version 9.4 [65].
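
A minimal sketch of this item-mean sensitivity check in Python (an assumption on our part; simple mean imputation is shown purely for illustration, as the study does not specify its imputation method):

```python
# Compare item means with and without imputation, assuming `items` is a
# pandas DataFrame with NaN for missing responses; mean imputation is used
# purely for illustration.
import pandas as pd

def compare_item_means(items: pd.DataFrame) -> pd.DataFrame:
    imputed = items.fillna(items.mean())   # simple single imputation
    return pd.DataFrame({
        "mean_original": items.mean(),     # computed on available data only
        "mean_imputed": imputed.mean(),
    }).round(3)
```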

Results

For each item of the 24-item scale, mean responses ranged from 2.2 to 4.46. Table 2 reports sociodemographic characteristics of the 354 African-American men. They had a mean age of 59.5 ± 9.61 years, and most were married (55%, n = 194). An overwhelming majority were insured (91%, n = 323) and had a regular healthcare provider (87%, n = 309). Nearly half were employed (47%, n = 167) or reported a household income between $20,000 and $79,999 (46%, n = 165). Over half (54%, n = 192) had a high school diploma or had attended some college. Most participants reported using technology prior to study participation, including television (87%, n = 307), cellphones (85%, n = 303), automated teller machines (71%, n = 252), computers (69%, n = 246), and/or tablet computers (57%, n = 201). They also reported using the following technology features: cell phone apps (75%, n = 76), text messaging (69%, n = 243), email (67%, n = 237), and/or the internet (65%, n = 229).

Table 2 Summary of African American male participant characteristics

Factor structure of the computer-based prostate cancer screening decision aid acceptance scale

Table 3 reports factor loadings of the 24-item Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale for the original, single-imputation, and multiple-imputation datasets. Factor loadings were similar across the three datasets. Parallel analysis, the scree plot (Fig. 1), and the proportion of variance explained by each factor suggested three meaningful factors. Factor 1 (F1), Technology Use Expectancy and Intention, had 16 factor loadings ranging from .51 to .85 for the original data and from .44 to .83 for both single and multiple imputation. Factor 2 (F2), Technology Use Anxiety, had five factor loadings ranging from .52 to .92 for the original data and from .41 to .93 for both single and multiple imputation. Factor 3 (F3), Technology Use Self-Efficacy, had three factor loadings ranging from .67 to .88 for the original and imputed datasets.

Table 3 Factor structure and factor loadings of the 24-item computer-based prostate cancer screening decision aid acceptance scale, with and without imputation (N = 354)
Fig. 1 Scree plot for the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale. Actual: eigenvalues from the original dataset. Simulated: eigenvalues from the randomly simulated dataset

Factor loadings varied somewhat between the original and imputed datasets. For the original dataset, three items (Q11, Q12, and Q13) cross-loaded on two factors: Technology Use Expectancy and Intention (F1) and Technology Use Self-Efficacy (F3). For the single- and multiple-imputation datasets, five items (Q9, Q10, Q11, Q12, and Q13) cross-loaded on the same two factors. All cross-loaded items were retained on Technology Use Expectancy and Intention (F1) because their loadings were highest on F1 (Table 3).

The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.93, which is acceptable. All residuals were small, and the overall standardized root mean square residual was 0.035, indicating that the factor structure explains most of the observed correlations. Further, internal consistency reliability of the full scale was .91 for the original dataset and .89 for both the single- and multiple-imputation datasets; reliability was .95, .90, and .85 for Factors 1, 2, and 3, respectively, across all datasets (Table 4).

Table 4 Internal consistency reliability of the computer-based prostate cancer screening decision aid scale (N = 354)

Discussion

This study evaluated the psychometric properties of the 24-item Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale in African-American men using a CBDA for informed PrCA screening decision making. EFA yielded a 24-item, three-factor structure comprising Technology Use Expectancy and Intention (F1), Technology Use Anxiety (F2), and Technology Use Self-Efficacy (F3). Factor loadings were moderate to high, and the total scale and subscales had good to excellent internal consistency reliability. These findings extend the Unified Theory of Acceptance and Use of Technology and build upon prior evidence indicating that technology acceptance and use is a multidimensional construct. To our knowledge, this is the first scale developed to measure the acceptance of a CBDA for PrCA screening. Our findings suggest the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale has utility in determining the acceptance and use of CBDAs by African-American men for informed PrCA screening decision making.

Although our findings are theoretically consistent with the Unified Theory of Acceptance and Use of Technology, the three-factor structure of the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale is inconsistent with the six-factor structure of the Unified Theory of Acceptance and Use of Technology Scale [34]. Our findings are more parsimonious and suggest that some of the original constructs may be highly correlated when the scale is used for informed PrCA screening decision making by African-American men. Specifically, Venkatesh et al.’s [34] validation study did not find that self-efficacy, computer anxiety, and attitudes towards technology were correlates of technology acceptance and use. In our factor structure, however, performance expectancy (Q1-Q4), effort expectancy (Q5-Q8), attitudes toward technology (Q9-Q12), social influence (Q13), and behavioral intention to use the system (Q24) loaded on Technology Use Expectancy and Intention, with factor loadings ranging from .44 to .85 across all datasets. The high factor loadings and loading pattern suggest that, in a community sample of middle-aged African-American men completing a measure adapted for healthcare decision making, these constructs are far more interrelated than they were among the individuals from business, non-profit, and academic organizations who completed the Unified Theory of Acceptance and Use of Technology Scale in prior psychometric studies. Our data are also distinct from two healthcare-related studies that conducted psychometric testing [46, 50] of the Unified Theory of Acceptance and Use of Technology Scale among non-African-Americans. For example, in assessing factors influencing Korean healthcare professionals’ adoption of mobile electronic medical records, Kim et al. [50] validated the six-factor structure posited by Venkatesh et al. [34], but this population differs markedly from our target population of African-American end-users. Additionally, our purposive sample of African-American men, recruited from social and faith-based organizations in a southeastern state, may be more homogeneous in terms of age, socioeconomic status, and belief systems, which may partially support Sundaravej’s [35] suggestion that socio-demographic factors (e.g., age, experience) moderate an individual’s intention to use a specific technology. Moderators such as faith, which has been shown to positively influence technology acceptance [66, 67], were not tested in our study. Furthermore, social influence and behavioral intention were each measured with one item, which may have forced them to load onto factors to which they are not directly related. Future research should confirm the factor structure of the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale, test moderators, and assess model fit using additional fit indices. Overall, the Kaiser-Meyer-Olkin and standardized root mean square residual estimates and the small residuals further support the fit of our 24-item, three-factor model.

As for factor loadings, one of the two facilitating conditions items (Q14; “I have the knowledge necessary to use the CBDA.”) loaded on Technology Use Expectancy and Intention (F1), whereas the other (Q15; “The CBDA is not compatible with other systems I use.”) loaded on Technology Use Anxiety (F2). Q14 concerns whether an individual has the knowledge necessary to use the CBDA, whereas Q15 concerns the compatibility of the CBDA with technologies an individual currently uses. This suggests that facilitating conditions may not be a distinct construct in this context and may instead be conceptually related to technology use expectancy and intention overall, especially for informed PrCA screening decision making. Among middle-aged African-American men using CBDAs to assist with PrCA screening decisions, facilitating conditions may differ from those assessed for the diffusion of innovations in non-profit, business, and academic organizations. The lack of homogeneity of the facilitating conditions items may also explain why they loaded on two different factors [68]. Further, given that African-Americans have less technology experience overall [69], African-American men in this study may have lacked the technical knowledge to ascertain whether the CBDA was compatible with technology they currently use. Future administrations and psychometric testing of the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale in a more heterogeneous sample of African-American men at risk for PrCA, and of women who assist with PrCA screening decision making, may provide more evidence about the role of facilitating conditions in technology use and acceptance, especially regarding age, gender, and social environment.

The four computer anxiety items (Q20-Q23) loaded onto Technology Use Anxiety (F2). Counter to the Unified Theory of Acceptance and Use of Technology Scale, computer anxiety was a salient correlate that may predict whether an African-American man decides to use a specific technology. Prior studies have also identified anxiety as detrimental to intention to use technology. For example, among 1204 racially diverse participants, Czaja et al. [69] found that lower computer anxiety, higher education, younger age, higher computer-use self-efficacy, and higher intelligence were associated with greater technology use, and that African-Americans had less experience with technology overall, which may indicate lower computer self-efficacy and higher computer anxiety. Similarly, among 300 older adults (aged 64 to 98 years), Mitzner et al. [70] found that the strongest correlates of positive perceptions about technology use were computer attitudes (i.e., self-efficacy, anxiety, and interest), greater technology experience, and agreeable personalities. Because computer anxiety is a prominent influence on technology use, our findings extend the conceptualization of the Unified Theory of Acceptance and Use of Technology to African-Americans, who may have high levels of computer anxiety, and are consistent with current evidence on technology use.

Interestingly, three of the four self-efficacy items (Q17-Q19) loaded onto Technology Use Self-Efficacy (F3), with factor loadings ranging from .57 to .88, whereas the remaining item (Q16; “I can complete the prostate cancer education program using the CBDA if no one is around to tell me what to do as I go”) loaded on Technology Use Expectancy and Intention (F1), with factor loadings of .54 and .52 for the original and imputed datasets, respectively. Because African-Americans may have less technology-use experience and lower technology self-efficacy [69], the idea of using the CBDA without assistance may be anxiety-provoking and may increase the effort African-American men expect to exert when using CBDAs for PrCA screening decision making. Although the Q16 loading is contrary to prior evidence on the Unified Theory of Acceptance and Use of Technology Scale [34], it is consistent with current evidence on technology use among African-Americans [69, 70]. The loading of Q16 onto Technology Use Expectancy and Intention may also be attributable to question wording. Similar to a reverse-worded item, Q16 contains a negation (“no one is around”), which differs from the other self-efficacy items. Q16 may therefore have been misinterpreted because of its negation or vague wording, which increases response bias. Rewording Q16 to read “I can complete the prostate cancer education program using the CBDA without assistance” in future administrations and psychometric testing may reduce response bias and provide further evidence of convergent validity (i.e., items loading on a single factor at 0.50 or above [58]).

Five items (Q9-Q13) cross-loaded on Technology Use Expectancy and Intention (F1) and Technology Use Self-Efficacy (F3): Q11-Q13 in the original dataset and Q9-Q13 in the imputed datasets. Although all cross-loaded items loaded highest on Technology Use Expectancy and Intention (F1) and were allocated to that factor (see Table 3), the cross-loading could be the result of vague and/or confusing question wording. Q9 (“Using the CBDA is a good idea”) and Q10 (“The CBDA makes learning about prostate cancer more interesting”) may each relate to performance expectancy, social influence, and self-efficacy. Q11 (“The CBDA makes learning about prostate cancer fun”), Q12 (“I like using the CBDA”), and Q13 (“People who are important to me will likely support my use of the CBDA”) may all seem vague or conceptually unrelated to technology use and acceptance for healthcare decision making. Given that cancer is a grave topic and that the purposive sample of African-American men were at risk for PrCA, asking whether learning about PrCA is fun may seem awkward or even inappropriate. Rewording Q9-Q13 in future administrations of the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale may improve the factor structure.

Study strengths included a large community sample of African-American men, which exceeded the minimum recommended sample size for EFA (> 200). However, the African-American men were from one mid-sized city in a southeastern state and may be more homogeneous than a national sample of African-American men; therefore, the psychometric findings reported here may not be generalizable to African-American men who reside in other U.S. regions or to men of other races and ethnicities. Although participants had moderate experience with technology use, they had the least prior experience with tablet computers (57%, n = 201), the device on which our CBDA was administered. Two items (Q15, Q24) had factor loadings of less than .50 in at least one dataset, which suggests poor convergent validity, and the cross-loadings suggest the factors may not be fully conceptually distinct. The modified scale was pre-tested with only two African-American men, who may not have been representative of the men included in the sample. Lastly, the research team did not test the influence of potentially important moderators of technology acceptance, such as faith. Despite these limitations, this study provides valuable psychometric evidence that can contribute to the future development and evaluation of culturally-tailored CBDAs to facilitate the PrCA screening decisions of African-American men, who experience the highest PrCA mortality globally.

Future psychometric testing (i.e., confirmatory factor analysis) is warranted to confirm the convergent and discriminant validity of the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale. Future research should also confirm its factor structure using a larger and more demographically diverse sample of African-Americans. A diverse sample is especially important given that the Unified Theory of Acceptance and Use of Technology postulates that sociodemographic factors such as age and computer experience can moderate technology use and acceptance outcomes.

Conclusion

In sum, the three-factor, 24-item Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale can be considered distinct from the Unified Theory of Acceptance and Use of Technology Scale: the latter was developed to assess technology acceptance and use in business and banking, whereas the former is a major adaptation for assessing technology acceptance and use in informed PrCA screening decision making, a task that may have life-or-death consequences. The emotion and anxiety evoked by informed PrCA screening decision making, as well as the personal nature of the task, which may involve family members and healthcare providers, suggest our scale is conceptually unique to healthcare decision making. Although preliminary, psychometric evidence from this study suggests the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale has a conceptually distinct factor structure, good to excellent internal consistency reliability, and acceptable convergent and discriminant validity. PrCA is highly prevalent among African-American men, and PrCA knowledge is critical to making decisions about PrCA screening and the early identification of PrCA. Given high rates of PrCA mortality among African-American men and the growing development of culturally-tailored CBDAs to assist African-American men with healthcare decisions about PrCA, the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale can be influential in the evaluation of technology-based PrCA interventions. Most notably, our scale has robust psychometric properties for use among African-American men, who are not well represented in current studies on technology use and acceptance. Because technology is more accessible than ever and has been integrated into many aspects of daily life, the Computer-Based Prostate Cancer Screening Decision Aid Acceptance Scale shows promise for evaluating interventions that aim to increase PrCA knowledge and support informed PrCA screening decision making among African-American men.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CBDA: Computer-based decision aid

EFA: Exploratory factor analysis

PrCA: Prostate cancer

References

  1. Siegel RL, Miller KD, Jemal A. Cancer statistics, 2017. CA Cancer J Clin. 2017;67(1):7–30.

  2. Chornokur G, Dalton K, Borysova ME, Kumar NB. Disparities at presentation, diagnosis, treatment, and survival in African American men, affected by prostate cancer. Prostate. 2011;71(9):985–97.

  3. DeRouen MC, Schupp CW, Koo J, Yang J, Hertz A, Shariff-Marco S, Cockburn M, Nelson DO, Ingles SA, John EM, Gomez SL. Impact of individual and neighborhood factors on disparities in prostate cancer survival. Cancer Epidemiol. 2018;53:1–11.

  4. Wagner SE, Burch JB, Bottai M, Puett R, Porter D, Bolick-Aldrich S, Temples T, Wilkerson RC, Vena JE, Hébert JR. Groundwater uranium and cancer incidence in South Carolina. Cancer Causes Control. 2011;22(1):41–50.

  5. Hemminki K. Familial risk and familial survival in prostate cancer. World J Urol. 2012;30(2):143–8.

  6. Haiman CA, Chen GK, Blot WJ, Strom SS, Berndt SI, Kittles RA, Rybicki BA, Isaacs WB, Ingles SA, Stanford JL, Diver WR. Characterizing genetic risk at known prostate cancer susceptibility loci in African Americans. PLoS Genet. 2011;7(5):e1001387.

  7. Farrell J, Petrovics G, McLeod D, Srivastava S. Genetic and molecular differences in prostate carcinogenesis between African American and Caucasian American men. Int J Mol Sci. 2013;14(8):15510–31.

  8. Kim EH, Andriole GL. Prostate-specific antigen-based screening: controversy and guidelines. BMC Med. 2015;13(1):61.

  9. Smith RA, Andrews KS, Brooks D, Fedewa SA, Manassaram-Baptiste D, Saslow D, Brawley OW, Wender RC. Cancer screening in the United States, 2017: a review of current American Cancer Society guidelines and current issues in cancer screening. CA Cancer J Clin. 2017;67(2):100–21.

  10. Smith RA, Cokkinides V, von Eschenbach AC, Levin B, Cohen C, Runowicz CD, Sener S, Saslow D, Eyre HJ. American Cancer Society guidelines for the early detection of cancer. CA Cancer J Clin. 2002;52(1):8–22.

  11. Moyer VA. Screening for prostate cancer: US Preventive Services Task Force recommendation statement. Ann Intern Med. 2012;157(2):120–34.

  12. Smith RA, Cokkinides V, Brooks D, Saslow D, Brawley OW. Cancer screening in the United States, 2010: a review of current American Cancer Society guidelines and issues in cancer screening. CA Cancer J Clin. 2010;60(2):99–119.

  13. Carter HB, Albertsen PC, Barry MJ, Etzioni R, Freedland SJ, Greene KL, …, Penson DF. Early detection of prostate cancer: AUA guideline. J Urol. 2013;190(2):419–26.

  14. Bibbins-Domingo K, Grossman DC, Curry SJ. The US preventive services task force 2017 draft recommendation statement on screening for prostate cancer: an invitation to review and comment. JAMA. 2017;317(19):1949–50.

  15. Briss P, Rimer B, Reilley B, Coates RC, Lee NC, Mullen P, Corso P, Hutchinson AB, Hiatt R, Kerner J, George P. Promoting informed decisions about cancer screening in communities and healthcare systems. Am J Prev Med. 2004;26(1):67–80.

  16. Mullen PD, Allen JD, Glanz K, Fernandez ME, Bowen DJ, Pruitt SL, Glenn BA, Pignone M. Measures used in studies of informed decision making about cancer screening: a systematic review. Ann Behav Med. 2006;32(3):188–201.

  17. Cormier L, Kwan L, Reid K, Litwin MS. Knowledge and beliefs among brothers and sons of men with prostate cancer. Urology. 2002;59(6):895–900.

  18. Bandura A. Albert Bandura and social learning theory. Learning theories for early years practice. 2018;63.

  19. Sajid S, Kotwal AA, Dale W. Interventions to improve decision making and reduce racial and ethnic disparities in the management of prostate cancer: a systematic review. J Gen Intern Med. 2012;27(8):1068–78.

  20. Allen JD, Mohllajee AP, Shelton RC, Drake BF, Mars DR. A computer-tailored intervention to promote informed decision making for prostate cancer screening among African American men. Am J Mens Health. 2009;3(4):340–51.

  21. Sultan DH, Rivers BM, Osongo BO, Wilson DS, Schenck A, Carvajal R, Rivers D, Roetzheim R, Green BL. Affecting African American men’s prostate cancer screening decision-making through a mobile tablet-mediated intervention. J Health Care Poor Underserved. 2014;25(3):1262.

  22. Kassan EC, Williams RM, Kelly SP, Barry SA, Penek S, Fishman MB, Cole CA, Miller EM, Taylor KL. Men’s use of an internet-based decision aid for prostate cancer screening. J Health Commun. 2012;17(6):677–97.

  23. Anderson M. Technology device ownership, 2015. Pew Research Center; 2015.

  24. Anderson M. Digital divide persists even as lower-income Americans make gains in tech adoption. Pew Research Center; 2017.

  25. Marangunić N, Granić A. Technology acceptance model: a literature review from 1986 to 2013. Univers Access Inf Soc. 2015;14(1):81–95.

  26. Volk RJ, Spann SJ, Cass AR, Hawley ST. Patient education for informed decision making about prostate cancer screening: a randomized controlled trial with 1-year follow-up. Ann Fam Med. 2003;1(1):22–8.

  27. Burney SA, Ali SA, Ejaz A, Siddiqui FA. Discovering the correlation between technology acceptance model and usability. IJCSNS. 2017;17(11):53.

  28. Davis F. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989;13(3):319–40.

  29. Venkatesh V, Davis FD. A theoretical extension of the technology acceptance model: four longitudinal field studies. Manag Sci. 2000;46(2):186–204.

  30. Lee Y, Kozar KA, Larsen KR. The technology acceptance model: past, present, and future. CAIS. 2003;12(1):50.

  31. Holden RJ, Karsh BT. The technology acceptance model: its past and its future in health care. J Biomed Inform. 2010;43(1):159–72.

  32. Ketikidis P, Dimitrovski T, Lazuras L, Bath PA. Acceptance of health information technology in health professionals: an application of the revised technology acceptance model. Health Informatics J. 2012;18(2):124–34.

  33. Rogers EM. Diffusion of innovations. 4th ed. New York: Free Press; 1995.

  34. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q. 2003;27(3):425–78.

  35. Sundaravej T. Empirical validation of unified theory of acceptance and use of technology model. JGIM. 2010;13(1):5–27.

  36. Venkatesh V, Thong JY, Xu X. Unified theory of acceptance and use of technology: a synthesis and the road ahead. J Assoc Inf Syst. 2016;17(5):328–76.

  37. Williams MD, Rana NP, Dwivedi YK. The unified theory of acceptance and use of technology (UTAUT): a literature review. JEIM. 2015;28(3):443–88.

  38. Taiwo AA, Downe AG. The theory of user acceptance and use of technology (UTAUT): a meta-analytic review of empirical findings. J Theor Appl Inf Technol. 2013;49(1).

  39. Esmaeilzadeh P, Sambasivan M, Kumar N, Nezakati H. Adoption of clinical decision support systems in a developing country: antecedents and outcomes of physician's threat to perceived professional autonomy. Int J Med Inform. 2015;84(8):548–60.

  40. Maillet É, Mathieu L, Sicotte C. Modeling factors explaining the acceptance, actual use and satisfaction of nurses using an electronic patient record in acute care settings: an extension of the UTAUT. Int J Med Inform. 2015;84(1):36–47.

  41. Heselmans A, Aertgeerts B, Donceel P, Geens S, Van de Velde S, Ramaekers D. Family physicians’ perceptions and use of electronic clinical decision support during the first year of implementation. J Med Syst. 2012;36(6):3677–84.

  42. Chen R-F, Hsiao JL. An investigation on physicians’ acceptance of hospital information systems: a case study. Int J Med Inform. 2012;81(12):810–20.

  43. Chang I-C, Hsu H-M. Predicting medical staff intention to use an online reporting system with modified unified theory of acceptance and use of technology. Telemed J E Health. 2012;18(1):67–73.

  44. Or CKL, Karsh BT, Severtson DJ, Burke LJ, Brown RL, Brennan PF. Factors affecting home care patients' acceptance of a web-based interactive self-management technology. JAMIA. 2010;18(1):51–9.

  45. Rho MJ, Kim HS, Chung K, Choi IY. Factors influencing the acceptance of telemedicine for diabetes management. Clust Comput. 2015;18(1):321–31.

  46. Cimperman M, Makovec Brenčič M, Trkman P. Analyzing older users’ home telehealth services acceptance behavior—applying an extended UTAUT model. Int J Med Inform. 2016;90:22–31.

  47. Hennemann S, Beutel ME, Zwerenz R. Drivers and barriers to acceptance of web-based aftercare of patients in inpatient routine care: a cross-sectional survey. J Med Internet Res. 2016;18(12).

  48. Ebert DD, Berking M, Cuijpers P, Lehr D, Pörtner M, Baumeister H. Increasing the acceptance of internet-based mental health interventions in primary care patients with depressive symptoms. A randomized controlled trial. J Affect Disord. 2015;176:9–17.

  49. Baumeister H, Nowoczin L, Lin J, Seifferth H, Seufert J, Laubner K, Ebert DD. Impact of an acceptance facilitating intervention on diabetes patients’ acceptance of internet-based interventions for depression: a randomized controlled trial. Diabetes Res Clin Pract. 2014;105(1):30–9.

  50. Kim S, Lee K-H, Hwang H, Yoo S. Analysis of the factors influencing healthcare professionals’ adoption of mobile electronic medical record (EMR) using the unified theory of acceptance and use of technology (UTAUT) in a tertiary hospital. BMC Med Inform Decis Mak. 2016;16(1):12.

  51. Bozan K, Parker K, Davey B, editors. A closer look at the social influence construct in the UTAUT model: An institutional theory based approach to investigate health IT adoption patterns of the elderly. 2016 49th Hawaii International Conference on System Sciences (HICSS); 2016: IEEE.

  52. Senft N, Abrams J, Katz A, Barnes C, Charbonneau DH, Beebe-Dimmer JL, Zhang K, Eaton T, Heath E, Thompson HS. eHealth activity among African American and white Cancer survivors: a new application of theory. J Health Commun. 2019:1–6.

  53. Pham Q, Cafazzo JA, Feifer A. Adoption, acceptability, and effectiveness of a mobile health app for personalized prostate cancer survivorship care: protocol for a realist case study of the Ned app. JMIR Res Protoc. 2017;6(10):e197.

  54. Owens O, James C, Friedman D. Overcoming the challenges of African American recruitment in health sciences research: strategies and recommendations. Urol Nurs. 2017;37(6).

  55. Owens OL, Friedman DB, Brandt HM, Bernhardt JM, Hebert JR. An iterative process for developing and evaluating a digital prostate cancer decision aid for African-American men. Health Promot Pract. 2015;16(5):642–55.

  56. Brown TA. Confirmatory factor analysis for applied research. New York, NY: Guilford Press; 2006.

  57. Harrington D. Confirmatory factor analysis. New York: Oxford University Press; 2008.

  58. Kline RB. Principles and practice of structural equation modeling. 2nd ed. New York: Guilford Press; 2005.

  59. Costello AB, Osborne JW. Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Pract Assess Res Eval. 2005;10(7):1–9.

  60. Çokluk Ö, Koçak D. Using Horn's parallel analysis method in exploratory factor analysis for determining the number of factors. Educational Sciences: Theory and Practice. 2016;16(2):537–51.

  61. Hayton JC, Allen DG, Scarpello V. Factor retention decisions in exploratory factor analysis: a tutorial on parallel analysis. Organ Res Methods. 2004;7(2):191–205.

  62. Shultz KS, Whitney DJ, Zickar MJ. Measurement theory in action: case studies and exercises. New York: Routledge; 2013.

  63. Schermelleh-Engel K, Moosbrugger H, Müller H. Evaluating the fit of structural equation models: tests of significance and descriptive goodness-of-fit measures. Methods Psychol Res Online. 2003;8(2):23–74.

  64. Williams B, Onsman A, Brown T. Exploratory factor analysis: a five-step guide for novices. Australasian Journal of Paramedicine. 2010;8(3).

  65. SAS Institute. Base SAS 9.4 Procedures Guide: SAS Institute; 2015.

  66. Barnes SJ. Strength of religious faith, trusting beliefs and their role in technology acceptance. Int J Innov Organ Learn. 2009;6(1):110.

  67. Baazeem RM. The role of religiosity in technology acceptance: the case of privacy in Saudi Arabia. In: Censorship, surveillance, and privacy: concepts, methodologies, tools, and applications. Hershey: IGI Global; 2019. p. 1787–808.

  68. Morgado FF, Meireles JF, Neves CM, Amaral AC, Ferreira ME. Scale development: ten main limitations and recommendations to improve future research practices. Psicol-Reflex Crít. 2018;30(1):3.

  69. Czaja SJ, Charness N, Fisk AD, Hertzog C, Nair SN, Rogers WA, Sharit J. Factors predicting the use of technology: findings from the Center for Research and Education on aging and technology enhancement (CREATE). Psychol Aging. 2006;21(2):333.

  70. Mitzner TL, Rogers WA, Fisk AD, Boot WR, Charness N, Czaja SJ, Sharit J. Predicting older adults’ perceptions about a computer system designed for seniors. Univers Access Inf Soc. 2016;15(2):271–80.

Acknowledgements

Not Applicable.

Funding

This study was funded by the University of South Carolina’s Office of the Vice President for Research. Funding was also received from the University of South Carolina’s School of Pharmacy through an American Cancer Society Institutional Research Grant. Both funding mechanisms supported intervention development, study design, and the collection, analysis, and interpretation of data.

Author information

Authors and Affiliations

Authors

Contributions

We certify that author OO was involved in the conceptualization and design of the study, guided the collection of the data, co-managed the data, was involved in the analysis, led the reporting of the data, and led the drafting of the manuscript. We certify that author NW was involved in the conceptualization and design of the study, was involved in the analysis, and assisted with the reporting of the data and the drafting of the manuscript. We certify that author AT was involved in the conceptualization and design of the study, led data management and analysis, and assisted with the reporting of the data and drafting of the manuscript. NW and AT made substantial contributions to the conception and design of this study; the collection, management, analysis, and reporting of the data; the drafting and revision of this publication; and the final approval of the version submitted. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Otis L. Owens.

Ethics declarations

Ethics approval and consent to participate

We certify that this research was performed in accordance with the Declaration of Helsinki and was approved by the Institutional Review Board at the University of South Carolina (Pro00045407). All participants were provided with a consent form, which was explained in person to each participant before their signature was requested, confirming that all inclusions in the consent form were thoroughly understood.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Owens, O.L., Wooten, N.R. & Tavakoli, A.S. Development and initial psychometric evaluation of the computer-based prostate cancer screening decision aid acceptance scale for African-American men. BMC Med Res Methodol 19, 146 (2019). https://doi.org/10.1186/s12874-019-0776-y
