Our findings provide preliminary evidence of the effect of interview mode on responses to indicators of cancer screening behaviours among middle-aged and older heterosexual and sexual minority women. These findings add to the body of research on methods for identifying the subgroups of the population at greatest risk of not receiving recommended cancer screenings. Women were randomly assigned to one of three data collection methods: computer-assisted telephone interview (CATI), self-administered mailed questionnaire (SAMQ) and computer-assisted self-interview (CASI). Women assigned to CASI could choose to complete the assessment during an in-person CASI (CASI-I) or by receiving the questionnaire on disk (CASI-D).
We examined the effects of randomized interview mode on responses to items associated with mammography and Pap test screening. Overall, we found few meaningful differences by mode of data collection for indicators of cancer screening. Surprisingly, among the few significant mode differences, we found that women who were interviewed by research staff (CATI) were more likely than those not interviewed (CASI, SAMQ) to have an unfavourable status on the indicators. Women in the CATI mode were more likely to report being off-schedule for recent Pap testing than women in CASI, and the trend was similar, but non-significant, for mammography. The other significant findings associated with cancer screening behaviours were between CASI conditions. Because we did not randomize women into the different computer-assisted methods, we cannot rule out selection bias as a threat to the validity of the findings. Furthermore, given the lower response rate in the CASI condition (Figure 1), apparently higher rates of recent screening among women in CASI may be because those who completed the assessment were also those most knowledgeable about cancer screening recommendations. Therefore, we cannot conclude that any mode of data collection has a consistent effect on rates of reporting screening behaviours.
There are several potential reasons why we did not find consistent mode differences in our sample. First, items about cancer screenings may not be considered sensitive or associated with social rejection, since questions about mammography and Pap testing are routinely asked of women 40–75 years of age in clinical settings. Second, many of the studies that showed differences between CASI and other modes of data collection were conducted and published in the 1990s [11, 14–16, 36, 37]. At that time, CASI was a novel interview mode. The increased access to, and use of, computers since then may explain why we did not find more significant differences between CASI and the other data collection modes. Finally, because of the relatively high percentages of women reporting mammography and Pap testing at recommended intervals (more than 80% for both behaviours), we may not have had sufficient power to detect statistically significant differences. With a sample size of 364 for comparisons between CATI and CASI, we had statistical power of only 0.78 to detect differences in means of 0.10 or higher with a standard deviation of 0.35. Similarly, we had statistical power of only 0.80 to detect comparable differences in means of 0.10 or higher between CATI and SAMQ with a sample size of 387. However, because the percentages of endorsement were remarkably consistent across modes for several items, it is unlikely that larger sample sizes would change the conclusions substantially.
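As an illustrative check rather than a reproduction of our original power analysis, these figures are consistent with the standard normal-approximation formula for a two-sample comparison of means, assuming a two-sided α of 0.05 and approximately equal allocation of the total sample across the two arms being compared:

\[
\text{Power} \approx \Phi\!\left(\frac{\Delta}{\sigma\sqrt{2/n}} - z_{1-\alpha/2}\right),
\]

where Δ = 0.10 is the detectable difference, σ = 0.35 the assumed standard deviation, n the number of women per arm and z_{1−α/2} = 1.96. For CATI versus CASI (n ≈ 182 per arm) this gives Φ(2.73 − 1.96) ≈ 0.78, and for CATI versus SAMQ (n ≈ 194 per arm) it gives Φ(2.81 − 1.96) ≈ 0.80.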
Another study objective was to determine whether the effects of interview mode differed by partner gender. We found only two significant mode differences for items related to self-reported mammography and Pap test screening by partner gender. Several factors may explain the lack of additional significant findings. First, Rhode Island is one of only a few states in the United States with non-discriminatory policies towards sexual minorities. Within this political and social context, women in Rhode Island may therefore be more willing than women in other parts of the country to disclose potentially unfavourable information. Second, all women interested in study participation were required to answer screening questions about marital status and partner gender prior to study enrollment. Asking these screening items provided women with examples of the types of questions that would be asked in the study. Women who considered these items too personal may have declined study participation. Finally, sample size, particularly for sexual minority women, may have limited our ability to detect important mode differences.
In the CASI condition, when given the choice of completing the assessment on a laptop provided by the research team or on a disk mailed to their home, WPW were significantly more likely than WPM/NPP to select the mailed computer disk. WPW were also more likely than WPM/NPP to have a college education, be employed full- or part-time, have higher incomes and identify as White. Therefore, it is likely that WPW had greater access to, and experience with, computers than WPM/NPP and were able to complete the assessment without the assistance of a research assistant and a laptop computer. Given our findings, we encourage future studies to further explore women's preferences for data collection methods and whether mode of data collection influences the responses of middle-aged and older sexual minorities.
Our findings also provide information about the feasibility of different methods for collecting data from a traditionally under-represented group of women. Of the 630 women who were eligible and enrolled in the study, 95% agreed to be randomized to one of three modes of data collection. Not surprisingly, women who were more likely to have access to a computer (e.g., more educated, employed, White) chose CASI-D. Women who refused randomization (self-choice) were more likely to have less than a college degree, to identify as Hispanic, and to choose SAMQ. Despite the informed consent process, women in the self-choice option may not have completely understood the concept of randomization and may have been concerned about the implications of agreeing to it. They may have chosen the mode that was most familiar to them, offered the most perceived anonymity and provided the greatest flexibility in completing the assessment (e.g., timing and the availability of assistance in understanding questions).
We obtained an overall response rate of 93%. This response rate is higher than those of most other studies, particularly those using SAMQs, and is a strength of our study because of the low potential for non-response bias. The high response rate is likely a result of the initial contact we had with women during recruitment and screening for eligibility. Unfortunately, we do not have data to indicate what response rates might be expected in comparable populations when similar pre-survey contact with participants is not employed.
Despite the high overall response rate, we found noteworthy differences in response rates by mode. The response rates for CATI and SAMQ were over 95%, whereas the rate for CASI was only 86%. The lower response rate by computer was not unexpected, given other mode experiments [19] and the age of the participants. Some women with less experience using computers, despite initially agreeing to participate, likely worried about their ability to use the software correctly or feared unknown potential consequences of responding to a computer program. Additionally, women may have had technical difficulties with the computer of which we were unaware, because they indicated that they were no longer interested in study participation rather than acknowledging problems with the software.
We also found that more contact attempts with participants were required for CASI than for SAMQ and CATI to achieve comparable response rates (Figure 2). Furthermore, the estimated costs per randomized participant were approximately $60 for CASI compared with $30 for SAMQ and $20 for CATI. Within the CASI condition, the cost per participant was about $115 for CASI-I and $20 for CASI-D. Had we used Internet-based data collection, the costs associated with CASI would have been substantially lower. However, the sample would have been biased towards women of higher socioeconomic position who had access to a computer. Women in our sample who chose in-person CASI were more likely than those who chose to complete the questionnaire on a mailed disk to identify as a racial minority, to be less educated and not to be employed.
In addition to sample size, there are a number of other study limitations. First, to include sufficient numbers of sexual minorities, we used non-probability-based sampling methods. Our sample was highly educated, predominantly White and employed, with relatively high incomes. Unfortunately, because sexual orientation is not asked of all individuals in the Census or on any large state-wide population-based survey, we do not have data with which to compare our sample to the eligible Rhode Island population. Therefore, care should be taken when generalizing our findings. We also did not use methods to verify self-reported data and cannot confirm whether there was substantial over- or under-reporting where differences were observed across modes. Finally, we cannot discern which mode provided the most accurate estimates of true behaviour, nor can we distinguish the extent to which differences across modes reflect differences in the accuracy of reports as opposed to mode artefacts. However, given the few statistically significant differences, the incidence of mode artefacts appears to be low.