
The impact of self-interviews on response patterns for sensitive topics: a randomized trial of electronic delivery methods for a sexual behaviour questionnaire in rural South Africa

Abstract

Background

Self-interviews, where the respondent rather than the interviewer enters answers to questions, have been proposed as a way to reduce the social desirability bias associated with interviewer-led interviews. Computer-assisted self-interviews (CASI) are commonly proposed since the computer programme can guide respondents; however, they require both language and computer literacy. We evaluated the feasibility and acceptability of using electronic methods to administer quantitative sexual behaviour questionnaires in the Somkhele demographic surveillance area (DSA) in rural KwaZulu-Natal, South Africa.

Methods

We conducted a four-arm randomized trial of paper-and-pen interview, computer-assisted personal interview (CAPI), CASI and audio-CASI with an age-sex-urbanicity stratified sample of 504 adults resident in the DSA in 2015. We compared respondents’ answers with their responses to the same questions in previous surveillance rounds. We also conducted 48 cognitive interviews, dual-coding responses using the Framework approach.

Results

Three hundred forty (67%) individuals were interviewed and covariates and participation rates were balanced across arms. CASI and audio-CASI were significantly slower than interviewer-led interviews. Item non-response rates were higher in self-interview arms. In single-paper meta-analysis, self-interviewed individuals reported more socially undesirable sexual behaviours. Cognitive interviews found high acceptance of both self-interviews and the use of electronic methods, with some concerns that self-interview methods required more participant effort and literacy.

Conclusions

Electronic data collection methods, including self-interview methods, proved feasible and acceptable for completing quantitative sexual behaviour questionnaires in a poor, rural South African setting. However, each method had both benefits and costs, and the choice of method should be based on context-specific criteria.


Background

There has long been concern that the measurement of sexual behaviour is fraught with potential biases [1, 2]. In cross-sectional studies, there is a high likelihood that individuals will be affected by a desire to provide socially desirable responses. This social desirability bias may lead to over-reporting (e.g. of partner numbers by men) or under-reporting (e.g. of partner numbers by women) [3]. Additionally, recall of past behaviour is likely to suffer from unintentional errors, which are at best random and at worst also affected by social desirability.

Longitudinally, there are additional concerns, all of which apply both to research on sexual behaviour and to other outcomes. First, individuals may learn how to respond in order to minimize response burden, e.g. reporting fewer partners when each partner triggers a follow-up set of questions [4, 5]. Second, socially desirable responses may change over calendar time (e.g. after a publicity campaign promoting condom use, reported condom use levels may rise) or based on lifecourse stage (e.g. increasing self-reported age at first sex by the same individuals over time [6]). Third, the composition of open cohorts may change over time, including in ways associated with behaviour (e.g. loss to HIV-related mortality). Such changes may mean that apparent trends reflect a combination of intra- and inter-respondent behaviour change [7]. These longitudinal effects may obscure actual changes in sexual behaviour over time, limiting the power of cohort data in inferring programme impact on actual behaviour.

Some of these potential biases may be tempered by using self-interview techniques. In self-interviews, instead of the interviewer asking questions and writing down responses, the respondent completes the form. A common format for self-interviews is the computer-assisted self-interview (CASI), where a computer programme leads the respondent through the questionnaire. This can be coupled with headphones to allow for audio-computer-assisted self-interviews (ACASI), which is particularly helpful in lower-literacy populations [8]. Any form of CASI, however, requires form literacy, i.e. the ability to navigate the questionnaire [9]. When computer-based, this includes computer literacy; when paper-based, respondents need to be able to interpret and follow skip patterns and other instructions.

A number of past studies have compared self-interview to face-to-face techniques. These include comprehensive reviews of sexual behaviour in low and middle-income countries [10, 11] and worldwide [12]. A further worldwide meta-analysis compared paper- and computer-based self-completed interviews [13]. No one method appears to be universally best, although on average sensitive behaviours appear to be reported more often during self-interviews, at least when first introduced [14]. Self-interview methods sometimes improve the rate of reporting of socially undesirable behaviours (e.g. number of sexual partners, forced sex [11, 15]) and decrease item non-response rates [16]. However, they also increase the level of internally inconsistent responses [17].

Interviewer-led interviews can be affected by interviewer-related variability in response [18]. However, the required interaction with the interviewer sometimes leads to increased willingness to reveal highly sensitive answers [19], and may be particularly useful for complex or ambiguous questions (e.g. concurrency). Qualitative evidence suggests that respondents are more willing to accurately report sensitive topics in self-interviews [10], and respondents report that self-interview methods are preferable for sexual matters [20, 21]. Nevertheless, recent experiments using biomarkers found little difference in validity between face-to-face and self-interview arms [22, 23].

In the context of a long-running, paper-based longitudinal surveillance programme in rural South Africa, consideration has been given to how to improve questionnaire delivery. We therefore conducted a randomized trial with mixed methods evaluation of the feasibility and acceptability of using electronic methods to administer sexual behaviour questionnaires. We measured overall and item non-response rates, time taken to conduct the interviews and how the new methods were viewed by respondents and field staff.

Methods

This electronic delivery methods study (“EDM”) compared four methods for delivering a questionnaire on sexual behaviour to participants. A research interview can be considered any interaction between an interviewer and a respondent in which questions are asked with the aim of eliciting information. Such interviews may use closed-ended questions in a questionnaire format to capture structured information. Often such questions require responses that fit into one of a number of pre-determined categories (e.g. “have you ever had sex?”) or are numeric (e.g. “how many sexual partners have you had in your lifetime?”). Alternatively, they may require short responses (e.g. “which town did you grow up in?”). Interviews can also use open-ended questions intended to elicit less structured responses (e.g. “how does going to church make you feel?”). Such open-ended questions can be pre-scripted, or can arise spontaneously as follow-up questions during the interview. The EDM interview consisted of a structured, largely quantitative questionnaire with open-ended “cognitive interview” questions embedded between sections. The cognitive interview questions were intended to help us better understand responses to the closed-ended questions. The interview was conducted on a single occasion at the home of the respondent.

The four methods we used to conduct our structured, quantitative questionnaire were: (1) Paper and pen interview (PAPI): the interviewer asks the questions and writes responses onto a paper form. (2) Computer-assisted personal interview (CAPI): the interviewer asks the questions and enters the responses into a portable tablet computer. (3) Computer-assisted self-interview (CASI): the respondent reads questions on the tablet and enters the responses themselves. (4) Audio Computer-assisted self-interview (ACASI): the respondent reads or listens using headphones to the questions on the tablet and enters the responses themselves. These were grouped into interviewer-led arms (PAPI and CAPI) and respondent-led arms (CASI and ACASI).

The study was conducted in August and September 2015 in the Somkhele demographic surveillance area (DSA) of the Africa Health Research Institute (AHRI). The DSA is a ~435 km² area in the uMkhanyakude district of KwaZulu-Natal province. The DSA has been under semi- or tri-annual household demographic surveillance since 2000, including annual individual health questionnaires since 2003 [24]. This health surveillance questionnaire consists of closed-ended or very short text quantitative questions, and contains sections on general health (chronic conditions, healthcare utilization), sexual health (marital status, contraception, paternity/maternity, circumcision) and sexual behaviour, including partner-specific behaviour covering up to three partners from the past 12 months [25]. The DSA contains one urban area (KwaMsane) but is otherwise rural. There are ~11,000 households in the DSA, and any resident household member aged 15 and over who can consent is eligible for the health questionnaire. All surveillance questionnaires are conducted as PAPI.

At the beginning of 2015, 36,336 individuals were listed as potentially eligible for health surveillance in that year. Of these, 10.9% had died, migrated or had their household dissolved prior to interview, and were considered no longer eligible by the time they were approached between February and April 2015. Of those still eligible, a further 7.2% were not contactable. Of those contacted, 5.4% were unable to provide informed consent, and a further 1.2% were too sick to participate. Of those contacted and capable of consent, 54.8% consented to be interviewed. Of those who consented, 49.6% (27.2% of all eligible individuals) answered any of the sexual behaviour questions. Literacy rates in this area are high; in 2014, 77.9% of residents aged 18–49 had attended secondary school and 45% had reached the final year of secondary school.

Study design

For the quantitative questionnaire, we drew a random stratified sample of 504 individuals aged 18 and over who were eligible for health surveillance questionnaires in the first 14 weeks of surveillance in 2015, i.e. were resident members of a DSA household at the previous household surveillance visit (conducted between August and December 2014). We expected to interview 75% of sampled individuals (allowing for migration and non-consent). We therefore expected this sample size to provide 80% power to detect a rise in the proportion of individuals reporting more than one partner in the past 12 months from 3% in 2013 health surveillance to the national level of 12.5% [26], when comparing respondent- and interviewer-led techniques.
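
For illustration, the sketch below shows how a two-proportion power calculation of this kind can be run in Python with statsmodels; the even split of expected respondents into two comparison groups is our assumption for illustration, and the result need not match the authors’ 80% figure, which may rest on more conservative assumptions.

```python
# Sketch: power to detect a rise from 3% to 12.5% in the proportion
# reporting >1 partner in the past year, comparing two pooled groups
# of interview arms. Group sizes are illustrative: 504 sampled * 75%
# expected response, split evenly between interviewer- and
# respondent-led arms.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

n_per_group = int(504 * 0.75 / 2)                 # ~189 per comparison group
effect_size = proportion_effectsize(0.125, 0.03)  # Cohen's h

power = NormalIndPower().solve_power(
    effect_size=effect_size, nobs1=n_per_group, alpha=0.05, ratio=1.0
)
print(f"n per group: {n_per_group}, power: {power:.2f}")
```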

The sample consisted of equal numbers from four of the 23 izigodi (traditional Zulu community areas; singular isigodi) within the DSA: one urban, one peri-urban and two rural locations. Within each isigodi we further stratified the sample into six equal sets of 21 by gender and three age categories: 18–29, 30–49 and over 50. We made two attempts to contact each selected individual at their place of residence. In line with existing DSA procedures, the reasons for no longer being eligible were: (i) death; (ii) dissolution of the household; (iii) out-migration from the household. All those contacted were interviewed unless they were incapable of providing informed consent or declined to be interviewed.
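
A minimal sketch of this stratified draw (4 izigodi × 2 genders × 3 age groups × 21 people = 504) is shown below; the eligible-resident roster here is simulated, whereas in the study it came from the DSA surveillance database.

```python
# Sketch: drawing the age-sex-urbanicity stratified sample of 504
# from a simulated roster of eligible residents in four izigodi.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2015)
n_eligible = 36336  # illustrative roster size only
roster = pd.DataFrame({
    "person_id": np.arange(n_eligible),
    "isigodi": rng.choice(["urban", "peri-urban", "rural_1", "rural_2"],
                          n_eligible),
    "gender": rng.choice(["female", "male"], n_eligible),
    "age_group": rng.choice(["18-29", "30-49", "50+"], n_eligible),
})

# 4 izigodi x 2 genders x 3 age groups = 24 strata of 21 people each.
sample = (roster
          .groupby(["isigodi", "gender", "age_group"])
          .sample(n=21, random_state=7))
print(len(sample))  # 504
```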

The EDM questionnaire contained seven sections. Many of the questions we used were the same as those asked in annual surveillance questionnaires, but we also included new questions that we expected to be particularly sensitive to answer in this setting. We endeavoured to keep our question wording as close as possible to that used in annual surveillance questionnaires, although we did retranslate the text for this study. The first section, on marital status, was asked by the interviewer in all trial arms. Three sections were gender-specific: pregnancy and contraception (women only); paternity (men only); circumcision (men only). These first four sections contained exactly the same questions as the surveillance questionnaire.

Section five covered general sexual history, including numbers of partners and use of condoms. This section contained all surveillance questionnaire questions, with additional new questions on numbers of sexual acts and regularity of condom use in the past 4 weeks. Section six asked about partner-specific sexual history for up to the three most-recent partners within the past 12 months. The final section asked about lifetime involvement in high-risk sexual behaviours, i.e. exchange sex, anal sex, same-sex involvement and forced sex. All of these questions were new. In this analysis we focus on the last three sections of the questionnaire covering sexual behaviour (general sexual history, partner-specific sexual history and high-risk behaviours), since these are the sections most likely to be affected by social desirability bias and non-response [27].

After the interviewer-led marital status section, individuals allocated to self-interview arms (CASI or ACASI) were provided with an additional brief section introducing them to the tablet software. This training section included examples of different question types (e.g. numeric, multiple-choice, text entry) using non-sensitive, non-health questions. Respondents were informed that this was a training section and that their responses in it would not be analysed. All arms required all questions to be answered before progressing, and all questions included a “Prefer not to answer” option, although this was not explicitly presented to respondents in the interviewer-led arms. The questionnaire was programmed in OpenDataKit [28], free open-source software, and all commands and questions were translated into isiZulu, with the translations piloted within the study team. While every respondent was allocated to a specific study arm, those in self-interview arms were offered the opportunity to complete the questionnaire as a CAPI if they preferred, and were also told they could ask for assistance from the interviewer at any time; the level of assistance provided was recorded at the end of the interview.
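
As an illustration of this design, the sketch below lays out, in Python, XLSForm-style rows of the kind commonly used to author OpenDataKit questionnaires, with every question required but carrying an explicit “Prefer not to answer” choice; all question names and labels here are hypothetical, not the study’s actual instrument.

```python
# Illustrative XLSForm-style rows for an OpenDataKit questionnaire:
# every question is required, but each choice list includes an explicit
# "Prefer not to answer" option. Names and labels are hypothetical.
survey = [
    {"type": "select_one yes_no_pnta", "name": "ever_had_sex",
     "label": "Have you ever had sex?", "required": "yes"},
    {"type": "select_one yes_no_pnta", "name": "condom_last_sex",
     "label": "Did you use a condom the last time you had sex?",
     "required": "yes"},
]

choices = [
    {"list_name": "yes_no_pnta", "name": "1", "label": "Yes"},
    {"list_name": "yes_no_pnta", "name": "0", "label": "No"},
    {"list_name": "yes_no_pnta", "name": "-99",
     "label": "Prefer not to answer"},
]

for row in survey:
    print(row["name"], "->", row["label"])
```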

Within each study arm we randomly selected 12 individuals to be invited to participate in a cognitive interview [29, 30]. Cognitive interviewing is a qualitative method for helping to identify potential sources of error in questionnaire responses. The method focuses explicitly on understanding the cognitive processes used by respondents in answering research questions, across four stages. First, question comprehension: what does the respondent believe the question is asking? Second, retrieval of relevant information: what types of information does the respondent need to recall, and what strategies do they use to answer the question? Third, decision process: does the respondent want to tell the truth, and how much mental effort is dedicated to answering the question accurately? Fourth, response process: can the respondent match their internally generated answer to the question categories? Questions were open-ended and we used the verbal probing approach, based on initial scripted probes followed by spontaneous follow-up probes to unpack responses. The approach has been used previously in sexual behaviour questionnaire development [31, 32].

After each of the seven sections of the questionnaire, we used both broad and question-specific cognitive interview probes. We additionally asked a set of overarching questions about the interview process after all quantitative data collection was complete, in order to understand the overall acceptability of using electronic data collection methods, both in and of themselves and relative to past paper-based approaches. These cognitive interviews were transcribed and translated into English. We continued to invite allocated individuals to participate in cognitive interviews until the qualitative interviewers, in discussion with the qualitative coordinator, agreed that saturation had been reached.

After completing all data collection for the trial, we conducted a group discussion with all six interviewers to gather information on the lessons they had learned from the study. Specifically, we asked about interactions with the local community, which questions respondents found problematic and about the experiences of fieldworkers and respondents in using electronic tablets for data collection.

Analytic design

We first describe rates of contact and consent by arm, as well as interview duration. Our primary quantitative outcomes of interest are rates of: (i) overall response; (ii) item response for sexual behaviour questions; (iii) affirmative responses to sexual behaviour questions. Our primary comparison was an intention-to-treat (ITT) analysis between interviewer- and respondent-led arms (to protect against non-random switching from self-interview arms to CAPI); as a secondary analysis we conducted an as-treated (AT) analysis. Differences were examined using χ² tests for binary outcomes and Kruskal-Wallis tests for continuous and ordinal outcomes. We present effect size estimates using \( \phi = \sqrt{\chi^2/N} \) for χ² tests and \( r = Z/\sqrt{N} \) for Wilcoxon rank-sum tests. Both measures estimate the correlation between study arm and the response variable of interest; their squares give the proportion of variance explained.
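
A minimal sketch (with made-up data) of how these effect sizes can be computed is below; note that for a 2×2 table χ²(1) = Z², so φ = √(χ²/N) coincides with |Z|/√N.

```python
# Sketch with made-up data: phi for a chi-squared test on a 2x2 table,
# and r for a Wilcoxon rank-sum (two-sample) comparison.
import numpy as np
from scipy.stats import chi2_contingency, ranksums

# Binary outcome by arm (rows: arm; cols: yes/no) -- illustrative counts.
table = np.array([[20, 150], [35, 135]])
chi2, p, dof, _ = chi2_contingency(table, correction=False)
phi = np.sqrt(chi2 / table.sum())

# Continuous outcome (e.g. interview duration, minutes) -- illustrative.
rng = np.random.default_rng(1)
capi = rng.gamma(4, 2.0, size=100)
casi = rng.gamma(4, 3.5, size=100)
z, p_rank = ranksums(capi, casi)      # ranksums returns a z-statistic
r = abs(z) / np.sqrt(len(capi) + len(casi))

print(f"phi = {phi:.2f}, r = {r:.2f}")
```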

To summarize our findings we also conducted a single-paper meta-analysis (SPM) of non-response by arm for the 24 sexual behaviour questions, and of affirmative proportions for all 15 binary outcome questions. We used a restricted maximum-likelihood estimator in a random-effects model to estimate the mean difference in proportions of either item non-response or affirmative response, comparing interviewer- and respondent-led arms. We further estimated between-question heterogeneity in effects across study arms using I², the percentage of observed variance due to variance in true effect sizes rather than chance [33]. Our working hypothesis was that respondent-led arms would show the greatest increase in response rates for the most sensitive questions. A priori, we expected these to include questions about partner numbers, concurrent relationships, explicitly exchanging sex for goods or money, having anal sex (highly stigmatized in South Africa [34]), same-sex attraction and forced sex. We therefore ran a third SPM for just the seven binary outcomes for highly sensitive questions.
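
The sketch below illustrates the random-effects pooling and I² computation on made-up per-question proportions, using the DerSimonian–Laird between-question variance estimator as a simple stand-in for the restricted maximum-likelihood estimator used in the analysis.

```python
# Sketch: random-effects meta-analysis of per-question differences in
# proportions (self-interview minus interviewer-led), with I^2.
# DerSimonian-Laird tau^2 is used as a simple stand-in for REML.
import numpy as np

def random_effects_pool(d, v):
    """d: per-question differences; v: their sampling variances."""
    w = 1.0 / v
    fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - fixed) ** 2)             # Cochran's Q
    df = len(d) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-question variance
    w_star = 1.0 / (v + tau2)
    mu = np.sum(w_star * d) / np.sum(w_star)     # pooled mean difference
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = 0.0 if q == 0 else max(0.0, (q - df) / q) * 100  # heterogeneity, %
    return mu, (mu - 1.96 * se, mu + 1.96 * se), i2

# Illustrative inputs: proportions answering "yes" by arm, n per arm ~170.
p_self = np.array([0.08, 0.05, 0.10, 0.04, 0.07, 0.03, 0.06])
p_int = np.array([0.05, 0.04, 0.06, 0.02, 0.05, 0.03, 0.04])
n = 170
v = p_self * (1 - p_self) / n + p_int * (1 - p_int) / n
mu, ci, i2 = random_effects_pool(p_self - p_int, v)
print(f"pooled diff = {mu:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), "
      f"I^2 = {i2:.0f}%")
```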

In addition to conducting cross-sectional analysis, we also compared individuals’ responses in this trial to their most-recent responses in a surveillance questionnaire. This supplementary analysis aimed to evaluate to what extent results seen in the EDM trial reflected the trial environment itself: i.e. if those in the interviewer-led arms responded differently in the EDM versus in surveillance.

Finally, we assessed the acceptability and feasibility of answering questions relating to sexual health, and the benefits or drawbacks of using electronic delivery methods, using data from the cognitive interviews. We used the Framework approach to derive a case-and-theme structure from the cognitive interview data [35], and focused on key prompts relating to each sexual behaviour section and to the overall questionnaire, including respondents’ experiences of the EDM study relative to past annual surveillance (prompts listed in Additional file 1: Content S1). Initial coding was conducted by GH and DM, who separately coded selected transcripts and then compared them to ensure consistent codes were used.

Results

The flow of the 504 sampled potential respondents through the trial is shown in Fig. 1. Eighty-four (16.7%) sampled individuals were no longer in the DSA, and thus no longer eligible, and a further 55 (10.9%) could not be contacted within the study period. Amongst the 365 individuals contacted, 15 (3.0%) were unable to provide informed consent and 10 more (2.0%) declined to participate. Each arm was balanced by design on gender, age and location, and there were no statistical differences in the number of individuals being contacted or consenting to participate by arm (Table 1). Older and non-urban individuals were significantly more likely to be contacted, but there were no differences in willingness to participate once contacted.

Fig. 1 Sankey diagram of study outcomes for sampled individuals. Data underlying this figure are shown in Additional file 1: Table S1

Table 1 Respondent characteristics by response status and intention-to-treat arm

Of the 166 consenting respondents assigned to the CASI or ACASI arms, 29 (17%) did not complete the questionnaire as a self-interview (24/86 CASI [27.9%] vs 5/80 ACASI [6.3%]; \( {\chi}_1^2=13.5 \), p < 0.001). The most common reasons given for requesting CAPI rather than a self-interview were: inability to read or write (n = 15); eyesight problems (n = 9); and dislike of computers (n = 2). The proportion of individuals who declined self-interviews rose with age, from 2.1% amongst 18–29 year olds to 19.2% amongst 30–49 year olds to 27.3% amongst those aged over 50 (Cuzick non-parametric trend test: Z = −3.4, p = 0.001), but did not differ significantly by gender (female: 19.1%; male: 15.6%; \( {\chi}_1^2=0.4 \), p = 0.55).
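
For readers unfamiliar with the Cuzick test, the sketch below implements its rank-based trend statistic directly, without tie correction (which makes it somewhat conservative on heavily tied binary data); the group sizes are hypothetical, chosen only to produce decline rates similar to those reported above.

```python
# Sketch: Cuzick's nonparametric test for trend across ordered groups.
# Outcome: 1 if the respondent declined self-interview, else 0.
import numpy as np
from scipy.stats import rankdata, norm

def cuzick_trend(samples):
    """samples: list of 1-D arrays, one per ordered group."""
    scores = np.concatenate([np.full(len(s), j + 1)
                             for j, s in enumerate(samples)])
    values = np.concatenate(samples)
    ranks = rankdata(values)                     # midranks for ties
    n = len(values)
    t = np.sum(scores * ranks)
    l = np.sum(scores)                           # sum of group scores
    e_t = 0.5 * (n + 1) * l
    var_t = (n + 1) / 12.0 * (n * np.sum(scores ** 2) - l ** 2)
    z = (t - e_t) / np.sqrt(var_t)
    return z, 2 * norm.sf(abs(z))

# Hypothetical counts of decliners per age group (sizes illustrative).
g1 = np.array([1] * 1 + [0] * 47)    # 18-29
g2 = np.array([1] * 10 + [0] * 42)   # 30-49
g3 = np.array([1] * 15 + [0] * 40)   # 50+
z, p = cuzick_trend([g1, g2, g3])
print(f"z = {z:.2f}, p = {p:.3f}")
```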

Across the three computer-based arms (CAPI, CASI, ACASI), interview duration varied systematically by arm among the 224 non-cognitive-interview respondents (Fig. 2). Under ITT, median duration was 8.3 min (interquartile range (IQR) 5.4–11.7) in the CAPI arm, 13.7 min (IQR 13.7–20.1) in the CASI arm and 19.9 min (IQR 14.6–30.9) in the ACASI arm; all distributions were right-skewed (skewness: 6.9; 5.0; 7.5). These differences were significant and moderately sized using a Wilcoxon rank-sum test: CAPI vs. CASI, Z = 4.9, p < 0.001, r = 0.40; CASI vs. ACASI, Z = 4.1, p < 0.001, r = 0.33. AT analysis results were qualitatively similar, although the 26 individuals opting out of self-interview arms and into CAPI took a median of 12.4 min (IQR 12.4–19.2), significantly longer than those who did not opt out (Wilcoxon Z = 2.6, p = 0.009, r = 0.21). There were no significant differences, either overall or within study arms, by age group or respondent gender.

Fig. 2 Interview duration for tablet computer study arms. N = 219. All durations were measured as interview end time minus start time, so no data are presented for the paper-and-pen interview (PAPI) arm. Not shown: five individuals with a reported interview length of greater than 60 min (CAPI: 271 min; CASI: 157 min; ACASI: 63, 94 and 357 min; the 357-min interview was opted out to CAPI), and all 20 individuals completing cognitive interviews, on the understanding that these interviews had been interrupted

Item non-response rates were generally higher in self-interview arms (Tables 2, 3, 4). In meta-analysis, self-interview respondents were significantly more likely to decline to respond to questions (Additional file 1: Figure S2). The mean percentage of respondents declining to answer was 4.4% in the interviewer-led arms versus 6.5% in the self-interview arms (mean difference: 2.1%, 95% confidence interval: 0.1–3.3%). However, this difference should be treated with caution given the high level of heterogeneity across questions: non-response was significantly (up to 10 percentage points) higher in self-interviews for several questions relating to respondents’ most-recent partner, but (non-significantly) lower for a range of other questions. Quantitatively, heterogeneity of effects for non-response was estimated to be very high (I² = 88.4%, 95% CI: 85.4–90.7%).

Table 2 Item response rates for general sexual behaviours
Table 3 Item response rates for sexual behaviour questions not previously used in the surveillance
Table 4 Item response rates for partner-specific sexual behaviours with most-recent sexual partner

Amongst those who answered questions, there were significant differences between interviewer-led and self-interview arms in only a few cases (Tables 2, 3, 4). However, meta-analysis highlighted that self-interview respondents were more likely to answer affirmatively to the seven highly sensitive binary questions: the mean percentage answering yes was 6.1%, versus 4.2% in interviewer-led arms (Fig. 3). This difference was relatively small in absolute terms, but statistically significant (mean: 1.9%, 95% confidence interval [CI]: 0.3–3.6%). Heterogeneity of effects was estimated to be moderate (I² = 65%, 95% CI: 36–81%), although all effects were in the same direction. When we considered all 15 binary questions, the results were highly heterogeneous and no significant association was seen (Additional file 1: Figure S3). Effect sizes for both item non-response and affirmative responses were small to moderate, with a maximum of φ = 0.21 and most values below 0.10.

Fig. 3 Single-paper meta-analysis of the most sensitive binary response questions. Point sizes are proportional to the log of the number of respondents for each question. Values at right are means and 95% confidence intervals

Our supplementary analysis comparing respondents’ EDM questionnaire responses to their prior surveillance questionnaire responses is presented in Additional file 1: Content S2. We did not find any significant differences across EDM arms, either in surveillance responses or in changes between the last surveillance response and the EDM response. Questions that should have time-invariant responses (e.g. age of sexual debut) did not change significantly between surveillance and EDM questionnaires.

Cognitive interviews

Acceptability and feasibility of sexual behaviour questions

In this area where sexual health surveillance has been conducted for over 10 years, few respondents found the topics covered unacceptable or difficult. Almost all respondents reported positive feelings towards answering sexual history questions that they had seen before, using terms such as ‘happy’, ‘no problem’, ‘comfortable’, ‘alright’ and ‘okay’. Difficulties in responding to sexual health questions revolved around question complexity – either due to long periods of recall or unclear question phrasing – or the inclusion of new topics that some respondents were not expecting:

“I don’t know how many different people I have had sex with in my lifetime. I am unable to count. When one grows up, you have sexual partners here and there. I did not save them in my memory because I didn’t know this information would be required at a later stage in my life” (male, 63 years old).

Some respondents, however, perceived some sexual behaviours as either socially acceptable or unacceptable:

“I didn’t have a problem to answer [meaning age at first sex]…I think I was at the right age to have sex” (male, 42 years old).

“It was difficult to answer this question [about anal sex]. It [anal sex] is for homosexuals…and practiced in prisons” (male, 34 years old).

Respondents did not generally find it difficult to recall details of specific sexual relationships, especially when discussing current sexual relationships which were going well. However a small number of participants found the partner-specific section difficult because it was depressing to talk about ex-partners; this suggests that participants may differentially underreport relationships that are concluded or undergoing strain:

“I felt unhappy…I didn’t really love one of them [meaning sexual partner]” (female, 51 years old).

Furthermore, a 75-year-old female respondent repeatedly stated that she felt uncomfortable answering many questions about her sexual behaviour from the distant past with a much younger interviewer.

Respondents were also aware that reporting multiple recent partners might lead to more questions or more complex cognitive processes, with some commenting on their relief that they had few partners to report.

Differences from previous surveillance interviews

Half of those respondents who had previously completed AHRI sexual health questionnaires in annual surveillance using PAPI methods found the EDM questionnaire easier than before. It was seen as easier due to: (i) question wording similar to previous questionnaires; (ii) non-inclusion of more sensitive questions (e.g. self-reported HIV status); and (iii) the use of tablet computers. Amongst those in self-interview arms, the explicit option to not answer each question was appreciated. The other half of repeat respondents found the questionnaire harder than before, due to: (i) increased questionnaire length; (ii) perceived repetition of questions; and (iii) difficulty of recall, especially for older respondents.

The majority of participants had positive comments regarding the use of a computer in the EDM interview, such as “felt comfortable”, “felt no problem”, “felt good”, “happy about the computer”, “felt at ease after the practice”, “easy to use computer”, “comfortable with technology” and “happy about self-interview”.

Benefits and drawbacks of using electronic delivery methods

Tablets were seen as making interviews quicker and simpler than paper-based forms, as well as increasing confidentiality, trust and security – particularly for the self-interview arms.

“The use of computers made it easier…in the past [AHRI] used paper-based questionnaires, which compromised confidentiality. Interviewers could disclose our information to other people…but the use of computers protects our information” (Male, 29 years old, CAPI).

“No one can see our information on the tablet but paper questionnaires might get lost and found by other people who then read our confidential information” (Female, 20 years old, CAPI).

Participants in the self-interview arms broadly expressed excitement and comfort about answering questions themselves on the computer. However, some respondents reported that the self-interview methods placed more demand on the participant, since reading questions requires attention and focus; furthermore, one respondent, a 37-year-old man, reported that the ACASI method felt slow.

In addition, some participants also expressed concerns about the use of tablets due to illiteracy, having lower education levels, or having eyesight problems.

The group discussion with study interviewers reinforced several themes from the cognitive interviews. These themes included respondent perceptions that self-interview methods were exciting and more confidential, although these factors led to slower interviews. Additionally, interviewers reported that self-interviews increased respondent trust in interviewers and the research process, since respondents had previously thought interviewers were making up some questionnaire questions (especially on sensitive topics), but now they could see that interviewers had not been misleading them. Interviewers also reported their preference for CAPI over other methods, since it was the fastest of all four methods, much lighter than carrying paper, and helped ensure data quality through skip patterns and error warnings.

Discussion

In this study, we found that the use of electronic delivery methods, including self-interview approaches, was broadly feasible and acceptable in rural South Africa, across a wide range of interviewees. Additionally, while self-interview methods did not consistently impact the rate at which sexual behaviours were reported, they did increase the level of reporting for sexual behaviours most likely to suffer from social desirability bias. Whilst this increase was small in absolute terms (approximately 2 percentage points) it reflected a 45% relative increase in reporting rates. Self-interviews also increased item non-response rates by a similar absolute and relative amount. The study finds that there were both advantages and disadvantages to using self-interviews in this setting.

The great majority of respondents who were offered the opportunity to self-interview did so. Amongst the subsample invited to discuss their experiences, the great majority expressed positive feelings about the interview process and the use of electronic and self-interview methods. Furthermore, the study fieldworkers reported that the CAPI software reduced the risk of data entry inconsistencies and errors. Several of the respondents aged over 30 declined a self-interview due to limited literacy or vision, although this was much reduced in the audio self-interview (ACASI) arm. However, the ACASI interviews were significantly slower to complete, potentially due to the novelty of listening to questions on headphones.

We did not find significant differences in willingness to participate in the study by arm, potentially due to very high response rates in all arms. Response rates were substantially higher for this trial than for the annual surveillance conducted in the same population. These higher response rates may have been due to the perceived novelty of the trial, particularly since the AHRI-standard community engagement “roadshows” held in each trial area 1 week prior to EDM interviews taking place appeared to generate substantial interest in the study: several respondents mentioned these roadshows to interviewers.

Rates of item non-response, i.e. opting out of questions, were frequently higher in self-interview arms, especially for detailed questions about sexual behaviour with the most-recent partner (MRP) and for questions on receipt of support and forced sex; item non-response was lower in self-interview arms for age-related and condom use questions, and for anal sex. Past literature suggests we might expect higher rates of non-response in self-interviews for questions requiring complex thought – either to understand or to recall – and lower rates for more sensitive topics [16, 19]. Our findings do not firmly support these patterns.

Reporting of sensitive or socially undesirable behaviours differed less across study arms than has been seen in other similar studies [11, 15]; our work was powered to see differences of 10 percentage points for sensitive questions, rather than the 2 percentage point difference we saw on average. This smaller difference may reflect a truly relatively low-risk sexual behaviour profile in this community, or the impact of self-interview privacy may be limited in this rural, African setting: in a recent meta-analysis of self- vs. face-to-face-interviews, Phillips and colleagues saw greater differences in urban, higher-educated and Asian populations [11].

Alternatively, it may be that study participants in this population have learned how to rapidly negotiate structured questionnaires so as to minimize their response burden [5], while still complying with the request to participate due to extrinsic motivation (either controlled – to avoid shame/guilt – or autonomous – because they see responding as important to society) [36]. In such a scenario, while a novel delivery method providing greater privacy might induce some respondents to provide a fuller picture of their sexual history, most respondents will continue to follow the response script that they have developed previously. Such an interpretation is supported by cognitive interview responses implying awareness that reporting more than one sexual partner would lead to additional follow-up questions. The lack of significant within-individual change from previous surveillance questionnaires to this EDM questionnaire for time-invariant questions also lends some support to the idea that the EDM trial may not have strongly affected willingness to report sensitive information. This study cannot directly confirm such a “scripting” explanation, but does suggest that future in-depth interviews might fruitfully investigate this possibility.

Nevertheless, the response pattern in this study does suggest two countervailing trends which highlight the trade-offs of using self-interview methods in this setting. First, some sensitive questions (e.g. >1 partner in the past year, recent non-conjugal partners, history of exchange sex, history of anal sex) were answered affirmatively more often in self-interview arms. Within the self-interview arms a few outlier responses were provided (e.g. one respondent reported eight partners in the past year and current involvement with six). Second, there were higher rates of item non-response in self-interview arms, especially for sensitive and partner-specific questions. The latter is likely to reflect the on-screen option to skip any question by choosing “prefer not to answer”, which was not presented explicitly to the respondent in interviewer-led interviews. The combination of these trends suggests that self-interviews are likely to increase reporting of sensitive events, at the cost of higher missingness that is likely to be differential by respondent characteristics.

The decision as to whether to use self-interviews in a particular context will depend on whether the expected advantages outweigh the potential disadvantages of the self-interview method in a given setting. Specifically, if (computer) literacy in the research population is high enough, the research topic sufficiently sensitive, and the expected or pilot-tested increase in response rates elicited by self-interview methods substantial, then it may be worth the additional time taken to complete questionnaires using self-interviews. This approach may address possible biases introduced by higher non-participation due to limited literacy in some subgroups, and higher item non-response by those with behaviours they are unwilling to acknowledge or report.

Strengths and limitations

This study benefited greatly from a very well-defined population base arising from repeated censuses of the study area, from which a truly random sample could be drawn. The conduct of interviewer-led interviews by local residents with substantial experience of answering similar questionnaires ensured that the comparison between interviewer- and self-led interviews was a fair comparison of the strongest available version of each method. One limitation of the study was that sampled residents were informed of their study arm assignment before being invited to participate, potentially biasing response rates; however, very few people declined to participate, so this issue is unlikely to have had a substantive impact. As ever, reporting of sexual behaviours is hard to validate, and so we cannot test which responses were in fact closest to the gold standard of actual activity.

Conclusion

Electronic data collection methods, including self-interview methods, appear to be feasible and acceptable in a poor, rural South African setting. The use of computer-based self-interviews is likely to become even more feasible as smartphone penetration rises and an increasing proportion of the population are members of younger “digital native” cohorts. However, the use of such methods in place of paper-based approaches did not substantially change the data provided by respondents. Furthermore, self-interview methods provided respondents with greater ability to skip questions which they were uncomfortable answering. Interviewers considering using electronic or self-interview methods should carefully consider the relative benefits and costs of such approaches in their specific context.

Abbreviations

ACASI: Audio computer-assisted self-interview
AHRI: Africa Health Research Institute
AT: As treated
CAPI: Computer-assisted personal interview
CASI: Computer-assisted self-interview
DSA: Demographic surveillance area
EDM: Electronic delivery methods
HIV: Human immunodeficiency virus
IQR: Interquartile range
ITT: Intention-to-treat
MRP: Most recent partner
PAPI: Paper and pen interview
SPM: Single-paper meta-analysis

References

  1. Dare OO, Cleland JG. Reliability and validity of survey data on sexual behaviour. Health Transit Rev. 1994;4(Suppl):93–110.
  2. Tourangeau R, Yan T. Sensitive questions in surveys. Psychol Bull. 2007;133:859.
  3. Nnko S, Boerma JT, Urassa M, Mwaluko G, Zaba B. Secretive females or swaggering males? An assessment of the quality of sexual partnership reporting in rural Tanzania. Soc Sci Med. 2004;59:299–310.
  4. Eagle DE, Proeschold-Bell RJ. Methodological considerations in the use of name generators and interpreters. Soc Networks. 2015;40:75–83.
  5. Warren JR, Halpern-Manners A. Panel conditioning in longitudinal social science surveys. Sociol Methods Res. 2012;41:491–534.
  6. Wringe A, Cremin I, Todd J, McGrath N, Kasamba I, Herbst K, et al. Comparative assessment of the quality of age-at-event reporting in three HIV cohort studies in sub-Saharan Africa. Sex Transm Infect. 2009;85:i56–63.
  7. Shafer LA, Nsubuga RN, Seeley J, Levin J, Grosskurth H. Examining the components of population-level sexual behavior trends from 1993 to 2007 in an open Ugandan cohort. Sex Transm Dis. 2011;38:697–704.
  8. Schwitters A, Swaminathan M, Serwadda D, Muyonga M, Shiraishi RW, Benech I, et al. Prevalence of rape and client-initiated gender-based violence among female sex workers: Kampala, Uganda, 2012. AIDS Behav. 2015;19:68–76.
  9. Al-Tayyib AA, Rogers SM, Gribble JN, Villarroel M, Turner CF. Effect of low medical literacy on health survey measurements. Am J Public Health. 2002;92:1478–80.
  10. Langhaug LF, Cheung YB, Pascoe SJ, Chirawu P, Woelk G, Hayes RJ, et al. How you ask really matters: randomised comparison of four sexual behaviour questionnaire delivery modes in Zimbabwean youth. Sex Transm Infect. 2011;87:165.
  11. Phillips AE, Gomez GB, Boily MC, Garnett GP. A systematic review and meta-analysis of quantitative interviewing tools to investigate self-reported HIV and STI associated behaviours in low- and middle-income countries. Int J Epidemiol. 2010;39:1541–55.
  12. McCallum EB, Peterson ZD. Investigating the impact of inquiry mode on self-reported sexual behavior: theoretical considerations and review of the literature. J Sex Res. 2012;49:212–26.
  13. Gnambs T, Kaspar K. Disclosure of sensitive behaviors across self-administered survey modes: a meta-analysis. Behav Res Methods. 2015;47:1237–59.
  14. Gregson S, Mushati P, White P, Mlilo M, Mundandi C, Nyamukapa C. Informal confidential voting interview methods and temporal changes in reported sexual risk behaviour for HIV transmission in sub-Saharan Africa. Sex Transm Infect. 2004;80:ii36–42.
  15. Hewett PC, Mensch BS, Erulkar AS. Consistency in the reporting of sexual behaviour by adolescent girls in Kenya: a comparison of interviewing methods. Sex Transm Infect. 2004;80:ii43–8.
  16. Langhaug LF, Sherr L, Cowan F. How to improve the validity of sexual behaviour reporting: systematic review of questionnaire delivery modes in developing countries. Trop Med Int Health. 2010;15:362–81.
  17. Gorbach PM, Mensch BS, Husnik M, Coly A, Mâsse B, Makanani B, et al. Effect of computer-assisted interviewing on self-reported sexual behavior data in a microbicide clinical trial. AIDS Behav. 2013;17:790–800.
  18. Houle B, Angotti N, Clark SJ, Williams J, Gómez-Olivé FX, Menken J, et al. Let’s talk about sex, maybe: interviewers, respondents, and sexual behavior reporting in rural South Africa. Field Methods. 2016;28(2):112–32.
  19. Poulin M. Reporting on first sexual experience: the importance of interviewer-respondent interaction. Demogr Res. 2010;22:237.
  20. Beauclair R, Meng F, Deprez N, Temmerman M, Welte A, Hens N, et al. Evaluating audio computer assisted self-interviews in urban South African communities: evidence for good suitability and reduced social desirability bias of a cross-sectional survey on sexual behaviour. BMC Med Res Methodol. 2013;13:1.
  21. Adebajo S, Obianwu O, Eluwa G, Vu L, Oginni A, Tun W, et al. Comparison of audio computer assisted self-interview and face-to-face interview methods in eliciting HIV-related risks among men who have sex with men and men who inject drugs in Nigeria. PLoS One. 2014;9:e81981.
  22. Mensch BS, Hewett PC, Abbott S, Rankin J, Littlefield S, Ahmed K, et al. Assessing the reporting of adherence and sexual activity in a simulated microbicide trial in South Africa: an interview mode experiment using a placebo gel. AIDS Behav. 2011;15:407–21.
  23. Kelly CA, Hewett PC, Mensch BS, Rankin JC, Nsobya SL, Kalibala S, et al. Using biomarkers to assess the validity of sexual behavior reporting across interview modes among young women in Kampala, Uganda. Stud Fam Plann. 2014;45:43–58.
  24. Tanser F, Hosegood V, Bärnighausen T, Herbst K, Nyirenda M, Muhwava W, et al. Cohort profile: Africa Centre Demographic Information System (ACDIS) and population-based HIV survey. Int J Epidemiol. 2008;37:956–62.
  25. McGrath N, Eaton JW, Bärnighausen TW, Tanser F, Newell ML. Sexual behaviour in a rural high HIV prevalence South African community: time trends in the antiretroviral treatment era. AIDS. 2013;27:2461–70.
  26. Shisana O, Rehle T, Simbayi LC, Zuma K, Jooste S, Zungu N, et al. South African national HIV prevalence, incidence and behaviour survey, 2012. Cape Town: Human Sciences Research Council; 2014.
  27. Eaton JW, McGrath N, Newell M-L. Unpacking the recommended indicator for concurrent sexual partnerships. AIDS. 2012;26:1037.
  28. Hartung C, Lerer A, Anokwa Y, Tseng C, Brunette W, Borriello G. Open Data Kit: tools to build information services for developing regions. In: Proceedings of the 4th ACM/IEEE International Conference on Information and Communication Technologies and Development. ACM; 2010. https://doi.org/10.1145/2369220.2369236.
  29. Beatty PC, Willis GB. Research synthesis: the practice of cognitive interviewing. Public Opin Q. 2007;71:287–311.
  30. Tourangeau R. Cognitive sciences and survey methods. In: Jabine TB, Straf M, Tanur J, Tourangeau R, editors. Cognitive aspects of survey methodology: building a bridge between disciplines. Washington, DC: National Academy Press; 1984. p. 73–100.
  31. Aicken CR, Gray M, Clifton S, Tanton C, Field N, Sonnenberg P, et al. Improving questions on sexual partnerships: lessons learned from cognitive interviews for Britain’s third National Survey of Sexual Attitudes and Lifestyles (“Natsal-3”). Arch Sex Behav. 2013;42:173–85.
  32. Mavhu W, Langhaug L, Manyonga B, Power R, Cowan F. What is ‘sex’ exactly? Using cognitive interviewing to improve the validity of sexual behaviour reporting among young people in rural Zimbabwe. Cult Health Sex. 2008;10:563–72.
  33. McShane BB, Böckenholt U. Single paper meta-analysis: benefits for study summary, theory-testing, and replicability. J Consum Res. 2017;43:1048–63.
  34. Ndinda C, Chimbwete C, McGrath N, Pool R. Perceptions of anal sex in rural South Africa. Cult Health Sex. 2008;10:205–12.
  35. Smith J, Firth J. Qualitative data analysis: the framework approach. Nurse Res. 2011;18:52–62.
  36. Ryan RM, Deci EL. Intrinsic and extrinsic motivations: classic definitions and new directions. Contemp Educ Psychol. 2000;25:54–67.


Acknowledgements

We thank the six interviewers who collected all the data for this study, and the ongoing generosity of the respondents and community in participating in our research.

Funding

This study was supported by Wellcome Trust grant 082384/Z/07/Z. GH and TB were additionally supported by NICHD of NIH grant R01-HD084233. TB was also supported by the Alexander von Humboldt Foundation through the Alexander von Humboldt professor award, funded by the Federal Ministry of Education and Research; by the Wellcome Trust; the European Commission; the Clinton Health Access Initiative; and by NIA of NIH (P01-AG041710), NIAID of NIH (R01-AI124389 and R01-AI112339) and FIC of NIH (D43-TW009775). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Availability of data and materials

The datasets generated and analysed during the current study are available in the Africa Health Research Institute’s data repository, which can be accessed at https://data.africacentre.ac.za.

Author information

Authors and Affiliations

Authors

Contributions

GH, NM and KH conceptualized the study. GH, DG and TM oversaw the data collection process. GH conducted the quantitative analyses and summarized the results in tables and graphs. DG, GH, JS conducted the qualitative analyses. GH and DG wrote the first draft of the paper. All authors contributed to the study design, data interpretation and final revisions to the text. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Guy Harling.

Ethics declarations

Ethics approval and consent to participate

Ethical approval for the AHRI surveillance programme has been given by the University of KwaZulu-Natal’s Biomedical Research Ethics Committee (ref BF233–09) and is renewed annually. This nested study was approved as an amendment to the overall surveillance programme. Written informed consent was obtained from all participants in the study presented in this paper.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Additional analyses and materials. (PDF 692 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Harling, G., Gumede, D., Mutevedzi, T. et al. The impact of self-interviews on response patterns for sensitive topics: a randomized trial of electronic delivery methods for a sexual behaviour questionnaire in rural South Africa. BMC Med Res Methodol 17, 125 (2017). https://doi.org/10.1186/s12874-017-0403-8

Download citation

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s12874-017-0403-8

Keywords