The influence of interviewers on survey responses among female sex workers in Zambia
BMC Medical Research Methodology, volume 19, Article number: 60 (2019)
Interviewers can substantially affect self-reported data. This may be due to random variation in interviewers’ ability to put respondents at ease or in how they frame questions. It may also be due to systematic differences such as social distance between interviewer and respondent (e.g., by age, gender, ethnicity) or different perceptions of what interviewers consider socially desirable responses. Exploration of such variation is limited, especially in stigmatized populations.
We analyzed data from a randomized controlled trial of HIV self-testing amongst 965 female sex workers (FSWs) in Zambian towns. In the trial, 16 interviewers were randomly assigned to respondents. We used hierarchical regression models to examine how interviewers may both affect responses on more and less sensitive topics, and confound associations between key risk factors and HIV self-test use.
The proportion of model variance at the interviewer level (ICC) was over 15% for most topics. ICC was lower for socio-demographic and cognitively simple questions, and highest for sexual behaviour, substance use, violence and psychosocial wellbeing questions. Respondents reported significantly lower socioeconomic status and more sex-work related violence to female interviewers. Not accounting for interviewer identity in regressions predicting HIV self-test behaviour led to coefficients moving from non-significant to significant.
We found substantial interviewer-level effects for prevalence and associational outcomes among Zambian FSWs, particularly for sensitive questions. Our findings highlight the importance of careful training and response monitoring to minimize inter-interviewer variation, of considering social distance when selecting interviewers and of evaluating whether interviewers are driving key findings in self-reported data.
clinicaltrials.gov NCT02827240. Registered 11 July 2016.
A substantial literature highlights that interviewers can affect survey responses at three levels: (i) unit non-response, i.e., declining to interview; (ii) item non-response, i.e., declining to answer a specific question; and (iii) item quality, i.e., not providing the true answer. Most research to date has been on item quality, although there is some evidence that item non-response may also be affected by interviewers. Variation in responses across interviewers may reflect random differences in interviewers’ manner (e.g., how they frame or explain questions) and ability to draw out responses (e.g., how judgemental they seem). This may be controlled for through the use of hierarchical models accounting for interviewer-level variation.
Interviewers may also generate different response patterns due to systematic variation in their characteristics. Past research has highlighted many candidate characteristics, including gender, age, race/ethnicity, socioeconomic status (SES), research experience and personality. Several theories posit how characteristics of the interviewer alone, or of the interviewer-respondent dyad, may affect responses. First, social distance theory suggests that when interviewers and respondents are similar, response rates and item quality should be higher, due to respondents being more at ease and more likely to be honest. Second, social desirability theory suggests that respondents are likely to match their responses to what they believe the interviewer believes or wants to hear. Finally, social role theory suggests that interviewer effects may be different for different types of question, with a particularly strong effect when asking about topics linked to roles expected to be espoused by interviewers, e.g., reporting more caring behaviour to female interviewers, reporting less racism to ethnic minority interviewers.
These theories can be illustrated with the example of gender. Social distance theory predicts that responses will be more accurate for same-gender pairings for both male and female respondents. Social desirability theory in contrast predicts responses will vary by interviewer gender alone. If both theories apply, we would expect to see an interaction of interviewer and respondent genders to generate four levels of response (male-male, male-female, female-male and female-female). Finally, social role theory predicts that differences between male and female interviewers would be greatest for those questions with the strongest gender expectations, e.g., greater reporting of caring behaviour to female interviewers.
Empirically, female interviewers appear to be considered more sympathetic, less judgmental and less threatening for a broad range of interview types [3, 9,10,11]. There is also evidence that same-gender interviewers elicit more responses, in particular to sensitive questions; i.e., those questions on which respondents believe they are most likely to be judged for their response [7, 12, 13]. Perhaps as a result, most studies find that female interviewers elicit more responses from female respondents, although the literature on male-male interviews is more mixed [3, 11, 14, 15].
Systematic interviewer variation in response for self-reported surveys has long been recognized for public health outcomes. Interviewer gender is frequently considered, particularly for sexual behaviour questions, with a wide range of response patterns seen. These include an increased willingness for men to report sexual behaviours to women, for everyone to report sexual behaviours to same-sex interviewers and for male military personnel in the Dominican Republic to report more sexual activity, but less alcohol use and sexual coercion, to female interviewers.
Within sub-Saharan Africa, findings on interviewer impact for sexual behaviour questions are also mixed, again largely focused on gender. One Ghanaian study found that men did not report differentially by interviewer gender; but women reported more prior sexual activity and concern about AIDS to male interviewers, and more often told female interviewers that condoms spoil sex. In contrast, a study in South Africa found no effects for female respondents, but that men reported more sexual partners to female interviewers, and lower-risk behaviours to older interviewers. A smaller Ghanaian cross-over study (respondents talking with both male and female interviewers) found no significant results.
The impact of interviewer characteristics has also been considered for gender-based violence (GBV) and intimate partner violence (IPV) questions. Violence prevalence may be underreported due to reticence on the part of the interviewer or respondent to discuss the topic, due to low privacy, expected social roles or distress generated [22, 23]. Since some of these mechanisms may be gendered, some studies have adjusted for interviewer effects when measuring IPV [24, 25]. Explicit evaluation of gender-of-interviewer effects is limited, although a race-of-interviewer study for African-American respondents in the USA found little impact on IPV disclosure.
While interviewer effects have been examined in Africa, we are not aware of any work considering highly stigmatized populations, particularly when asking about potentially stigmatizing behaviours. However, such respondents, on such topics, may be exactly those most likely to craft responses to fit narratives that either they hold about themselves, or that they believe interviewers hold about them. We therefore analysed how the identities and characteristics of interviewers affected both risk factor prevalence and measures of association between variables in a survey of sexual and other experiences amongst female sex workers (FSWs) in three Zambian transit towns.
We used data from the Zambian Peer Educators for HIV Self-Testing (ZEST) study, a cluster randomized trial of the impact of HIV self-testing provision among FSWs in Chirundu, Livingstone and Kapiri Mposhi [27, 28]. Peer educators, who were current or former FSWs, were recruited from existing female sex worker organizations operating in the study towns. Each peer educator recruited six women into the trial. Eligibility criteria were: (i) primarily living in one of the towns; (ii) being at least 18 years old; (iii) reporting exchanging sex for money, goods or other items of value at least once in the prior month; (iv) self-reporting either being HIV negative or of unknown serostatus; and (v) not having tested for HIV in the past 3 months. Peer educators referred potential participants from within their social networks to study staff, who screened them for eligibility first by phone and then in person. Respondents received 50 Zambian Kwacha (ZMW; ~US$5) per interview they completed and no incentive for participation in peer educator sessions; peer educators were paid for their participation. The study was reviewed by the Institutional Review Boards at the Harvard T.H. Chan School of Public Health in Boston, USA and ERES Converge in Lusaka, Zambia. Written informed consent was obtained from all participants.
The baseline survey lasted an average of 35 min. Each survey was conducted by a research assistant recruited locally, in the local language chosen by the respondent. Data were collected through a face-to-face, computer-assisted personal interview (CAPI) at a private and convenient location, using a tablet computer and the CommCare (Dimagi Inc., Cambridge, MA) electronic data capture platform. There were follow-up interviews at one and 4 months post-baseline.
Research assistants were hired in each town. Desirable qualifications included substantial education (preferably including some tertiary attendance), computer literacy and experience of working with FSWs. Many of those hired had past experience working with FSWs through the Corridors of Hope project. Assignment of research assistant interviewers to respondents was random at the level of the peer educator, within each town, and this assignment of peer educators to research assistants was made prior to study commencement.
We considered 80 variables captured in the baseline survey, ranging from non-sensitive to highly sensitive questions, in four overarching categories: (i) socio-demographics; (ii) sex work; (iii) sexual behaviour and health; and (iv) other HIV risk factors – including history of abuse, substance use, interactions with law enforcement and psychosocial wellbeing (depression, HIV stigma, social support and self-efficacy). Tables 1, 2, 3 and 4 contain a detailed list of variables. We also considered self-reported testing for HIV since baseline at one-month follow-up, and testing in the past month at four-month follow-up.
We described how survey responses varied according to the gender of the interviewer, testing for significant differences using Wilcoxon Rank-Sum and χ2 tests for continuous/ordinal and nominal categorical data respectively. We then conducted multilevel regression analysis for each outcome, with respondents nested within interviewers, using the appropriate link function for each outcome. We first ran models that contained fixed effects for study site and random intercepts for interviewer identity, and recorded the intraclass correlation coefficient (ICC) at the interviewer level, i.e. the proportion of model variance explained by interviewer identity. ICC was calculable for linear and logistic models, but not for Poisson or ordered logistic ones. Additional file 1: Table S1 details model forms for all variables.
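The interviewer-level ICC described above can be illustrated with a short sketch. Our analyses were run in Stata, so this is only an illustrative Python equivalent on simulated data (all variable names and parameter values here are hypothetical, not the study's): a linear mixed model with fixed effects for site and a random intercept per interviewer, from which the ICC is the between-interviewer variance as a share of total variance.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the survey: 16 interviewers, 60 respondents each,
# a continuous outcome that varies by study site and by interviewer.
rng = np.random.default_rng(42)
n_int, n_per = 16, 60
interviewer = np.repeat(np.arange(n_int), n_per)
site = np.repeat(rng.integers(0, 3, size=n_int), n_per)
u = rng.normal(0.0, 1.0, size=n_int)                   # interviewer random intercepts
y = 2.0 + 0.3 * site + u[interviewer] + rng.normal(0.0, 2.0, size=n_int * n_per)
df = pd.DataFrame({"y": y, "site": site, "interviewer": interviewer})

# Linear mixed model: fixed effects for site, random intercept per interviewer
model = smf.mixedlm("y ~ C(site)", df, groups=df["interviewer"]).fit()
sigma_u2 = float(model.cov_re.iloc[0, 0])              # between-interviewer variance
sigma_e2 = float(model.scale)                          # residual variance
icc = sigma_u2 / (sigma_u2 + sigma_e2)                 # share of variance at interviewer level
# For logistic outcomes, the latent residual variance pi**2 / 3 replaces sigma_e2.
print(f"interviewer-level ICC: {icc:.3f}")
```

The same decomposition applies to each survey outcome in turn, with the link function changed to match the outcome's distribution.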
We then ran models with an indicator variable for female gender to test for systematic differences in response by gender of interviewer. We did not adjust for respondent covariates other than study site and interviewer since interviewers were randomly assigned (within study sites), and thus other factors should not change any associations seen between interviewer gender and self-reported variables. From each regression model we estimated prevalences for male and female interviewers based on marginal predicted values from regression coefficients. Given that we were conducting many tests of the same hypothesis, i.e., that responses for each variable differed by interviewer gender, we adjusted all p-values for multiple testing using the Benjamini-Hochberg methodology. We conducted a sensitivity analysis modelling all bivariate associations as three-level models additionally including random intercepts for peer educator identity.
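The Benjamini-Hochberg adjustment works by ranking the raw p-values and comparing each to a rank-scaled threshold. As a sketch (the p-values below are hypothetical, not the study's), using the standard implementation in statsmodels:

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical raw p-values from a batch of gender-of-interviewer comparisons
raw_p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]

# Benjamini-Hochberg step-up procedure, controlling the false discovery rate:
# rank the m p-values, compare p_(k) against alpha * k / m, and report
# monotone adjusted p-values alongside the reject/retain decisions.
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")

for p, q, r in zip(raw_p, p_adj, reject):
    print(f"raw p = {p:.3f} -> adjusted p = {q:.4f}, reject: {r}")
```

Note that several raw p-values below 0.05 survive only as non-significant after adjustment, which is why some "substantial" differences in our results are reported as non-significant.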
Finally, we considered how adjustment for interviewer identity and gender affected measures of association between key covariates (study arm, age, past abuse) and subsequent HIV self-testing, to evaluate whether variation in responses by interviewer gender also affected measures of association between multiple measures. In line with the ZEST primary outcomes analysis, we ran generalized linear models with a Poisson distribution and log link, and standard errors robust to heteroskedasticity. For each combination of exposure (study arm, age in categories – < 25, 25–29, 30–34, ≥35 – and baseline reports of adult physical and sexual abuse) and outcome (recent HIV testing at one and 4 months), we ran three models: (i) just containing fixed effects for study arm and site; (ii) adding random effects for interviewer identity; and (iii) adding a fixed effect for female vs. male interviewer. Statistical analyses were run in Stata version 13 (College Station, TX).
The ZEST baseline sample consisted of 965 FSWs interviewed by 9 male and 7 female interviewers (all with 60 respondents bar one with 65). There were equal numbers of male and female interviewers in Chirundu (two of each), more female than male in Kapiri Mposhi (three vs one) and more male than female in Livingstone (six vs two). Interviewer ages ranged from 25 to 45 (median: 35, interquartile range [IQR]: 31.5–38). The socio-demographic and behavioural composition of the population has been described previously, but briefly: they were young (75% aged under 30), almost none were married and they had low SES (Table 1).
For the 62 variables modelled using linear or logistic regression, in models containing only fixed effects for study site and random intercepts for interviewer identity, variability at the interviewer level accounted for a median of 14.6% of all variance (IQR: 7.6–23.4%). Interviewer-level variation was generally lowest for socio-demographic and cognitively simple questions, and highest for questions relating to sexual behaviour, substance use, abuse and psychosocial wellbeing (Tables 1, 2, 3 and 4).
FSWs were more likely to report lower educational attainment and lower income to female interviewers than to male ones (Table 1). Specifically, respondents were less likely to tell female interviewers that they were literate, more likely to report earning less than ZMW 500 (~US$50) per month and more likely to report being poor or very poor; this last comparison was statistically significant after adjusting for multiple testing. Despite these differences, FSWs reported almost identical levels of self-perceived relative SES to male and female interviewers.
In the context of sex work, FSWs were non-significantly more likely to tell male interviewers that they always asked clients to use condoms, and less likely to tell them that they frequently asked clients to disclose their HIV status (Table 2). Respondents told female interviewers that other FSWs had more clients per night than they told male interviewers, although much of this difference was due to a few outlying values for one interviewer.
Sexual behaviour and health
When discussing their sexual health and behaviour other than sex work, respondents reported very similar behaviours and beliefs to male and female interviewers (Table 3).
One exception to this was that FSWs were non-significantly more likely to tell female interviewers that they were uncomfortable telling medical providers about sex work and that they felt judged by medical providers for doing sex work.
Other HIV risk factors
Reporting patterns for substance use, FSW empowerment and various psychosocial scales were very similar by interviewer gender (Table 4). However, reporting of abuse varied substantially by interviewer gender. Specifically, FSWs reported non-significantly, but substantially, lower rates of lifetime childhood or adult physical abuse to female interviewers (over 20 percentage points difference), but similar rates of sexual abuse at both ages. When asked specifically about abuse in the past 12 months, respondents reported significantly higher rates of both physical and sexual abuse from sex work clients to female interviewers, and correspondingly lower rates of abuse from their non-client partners. They were similarly far more likely (by over 30 percentage points) to tell female interviewers that they had had sex with a client in the past 12 months because they were afraid (Fig. 1). Additional adjustment for peer educator identity did not substantively affect any of the above results (Additional file 1: Table S3).
In regressions predicting recent HIV testing history at follow-up, we did not see any effect of adding interviewer random effects or respondent age to models of the primary ZEST study association, i.e. difference in testing rates by study arm (Table 5). Nor did we see any impact of accounting for interviewer identity on the association between history of sexual abuse and recent HIV testing at 1 month. However at 4 months, sexual abuse was significantly associated with not testing when no adjustment was made for interviewer identity, but this became non-significant after including interviewer random effects. Including interviewer gender did not affect our associations of interest, over and above interviewer random effects.
In this analysis of data from an HIV self-test trial among FSWs in three Zambian border towns, we show that interviewers often substantially affected what respondents reported regarding their lives, in particular their psychological wellbeing and experiences of violence. In the context of 16 interviewers each conducting at least 60 interviews, an average of one-sixth of all variance in question responses was observed at the interviewer level, even after accounting for study site. This interviewer-level variance rose to almost one-third for questions about psychological ill-health and violence, despite both being highly prevalent and despite careful interviewer training. These variations fed through in some cases to measures of association, i.e., failing to account for interviewer effects led to different coefficient estimates in regression models.
The importance of interviewer variation has long been recognized in the survey design and analysis literature [12, 19, 31] and our findings reinforce the importance of interviewers for measures of prevalence. Our findings support particularly strong interviewer effects for sensitive topics, notably physical and sexual abuse, and subjective ones, such as depression, social support and self-efficacy. For example, for the question “In the past 12 months, has a sexual partner ever physically forced you to have sex when you did not want to?”, the proportion of each interviewer’s 60 respondents answering in the affirmative varied from 13 to 97%. This occurred despite the two interviewers with the most extreme proportions working in the same town, and thus theoretically interviewing fully exchangeable respondents.
The potential impact of interviewer variation can be minimized by careful training in question presentation, and monitoring of response patterns by interviewer identity during study conduct (with feedback of these findings to the field teams). Other potentially useful steps include matching interviewers and respondents by age and gender, and providing support for interviewers in managing their own distress in hearing reports of violence or other hardship. When interviewer-level variance is anticipated, it is also preferable to have a large number of interviewers doing few interviews, rather than a few interviewers doing many; this both reduces the burden on interviewers, and prevents outlying interviewers from having outsized impacts.
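The benefit of many interviewers doing few interviews each can be quantified with the standard Kish design effect for clustered data (a textbook approximation we add here for illustration, not a calculation from the paper): the variance of a survey mean is inflated by a factor of 1 + (m − 1) × ICC, where m is the number of interviews per interviewer.

```python
def design_effect(interviews_per_interviewer: int, icc: float) -> float:
    """Kish design effect: variance inflation of a survey mean due to
    clustering of respondents within interviewers."""
    return 1.0 + (interviews_per_interviewer - 1) * icc

# Same total of ~960 interviews, at an ICC of 0.15 (roughly this study's median):
few_doing_many = design_effect(60, 0.15)   # 16 interviewers x 60 interviews each
many_doing_few = design_effect(16, 0.15)   # 60 interviewers x 16 interviews each
print(few_doing_many, many_doing_few)
```

At this ICC, spreading the same interviews over roughly four times as many interviewers cuts the variance inflation by about two-thirds, which is why interviewer workload matters when interviewer effects are anticipated.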
Despite the substantial variance in responses at the interviewer level, interviewers’ gender was associated with relatively few variables. There were substantive (i.e., more than 10 percentage points), if non-significant, differences by gender-of-interviewer for several variables and significant differences for two question topics: SES and sex-work related violence. We were unable to determine in this analysis whether the gender-of-interviewer differences seen reflect social distance or social desirability, since there was no variation in respondent gender. However, our finding that the largest gender-of-interviewer effects exist for topics which have substantial gender components (i.e., SES and IPV) provides support for social role theory. Specifically, FSWs reported having lower SES and more recent sex-work related IPV to female interviewers. This was in contrast to almost no reporting difference for questions such as age, marital status, pregnancy history and perceived risk of being HIV-positive. These findings highlight that, while matching interviewers and respondents on key characteristics may not be feasible, the influence of interviewer-respondent dyad characteristics should be evaluated in analyses of topics with strong social role expectations, such as gender-based violence and economic behaviour.
We also showed that the association between two self-reported variables can be confounded by interviewers. In our analysis, recent HIV testing behaviour was significantly negatively associated with both past physical and sexual abuse when we did not include interviewer identity in our models, but this association was attenuated and rendered non-significant by including interviewer-level random effects. In order for interviewers to have such an effect, both exposure and outcome must be susceptible to interviewer influence. This is clearly the case when both variables are self-reported, but can also arise when interviewers are also asking individuals to take a test – a topic that has been substantively investigated in the context of HIV testing within population studies [33, 34]. Our results highlight the need to consider interviewer identity as a possible confounder in associational as well as prevalence analyses.
Given that much of the data in this study is self-reported, it is difficult to know which interviewers are receiving the “truer” responses and thus which results to act on. In this population, for example, even based on responses to male interviewers, respondents are poor and at substantial risk for IPV: median income is under $600 per annum, half the Zambian average, and over 40% reported each of: physical abuse; sexual abuse; and having had sex when they did not want to because they were afraid in the past 12 months. There is clearly a substantial public health concern whichever values are closer to reality. However, in some other settings, the level of impact interviewer gender had in this study may be sufficient to produce conflicting results – with male interviewers finding a substantial health risk but female interviewers only a limited one, or vice versa.
Strengths and limitations
Our results should be interpreted in the light of various strengths and limitations. The underlying ZEST study comprised almost 1000 FSWs who were part of a population with relatively little experience of engaging with researchers, which should minimize respondent learning effects in terms of intentional mis-reporting. However, this may also have led to respondents misunderstanding questions they had not previously considered in a systematic fashion.
Since all ZEST participants were women, we are unable to differentiate whether the gender effects we saw reflected gender-of-interviewer effects or gender-homophily of interviewer-respondent dyads. Our ability to generalize from the ZEST study population to others is also somewhat limited: it is hard to know whether FSWs in more cosmopolitan settings, or women more generally in Zambia or sub-Saharan Africa (including those engaging in informal sex work), would have been similarly affected by interviewer characteristics. Nevertheless, our key findings that interviewers can generate substantial, systematic differences in item response patterns, even when randomly assigned to respondents, are likely to be widely applicable.
Furthermore, we do not have sufficiently detailed information available on interviewer identities to determine whether interviewers varied systematically by gender on other characteristics, e.g. educational attainment, that might have affected their ability to elicit sensitive responses from respondents. Concern on this front is somewhat allayed by the very similar responses (and low ICC values) for less sensitive topics. Finally, the ZEST study did not include follow-up interviews on the topic of interviewer-respondent interaction, and thus we are not able to directly assess whether between-interviewer differences reflected true random difference or some combination of social distance, social desirability and social role.
In a trial of HIV self-testing among FSWs in Zambian border towns, we found very high levels of interviewer-level variability in responses to sensitive questions. We also found some evidence of differential reports by interviewer gender for topics relating to gender roles, and demonstrated that interviewers influenced measures of association between a key risk factor, past sexual abuse, and the study’s primary outcome, recent HIV testing at follow-up visits. This work highlights the importance of conducting careful interviewer training, and evaluating how responses vary by interviewer, for sensitive questions – especially when prevalence or association measures have policy relevance. It also underscores the importance of considering social distance between respondents and interviewers, especially for topics that are either highly stigmatized or have strong social role expectations.
Abbreviations
AIDS: Acquired Immunodeficiency Syndrome
CAPI: Computer-assisted personal interview
FSW: Female sex worker
HIV: Human Immunodeficiency Virus
ICC: Intraclass correlation coefficient
IPV: Intimate partner violence
USA: United States of America
ZEST: Zambian Peer Educators for HIV Self-Testing study
References
1. Tourangeau R, Yan T. Sensitive questions in surveys. Psychol Bull. 2007;133:859–83.
2. Dijkstra W. How interviewer variance can bias the results of research on interviewer effects. Qual Quant. 1983;17:179–87.
3. West BT, Blom AG. Explaining interviewer effects: a research synthesis. J Survey Stat Method. 2016;5:175–211.
4. Tu S-H, Liao P-S. Social distance, respondent cooperation and item nonresponse in sex survey. Qual Quant. 2007;41:177–99.
5. Krumpal I. Determinants of social desirability bias in sensitive surveys: a literature review. Qual Quant. 2013;47:2025–47.
6. Diekman AB, Schneider MC. A social role theory perspective on gender gaps in political attitudes. Psychol Women Q. 2010;34:486–97.
7. Lipps O, Lutz G. Gender of interviewer effects in a multi-topic centralized CATI panel survey. Methods Data Anal. 2017;11:67–86.
8. Paulhus DL. Socially desirable responding: the evolution of a construct. In: Braun HI, Jackson DN, Wiley DE, editors. The role of constructs in psychological and educational measurement. Mahwah: Lawrence Erlbaum Associates; 2002. p. 49–69.
9. Pollner M. The effects of interviewer gender in mental health interviews. J Nerv Ment Dis. 1998;186:369–73.
10. Nass C, Robles E, Heenan C, Bienstock H, Treinen M. Speech-based disclosure systems: effects of modality, gender of prompt, and gender of user. Int J Speech Technol. 2003;6:113–21.
11. Dykema J, Diloreto K, Price JL, White E, Schaeffer NC. ACASI gender-of-interviewer voice effects on reports to questions about sensitive behaviors among young adults. Public Opin Q. 2012;76:311–25.
12. Davis RE, Couper MP, Janz NK, Caldwell CH, Resnicow K. Interviewer effects in public health surveys. Health Educ Res. 2010;25:14–26.
13. Wilson SR, Brown NL, Mejia C, Lavori PW. Effects of interviewer characteristics on reported sexual behavior of California Latino couples. Hisp J Behav Sci. 2002;24:38–62.
14. Catania JA, Binson D, Canchola J, Pollack LM, Hauck W, Coates TJ. Effects of interviewer gender, interviewer choice, and item wording on responses to questions concerning sexual behaviour. Public Opin Q. 1996;60:345–75.
15. Johnson TP, Parsons JA. Interviewer effects on self-reported substance use among homeless persons. Addict Behav. 1994;19:83–93.
16. Fuchs M. Gender-of-interviewer effects in a video-enhanced web survey: results from a randomized field experiment. Soc Psychol. 2009;40:37–42.
17. Chun H, Tavarez MI, Dann GE, Anastario MP. Interviewer gender and self-reported sexual behavior and mental health among male military personnel. Int J Public Health. 2011;56:225–9.
18. McCombie SC, Anarfi JK. The influence of sex of interviewer on the results of an AIDS survey in Ghana. Hum Organ. 2002;61:51–7.
19. Houle B, Angotti N, Clark SJ, Williams J, Gómez-Olivé FX, Menken J, et al. Let’s talk about sex, maybe: interviewers, respondents, and sexual behavior reporting in rural South Africa. Field Methods. 2016;28:112–32.
20. Agula J, Barrett JB, Tobi H. The other side of rapport: data collection mode and interviewer gender effects on sexual health reporting in Ghana. Afr J Reprod Health. 2015;19:111–7.
21. Fraga S. Methodological and ethical challenges in violence research. Porto Biomed J. 2016;1:77–80.
22. Ellsberg M, Heise L, Pena R, Agurto S, Winkvist A. Researching domestic violence against women: methodological and ethical considerations. Stud Fam Plan. 2001;32:1–16.
23. Jewkes R, Watts C, Abrahams N, Penn-Kekana L, Garcia-Moreno C. Ethical and methodological issues in conducting research on gender-based violence in southern Africa. Reprod Health Matters. 2000;8:93–103.
24. Jewkes RK, Levin JB, Penn-Kekana LA. Gender inequalities, intimate partner violence and HIV preventive practices: findings of a South African cross-sectional study. Soc Sci Med. 2003;56:125–34.
25. Abramsky T, Watts CH, Garcia-Moreno C, Devries K, Kiss L, Ellsberg M, et al. What factors are associated with recent intimate partner violence? Findings from the WHO multi-country study on women's health and domestic violence. BMC Public Health. 2011;11:109.
26. Fincher D, VanderEnde K, Colbert K, Houry D, Smith LS, Yount KM. Effect of face-to-face interview versus computer-assisted self-interview on disclosure of intimate partner violence among African American women in WIC clinics. J Interpers Violence. 2015;30:818–38.
27. Oldenburg CE, Ortblad KF, Chanda MM, Mwanda K, Nicodemus W, Sikaundi R, et al. Zambian peer educators for HIV self-testing (ZEST) study: rationale and design of a cluster randomised trial of HIV self-testing among female sex workers in Zambia. BMJ Open. 2017;7:e014780.
28. Chanda MM, Ortblad KF, Mwale M, Chongo S, Kanchele C, Kamungoma N, et al. HIV self-testing among female sex workers in Zambia: a randomized controlled trial. PLoS Med. 2017;14:e1002442.
29. Jain S, Greene M, Douglas Z, Betron M, Fritz K. Risky business made safer - Corridors of Hope: an HIV prevention program in Zambian border and transit towns. Case study series: task order 1. Arlington: USAID’s AIDS Support and Technical Assistance Resources, AIDSTAR-One; 2011.
30. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B. 1995;57:289–300.
31. Collins M. Interviewer variability: a review of the problem. J Mark Res Soc. 1980;22:77–95.
32. O’Muircheartaigh C, Campanelli P. The relative impact of interviewer effects and sample design effects on survey precision. J R Stat Soc Ser A (Stat Soc). 1998;161:63–77.
33. Marra G, Radice R, Bärnighausen T, Wood SN, McGovern ME. A simultaneous equation approach to estimating HIV prevalence with non-ignorable missing responses. J Am Stat Assoc. 2016;112:484–96.
34. Harling G, Moyo S, McGovern ME, Mabaso M, Marra G, Bärnighausen T, et al. National South African HIV prevalence estimates robust despite substantial test non-participation. S Afr Med J. 2017;107:590–4.
Acknowledgements
We thank the participants of the ZEST study for their time and dedication. We also thank the research assistants who collected data for the ZEST study.
Funding
This study was funded by the International Initiative for Impact Evaluation (3ie). KFO was supported in part by NIAID T32AI007535 (PI: Seage). CEO was supported in part by NIDA T32DA013911 (PI: Flanigan) and NIMH R25MH083620 (PI: Nunn). GH and TB were supported in part by NICHD R01HD084233 (PIs: Bärnighausen and Tanser). TB was supported by the Alexander von Humboldt Foundation through the endowed Alexander von Humboldt Professorship funded by the German Federal Ministry of Education and Research, as well as by the Wellcome Trust, the European Commission, the Clinton Health Access Initiative, and NIAID R01AI124389 (PIs: Bärnighausen and Tanser), and D43TW009775 (PIs: Fawzi and Bärnighausen). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Availability of data and materials
All data are publicly available at an online data repository: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/OCWCF5.
Ethics approval and consent to participate
The study was reviewed by the Institutional Review Boards at the Harvard T.H. Chan School of Public Health in Boston, USA and ERES Converge in Lusaka, Zambia. Written informed consent was obtained from all participants.
Competing interests
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Table S1. Regression forms for variables considered. Table S2. Impact of interviewer identity on associations between baseline covariates and HIV testing history in an HIV self-test trial amongst female sex workers in Zambia, full regression results. Table S3. Comparison of bivariate associations from hierarchical models containing random intercepts only for interviewers (two-level) or interviewers and peer educators (three-level). (PDF 120 kb)