Effect of reminders on mitigating participation bias in a case-control study
BMC Medical Research Methodology volume 11, Article number: 33 (2011)
Researchers commonly employ strategies to increase participation in health studies. These include use of incentives and intensive reminders. There is, however, little evidence regarding the quantitative effect that such strategies have on study results. We present an analysis of data from a case-control study of Campylobacter enteritis in England to assess the usefulness of a two-reminder strategy for control recruitment.
We compared sociodemographic characteristics of participants and non-participants, and calculated odds ratio estimates for a wide range of risk factors by mailing wave.
Non-participants were more often male, younger and from more deprived areas. Among participants, early responders were more likely to be female, older and living in less deprived areas, but despite these differences, we found little evidence of a systematic bias in the results when using data from early responders only.
We conclude that the main benefit of using reminders in our study was the gain in statistical power from a larger sample size.
The selective inclusion in health studies of individuals whose participation is dependent on the outcome and risk factors being investigated is a common problem in epidemiology. Case-control studies, in which exposure information is collected after diagnosis of the outcome, are particularly susceptible to such bias. Individuals suffering from the condition being investigated are likely to be more interested in participating, particularly if they are aware of, and are exposed to, risk factors for the condition. Conversely, healthy individuals in the population may be less likely to participate, and participation is often related to other factors that may be correlated with exposure, such as age, gender, socioeconomic position and educational level [1, 2]. Correction for such bias in the analysis is not possible without relevant information on non-participants, which is usually unavailable.
A commonly recommended way to minimize bias from non-participation is to increase participation, in the hope of minimizing systematic differences between participants and non-participants. A number of strategies are employed by researchers for this purpose, including use of different modes of contact, incentives, reminders, shorter data collection instruments, and more engaging documentation. In a review and meta-regression of 26 case-control studies conducted over 13 years in Germany, Stang et al. found that studies using multiple contact modes (e.g. mail and telephone) had higher participation than those using letters as the only form of contact [3]. In a recent systematic review of randomized trials of participation in postal and electronic surveys not restricted to health studies, Edwards et al. identified a number of factors for which demonstrable evidence existed of an effect on participation rates. Use of (particularly monetary) incentives included with the questionnaire, shorter and more interesting questionnaires, recorded delivery, prior contact, follow-up contact, providing a second copy of the questionnaire with the follow-up, personalized questionnaires, handwritten addresses on the envelopes, use of stamped return envelopes and university affiliation were all associated with increases in participation [4]. In another review by Nakash et al., which focused on trials of methods to increase response to postal questionnaires pertaining to health research, strategies using intensive reminders and shorter questionnaires resulted in higher participation. Incentives did not increase participation, although that review included only studies recruiting patients receiving treatment, who may already have had a high incentive to participate [5].
Despite the evidence that these strategies can improve participation, much less evidence exists about the quantitative effect that increased participation rates have on study results. Stang et al. demonstrated in simulations that studies with lower participation rates can in some situations result in less bias [6]. This might occur if late responders in studies with higher participation have a higher probability of non-differential misclassification. In a case-control study, such non-differential misclassification would occur if late responders reported exposure to risk factors less accurately than early responders - for example, because of poorer recall - but the degree to which they mis-reported exposure was no different among those who were ill (cases) and those who were not (controls). The effect of strategies to increase participation on study results is thus difficult to predict, and will depend on numerous factors, including the type of study design, the question being investigated, the invasiveness of the data collection process, and the period of time over which information needs to be collected. In the context of case-control studies, evidence of the impact of such strategies is limited and conflicting. In a study of renal cell carcinoma, Kreiger et al. found that follow-up intensity had little effect on the effect estimates [7]; in another study of breast cancer using a validation substudy to assess the accuracy of recall on use of antihypertensive drugs, Voigt et al. found that information provided by late responders was no less accurate than that provided by early responders, but analyzing data from early responders only resulted in considerable bias [8].
In this paper, we present an analysis of the effect of mail reminders to increase participation among controls in a case-control study of risk factors for Campylobacter enteritis in England.
Between April 2005 and June 2006, we conducted a case-control study of risk factors for Campylobacter enteritis among individuals aged 18 years and above in five Health Protection Units (HPU) in England. The details of the study are extensively described elsewhere [9]. Laboratory-confirmed cases of Campylobacter enteritis reported within each HPU were sent a letter from the local Consultant in Communicable Disease Control (CCDC) inviting them to participate in the study, together with a consent form, a 12-page, self-administered risk factor questionnaire, and a pre-paid, addressed return envelope. The questionnaire enquired about health details (presence of diabetes and chronic gastrointestinal illness, and use of acid-suppressing medications), exposure to animals in the home, workplace or elsewhere, recreational exposure to water sources, and a detailed history of normal dietary habits as well as consumption of chicken, and untreated dairy and water in the five days prior to illness onset. No reminders were sent to cases, as a pilot study indicated that there was little benefit in doing so.
Controls were randomly sampled from lists of individuals registered with general practice clinics in the five HPUs. Based on previous years' distribution of reported cases, five times as many controls as expected cases were sampled in each HPU, frequency matched on age group, sex and month of report. Potential controls were approached with an initial mailing pack similar to that used for cases. Individuals who had not responded within two weeks were sent a reminder letter. A second reminder and another copy of the questionnaire were sent to those who had still not responded after three weeks. Controls were asked for the same risk factor information as cases, but for recent risk factors we sought information about exposure in the five days prior to questionnaire completion.
The study received a favorable ethical opinion from the North West Multicentre Research Ethics Committee. Approval was obtained from Local Research Management and Governance departments serving each study site.
Overall participation was 46.5% (n = 2381) among cases and 37.3% (n = 5256) among controls. In the original study, we excluded individuals reporting irritable bowel syndrome (cases = 221, 9.3%; controls = 324, 6.2%), because of difficulties ascertaining date of onset and because risk factors in this group may differ. We additionally excluded controls reporting gastrointestinal symptoms in the previous 14 days (n = 431, 8.2%), and cases and controls reporting foreign travel in the 14 days prior to illness onset or questionnaire completion, respectively (cases = 560, 23.5%; controls = 511, 9.7%). Finally, we excluded two cases and seven controls because we could not determine whether they were aged 18 years or above, and a further six cases that occurred in the same household as a previously identified case. After exclusions, 1592 cases and 3983 controls were available for analysis. In the final multivariable model, self-reported past Campylobacter enteritis, use of acid-suppressing medications, recent acquisition of a pet dog, and consumption of chicken prepared outside the home were identified as risk factors for Campylobacter enteritis.
In our previous analysis [9], the potential for bias due to non-participation was assessed using inverse probability weighting, where the weights were inversely proportional to the probability of participation and derived from a two-level logistic model regressing participation against study site, a three-way interaction between age group, sex and case/control status, and area of residence as a latent, random intercept variable capturing area-level deprivation. The analysis indicated that weighting made little difference to the effect estimates for risk factors identified in the final multivariable model.
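The weighting idea can be sketched in a few lines. This is a minimal illustration of inverse probability weighting under made-up participation probabilities, not a reproduction of the authors' two-level logistic model; the function name and numbers are hypothetical.

```python
def ipw_weights(participation_probs):
    """Inverse probability weights: 1 / P(participation).

    Individuals from groups with low participation probability count for
    more in the weighted analysis, compensating for their under-
    representation among responders.
    """
    return [1.0 / p for p in participation_probs]

# Hypothetical example: suppose older women participate with probability
# 0.5 and younger men with probability 0.2. Each responding younger man
# then stands in for 5 sampled individuals, each older woman for 2.
weights = ipw_weights([0.5, 0.2])  # -> [2.0, 5.0]
```

In practice the participation probabilities would themselves be predicted from a fitted model of participation against the sociodemographic variables described above, rather than specified directly.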
For the present analysis, we categorized controls as follows: (1) individuals who returned a completed questionnaire and were included in the analysis (included controls); (2) individuals who returned a completed questionnaire and were subsequently excluded from analysis for the above-mentioned reasons (excluded controls); (3) individuals who declined to participate (active refusers); (4) individuals sent a questionnaire but whose address details were subsequently found to be incorrect or invalid (incorrect addresses); and (5) individuals from whom no response was obtained after two reminders (passive refusers). Included controls (group 1) were further categorized as (A) controls who completed or returned a questionnaire within two weeks of the initial contact; (B) controls who completed or returned a questionnaire after being sent the first reminder, but before a second reminder was sent out; and (C) controls who returned a questionnaire after being sent a second reminder.
We compared controls in groups 1 to 5 with respect to the distribution of age group, sex and area-level deprivation. We obtained the latter by linking individuals' postcodes of residence to Super Output Areas (SOAs), geographical boundaries comprising approximately 1000 residents for which aggregated census data are available. SOAs are ranked according to a standard Index of Multiple Deprivation (IMD) [10], which captures geographic variation in deprivation, using a range of education, employment, health, crime, housing and environment indicators. Individuals were assigned to a quintile of deprivation based on their SOA of residence. The distributions of these variables between the five groups were tabulated. For a small fraction of individuals, HPUs were unable to provide information on age (n = 344, 2.3%), sex (n = 66, 0.4%), or postcode (n = 133, 0.9%). These individuals were excluded from analysis of the relevant variable, but included in other comparisons for which data were available.
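Mapping a ranked area to a deprivation quintile can be sketched as follows. This helper is illustrative only: the function name, the direction of ranking (rank 1 = most deprived), and the total number of areas are assumptions, not details taken from the paper.

```python
def deprivation_quintile(imd_rank, n_areas):
    """Map an IMD rank among n_areas ranked areas to a quintile 1-5.

    Assumes rank 1 is the most deprived area, so quintile 1 contains
    the most deprived fifth of areas.
    """
    if not 1 <= imd_rank <= n_areas:
        raise ValueError("rank out of range")
    return (imd_rank - 1) * 5 // n_areas + 1

# An area ranked in the middle of 100 areas falls in quintile 3:
q = deprivation_quintile(50, 100)  # -> 3
```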
For included controls (group 1), we investigated the effect of each wave of reminders on mitigating participation bias by estimating the effect of individual risk factors on case status for those returning a questionnaire before the first reminder (group 1A), those returning a questionnaire before the second reminder (1A+1B) and all included controls (1A+1B+1C). We used unconditional logistic regression adjusting for the stratifying variables of age group, sex, study site and month. For each risk factor, we calculated the absolute difference in the effect estimate, δ, as the difference in the regression coefficient between group 1A and all controls, and groups 1A+1B and all controls:
δ_{i,j} = | β_{i,j} − β_{i,all} |

where β_{i,all} represents the logarithm of the odds ratio for risk factor i using all controls, and β_{i,j} is the logarithm of the OR for risk factor i using controls in group j (j = 1A, 1A+1B). For each mailing wave, we determined the proportion of variables yielding Wald test p-values < 0.2, according to the conventional practice of selecting such variables for further analysis in a stepwise regression.
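The δ calculation itself is a one-line comparison of log odds ratios. The sketch below is illustrative; the function name and the example odds ratios are hypothetical, and in the study the two ORs would come from separate logistic regressions fitted to the subgroup and to all controls.

```python
import math

def abs_log_or_difference(or_subgroup, or_all_controls):
    """delta: absolute difference between the log odds ratio estimated
    from a subgroup of controls (e.g. early responders, group 1A) and
    the log odds ratio estimated from all included controls."""
    return abs(math.log(or_subgroup) - math.log(or_all_controls))

# A risk factor with OR 2.5 among early responders but OR 2.0 overall:
delta = abs_log_or_difference(2.5, 2.0)
```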
Even in the absence of systematic error, differences in the coefficients occur due to random error. The extent of this error is dependent on the prevalence of the risk factor, as for a given sample size random error increases with decreasing prevalence. To assess whether bias might have occurred that exceeded that expected from random error, we plotted absolute bias against prevalence for each risk factor, by mailing wave.
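The dependence of random error on prevalence can be illustrated with Woolf's approximation to the standard error of a log odds ratio. The sample sizes below are made up for illustration; this sketch demonstrates the statistical point rather than reproducing the authors' method.

```python
import math

def se_log_or(a, b, c, d):
    """Woolf's approximation: standard error of the log odds ratio for
    a 2x2 table with cell counts a, b, c, d."""
    return math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)

# For fixed numbers of cases and controls (1000 each, null effect),
# the random error in the log OR grows as exposure prevalence falls:
for prevalence in (0.4, 0.2, 0.1, 0.05):
    exposed = int(1000 * prevalence)
    se = se_log_or(exposed, 1000 - exposed, exposed, 1000 - exposed)
    print(f"prevalence {prevalence:.2f}: SE(log OR) ~ {se:.3f}")
```

This is why, in Figure 1, larger differences in the log OR between mailing waves are expected for rarer risk factors even when no systematic bias is present.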
In addition, we investigated the effect on the final multivariable model of using only initial respondents, and participants responding before a second reminder, as compared with the analysis using all controls.
Analysis was performed using Stata 10 (StataCorp, College Station, Texas) and Microsoft Excel 2007 (Microsoft Corporation, Redmond, Washington) software.
The distribution of sex, age group and area-level deprivation for each of the five groups is shown in Table 1. Compared with all sampled potential controls, participants (groups 1 and 2) had a greater proportion of females and tended to reside in less deprived areas. Excluded controls also had a greater proportion of middle-aged individuals. Among active refusers, over a third were over 65 years of age. By contrast, passive refusers were more likely to be male, younger and reside in areas of greater deprivation. Similarly, among those with incorrectly-recorded addresses, two-thirds were male, aged 18 to 44 years, and resided in areas in the lowest three quintiles of deprivation.
Among included controls, groups 1A and 1B were similar in terms of age distribution, age at leaving full-time education and area-level deprivation, but controls returning a questionnaire before the first reminder were more likely to be female. In contrast, individuals returning a questionnaire after a second reminder were more likely to be younger, have left full-time education at age 16 years or still be in full-time education, and reside in an area of greater deprivation (Table 2).
In total, 110 indicator variables were tested in the regression models. Of these, 58 yielded Wald test p-values < 0.2 after adjustment for sex, age group, study site and month regardless of reminder wave. A further four variables were selected using this criterion when using all controls only, while another four were selected when using groups 1A or 1A+1B, but not when all controls were included. However, all of these latter eight variables had p-values close to 0.2 and 80% confidence intervals (CIs) for the OR close to one.
Figure 1 shows, by mailing wave, the absolute difference in the log OR of all variables relative to an analysis using all controls, plotted against the unadjusted prevalence of each variable. The differences are clearly centered around zero, and decrease in magnitude with increasing prevalence and mailing wave, indicating that greater random error resulting from lower risk factor prevalence or smaller sample size was primarily responsible for the observed differences in the log OR.
Figure 2 shows, by mailing wave, ORs and 95% CIs for all factors included in the final multivariable model, adjusted for sex, age group, study site and month. For all variables, there are only marginal differences in the ORs, and the 95% CIs include the point estimates from the other models using alternative groups of controls.
Our analysis has shown that, in our study, use of additional reminders among controls had little effect on mitigating bias due to non-participation. Despite some differences between early and late responders in terms of sex, age, educational level and area-level deprivation, using only data from early responders resulted in small differences in the effect estimates relative to using all controls. These differences were mainly due to random error and were minor in comparison to the uncertainty in the estimates. The main benefit of using reminders in our study was thus the gain in statistical power resulting from the larger sample size.
In this analysis we are unable to assess the effect of true participation bias, that is, the potential for bias resulting from systematic differences between participants and non-participants, about whom limited information was available. In the original study, we adjusted for differences between participants and non-participants in terms of age, sex, study site and area-level deprivation, and concluded that these factors made little difference to the results [9]. Bias could still have occurred, however, if within strata of these factors, important differences existed between participants and non-participants with respect to other factors related to Campylobacter enteritis.
Among non-participants, we also observed important differences between active and passive refusers. Compared with individuals who actively refused to participate, those from whom no response was obtained were more likely to be male, younger and to live in more deprived areas. A high proportion of active refusers were over 65 years of age. A possible explanation is that many of these individuals were in long-term care or had other health conditions that precluded participation. These differences suggest that, when refusal is high, replacement through more intense recruitment from passive refusers may not be adequate to mitigate bias if active refusal is related to factors associated with the outcome of interest.
A number of potential controls approached were subsequently found to have incorrectly recorded or out-of-date addresses. These individuals tended to be male, younger, and lived in more deprived areas. The most likely reason for the incorrect recording of addresses is list inflation, which results from a delay in removing from general practice registers the records of individuals who are deceased or no longer living in the area. Studies in the late 1990s estimated that approximately 10% of addresses on general practice registers were incorrect [11], although this figure is believed to have decreased due to recent efforts to reduce list inflation across the National Health Service. Incorrect addresses were ascertained when questionnaires were returned undelivered. This number is probably an underestimate, as it is likely that additional undelivered questionnaires were not returned to us. We know of only two studies that have investigated the fate of incorrectly addressed letters. Sandler et al. found that all envelopes sent to invalid (non-existent) addresses were returned. By contrast, among letters sent to fictitious individuals at valid addresses, 13% were not returned [12]. In Germany, Schmidt-Pokrzywniak et al. found that around 2% of such letters were not returned [13]. We thus expect that a fraction of passive refusers were individuals for whom address details were incorrect, but if the above findings are applicable to our study setting, this fraction should be small.
The presence of participation bias is likely to depend on numerous factors, including some over which researchers have little control. These might include media interest in, and public awareness of, the subject and hypotheses under investigation, and health behaviors that may be related both to participation and to the risk factors being studied. Even in the absence of systematic differences between participants and non-participants, bias may occur if information from late responders is less accurate than that from early responders. We think this unlikely in our study, as all controls were asked to provide information about exposure to risk factors in the previous five days, regardless of when they completed the questionnaire.
Our findings may be difficult to generalize to other settings, because the effect of response propensity and timeliness may differ depending on the research question and risk factors of interest. Frameworks for addressing participation bias include the Leverage-Saliency Theory put forward by Groves [14]. The theory postulates that, for a given research topic, there is a pool of individuals in the population with a propensity to participate. The likelihood that an individual will participate depends on this so-called leverage, and on the saliency with which the topic is presented to them by researchers. The theory predicts that for individuals with a low interest in the topic, other incentives, such as monetary remuneration, can improve participation. Such predictions have been tested using population subgroups for which easily identifiable proxies exist; for example, by gauging teachers' interest in educational surveys [15]. For many health topics, however, these subgroups will be difficult to identify, as interest will be only tangentially related to easily identifiable characteristics such as age and occupation. This will nevertheless be an area of growing importance in epidemiology. The effort required to achieve comparable participation levels among controls in health studies is known to be increasing over time [16], making assessments of the potential for participation bias increasingly important. Our study indicates that, among those with a propensity to participate, there is a fraction from whom obtaining a response requires more effort. However, the added benefit in doing so appears to be limited, because these late-responding individuals are not substantively different from early responders in terms of factors relevant to the analysis. Further, if these late responders differ in important ways from passive refusers, then the added effort in recruiting them will be fruitless, because the impact on mitigating bias will be minimal.
Instead, focusing additional resources on strategies to engage groups known to have low participation should be more productive. Specific recruitment strategies and study documentation may need to be designed so as to attract in particular young males and those living in more deprived areas.
In our study, controls who responded early differed from late responders in terms of demographic and socioeconomic factors, but these differences did not influence the results of our risk factor analysis. Pursuing initial non-responders through reminders in case-control studies may thus not be sufficient to mitigate bias if those with a propensity to participate differ in important ways from those who would not participate regardless of how many reminders were sent. Instead, enhancing individuals' propensity to participate in research studies through targeted strategies and study materials aimed at population subgroups with low participation should be more successful at reducing the potential for participation bias.
This study was funded by the United Kingdom Food Standards Agency (Project Number B14011).
Shen M, Cozen W, Huang L, Colt J, De Roos AJ, Severson RK, Cerhan JR, Bernstein L, Morton LM, Pickle L, Ward MH: Census and geographic differences between respondents and nonrespondents in a case-control study of non-Hodgkin lymphoma. Am J Epidemiol. 2008, 167 (3): 350-361. 10.1093/aje/kwm292.
Madigan MP, Troisi R, Potischman N, Brogan D, Gammon MD, Malone KE, Brinton LA: Characteristics of respondents and non-respondents from a case-control study of breast cancer in younger women. Int J Epidemiol. 2000, 29 (5): 793-798. 10.1093/ije/29.5.793.
Stang A, Ahrens W, Jockel KH: Control response proportions in population-based case-control studies in Germany. Epidemiology. 1999, 10 (2): 181-183. 10.1097/00001648-199903000-00017.
Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S: Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009, (3): MR000008
Nakash RA, Hutton JL, Jorstad-Stein EC, Gates S, Lamb SE: Maximising response to postal questionnaires--a systematic review of randomised trials in health research. BMC Med Res Methodol. 2006, 6: 5-10.1186/1471-2288-6-5.
Stang A, Jockel KH: Studies with low response proportions may be less biased than studies with high response proportions. Am J Epidemiol. 2004, 159 (2): 204-210. 10.1093/aje/kwh009.
Kreiger N, Nishri ED: The effect of nonresponse on estimation of relative risk in a case-control study. Ann Epidemiol. 1997, 7 (3): 194-199. 10.1016/S1047-2797(97)00013-6.
Voigt LF, Boudreau DM, Weiss NS, Malone KE, Li CI, Daling JR: Re: "Studies with low response proportions may be less biased than studies with high response proportions". Am J Epidemiol. 2005, 161 (4): 401-402. 10.1093/aje/kwi056.
Tam CC, Higgins CD, Neal KR, Rodrigues LC, Millership SM, O'Brien SJ: Chicken consumption and use of acid-suppressing medications as risk factors for Campylobacter enteritis, England. Emerg Infect Dis. 2009,
The English Indices of Deprivation 2004: Summary (revised). 2004, London: The Office of the Deputy Prime Minister, [http://www.rbkc.gov.uk/kcpartnership/General/pc_indices_deprivation.pdf]
O'Mahony PG, Thomson RG, Rodgers H, Dobson R, James OF: Accuracy of the family health services authority register in Newcastle upon Tyne, UK as a sampling frame for population studies. J Epidemiol Community Health. 1997, 51 (2): 206-207.
Sandler RS, Holland KL: Fate of incorrectly addressed mailed questionnaires. J Clin Epidemiol. 1990, 43 (1): 45-47. 10.1016/0895-4356(90)90054-S.
Schmidt-Pokrzywniak A, Stang A: Study of return rate and return time of undeliverable postal letters. Eur J Epidemiol. 2010, 25 (7): 467-470. 10.1007/s10654-010-9463-3.
Groves RM, Singer E, Corning A: Leverage-saliency theory of survey participation: description and an illustration. Public Opin Q. 2000, 64 (3): 299-308. 10.1086/317990.
Groves RM, Presser S, Dipko S: The Role of Topic Interest in Survey Participation Decisions. Public Opin Q. 2004, 68 (1): 2-31. 10.1093/poq/nfh002.
Rogers A, Murtaugh MA, Edwards S, Slattery ML: Contacting controls: are we working harder for similar response rates, and does it make a difference?. Am J Epidemiol. 2004, 160 (1): 85-90. 10.1093/aje/kwh176.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/11/33/prepub
We thank the members of the Campylobacter Case-Control Study Group, and staff of the Health Protection Units and Environmental Health Departments at the respective study sites for supplying data for the study.
The Campylobacter Case-Control Study Group comprises the following: Sarah J. O'Brien (Manchester University); Clarence C. Tam, Craig D. Higgins, Laura C. Rodrigues, and Brendan W. Wren (London School of Hygiene and Tropical Medicine); Keith R. Neal (University of Nottingham); Bob Owen and Judith Richardson (Health Protection Agency Centre for Infections); Bharat C. Patel (Health Protection Agency Collaborating Centre, North Middlesex Hospital); Peter Sheridan (North East and Central London HPU); John Curnow (Cheshire and Merseyside HPU); Ken Lamden (Cumbria and Lancashire HPU); and Sally Millership (Essex HPU).
The authors declare that they have no competing interests.
CCT and CDH conceived the idea for the study. CCT conducted the analysis. All authors contributed to the interpretation of results and drafting of the manuscript.
Tam, C.C., Higgins, C.D. & Rodrigues, L.C. Effect of reminders on mitigating participation bias in a case-control study. BMC Med Res Methodol 11, 33 (2011). https://doi.org/10.1186/1471-2288-11-33