
Impact of different cover letter content and incentives on non-response bias in a sample of Veterans applying for Department of Veterans Affairs disability benefits: a randomized, 3×2×2 factorial trial

Abstract

Background

Non-random non-response bias in surveys requires time-consuming, complicated, post-survey analyses. Our primary goal was to see if modifying cover letter information would prevent non-random non-response bias altogether. Our secondary goal was to test whether larger incentives would reduce non-response bias.

Methods

A mailed survey of 480 male and 480 female, nationally representative, Operations Enduring Freedom, Iraqi Freedom, or New Dawn (OEF/OIF/OND) Veterans applying for Department of Veterans Affairs (VA) disability benefits for posttraumatic stress disorder (PTSD). Cover letters conveyed different information about the survey’s topics (combat, unwanted sexual attention, or lifetime and military experiences), how Veterans’ names had been selected (list of OEF/OIF/OND Veterans or list of Veterans applying for disability benefits), and what incentive Veterans would receive ($20 or $40). The main outcome, non-response bias, measured differences between survey respondents’ and the sampling frame’s characteristics on 8 administrative variables, including Veterans’ receipt of VA disability benefits and exposure to combat or military sexual trauma. Analysis was intention to treat. We used ANOVA for factorial block-design logistic mixed models to assess bias, and multiple imputation and expectation-maximization algorithms to assess potential missingness mechanisms (missing completely at random, missing at random, or not at random) of two self-reported variables: combat and military sexual assault.

Results

Regardless of intervention, men with any VA disability benefits, women with PTSD disability benefits, and women with combat exposure were over-represented among respondents. Interventions explained 0.0 to 31.2% of men’s variance and 0.6 to 30.5% of women’s variance in combat non-response bias and 10.2 to 43.0% of men’s variance and 0.4 to 31.9% of women’s variance in military sexual trauma non-response bias. Non-random assumptions showed that men’s self-reported combat exposure was overestimated by 19.0 to 28.8 percentage points and their self-reported military sexual assault exposure was underestimated by 14.2 to 28.4 percentage points compared to random missingness assumptions. Women’s self-reported combat exposure was overestimated by 8.6 to 10.6 percentage points and military sexual assault exposure, by 1.2 to 6.9 percentage points.

Conclusions

Our interventions reduced bias in some characteristics while leaving others unaffected or exacerbating them. Regardless of topic, researchers are urged to present estimates under all three assumptions of missingness.


Background

Not everyone invited to participate in mailed surveys will do so, particularly when the survey’s content is sensitive [1]. When those who opt out of a survey differ systematically from those who participate, bias may enter one’s dataset [2]. Missing survey units are typically categorized as missing completely at random, missing at random, or not at random [3]. Only the first category can be ignored in analysis. For the remaining two, researchers must rely on computationally intensive, post-experimental balancing procedures to correct any potential biases. For missing at random situations, one might consider weighting, matching, stratification, multiple imputation, or propensity score adjustment to address the mechanism of missingness [4, 5]. When missingness is non-random or informative, analysis requires considerably more complex modeling procedures and expert statistical input [6]. There are few off-the-shelf statistical packages to address non-random missingness, in part because the mechanisms and patterns of missingness can vary substantially from project to project. Thus, users must often write their own computer programs for model estimation (see [7]). Even relatively simple models may become computationally prohibitive very quickly. We previously showed that male Gulf War I Veterans applying for posttraumatic disability benefits were substantially less likely than other men to return a survey asking about military sexual assault if they were sexual assault survivors [8]. Accounting for this non-ignorable non-response bias using Bayesian [4], maximum likelihood [9], and expectation-maximization [10, 11] techniques required intensive computer time and resources. Besides being time- and resource-intensive, analytical remedies for missingness must be applied after data collection ends, when the underlying biases can no longer be rectified.

According to Leverage-Salience Theory [12], many considerations prompt or dissuade people to take part in mailed surveys, with different people potentially viewing any given factor quite differently. How individuals judge a particular survey aspect—either positively or negatively—and how much weight or importance they place on that aspect are known as “leverages.” Interest in the survey’s topic and monetary incentives are two common leverages that typically encourage people to complete and return surveys [13]. Sensitive or high-threat questions, such as those asking about sexual behavior or personal finances, are examples of negative leverages that may discourage survey participation [1, 14]. Highlighting or emphasizing selected information about a research effort influences which leverages participants attend to. This is the “salience” part of the theory, since to highlight information is to make it more activating or salient to participants. Several avenues can be exploited in mailed surveys to highlight or activate selected leverages [13], including pre-notification letters, the questionnaire’s design, or, as in the present study’s focus, the cover letter.

Consistent with Leverage-Salience Theory, we previously showed that specifically mentioning a survey’s combat content in a cover letter resulted in over-representation from male combat Veterans, even when—perhaps especially when—other factors thought to suppress response rates, such as lower incentives and less privacy, were implemented [15]. In other work, we observed no difference in disability benefit status between Veteran respondents and non-respondents when we told survey recipients their name had been “randomly selected from an electronic database of Veterans” [16] but an almost 3-fold over-representation of disability benefit recipients when we informed survey recipients their name had been selected from a “list of Veterans filing disability claims” [15]. In the present study, our goal was to see if modifying key cover letter information might prevent non-random non-response bias and its attendant need for time-consuming, complicated, post-survey modeling procedures. The study extends our previous investigations into non-response bias in studies involving Veterans applying for disability benefits.

One cannot measure non-response bias without knowing something about the population of interest. Unfortunately, in many studies, population-level information is obtained from other sources, such as the United States Census, where differences in sampling approaches, question wording, and timeframe may introduce methodologically artifactual estimates of bias [5]. When sampling frame data are available, information is often limited to just a few sociodemographic characteristics, which, in turn, are only marginally related to study outcomes [5]. In the present study we take advantage of the Department of Veterans Affairs’ (VA) data warehouse to build a rich sampling frame of characteristics that we hypothesized would be related to survey non-response and to the receipt of VA disability benefits. Known predictors of receiving VA disability benefits for posttraumatic stress disorder (PTSD) include older age, male gender, and combat exposure [17, 18]; negative predictors include female sex, non-white race, and history of military sexual assault [16, 18]. Greater medical and psychiatric comorbidity are associated with receipt of any VA disability benefits [19]. We hypothesized that specifically telling Veterans that the survey asked about combat would trigger over-representation of combat Veterans, that specifically mentioning military sexual assault would result in under-representation of male sexual assault survivors and possibly female sexual assault survivors, and that providing a more generic description of the survey’s content would generate the most representative respondent pool.
We also hypothesized that telling survey recipients their name had been selected from a list of Veterans applying for VA disability benefits would result in over-participation by disability benefit recipients (and thus over-participation by Veterans with more medical and psychiatric comorbidities), while giving a less specific accounting of where their name came from would result in more representative participation.

Our secondary goal was to examine the effects of different incentives on non-response bias. Incentives have consistently been shown to increase survey response rates (e.g., [20]), but their impact on non-response bias is less clear (e.g., [21,22,23,24]). In prior work with male disability applicants, we showed that larger incentives tended to attract younger, healthier, working men compared to smaller incentives [15]. We anticipated that a larger incentive in the present study would also reduce non-response bias related to age, health, and disability status.

Methods

Study design and human studies oversight

The study is a gender-blocked, randomized, 3×2×2 factorial comparison trial. The Minneapolis VA Health Care System’s Institutional Review Board for Human Studies reviewed and approved the study protocol (#4495-B). All analyses were pre-planned. Data were collected between February and August 2016.

Participants

Participants were Veterans who had served during Operations Enduring Freedom, Iraqi Freedom, and New Dawn (OEF/OIF/OND) and had a pending VA disability claim for PTSD. From a sampling frame of 14,630 men and 2945 women, we randomly selected, without replacement, 480 men and 480 women to receive mailed surveys.

Selected Veterans had a median age of 33.0 years (interquartile range = 12, mean = 35.2, SD = 8.9, range 19–67) and 78.0% had received combat pay while on active duty. In terms of health, 6.7% had been diagnosed with bipolar disorder, schizophrenia, or schizoaffective disorder; 10.8% had Charlson Comorbidity Index [25] scores greater than zero.

Protocol

Veterans received pre-notification letters 1 week before we mailed a cover letter and 22-page questionnaire to their homes. The questionnaire asked about PTSD and depression symptoms; functioning; pain; substance use; and traumatic exposures in the military, including combat and military sexual assault. Of these items, only self-reported combat and military sexual assault are considered here.

Cover letters were mailed with the questionnaires. All cover letters used the same language to describe the risks and benefits of participating in the research and to emphasize that participation was voluntary. At two-week intervals, non-respondents received postcard reminders; a follow-up mailing of the questionnaire; and a third, final mailing of the questionnaire via United Parcel Service’s 3-day delivery service. Veterans signified their consent to participate by returning a completed survey. Except for specific cover letter content described in “Study Arms” below, all other aspects of the survey were the same across groups, including the pre-notification letters, reminder postcards, and questionnaires.

Study arms

We used the individual cover letters to deliver the stimulus. As shown in Table 1, the first study factor varied the information survey recipients received in the cover letter about the survey’s topics. Veterans were told that the survey would ask about “combat,” about “unwanted sexual attention while in the military,” or about “lifetime and military experiences that can affect well-being.” The second factor varied the cover letter’s information about how Veterans’ names were obtained: either from a “Department of Veterans Affairs list of Veterans who served during OEF/OIF/OND” or from a “Department of Veterans Affairs list of Veterans who filed a disability claim.” The third factor examined the effect of different incentives: $20 or $40. Because of local policies, incentives were paid only to Veterans who returned a completed survey. Veterans were told which incentive they would receive in the cover letter.

Table 1 Study Factors and Allocation of Participants

After blocking on gender, we randomly divided the 480 men into 12 equal-sized groups of 40 individuals. Each of the 12 possible cover letter iterations (i.e., what recipients were told about the survey’s topic, how their name was obtained, and what incentive they would receive) was then randomly assigned to one of the 12 groups of men. This same procedure was repeated for the 480 women (see [26]). Randomizations were accomplished using a computer-generated program, overseen by BAC. The remaining co-authors were unaware of allocation until after it was completed.

Outcomes

Main outcome

The main outcome was non-response bias on each of our 8 pre-specified non-response correlates (see “Measures,” below).

Secondary outcomes

Secondary outcomes included unit (survey) response, the percent of non-response variance in our 8 correlates that was explained by the different cover letter and incentive iterations, and the impact of different missingness assumptions on the estimated prevalence of combat and military sexual assault exposures.

Measures

Non-response correlates

As mentioned previously, the characteristics we anticipated would be related to survey non-response and to the receipt/non-receipt of VA disability benefits included age, race/ethnicity, combat exposure, history of military sexual assault, and greater medical or psychiatric comorbidity. Veterans’ VA disability benefit status was itself thought to predict survey non-response. Indicators for all these variables were available for the entire sampling frame through the VA’s Corporate Data Warehouse. Age was dichotomized as < 30 or ≥ 30 years. We used the race/ethnicity data fields from the Veterans Benefits Administration, which categorizes Veterans into 7 mutually exclusive categories, including “Asian or Pacific Islander,” “Black or African American,” “Hispanic ethnicity,” “Other,” “Unknown,” and “White.” For descriptive analyses we combined the “Other” and “Unknown” categories. When calculating bias, we dichotomized race as “Non-White” and “White.” We used combat flags, medical diagnoses, special issue codes, and Veterans’ responses to the VA’s military sexual trauma screener to categorize Veterans as having combat or military sexual trauma exposure. “Military sexual trauma” encompasses sexual assault and severe, pervasive physical sexual harassment while in the Armed Forces. Results were dichotomized as “exposed” or “not exposed.” We used inpatient and outpatient VA ICD-9-CM and ICD-10-CM codes for bipolar disorder, schizoaffective disorder, or schizophrenia to determine Veterans’ serious mental illness status, dichotomized as “present” or “not present,” in the 180 days prior to the survey. We likewise used inpatient and outpatient VA ICD-9-CM and ICD-10-CM codes to calculate Charlson Comorbidity Index [25] scores for each Veteran. We dichotomized results as 0 versus ≥ 1. Veterans with no VA health care utilization received Charlson Comorbidity Index scores of 0 (n = 48).

Veterans’ disability claims were pending at the time of survey; therefore, their benefit status for any disorder or for PTSD specifically was ascertained approximately 7 months after the survey was fielded. We dichotomized results as “receiving disability benefits” or “not receiving benefits.”

Self-reported exposures

Self-reported combat exposure and military sexual assault were available for respondents only. We used the Combat Experiences subscale of the Deployment Risk and Resilience Inventory-2 [27] to assess combat exposures and a 5-item adaptation of the Sexual Harassment Inventory’s [28] Criminal Sexual Misconduct subscale to assess military sexual assault. Note that military sexual trauma, obtained from VA administrative data, is not equivalent to self-reported military sexual assault, as the former term also encompasses severe, physical sexual harassment. Self-reported outcomes for combat and for military sexual assault were dichotomized as “any” versus “none.”

Analysis

Analysis was intention to treat. Results are reported separately by gender to account for our sampling strategy. The primary outcome, non-response bias, was calculated as the difference in the percentage of people with each non-response correlate in the respondent sample minus the percentage of people with the same characteristic in the sampling frame. Negative numbers indicate under-representation of that characteristic in the respondent pool, and positive numbers indicate over-representation. Zero indicates no bias.
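As a concrete illustration, this bias measure can be sketched in a few lines of code; the numbers below are toy values, not the study’s data:

```python
# Non-response bias for one dichotomous correlate, in percentage points:
# (% of respondents with the characteristic) - (% of the sampling frame
# with the characteristic). Positive = over-represented among respondents.

def nonresponse_bias(frame_flags, respondent_flags):
    """Each argument is a list of 0/1 indicators for the characteristic."""
    pct_frame = 100.0 * sum(frame_flags) / len(frame_flags)
    pct_respondents = 100.0 * sum(respondent_flags) / len(respondent_flags)
    return pct_respondents - pct_frame

# Toy example: 50% of the frame has the characteristic but 60% of
# respondents do, so the characteristic is over-represented by +10 points.
frame = [1] * 50 + [0] * 50
respondents = [1] * 30 + [0] * 20
print(nonresponse_bias(frame, respondents))  # → 10.0
```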

We used the American Association for Public Opinion Research’s [29] response rate definition #1 to calculate survey response rate. Specifically, response rate was calculated as the number of returned surveys in each condition divided by the number of Veterans assigned to that condition. We dichotomized unit response as “survey returned” versus “not returned.”

We used ANOVA to see if the mean bias differed statistically significantly across the 3 main interventions. The associated degrees of freedom for non-significant interaction terms were added to the final model’s error term. We took advantage of the ability to partition the sum of squares in ANOVA to determine the percent of non-response variance in our 8 correlates explained by the 3 study-arm manipulations. The Intervention (study-arm manipulations) Sum of Squares (SSI) divided by the Total Sum of Squares (SST), or η2, multiplied by 100% represents the variance explained by each of the study arm manipulations.
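The η² computation described above, the intervention sum of squares divided by the total sum of squares, can be sketched as follows. The group labels and bias values are purely illustrative, not results from the study:

```python
# Eta-squared (percent of variance explained) from a one-way partition of
# the sums of squares, as the authors describe: 100 * SS_intervention / SS_total.

def eta_squared(group_biases):
    """group_biases: dict mapping an intervention level to the list of bias
    values observed in its factorial cells. Returns eta^2 as a percentage."""
    all_vals = [v for vals in group_biases.values() for v in vals]
    grand_mean = sum(all_vals) / len(all_vals)
    ss_total = sum((v - grand_mean) ** 2 for v in all_vals)
    ss_between = sum(
        len(vals) * ((sum(vals) / len(vals)) - grand_mean) ** 2
        for vals in group_biases.values()
    )
    return 100.0 * ss_between / ss_total

# Hypothetical combat-bias values for the 3 topic arms (4 cells each):
biases = {
    "combat": [8.0, 9.0, 10.0, 9.0],
    "sexual attention": [-2.0, -1.0, -3.0, -2.0],
    "generic": [1.0, 2.0, 0.0, 1.0],
}
print(round(eta_squared(biases), 1))
```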

Following an approach similar to Murdoch et al. [8], we used the survey’s two self-report variables (combat exposure and military sexual assault) to identify the study-arm manipulations’ impact on missingness mechanisms (i.e., missing completely at random, missing at random, or not at random). Numerically close estimates and small variances across the 3 resulting estimates would support random missingness. We used observed values to estimate prevalence under missing completely at random assumptions; 25 copies of imputed values to estimate the prevalence of military sexual assault and combat exposure under the missing at random assumption [30]; and Ibrahim and Lipsitz’s [10, 11] expectation–maximization algorithm to estimate prevalences under a not-at-random assumption. Ibrahim and Lipsitz’s method assumes that missingness in the outcome variable is related to recorded covariates that are assumed or known to be associated with the outcome. This information is then used to estimate the joint probability of being a non-respondent and of having the outcome of interest.
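To see why the different assumptions can yield different prevalence estimates, here is a toy simulation, not the authors’ imputation or EM models: missingness depends on a recorded covariate, so a complete-case (MCAR-style) estimate is biased, while a covariate-stratified estimate (a crude stand-in for the MAR-style multiple imputation) recovers the true prevalence.

```python
import random

# Toy illustration of MCAR- vs MAR-style prevalence estimation when
# missingness depends on a recorded covariate (all values hypothetical).
random.seed(1)

population = []
for _ in range(100_000):
    benefits = random.random() < 0.5                         # recorded covariate
    exposed = random.random() < (0.8 if benefits else 0.2)   # true prevalence 0.5
    respond = random.random() < (0.7 if benefits else 0.3)   # MAR missingness
    population.append((benefits, exposed if respond else None))

observed = [(b, y) for b, y in population if y is not None]

# MCAR-style estimate: mean of the observed outcomes only (biased upward,
# because exposed people are concentrated in the high-response stratum).
mcar = sum(y for _, y in observed) / len(observed)

# MAR-style estimate: stratify on the covariate, then weight each stratum's
# observed prevalence by the stratum's share of the full sample.
mar = 0.0
for level in (True, False):
    stratum = [y for b, y in observed if b == level]
    weight = sum(1 for b, _ in population if b == level) / len(population)
    mar += weight * (sum(stratum) / len(stratum))

print(round(mcar, 2), round(mar, 2))  # MCAR near 0.62, MAR near the true 0.50
```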

We used SPSS version 19.0, SAS version 9.4, and R version 4.0.2 for analyses and graphics.

Power

For each of the 3 study manipulations, the study had 80% power a priori to detect a bias between respondents and the underlying sampling frame ranging from 5 to 12 percentage points, depending on the population’s prevalence. For conditions expected to be very common (e.g., 90% combat exposure or 90% service connection in men [8, 18]) or uncommon (e.g., 2% military sexual trauma in men [31]), the smallest detectable bias was estimated to be 5 percentage points. For intermediate prevalences (e.g., 30% combat exposure in women or 50% service connection in women [18]) the smallest detectable bias was 12 percentage points. Biases in this range approximate Cohen’s h = 0.25, generally considered a small to moderate effect [32]. We assumed a response rate of 50% and two-tailed alpha = 0.05.
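Cohen’s h, the effect-size metric referenced above, is defined as h = |2·arcsin(√p₁) − 2·arcsin(√p₂)|. A minimal sketch with illustrative prevalences (not the study’s exact power inputs):

```python
import math

def cohens_h(p1, p2):
    """Cohen's effect size h for the difference between two proportions."""
    return abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))

# A 5-point bias at a 90% prevalence and a 12-point bias at a 50% prevalence
# both land in the small-to-moderate range near h = 0.2-0.25:
print(round(cohens_h(0.95, 0.90), 2))
print(round(cohens_h(0.62, 0.50), 2))
```

Because the arcsine transform stretches the scale near 0 and 1, the same h corresponds to a smaller percentage-point difference at extreme prevalences, which is why the detectable bias ranged from 5 to 12 points.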

Results

Response rate

Response rates by each study-arm manipulation and gender are shown in Table 2. A total of 410 Veterans (42.7%) returned completed surveys, with an overall response rate of 41.5% for men and 44.0% for women. As can be seen from Table 2, emphasizing different survey topics in the cover letter, such as combat or unwanted sexual attention, did not significantly influence either gender’s net survey response. Men, but not women, were more likely to return a survey if told their name had been obtained from a list of Veterans applying for VA disability benefits compared to a list of OEF/OIF/OND Veterans. Both men and women were statistically significantly more likely to return a survey if they were promised a $40 post-paid incentive compared to $20. Supplementary Table 1 (Additional File) shows response rates for each factorial combination by gender. Men’s survey response rates ranged from 22.5 to 60%, depending on their factorial combination, and women’s, from 32.5 to 60%. The lowest response rates for both genders were obtained from those promised a $20 post-paid incentive, told that the survey asked about lifetime and military experiences, and informed that their name had been obtained from a list of OEF/OIF/OND Veterans. Omnibus χ2 tests indicated that men’s response rates differed significantly across the different factorial combinations (p = 0.03), but women’s response rates did not (p = 0.88).

Table 2 Survey response (n) and response rate (%) by gender and study-arm manipulation

Non-response bias

Table 3 shows the sampling frame’s characteristics stratified by gender and survey response status. As can be seen, on net, male non-respondents were statistically significantly more likely to be under age 30 and less likely to have any VA disability benefits compared to survey respondents. Female non-respondents were statistically significantly less likely to have a serious mental illness and less likely to have VA disability benefits for PTSD compared to respondents. There was a trend for women without combat exposure to be non-respondents (p = 0.06).

Table 3 Sample Characteristics by Gender and Response Status

Different interventions could, of course, cancel one another out, resulting in a null effect on net non-response bias. Figures 1 and 2, therefore, show the degree to which men and women with the 8 study characteristics were over- or under-represented within each study-arm manipulation. Supplementary Tables 2a and 2b (Additional file) provide the same information in tabular form. As Fig. 1 shows, across all the interventions, men under age 30 were consistently under-represented among respondents, while men with serious mental illness and men with any VA disability benefits were consistently over-represented. Men with combat exposure were most over-represented in the group told the survey asked about combat, and men with military sexual trauma were most under-represented in the group told the survey asked about unwanted sexual attention. Although these latter two effects were in the direction expected, neither was statistically significant.

Fig. 1

Bias in the 8 non-response correlates in men by the 7 study-arm manipulations. Grid lines range from − 12 (the center point) to + 12 percentage points. Negative numbers indicate under-representation of the characteristic in respondents compared to the sample, and positive numbers, over-representation. The blue circle represents 0 or no bias. MST = military sexual trauma. OEF/OIF/OND = Operation Enduring Freedom, Operation Iraqi Freedom, Operation New Dawn. SMI = Serious mental illness diagnosis

Fig. 2

Bias in the 8 non-response correlates in women by the 7 study-arm manipulations. Grid lines range from − 12 (the center point) to + 12 percentage points. Negative numbers indicate under-representation of the characteristic in respondents compared to the sample, and positive numbers, over-representation. The blue circle represents 0 or no bias. MST = military sexual trauma. OEF/OIF/OND = Operation Enduring Freedom, Operation Iraqi Freedom, Operation New Dawn. SMI = Serious mental illness diagnosis

Contrary to our expectations, men with serious mental illness and higher Charlson scores were over-represented among those told their name had been obtained from a list of OEF/OIF/OND Veterans compared to those told their name came from a list of Veterans applying for disability benefits; again, these differences were not statistically significant. Compared to the $20 post-paid incentive, offering $40 was not associated with statistically significant differences in bias across any of the 8 administrative variables, though men under age 30 were least under-represented in that study arm.

As Fig. 2 shows, in contrast to the men, women with combat exposure were consistently over-represented among survey respondents, regardless of study-arm manipulation. Non-white women were consistently under-represented in all the study arms, and women receiving VA disability benefits for PTSD were consistently over-represented. Women with military sexual trauma were under-represented among those told the survey asked about lifetime and military experiences compared to those told the survey asked about combat or about unwanted sexual attention, but the difference was not statistically significant (p = 0.22). Different from the men but as we had hypothesized, women with higher Charlson scores were statistically significantly over-represented in the group told their name had been obtained from a list of Veterans applying for disability benefits compared to those told their name came from a list of OEF/OIF/OND Veterans. However, contrary to expectations, women with serious mental illness were similarly over-represented in both groups. As with the men, offering $40 was not associated with any statistically significant differences in bias compared to offering $20. Although we had no specific hypothesis tied to it, bias in the percentage of women receiving any VA disability benefits was statistically significantly different across the three groups receiving different information about the survey’s topic (p = 0.04).

Variance in non-response bias explained by the study-arm manipulations

Table 4 shows how much of the variance in non-response bias was explained by each of the three study-arm manipulations. This ranged from negligible to substantial, depending on the characteristic and intervention. Different descriptions of the cover letter content explained 31.2% of the variance in men’s combat non-response bias and 25.6% of the variance in men’s military sexual trauma non-response bias; in women, they explained just 2.2% of the variance in combat non-response bias but 31.9% of the variance in military sexual trauma non-response bias. Almost a third of the variance in non-white women’s under-representation was explained by how we said their name was obtained; this study-arm manipulation also explained more than 50% of the non-response bias in women with Charlson scores > 0. Differences in post-paid incentives had mostly negligible impact on women’s bias in any of the 8 administrative variables but explained 36.6% of the variance in men’s non-response bias by age, 18.1% of the variance in men’s combat exposure bias, and 19.1% of the bias in men’s Charlson scores > 0.

Table 4 Variance in bias (η2) explained by each study-arm manipulation, stratified by gender

Impact of different missing mechanisms on estimates of combat and military sexual assault

Tables 5 and 6 show how estimates of men and women’s combat and military sexual assault exposure based on self-report changed across the 7 study-arm manipulations, depending on the missingness assumption used. Exposures based on VA administrative data are listed for reference. As Table 5 shows, when assuming random missingness compared to non-random missingness, men’s combat exposure was overestimated by 19.0 to 28.8 percentage points, and their military sexual assault exposure was underestimated by 14.2 to 28.4 percentage points. For women (Table 6), combat exposure was overestimated by 8.6 to 10.6 percentage points when assuming random missingness compared to non-random missingness, but their military sexual assault exposure was overestimated by only 1.2 to 6.9 percentage points when comparing random to non-random missingness.

Table 5 Percentage of Men with Self-Reported Combat and Military Sexual Assault Experiences by Different Missingness Mechanisms and Study-Arm Manipulation
Table 6 Percentage of Women with Self-Reported Combat and Military Sexual Assault Experiences by Different Missingness Mechanisms and Study-Arm Manipulation

Across the three missingness assumptions, the smallest discrepancy between combat estimates for men occurred among those told their name had been obtained from a list of OEF/OIF/OND Veterans. At 19 percentage points, however, the discrepancy was still substantial. The smallest discrepancy between military sexual assault estimates for men, at 14.2 percentage points, was seen among men randomized to receive the $40 post-paid incentive. For women, the smallest discrepancies between combat estimates and military sexual assault estimates both occurred among those told the survey would ask about combat (8.6 percentage points and 1.2 percentage points, respectively).

Discussion

Our findings showed that modifying the cover letter’s content to say how survey recipients’ names had been obtained and offering a more generous post-paid incentive increased net survey response rates by as much as 10 percentage points. Our hypotheses related to non-response bias were only partially supported. As we expected, male combat Veterans were over-recruited from the group told that the survey asked about combat, and male sexual assault survivors were under-recruited from the group told that the survey asked about unwanted sexual attention. Differences in what Veterans were told about the survey’s content explained almost a third of the variance in men’s combat non-response bias and a quarter of their military sexual trauma non-response bias. However, neither of these findings was statistically significant. Women combat Veterans were over-recruited across all the study-arm manipulations. When told the survey asked about unwanted sexual attention, women with those exposures were over-recruited, but again, at a statistically non-significant level. Except for over-recruiting women with Charlson scores > 0 in the group told their name came from a list of Veterans filing for disability benefits, none of our hypotheses related to telling Veterans how their name had been obtained were supported. A higher post-paid incentive likewise did not improve the representativeness of Veterans recruited into the study. Men’s self-reported combat and military sexual assault exposures were most consistent with a non-random pattern of missingness across all the study-arm manipulations. Particularly for military sexual assault, women’s estimates were considerably closer across the 3 assumptions of missingness, suggesting that their missingness may have, in fact, been random for this variable. Interestingly, the smallest discrepancy in women’s combat and military sexual assault estimates occurred in the group told that the survey asked about combat.

We had anticipated that combat Veterans and Veterans applying for PTSD disability benefits would be particularly interested in participating in research geared to those topics. Topic interest is generally considered a positive leverage that facilitates research participation. Whether it results in over-representation by interested participants is less straightforward [33]. In a clear example of topic interest leading to over-representation, Groves et al. [13] showed that members of a birdwatching association were almost twice as likely to participate in a survey about birding as in a survey about mall design. Furthermore, members were more than twice as likely as non-members to participate in the birding survey. On the other hand, Groves et al. [13] also showed that patients with diabetes were no more likely to take part in a survey about diabetes than in a survey about life quality. We showed elsewhere that male Gulf War I Veterans were more likely to report combat in a mailed survey if they were assigned to a low-privacy, low-incentive condition compared to Veterans assigned to greater-privacy/higher-incentive conditions [15]. These results suggested that male combat Veterans were particularly motivated to participate in a survey asking about combat exposures. The direction of effects in the present study is consistent with this interpretation, though, again, findings were statistically non-significant. Interestingly, even though women with combat exposures were over-represented in all our study arms, their self-reported estimates of combat exposure were closer to a random missing pattern than were the men’s.

High-threat questions, such as those asking about sexual victimization, are generally considered negative leverages that may discourage research participation, particularly by those who have experienced unwanted sexual attention. We previously showed that male Gulf War I Veterans who applied for PTSD disability benefits and had a history of military sexual trauma were particularly unlikely to participate in a survey asking about such experiences [8]. Their self-reported sexual assault experiences also showed a non-random pattern of missingness of a magnitude strikingly similar to what we report here. In contrast, telling women that the survey asked about unwanted sexual attention resulted in a slight, though statistically non-significant, over-representation of those with military sexual trauma and the largest discrepancy between random and non-random missingness assumptions.

Although it increased net response rates, doubling our incentive had no statistically significant impact on the bias in any of the 8 administrative variables in either gender, though it did explain almost 37% of the age bias in men and 19% of the bias in men’s Charlson comorbidity scores. We had previously shown that a pre-paid $20 versus $10 incentive increased respondent representativeness by bringing younger and healthier participants into a survey about military traumas [15]. Possibly there are no further gains to be had by increasing incentives beyond $20. Other researchers have found mixed effects of incentives on bias [21, 23, 34]. Although incentives have been shown in several studies to enhance recruitment of African Americans [24], they had no impact on the under-representation of nonwhites in the present study.

Strengths and limitations

The study has several strengths, including its randomized, factorial design and its focus on an important, policy-relevant population. While a comprehensive literature examines the impact of various interventions on survey response rates, relatively few studies examine methods to reduce non-response bias. In terms of limitations, our response rate was lower than anticipated, particularly in some factorial combinations. Findings are therefore susceptible to Type II error. The study also does not answer how large non-response bias can be before it becomes intolerable. Our data indicate that, in some cases, non-response biases of less than 10 absolute percentage points in men were associated with over-estimation of combat exposure and under-estimation of military sexual assault of almost 30 percentage points. Distortions of this magnitude could lead to misallocated resources, such as underfunding military sexual assault treatment for men. VA administrative data are not perfect indicators of Veterans’ true combat and military sexual trauma status and probably underestimate both. However, positive values tend to be correct and thus relate informatively to self-reported values. Finally, the study targeted OEF/OIF/OND Veterans applying for PTSD disability benefits. This is a highly selected group, and the manipulations we applied to the cover letters were uniquely targeted to them; results may not apply to other populations.
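
The bias metric underlying these figures compares respondents to the full sampling frame on an administrative variable, in absolute percentage points. A minimal sketch with hypothetical numbers, not the study’s data:

```python
# Non-response bias as used here: respondents' percentage on an administrative
# variable minus the sampling frame's percentage, in percentage points.
# The 10-person frame below is hypothetical.

def nonresponse_bias(frame_flags, responded):
    """frame_flags: 0/1 attribute for everyone in the sampling frame.
    responded: parallel 0/1 indicator of who returned a survey."""
    frame_pct = 100.0 * sum(frame_flags) / len(frame_flags)
    resp_flags = [f for f, r in zip(frame_flags, responded) if r]
    resp_pct = 100.0 * sum(resp_flags) / len(resp_flags)
    return resp_pct - frame_pct

# Combat-exposed Veterans over-respond: 40% of the frame, 75% of respondents
combat = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
returned = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
print(nonresponse_bias(combat, returned))  # prints 35.0
```

A positive value indicates over-representation of the attribute among respondents; a negative value, under-representation.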

Conclusions

Researchers have long acknowledged, e.g., [13, 24], that higher response rates can counterintuitively increase non-response bias if the higher response rate is achieved by over-recruiting subgroups with specific attributes. This study offers several examples of where this occurred. VA administrators need to be aware that trauma surveys likely overestimate combat exposure and underestimate sexual assault in male PTSD disability applicants by a substantial degree. This information should be kept in mind when allocating scarce resources to address these issues. Specialized methods, such as targeted recruitment of selected subgroups like men under the age of 30 or non-white women, must be used to ensure adequate representation of undercounted Veterans. Recently, Gray et al. [35] offered a range of population estimates for harmful drinking under several plausible missingness patterns. Reporting study results under different assumptions of missingness allows readers to see concretely how those assumptions affect study estimates.
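
The practice of reporting estimates under several missingness assumptions can be sketched as a simple pattern-mixture calculation: the overall prevalence is a response-rate-weighted mix of the observed prevalence and an assumed prevalence among non-respondents. The numbers below are hypothetical, not the study’s:

```python
# Pattern-mixture style sensitivity analysis: report an exposure prevalence
# under several assumed prevalences among non-respondents.
# All inputs below are hypothetical.

def overall_prevalence(p_observed, response_rate, p_missing):
    """Mix the respondents' observed prevalence with an assumed prevalence
    among non-respondents, weighted by the response rate."""
    return p_observed * response_rate + p_missing * (1.0 - response_rate)

p_obs, rr = 0.60, 0.40  # 60% report the exposure among the 40% who responded

# MCAR: non-respondents look like respondents.
# MNAR scenarios: non-respondents report the exposure less often.
for label, p_miss in [("MCAR", p_obs), ("MNAR (-20 pts)", 0.40), ("MNAR (-30 pts)", 0.30)]:
    print(f"{label}: {overall_prevalence(p_obs, rr, p_miss):.2f}")
```

Under MCAR the overall estimate stays at 0.60; under the two MNAR scenarios it drops to 0.48 and 0.42, making the impact of each assumption explicit to the reader.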

Availability of data and materials

The data that support the findings of this study are available from the corresponding author (MM) upon reasonable request and with local Minneapolis VA Health Care System IRB permission.

Abbreviations

ANOVA:

Analysis of variance

ICD-9-CM:

International Classification of Diseases, Clinical Modification, 9th revision

ICD-10-CM:

International Classification of Diseases, Clinical Modification, 10th revision

MST:

Military sexual trauma

OEF/OIF/OND:

Operations Enduring Freedom, Iraqi Freedom, or New Dawn

PTSD:

Posttraumatic stress disorder

SAS:

Statistical Analysis System

SMI:

Serious mental illness

SPSS:

Statistical Package for the Social Sciences

SSI:

Intervention Sum of Squares

SST:

Total Sum of Squares

VA:

Department of Veterans Affairs

References

  1. Couper M, Singer E, Conrad F, Groves R. Experimental studies of disclosure risk, disclosure harm, topic sensitivity, and survey participation. J Off Stat. 2010;26(2):287–300.

  2. Delgado-Rodriguez M, Llorca J. Bias. J Epidemiol Community Health. 2004;58:635–41.

  3. Rubin D. Inference and missing data. Biometrika. 1976;63(3):581–92.

  4. Little R, Rubin D. Statistical analysis with missing data. 2nd ed. New York: John Wiley and Sons; 2002.

  5. Groves RM. Nonresponse rates and nonresponse bias in household surveys. Public Opin Q. 2006;70(5):646–75.

  6. Allison P. Missing data. Thousand Oaks, CA: Sage; 2001.

  7. Liu D, Yeung E, McLain A, Xie Y, Buck Louis G, Sundarama R. A two-step approach for analysis of nonignorable missing outcomes in longitudinal regression: an application to upstate KIDS study. Paediatr Perinat Epidemiol. 2017;31(5):468–78.

  8. Murdoch M, Polusny M, Street A, Grill J, Baines Simon A, Bangerter A, et al. Sexual assault during the time of Gulf War I: a cross-sectional survey of U.S. service men who later applied for Department of Veterans Affairs PTSD disability benefits. Milit Med. 2014;179(3):285–93.

  9. Qin J, Leung D, Shao J. Estimation with survey data under nonignorable nonresponse or informative sampling. J Am Statist Assoc. 2002;97:193–200.

  10. Ibrahim J, Chen M, Lipsitz S, Herringa A. Missing-data methods for generalized linear models: a comparative review. J Am Statist Assoc. 2005;100(469):332–46.

  11. Ibrahim S, Lipsitz S. Parameter estimation from incomplete data in binomial regression when the missing data mechanism is nonignorable. Biometrics. 1996;52(3):1071–8.

  12. Groves R, Singer E, Corning A. Leverage-saliency theory of survey participation: description and an illustration. Public Opin Q. 2000;64(3):299–308.

  13. Groves R, Couper M, Presser S, Singer E, Tourangeau R, Acosta G, et al. Experiments in producing nonresponse bias. Public Opin Q. 2006;70(5):720–36.

  14. Sedgwick P. Non-response bias versus response bias. BMJ. 2014;348:g2573.

  15. Murdoch M, Simon A, Polusny M, Bangerter A, Grill J, Noorbaloochi S, et al. Randomized controlled trial using pre-merged questionnaires showed that different tracking/privacy conditions and incentives yielded distinctive subpopulations of respondents. BMC Med Res Methodol. 2014;14:90.

  16. Murdoch M, Hodges J, Cowper D, Fortier L, van Ryn M. Racial disparities in VA service connection for posttraumatic stress disorder disability. Med Care. 2003;41(4):536–49.

  17. Murdoch M, Nelson D, Fortier L. Time, gender, and regional trends in the application for service-related post-traumatic stress disorder disability benefits, 1980–1998. Milit Med. 2003;168(8):662–70.

  18. Murdoch M, Hodges J, Hunt C, Cowper D, Kressin N, O'Brien N. Gender differences in service connection for PTSD. Med Care. 2003;41(8):950–61.

  19. Murdoch M, van Ryn M, Hodges J, Cowper D. Mitigating effect of Department of Veterans Affairs (VA) disability benefits for PTSD on low income. Milit Med. 2005;170(2):137–40.

  20. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Wentz R, et al. Methods to increase response to postal questionnaires. Cochrane Database Methodol Rev. 2003;(4):MR000008. https://doi.org/10.1002/14651858.MR000008.pub2.

  21. Oscarsson H, Arkhede S. Effects of conditional incentives on response rate, non-response bias and measurement error in a high response-rate context. Int J Public Opin Res. 2020;32(2):354–68.

  22. Felderer B, Muller G, Kreuter F, Winter J. The effect of differential incentives on attrition bias: evidence from the PASS wave 3 incentive experiment. Field Methods. 2018;30(1):56–69.

  23. McGonagle K, Freedman V. The effects of a delayed incentive on response rates, response mode, data quality, and sample bias in a nationally representative mixed mode study. Field Methods. 2017;29(3):221–37.

  24. Singer E, Ye C. The use and effects of incentives in surveys. Ann Am Acad Pol Soc Sci. 2013;645(January):112–41.

  25. Charlson M, Pompei P, Ales K, MacKenzie C. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chron Dis. 1987;40(5):373–83.

  26. Machin D, Fayers P. Randomized clinical trials: design, practice and reporting. Chichester: Wiley-Blackwell; 2010.

  27. Vogt D, Smith B, King D, King L. Manual for the Deployment Risk and Resilience Inventory-2 (DRRI-2): a collection of measures for studying deployment-related experiences of military veterans. Boston: National Centers for PTSD; 2012.

  28. Murdoch M, McGovern P. Development and validation of the Sexual Harassment Inventory. Violence Vict. 1998;13(3):203–16.

  29. The American Association for Public Opinion Research. Standard definitions: final dispositions of case codes and outcome rates for surveys. 8th ed. AAPOR; 2015.

  30. Rubin D. Multiple imputation for nonresponse in surveys. New York: John Wiley & Sons; 1987.

  31. Martin L, Rosen L, Durand D, Stretch R, Knudson K. Prevalence and timing of sexual assaults in a sample of male and female US Army soldiers. Milit Med. 1998;163(4):213–6.

  32. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Assoc; 1988.

  33. Groves R, Presser S, Dipko S. The role of topic interest in survey participation decisions. Public Opin Q. 2004;68(1):2–31.

  34. Jackle A, Lynn P. Respondent incentives in a multi-mode panel survey: cumulative effects on nonresponse and bias. Surv Methodol. 2008;34(1):105–17.

  35. Gray L, Gorman E, White I, Katikireddi S, McCartney G, Rutherford L, et al. Correcting for non-participation bias in health surveys using record-linkage, synthetic observations and pattern mixture modelling. Stat Methods Med Res. 2020;29(4):1212–26.

Acknowledgements

We thank Andrea Cutting for data management and Derek Vang for study coordination.

Disclaimer

Views expressed are solely those of the authors and do not reflect the opinion, views, policies, or position of the Department of Veterans Affairs.

Funding

The Center for Care Delivery Outcomes Research is a VA Health Services Research and Development (HSR&D) Service Center of Innovation (Center grant #HFP 98–001). This work was supported by the VA HSR&D Service (grant number IIR-14-004). The funder had no role in data analysis, manuscript preparation, or decision to publish.

Author information

Authors and Affiliations

Authors

Contributions

MM obtained funding; designed the study; oversaw data collection, analysis, and interpretation; and drafted the manuscript. BAC and SN contributed to funding acquisition, study design, data analysis and interpretation. AKB oversaw data acquisition. BAC, TJB, AKB, and SN contributed to data interpretation and manuscript revisions. BAC, TJB, AKB, and SN read and approved the final manuscript.

Corresponding author

Correspondence to Maureen Murdoch.

Ethics declarations

Ethics approval and consent to participate

The Minneapolis VA Health Care System’s Institutional Review Board (IRB) for Human Studies reviewed the study protocol (#4495-B). Because the study posed minimal risk, the IRB granted a HIPAA waiver of written documentation of informed consent; participants therefore signified their consent to participate in the research by returning a completed survey. All methods were carried out in accordance with relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Table S1.

Men and Women’s Survey Response (n) and Response Rate (%) by each Factorial Combination. Table S2a. Mean bias (SD) by study-arm manipulation for men. Table S2b. Mean bias (SD) by study-arm manipulation for women.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Murdoch, M., Clothier, B.A., Beebe, T.J. et al. Impact of different cover letter content and incentives on non-response bias in a sample of Veterans applying for Department of Veterans Affairs disability benefits: a randomized, 3X2X2 factorial trial. BMC Med Res Methodol 22, 61 (2022). https://doi.org/10.1186/s12874-022-01531-x


Keywords

  • Non-response Bias
  • Randomized trial
  • Leverage salience theory
  • Factorial design
  • Mailed survey
  • Combat
  • Military sexual trauma
  • Sexual assault