
Shortening a survey and using alternative forms of prenotification: Impact on response rate and quality



Evidence suggests that survey response rates are decreasing and that the level of survey response can be influenced by questionnaire length and the use of pre-notification. The goal of the present investigation was to determine the effect of questionnaire length and pre-notification type (letter vs. postcard) on measures of survey quality, including response rates, response times (days to return the survey), and item nonresponse.


In July 2008, the authors randomized 900 residents of Olmsted County, Minnesota, aged 25-65 years to one of two versions of the Talley Bowel Disease Questionnaire, a survey designed to assess the prevalence of functional gastrointestinal disorders (FGID). One version was two pages long and the other four pages. Using a 2 × 2 factorial design, respondents were randomized to survey length and to one of two pre-notification types, letter or postcard; 780 residents ultimately received a survey, after excluding those who had moved outside the county or died.


Overall, the response rates (RR) did not vary by length of survey (RR = 44.6% for the 2-page survey and 48.4% for the 4-page) or pre-notification type (RR = 46.3% for the letter and 46.8% for the postcard). Differences in response rates by questionnaire length were seen among younger adults who were more likely to respond to the 4-page than the 2-page questionnaire (RR = 39.0% compared to 21.8% for individuals in their 20s and RR = 49.0% compared to 32.3% for those in their 30s). There were no differences across conditions with respect to item non-response or time (days after mailing) to survey response.


This study suggests that the shortest survey does not necessarily provide the best option for increasing response rates and survey quality. Pre-notification type (letter or postcard) did not impact response rate, suggesting that postcards may be preferable given the lower costs associated with this method of contact.


1. Background

Abundant evidence suggests that the conduct of survey-based investigations is becoming increasingly difficult, with response rates to all forms of data collection (mail, telephone, and face-to-face) steadily declining over the past few decades [1–6]. Somewhat dated evidence suggests that the decline for mailed surveys is smaller than that observed for their telephone and face-to-face counterparts [5]. This latter finding, coupled with the relatively low cost of mailed surveys compared to telephone and face-to-face interviews [7], makes the mailed survey a particularly attractive method of data collection for health researchers. Nonetheless, response rates to mailed surveys tend to be significantly lower than those enjoyed by telephone and face-to-face interviews [5], so health researchers strive to obtain the highest possible levels of response in an attempt to ensure the representativeness of their responding samples and enhance the inferential value of their survey-based investigations. In a recent large-scale systematic review of the literature on mailed surveys, Edwards and colleagues [8] found the likelihood of response to be affected by such factors as the use of incentives, text on the envelope encouraging the respondent to reply, interest in the topic by the potential respondent, follow-up contact, university sponsorship, questionnaire length, and pre-notification. This article investigates the impact of manipulating the latter two factors: questionnaire length and prenotification type.

1.1. Questionnaire length effects

One of the hypotheses applied to survey participation is the notion of opportunity cost [7]. In the context of increasingly hectic lives, surveys that are perceived to take too long to complete may not be viewed favorably and may bring about diminished response. Indeed, evidence from 56 trials showed that the odds of response to a mailed survey were 64% higher for shorter versus longer questionnaires (OR = 1.64; 95% CI 1.43 to 1.87) [8]. However, what is considered long versus short appears to have changed over time. Whereas a 12-page cut-off appeared to differentiate long from short in the 1970s [9], subsequent speculation has suggested that any questionnaire longer than four pages ought to be considered long [10]. Among physicians, response rates tend to decrease if a questionnaire exceeds a threshold of 1000 words [11]. Some have posited a curvilinear relationship between response propensity and questionnaire length, whereby the likelihood of response is lowest both when the questionnaire is overly long and when it is perceived to be too short [12]. The anticipated negative effect of a short questionnaire is thought to stem from a lack of importance attached to this type of survey vis-à-vis a longer and more comprehensive counterpart [13].

There exists some suggestive evidence in support of the notion of questionnaires being too short. For example, Asch and colleagues [14] found that mailed surveys with more pages had higher response rates than shorter surveys, although this effect disappeared when length was measured by the number of questions rather than pages. Champion and Sear [15] found that response rates were significantly higher for a 9-page questionnaire than for 3- or 6-page questionnaires. Similarly, Mond et al. [13] found the overall response rate to be higher for their long-form questionnaire (14 pages) than for their short-form questionnaire (8 pages). On the shorter end of the questionnaire spectrum, Goldstein and Friedman [16] found that the odds of response to a one-page questionnaire decreased by half (OR = 0.47; 95% CI 0.34 to 0.66) when a double postcard was used. Although the preponderance of evidence falls squarely on the side of using shorter rather than longer questionnaires to increase response, these findings suggest that this may not always be the case, especially for questionnaire lengths beneath the threshold of four pages.

1.2. Prenotification effects

Prenotification, or the act of contacting prospective respondents before they are mailed the actual questionnaire, has been shown to be an effective way to increase response in both telephone surveys [17] and mailed surveys [8]. Prenotification works because it underscores the legitimacy of the survey, allays suspicion, communicates the value of the survey, and evokes the principles of social exchange [17]. For telephone surveys, a recent meta-analysis found that prenotification increased participation from 58 percent (no prenotification) to 66 percent (prenotification) [17]. Prenotification may have an even larger effect in mailed surveys, as Edwards et al. [8] found that the odds of response for a mailed survey were substantially higher with prenotification (OR = 1.45; 95% CI 1.29 to 1.63) than without. However, the best method of prenotification remains unclear. Virtually all of the studies reviewed in the Edwards et al. [8] meta-analysis utilized letters as the form of prenotification, some utilized telephone contact, and very few investigated the effect of postcard prenotification; none directly compared letters versus postcards. If postcard prenotification were found to be equally efficacious in eliciting response, cost savings could accrue to investigators, as postcards are much less expensive to mail. Lessons from the few studies comparing the relative merits of prenotification via letter versus postcard in the context of the telephone survey suggest that postcards may be as effective as letters in increasing response [18, 19], although they are slightly less likely to be read [19, 20]. However, research on postcard prenotification is scarce in the telephone survey area as well, and there have been calls for more research into postcards as a form of prenotification [17].

1.3. The current study

Although a fair amount of research has studied the effects of questionnaire length on response rates, most of it has focused on manipulations at the higher end of the spectrum (viz., longer than four pages). Edwards et al. [8] indicate "...that questionnaire length has a substantial impact on non-response, particularly when questionnaires are very short" (p. 11), but do not provide any direct comparisons of the impact of different survey lengths within the range of what is considered short. In addition, the extant literature on the effects of prenotification has mainly considered the effect of letters or telephone contact (versus none) as the primary prenotification vehicle. Very few studies have examined the viability of postcard prenotification even though the use of postcards brings about rather substantial cost savings relative to other forms, and what information exists comes from studies undertaken in the context of telephone surveys. How well these latter findings translate to mailed surveys is unclear. Therefore, we tested the effect of questionnaire length (2 pages versus 4 pages) crossed with prenotification type (letter versus postcard) on response rates, response times, and missing data totals in the context of a large population-based mail survey. To our knowledge, no published study has tested the effects of questionnaire length and prenotification type simultaneously in a factorial design.

2. Methods

2.1. Sample

This study was undertaken as part of a larger pilot study designed to determine the impact of different recall durations (3 months vs. 1 year) on individual gastrointestinal symptoms and functional gastrointestinal disorder (FGID) diagnoses. The sampling strategy and its associated power calculations were based on the principal aims of that parent study; further details of the larger study and its findings can be found elsewhere [21]. Briefly, we randomly selected 900 residents of Olmsted County, Minnesota, aged 25-65 years, using the Rochester Epidemiology Project (REP). The REP is a comprehensive medical records linkage system that captures medical data from electronic and paper medical and autopsy records for patients using the Mayo Clinic, Olmsted Medical Center, their affiliated hospitals, or one private practice provider. Because most Olmsted County residents receive their medical care from one of these providers, it is possible to conduct population-based research on disease incidence, mortality, and use of health services in the region [22]. Importantly, this sampling frame supplies the gender and age of both responders and non-responders, allowing us to assess how their distributions potentially differ across experimental conditions.

The sample was stratified by age and gender. Those who had previously participated in any gastrointestinal-related survey conducted by two of the authors (Talley, Locke) were excluded. Also excluded were subjects with significant illnesses, a major psychotic episode, mental retardation, or dementia; inmates of the Federal Medical Center (a prison managed by the U.S. Federal Bureau of Prisons); and those who had previously refused general authorization to review their medical record for research (less than 4 percent of Olmsted County residents) [23]. These exclusions were applied prior to the random assignment described in the next section. The survey was mailed in July 2008.

2.2. Procedure

Figure 1 provides a flowchart of the study sample, data collection process, and random assignment. Subjects were randomly assigned to four conditions using the RANUNI function in SAS v. 9.1 software according to a 2 × 2 factorial design, enabling us to simultaneously assess the effects of the 4- and 2-page versions of the questionnaire and of pre-notification type (letter or postcard). Subjects were first randomized to length and then to notification type within each length group. Approximately 225 cases were assigned to each of the four conditions. After randomization, it was discovered that 120 cases were ineligible due to residence outside of Olmsted County or deceased status, leaving 780 cases available for data collection.

Figure 1

Randomization and response rates of sample population. Note: Response rates did not differ significantly between the 2 and 4 page conditions
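The two-stage assignment (first to length, then to notification type within each length group) can be sketched as follows. This is an illustrative re-creation in Python, not the authors' code: they used the RANUNI function in SAS, and the function name and seed below are our own.

```python
import random
from collections import Counter

def assign_conditions(subject_ids, seed=2008):
    """Two-stage 2 x 2 factorial assignment (illustrative sketch).

    Subjects are first randomized to questionnaire length, then to
    notification type within each length group.
    """
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)

    # Stage 1: split into the two length groups.
    half = len(ids) // 2
    groups = {"2-page": ids[:half], "4-page": ids[half:]}

    # Stage 2: within each length group, split by notification type.
    assignments = {}
    for length, members in groups.items():
        rng.shuffle(members)
        mid = len(members) // 2
        for sid in members[:mid]:
            assignments[sid] = (length, "letter")
        for sid in members[mid:]:
            assignments[sid] = (length, "postcard")
    return assignments

conditions = assign_conditions(range(900))
cell_sizes = Counter(conditions.values())
# With 900 subjects, each of the four cells receives 225 cases.
```

With 900 subjects the split is exact (225 per cell); with a count not divisible by four, the cells would differ by at most one, which matches the paper's "approximately 225 cases" per condition.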

The questionnaire length versions were based on the Talley Bowel Disease Questionnaire (Talley-BDQ) [24]. The Talley-BDQ was designed as a self-report instrument to measure symptoms experienced over the past year and to collect past medical history. For this experiment, the full 16-page Talley-BDQ was shortened to a 4-page version and then to a 2-page version comprising only those questions needed to achieve the pre-defined goals. For that purpose, a sequential procedure was followed: (1) the variables derived from each question were listed, (2) the variables needed for the specific targeted research projects were selected, (3) all unnecessary questions were deleted, (4) the remaining list was reviewed by the investigators, (5) the remaining questions were formatted into a 4-page questionnaire, and (6) the questions strictly needed to achieve the objective of one project were further refined to fit a 2-page questionnaire. After this shortening process, the 2-page version of the questionnaire contained 18 questions (7 about abdominal pain and related changes in bowel habits, 9 about usual bowel pattern, and 2 about consultation). The 4-page version included 17 additional questions (1 on fecal incontinence, 11 on upper GI symptoms, and 5 on medications used) plus a short version of the somatic symptom checklist (SSC; 6 items). With the exception of the last question, the items on the first two pages of each survey were identical; the last question of the 2-page survey was, however, identical to the last question of the 4-page survey.

The letter and postcard prenotifications contained the same text. Both identified the survey sponsor and described the purpose of the study, how subjects were chosen, the importance of responding, the anticipated completion time (10 minutes or less), and how confidentiality would be protected. The letter and postcard also asked prospective respondents to mark a box if they wished to receive a report of the study results and alerted them to the fact that a book titled "Mayo Clinic on Digestive Health" would be included in the forthcoming survey packet as a token of appreciation. The main difference between notification types was that the letter, but not the postcard, contained a salutation to a specific individual and included the primary investigator's signature (Locke).

All subjects were sent either a letter or a postcard one week prior to mailing the survey package. A week after pre-notification, a survey package was sent to all potential respondents. The package included a cover letter, the book, a pen incentive, and one of two versions of the modified Talley-BDQ. Reminder letters, along with another survey, were sent to nonresponders 4 weeks after the first mailing. Subjects who indicated at any point that they did not want to be contacted further were excluded from the study. All consent and study procedures were approved by the Mayo Clinic Institutional Review Board; the survey data collection was conducted by the Mayo Clinic Survey Research Center.

2.3. Statistical Methods

Sample characteristics were summarized with frequencies and percentages for categorical data (gender, age group, race), means and standard deviations for age (continuous), and medians and inter-quartile range (IQR) for time to response. Response rates (RR) were calculated overall as well as within each survey condition as the number of surveys returned divided by the number of surveys sent. Time to response (among responders) was calculated as the number of days between the initial survey mailing and response date. The primary outcome for the analysis was whether or not a survey recipient responded. Overall differences in response rates between factors (survey length and pre-notification type) and characteristics (gender, age group, and race) were assessed with chi-square tests (or Fisher's exact tests where appropriate). Race was categorized as "white", "non-white" (American Indian/Alaskan Native, Black or African American, Native Hawaiian/Pacific Islander, and Asian), and "other/unknown" (those that specifically indicated "other" or chose not to disclose). Differences in time to response among responders between survey conditions were compared with pair-wise Wilcoxon rank-sum tests. As an assessment of item non-response, the percentage of respondents with missing data for each question in common to the 2-page and 4-page surveys was compared with Fisher's exact tests.
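The response-rate comparison described above amounts to a chi-square test on a 2 × 2 table of responders versus non-responders by condition. As a hedged sketch (the authors used SAS; the cell counts below are reconstructed from the reported condition sizes and rates, not taken from the raw data), the length comparison works out as:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Counts reconstructed from the paper: 188 + 191 = 379 two-page surveys sent,
# 199 + 202 = 401 four-page surveys sent, with 44.6% and 48.4% responding.
sent_2p, resp_2p = 379, 169
sent_4p, resp_4p = 401, 194

rr_2p = resp_2p / sent_2p   # ≈ 0.446
rr_4p = resp_4p / sent_4p   # ≈ 0.484
stat = chi_square_2x2(resp_2p, sent_2p - resp_2p,
                      resp_4p, sent_4p - resp_4p)
# A statistic of about 1.12 on 1 df corresponds to p ≈ 0.29,
# consistent with the non-significant difference the paper reports.
```

The same machinery applies to the pre-notification comparison and to the demographic breakdowns (where expected counts are small, Fisher's exact test replaces the chi-square, as the paper notes).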

Logistic regression was used to assess the effects of survey length and pre-notification type on likelihood to respond adjusted for each other as well as for age (continuous), gender, and race. The first model included main effects only (notification type, survey length, age, gender, and race). Two-way interactions between the survey characteristics and demographics were then assessed, and the second model included a significant age-by-survey length interaction. Odds ratios with 95% confidence intervals were reported. For the second model, to illustrate how the effect of survey length differed by age, the odds ratio for survey length was calculated at different points across the age range (ages 25, 35, 45, and 55). All analyses were performed using SAS v. 9.1 software (Cary, NC). A p-value of < 0.05 was regarded as statistically significant.
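The last step above, evaluating the survey-length odds ratio at ages 25, 35, 45, and 55, follows directly from the interaction model: with a length indicator and a length-by-age product term, OR(length | age) = exp(beta_length + beta_interaction × age). The sketch below uses HYPOTHETICAL coefficient values chosen only to illustrate the mechanics (the actual estimates are in Table 3, not in the text):

```python
import math

def or_for_length(age, beta_length, beta_interaction):
    """Odds ratio for 4-page vs. 2-page response at a given age,
    under a model with a length-by-age interaction."""
    return math.exp(beta_length + beta_interaction * age)

# Illustrative placeholders only; a positive main effect with a negative
# interaction reproduces the qualitative pattern the paper reports
# (OR > 1 for younger respondents, drifting below 1 at older ages).
beta_length, beta_interaction = 2.0, -0.045

ors = {age: or_for_length(age, beta_length, beta_interaction)
       for age in (25, 35, 45, 55)}
```

Under these placeholder values the odds ratio for the 4-page survey exceeds 1 at age 25 and falls below 1 by age 55, mirroring the reported reversal of the length effect across the age range.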

3. Results

3.1. Sample characteristics

Surveys were mailed to 780 eligible recipients among whom 392 (50.3%) were female and the average age was 43.8 years (SD = 11.2, range = 25 to 65). The majority was white (74.6%). The distributions of age, gender, and race were similar (not significantly different) across the experimental conditions (see Table 1). There were approximately 200 survey recipients in each survey condition (2-page/letter = 188, 2-page/postcard = 191, 4-page/letter = 199, 4-page/postcard = 202). At the end of data collection, 363 (46.5%) individuals responded to the survey (see Figure 1).

Table 1 Age, gender, and race distribution of sample population

3.2. Response rates, response times, and missing data totals

No significant differences in response rate by length (44.6% for 2-page vs. 48.4% for 4-page, p = 0.29) or by pre-notification type (46.3% for letter vs. 46.8% for postcard, p = 0.87) were observed. Females were significantly more likely to respond than males within each condition, and the likelihood of response significantly increased with age for most conditions. Furthermore, Whites were more likely to respond than the other race categories (see Table 2). This finding was also consistent within each of the four length-by-pre-notification-type conditions (data not shown). Among responders, the median time to response was 14 days (IQR 8-24 days) in the 2-page/letter condition. Each of the three remaining conditions had a median time to response of 11 days, with an IQR of 8-31 days for the 2-page/postcard condition, 8-22 days for the 4-page/letter condition, and 8-24 days for the 4-page/postcard condition. The time to response was not significantly different between conditions. In a comparison of item non-response between the 2-page and 4-page surveys, the percentage of respondents with missing data did not differ significantly between survey types for any of the survey items. In general, the missing data totals were very low (between 0% and 1.6% within the different questionnaire length groups).

Table 2 Response rates overall and by design characteristics, by gender, race, and age

3.3. Logistic regression analysis

Logistic regression analysis revealed findings similar to those seen in the unadjusted analyses. In a model that included only main effects, only age (OR = 1.03 per 1-year increase in age) and gender (OR = 1.86 for females vs. males) were significant predictors of response (p ≤ 0.0001 for each; see Table 3). No significant interaction was found between survey length and pre-notification type (p = 0.25); however, there was a significant interaction between survey length and age (p = 0.001). Adjusting for pre-notification type, gender, and race, younger people (25-40 years of age) were significantly more likely to respond to the 4-page survey than to the 2-page survey; this effect reversed (non-significantly) in older people (see Table 3).

Table 3 Logistic Regression Results, odds ratios comparing likelihood of response.

4. Discussion

Our study evaluated the relative effects of two key factors shown to affect participation in mailed surveys: questionnaire length and prenotification [8]. Counter to the overall conclusions of the meta-analysis recently conducted by Edwards and colleagues [8], but consistent with selected prior research [25–28], we did not find a significant main effect of questionnaire length on response. We did find, however, a significant and potentially important interaction between length and age, whereby younger individuals were more likely to respond to a longer (4-page) survey than to a shorter (2-page) survey. The reasons underlying this observation are unclear. As posited by some [12, 13], it may be that shorter questionnaires lack the importance and comprehensiveness required for them to be perceived as worth completing relative to longer versions. The fact that we observed a higher response to the longer 4-page survey only among the younger population suggests that this phenomenon may be even more acute in this group. Conversely, it may be that older respondents are relatively immune to variations in questionnaire length, as ample evidence has shown survey response rates to be highest among older citizens, irrespective of survey type [29]. Finally, it is possible that our observed questionnaire-length-by-age interaction only manifests itself at the low end of the questionnaire length spectrum (4 pages and under). Future research in this area should attempt to replicate these findings and use a study design better suited to identifying the mechanisms at work.

Our observed lack of significant differences in response to the letter versus postcard prenotification is consistent with similar findings in the telephone survey literature [18, 19]; to our knowledge, there are no studies comparing these two forms in the mailed survey literature. Given the observed equivalence in the likelihood of response to the two forms, the cost savings associated with utilizing a postcard rather than a letter as the vehicle for prenotification suggest that the former represents a viable option for investigators facing financial constraints. In this study, it cost $0.44 more to mail the letter than the postcard (including the cost of labor for preparing the mailings and postage fees). For the purpose of this exercise we assume the costs of printing and supplies to be similar across the two modalities and as such do not include them in the comparison. Applied to our entire study, this would have represented a total savings of $170.63 to the investigator, or about $0.87 per completion, postcard vs. letter. Despite the potential cost savings, future researchers wishing to use a postcard should be mindful of the evidence that respondents remember less of what is conveyed in a postcard than in a letter [19, 20]. There were no differences across conditions with respect to item non-response or time (in days after mailing) to survey response among responders.
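The cost arithmetic above can be checked back-of-the-envelope. The $0.44 marginal cost per letter is from the text; the letter count below is our inference from the reported condition sizes (188 in the 2-page/letter condition plus 199 in the 4-page/letter condition), so the total is approximate rather than an exact reproduction of the paper's $170.63:

```python
# Marginal cost of a letter over a postcard (labor plus postage), per the text.
extra_cost_per_letter = 0.44

# Letters sent, inferred from the two letter conditions (188 + 199).
letters_sent = 188 + 199

savings = extra_cost_per_letter * letters_sent
# ≈ $170, in line with the paper's reported $170.63; spread over the
# completions in the letter conditions this is on the order of $0.87 each.
```

The small discrepancy from $170.63 presumably reflects details of the authors' exact cost accounting not reproduced in the text.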

Certain elements of our study may limit the generalizability of our findings. First, our study design called for the use of book and pen incentives. There is rather strong evidence that even nonmonetary inducements such as these increase the likelihood of response [8]; our absolute response rates across conditions may therefore be inflated. However, because the book and pen were offered to everyone, the validity of our between-group comparisons should not be compromised. Second, our two forms of prenotification differed not only in the medium chosen (viz., letter vs. postcard) but also in the presence of a personal salutation: the letter contained a personal salutation and the principal investigator's signature. As Edwards and colleagues [8] have shown, the presence of a personal salutation may be enough to increase the likelihood of response. As such, our letter versus postcard comparison may not fully represent an "apples to apples" comparison, and our findings may be confounded by this fact. Finally, there may be a concern about the relative lack of racial/ethnic diversity of the Olmsted County population and the generalizability of the findings to other populations. However, the distributions of socioeconomic characteristics are very similar to those of U.S. whites generally, except for the percentage of the population employed in health-related services and the corresponding increase in the proportion with college or advanced degrees [22]. Historically, there have been relatively few persons of color or Hispanic ethnicity, but, like many urban centers, Olmsted County is experiencing rapid changes in its racial/ethnic composition, suggesting that this may be less of an issue than in the past.

5. Conclusions

This was the first study to formally evaluate questionnaire length and prenotification in a full factorial design using multiple indicators of survey quality (i.e., response rates, time to respond, and item missing data totals). In this population-based mailed survey study, we found that none of our measures of survey quality, including response rates, varied by length of survey or pre-notification type. Differences in response rates by questionnaire length were seen among young adults, who were more likely to respond to the 4-page than the 2-page questionnaire. This study suggests that the shortest survey does not necessarily provide the best option for increasing response rates and survey quality. This finding, coupled with the potential for reduced accuracy of the measurement process brought about by shortening a questionnaire [8], suggests that future researchers hoping to increase participation in their mail survey-based investigations should be cautious in their efforts to reduce survey lengths. In addition, prenotification via postcard might bring about significant cost savings over the use of letters with very little detriment to overall participation.



Abbreviations

FGID: Functional gastrointestinal disorders

REP: Rochester Epidemiology Project

Talley-BDQ: Talley Bowel Disease Questionnaire

SSC: Somatic symptom checklist

IQR: Inter-quartile range

RR: Response rates


References

  1. Berk ML, Schur CL, Feldman J: Twenty-five years of health surveys: does more data mean better data? Health Aff (Millwood). 2007, 26 (6): 1599-1611. 10.1377/hlthaff.26.6.1599.

  2. Curtin R, Presser S, Singer E: Changes in telephone survey nonresponse over the past quarter century. Public Opin Q. 2005, 69 (1): 87-98. 10.1093/poq/nfi002.

  3. de Leeuw E, de Heer W: Trends in household survey nonresponse: A longitudinal and international comparison. Survey Nonresponse. Edited by: Groves R, Dillman D, Eltinge J, Little R. 2002, New York: Wiley, 41-54.

  4. Groves R, Couper M: Nonresponse in household interview surveys. 1998, New York: John Wiley & Sons.

  5. Hox J, de Leeuw E: A comparison of nonresponse in mail, telephone, and face-to-face surveys: Applying multilevel modeling to meta-analysis. Quality and Quantity. 1994, 28 (4): 329-344. 10.1007/BF01097014.

  6. Steeh C, Kirgis N, Cannon B, DeWitt B: Are they really as bad as they seem? Nonresponse rates at the end of the twentieth century. J Off Stat. 2001, 17: 227-247.

  7. Groves R, Fowler F, Couper M, Lepkowski J, Singer E, Tourangeau R: Survey Methodology. 2004, New York: Wiley.

  8. Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S: Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009, 3: MR000008.

  9. Dillman D: Mail and telephone surveys: The total design method. 1978, New York: Wiley.

  10. Yammarino FJ, Skinner SJ, Childers TL: Understanding mail survey response behavior: a meta-analysis. Public Opin Q. 1991, 55 (4): 613-639. 10.1086/269284.

  11. Jepson C, Asch DA, Hershey JC, Ubel PA: In a mailed physician survey, questionnaire length had a threshold effect on response rate. J Clin Epidemiol. 2005, 58 (1): 103-105. 10.1016/j.jclinepi.2004.06.004.

  12. Eslick GD, Howell SC: Questionnaires and postal research: more than just high response rates. Sex Transm Infect. 2001, 77 (2): 148. 10.1136/sti.77.2.148.

  13. Mond JM, Rodgers B, Hay PJ, Owen C, Beumont PJ: Mode of delivery, but not questionnaire length, affected response in an epidemiological study of eating-disordered behavior. J Clin Epidemiol. 2004, 57 (11): 1167-1171. 10.1016/j.jclinepi.2004.02.017.

  14. Asch DA, Jedrziewski MK, Christakis NA: Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997, 50 (10): 1129-1136. 10.1016/S0895-4356(97)00126-1.

  15. Champion D, Sear A: Questionnaire response rates: A methodological analysis. Social Forces. 1969, 47: 335-339. 10.2307/2575033.

  16. Goldstein L, Friedman H: A case for double postcards in surveys. Journal of Advertising Research. 1975, 15: 43-47.

  17. de Leeuw E, Callegaro M, Hox J, Korendijk E, Lensvelt-Mulders G: The influence of advance letters on response in telephone surveys: A meta-analysis. Public Opin Q. 2007, 71 (3): 413-443. 10.1093/poq/nfm014.

  18. Hembroff LA, Rusz D, Rafferty A, McGee H, Ehrlich N: The cost-effectiveness of alternative advance mailings in a telephone survey. Public Opin Q. 2005, 69 (2): 232-245. 10.1093/poq/nfi021.

  19. Richardson A: Prenotification: does size matter? International Field Directors and Technologies Conference, Del Ray Beach, FL. 2009.

  20. Iredell H, Shaw T, Howat P, James R, Granich J: Introductory postcards: do they increase response rate in a telephone survey of older persons? Health Educ Res. 2004, 19 (2): 159-164. 10.1093/her/cyg015.

  21. Rey E, Locke GR, Jung HK, Malhotra A, Choung RS, Beebe TJ, Schleck CD, Zinsmeister AR, Talley NJ: Measurement of abdominal symptoms by validated questionnaire: a three month recall time frame as recommended by Rome III is not superior to a one year recall time frame. Aliment Pharmacol Ther. 2010.

  22. Melton LJ: History of the Rochester Epidemiology Project. Mayo Clin Proc. 1996, 71 (3): 266-274. 10.4065/71.3.266.

  23. Jacobsen SJ, Xia Z, Campion ME, Darby CH, Plevak MF, Seltman KD, Melton LJ: Potential effect of authorization bias on medical record research. Mayo Clin Proc. 1999, 74 (4): 330-338. 10.4065/74.4.330.

  24. Talley NJ, Phillips SF, Melton J, Wiltgen C, Zinsmeister AR: A patient questionnaire to identify bowel disease. Ann Intern Med. 1989, 111 (8): 671-674.

  25. Edwards P, Roberts I, Sandercock P, Frost C: Follow-up by mail in clinical trials: does questionnaire length matter? Control Clin Trials. 2004, 25 (1): 31-52. 10.1016/j.cct.2003.08.013.

  26. Nakash RA, Hutton JL, Jorstad-Stein EC, Gates S, Lamb SE: Maximising response to postal questionnaires - a systematic review of randomised trials in health research. BMC Med Res Methodol. 2006, 6: 5. 10.1186/1471-2288-6-5.

  27. Ronckers C, Land C, Hayes R, Verduijn P, van Leeuwen F: Factors impacting questionnaire response in a Dutch retrospective cohort study. Ann Epidemiol. 2004, 14 (1): 66-72. 10.1016/S1047-2797(03)00123-6.

  28. Subar AF, Ziegler RG, Thompson FE, Johnson CC, Weissfeld JL, Reding D, Kavounis KH, Hayes RB: Is shorter always better? Relative importance of questionnaire length and cognitive ease on response rates and data quality for two dietary questionnaires. Am J Epidemiol. 2001, 153 (4): 404-409. 10.1093/aje/153.4.404.

  29. Elliott MN, Edwards C, Angeles J, Hambarsoomians K, Hays RD: Patterns of unit and item nonresponse in the CAHPS Hospital Survey. Health Serv Res. 2005, 40 (6 Pt 2): 2096-2119. 10.1111/j.1475-6773.2005.00476.x.

Acknowledgements


This study was funded by Natural History and Co-morbidities in Chronic Constipation: A Population based Study (INDUS Takeda 91292027, Talley), and was made possible by the Rochester Epidemiology Project (RO1 AR030582 from the National Institute of Arthritis and Musculoskeletal and Skin Diseases). Dr. Rey was supported by grant BA08/90038 from the Carlos III Institute, Ministry of Health, Spain.

Author information



Corresponding author

Correspondence to Timothy J Beebe.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

TB participated in the study design, reviewed the data analysis, and drafted the initial manuscript. ER conceived of the study, participated in the coordination of data collection, and helped to draft the manuscript. JZ oversaw data analysis and offered significant editorial comments on the initial draft of the manuscript. SJ and KL conducted the data analysis and edited the statistical analysis sections of the manuscript. GRL participated in the study design and oversaw data collection. NT participated in the study design and offered significant editorial comments on the initial manuscript draft. All authors have read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Beebe, T.J., Rey, E., Ziegenfuss, J.Y. et al. Shortening a survey and using alternative forms of prenotification: Impact on response rate and quality. BMC Med Res Methodol 10, 50 (2010).
