Shortening a survey and using alternative forms of prenotification: Impact on response rate and quality
© Beebe et al; licensee BioMed Central Ltd. 2010
Received: 23 February 2010
Accepted: 8 June 2010
Published: 8 June 2010
Evidence suggests that survey response rates are decreasing and that the level of survey response can be influenced by questionnaire length and the use of pre-notification. The goal of the present investigation was to determine the effect of questionnaire length and pre-notification type (letter vs. postcard) on measures of survey quality, including response rates, response times (days to return the survey), and item nonresponse.
In July 2008, the authors randomized 900 residents of Olmsted County, Minnesota, aged 25-65 years to one of two versions of the Talley Bowel Disease Questionnaire, a survey designed to assess the prevalence of functional gastrointestinal disorders (FGID). One version was 2 pages long and the other 4 pages. Using a 2 × 2 factorial design, subjects were also randomized to one of two pre-notification types, letter or postcard; 780 residents ultimately received a survey, after excluding those who had moved outside the county or died.
Overall, the response rates (RR) did not vary by length of survey (RR = 44.6% for the 2-page survey and 48.4% for the 4-page) or pre-notification type (RR = 46.3% for the letter and 46.8% for the postcard). Differences in response rates by questionnaire length were seen among younger adults who were more likely to respond to the 4-page than the 2-page questionnaire (RR = 39.0% compared to 21.8% for individuals in their 20s and RR = 49.0% compared to 32.3% for those in their 30s). There were no differences across conditions with respect to item non-response or time (days after mailing) to survey response.
This study suggests that the shortest survey does not necessarily provide the best option for increased response rates and survey quality. Pre-notification type (letter or postcard) did not affect response rate, suggesting that postcards may be preferable given the lower costs associated with this method of contact.
Abundant evidence suggests that the conduct of survey-based investigations is becoming increasingly difficult, with response rates to all forms of data collection (mail, telephone, and face-to-face) steadily declining over the course of the past few decades [1–6]. Somewhat dated evidence suggests that the decline for mailed surveys is smaller than that observed for their telephone and face-to-face counterparts. This latter finding, coupled with the relatively low cost of mailed surveys compared to telephone and face-to-face interviews, makes the mailed survey a particularly attractive method of data collection for health researchers. Nonetheless, health researchers strive to obtain the highest levels of response to their mailed surveys in an attempt to ensure the representativeness of their responding sample and enhance the inferential value of their survey-based investigations. Indeed, response rates to mailed surveys tend to be significantly lower than those enjoyed by telephone and face-to-face interviews. In a recent large-scale systematic review of the literature on mailed surveys, Edwards and colleagues found the likelihood of response to be affected by such factors as the use of incentives, text on the envelope encouraging the respondent to reply, interest in the topic by the potential respondent, follow-up contact, university sponsorship, questionnaire length, and pre-notification. This article investigates the impact of manipulating the latter two factors: questionnaire length and prenotification type.
1.1. Questionnaire length effects
One of the hypotheses applied to survey participation is the notion of opportunity cost. In the context of increasingly hectic lives, surveys that are perceived to take too long to complete may not be viewed favorably and may bring about diminished response. Indeed, evidence from 56 trials showed that the odds of response to a mailed survey were 60% higher for shorter versus longer questionnaires (OR = 1.64; 95% CI 1.43 to 1.87). However, what is considered long versus short appears to have changed over time. Whereas a 12-page cut-off appeared to differentiate long from short in the 1970s, subsequent speculation has suggested that any questionnaire longer than four pages ought to be considered long. Among physicians, response rates tend to decrease if a questionnaire exceeds a threshold of 1000 words. Some have posited a curvilinear relationship between response propensity and questionnaire length, whereby the likelihood of response is lowest when the questionnaire is overly long and when it is perceived to be too short. The anticipated negative effect of a short questionnaire is thought to be driven by a lack of importance attached to this type of survey vis-à-vis a longer and more comprehensive counterpart.
There exists some suggestive evidence in support of the notion of questionnaires being too short. For example, Asch and colleagues found that mailed surveys with more pages had higher response rates than shorter surveys, although this effect disappeared when length was measured by the number of questions rather than pages. Champion and Sear found that response rates were significantly higher for a 9-page questionnaire than for 3- or 6-page questionnaires. Similarly, Mond et al. found the overall response rate to be higher for their long-form questionnaire (14 pages) than their short-form questionnaire (8 pages). On the shorter end of the questionnaire spectrum, Goldstein found that the odds of response to a one-page questionnaire decreased by half (OR = 0.47; 95% CI 0.34 to 0.66) when a double postcard was used. Although the preponderance of evidence falls squarely on the side of using shorter versus longer questionnaires to increase response, these findings suggest that this may not always be the case, especially when considering questionnaire lengths beneath the threshold of four pages.
1.2. Prenotification effects
Prenotification, or the act of contacting prospective respondents before they are mailed an actual questionnaire, has been shown to be an effective way to increase response in both telephone surveys and mailed surveys. Prenotification works because it underscores the legitimacy of the survey, allays suspicion, communicates the value of the survey, and evokes the principles of social exchange. For telephone surveys, a recent meta-analysis found that prenotification increased participation from 58 percent (no prenotification) to 66 percent (prenotification). Prenotification may have an even larger effect in mailed surveys, as Edwards et al. found that the odds of response for a mailed survey were substantially higher with prenotification (OR = 1.45; 95% CI 1.29 to 1.63) than without. However, the best method of prenotification remains unclear. Virtually all of the studies reviewed in the Edwards et al. meta-analysis utilized letters as the form of prenotification, some utilized telephone contact, and very few investigated the effect of postcard prenotification; none directly compared letter versus postcard. If postcard prenotification is found to be equally efficacious in terms of eliciting response, then cost savings can accrue to investigators, as postcards are much less expensive to mail. Lessons from the few studies comparing the relative merits of prenotification via letter versus postcard in the context of telephone surveys suggest that postcards may be as effective in increasing response as letters [18, 19], although they are slightly less likely to be read [19, 20]. However, there is not an over-abundance of research on postcard prenotification in the telephone survey area either, and there have been calls for more research into postcards as a form of prenotification.
1.3. The current study
Although there has been a fair amount of research on the effects of questionnaire length on response rates, most of it has focused on manipulations in length at the higher end of the spectrum (viz., longer than four pages). Edwards et al. indicate "...that questionnaire length has a substantial impact on non-response, particularly when questionnaires are very short" (p. 11), but do not provide any direct comparisons of the impact of different survey lengths within the range of what is considered short. In addition, the extant literature on the effects of prenotification has mainly considered the effect of letters or telephone contact (versus none) as the primary prenotification vehicle. Very few studies have examined the viability of postcard prenotification, even though the use of postcards brings about rather substantial cost savings relative to other forms; what information exists comes from studies undertaken in the context of telephone surveys. How well these latter findings translate to mailed surveys is unclear. Therefore, we tested the effect of questionnaire length (2 pages versus 4 pages) crossed with prenotification type (letter versus postcard) on response rates, response times, and missing data totals in the context of a large population-based mail survey. To our knowledge, no published study has tested the effect of questionnaire length and prenotification type simultaneously in a factorial design.
This study was undertaken as part of a larger pilot study designed to determine the impact of different recall durations (3 months vs. 1 year) on individual gastrointestinal symptoms and functional gastrointestinal disorder (FGID) diagnoses. The sampling strategy and its associated power calculations were indexed off the principal aims of that parent study. Further details of this larger study and its findings can be found elsewhere. Briefly, we randomly selected 900 residents of Olmsted County, Minnesota, aged 25-65 years, using the Rochester Epidemiology Project (REP). The REP is a comprehensive medical records linkage system that captures medical data from electronic and paper medical and autopsy records for patients using the Mayo Clinic, Olmsted Medical Center, their affiliated hospitals, or one private practice provider. Because most Olmsted County residents receive their medical care from one of those providers, it is possible to conduct population-based research on disease incidence, mortality, and use of health services in the region. Importantly, from this sampling frame we know the gender and age of both responders and non-responders, allowing us to assess how their distribution potentially differs across experimental conditions.
The sample was stratified by age and gender. Those who had previously participated in any gastrointestinal-related survey conducted by two of the authors (Talley, Locke) were excluded. Also excluded were subjects with significant illnesses, major psychotic episodes, mental retardation, or dementia; inmates of the Federal Medical Center (a prison managed by the U.S. Federal Bureau of Prisons); and those who had previously refused general authorization to review their medical record for research (less than 4 percent of Olmsted County residents). These exclusions were made prior to the random assignment described in the next section. The survey was mailed in July 2008.
The varied questionnaire-length versions were based on the Talley Bowel Disease Questionnaire (Talley-BDQ). The Talley-BDQ was designed as a self-report instrument to measure symptoms experienced over the past year and to collect past medical history. For this experiment, the full 16-page Talley-BDQ was shortened to a 4-page version and then to a 2-page version comprising only those questions needed to achieve the pre-defined specific goals. For that purpose, a sequential procedure was followed: (1) the variables derived from each question were listed, (2) the variables needed for the specific targeted research projects were selected, (3) all unnecessary questions were deleted, (4) the remaining list was reviewed by the investigators, (5) the remaining questions were formatted into a 4-page questionnaire, and (6) the questions strictly needed to achieve the objective of one project were further refined to fit a 2-page questionnaire. After the shortening process, the 2-page version of the questionnaire contained 18 questions (7 about abdominal pain and related changes in bowel habits, 9 about usual bowel pattern, and 2 about consultation). The 4-page version included 17 additional questions (1 on fecal incontinence, 11 on upper GI symptoms, and 5 on medications used) plus a short version of the somatic symptom checklist (SSC; 6 items). With the exception of the last question, the items on the first two pages of each survey were identical; the last question on the 2-page survey was instead identical to the last question on the 4-page survey.
The letter and postcard prenotifications contained the same text. Both identified the survey sponsor and described the purpose of the study, how subjects were chosen, the importance of responding, the anticipated completion time (10 minutes or less), and how confidentiality would be protected. The letter and postcard also asked prospective respondents to mark a box if they wished to receive a report of the study results and alerted them to the fact that a book titled "Mayo Clinic on Digestive Health" would be included in the forthcoming survey packet as a token of appreciation. The main difference between the notification types was that the letter, but not the postcard, contained a salutation to a specific individual and included the primary investigator's signature (Locke).
All subjects were sent either a letter or a postcard one week prior to the mailing of the survey package, which was then sent to all potential respondents. The package included a cover letter, the book, a pen incentive, and one of the two versions of the modified Talley-BDQ. Reminder letters, along with another copy of the survey, were sent to nonresponders 4 weeks after the first mailing. Subjects who indicated at any point that they did not want to be contacted further were excluded from the study. All consent and study procedures were approved by the Mayo Clinic Institutional Review Board; the survey data collection was conducted by the Mayo Clinic Survey Research Center.
2.3. Statistical Methods
Sample characteristics were summarized with frequencies and percentages for categorical data (gender, age group, race), means and standard deviations for age (continuous), and medians and inter-quartile range (IQR) for time to response. Response rates (RR) were calculated overall as well as within each survey condition as the number of surveys returned divided by the number of surveys sent. Time to response (among responders) was calculated as the number of days between the initial survey mailing and response date. The primary outcome for the analysis was whether or not a survey recipient responded. Overall differences in response rates between factors (survey length and pre-notification type) and characteristics (gender, age group, and race) were assessed with chi-square tests (or Fisher's exact tests where appropriate). Race was categorized as "white", "non-white" (American Indian/Alaskan Native, Black or African American, Native Hawaiian/Pacific Islander, and Asian), and "other/unknown" (those that specifically indicated "other" or chose not to disclose). Differences in time to response among responders between survey conditions were compared with pair-wise Wilcoxon rank-sum tests. As an assessment of item non-response, the percentage of respondents with missing data for each question in common to the 2-page and 4-page surveys was compared with Fisher's exact tests.
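The analyses described above were run in SAS; as an illustration only, the core response-rate comparison for a 2 × 2 table can be sketched in Python. The counts below are hypothetical, chosen to resemble the study's arm sizes, not the actual data:

```python
def response_rate(returned: int, sent: int) -> float:
    """Response rate = number of surveys returned / number of surveys sent."""
    return returned / sent

def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], e.g. returned/not returned by survey length."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: returned vs. not returned for the 2-page and 4-page arms.
returned_2p, sent_2p = 174, 390
returned_4p, sent_4p = 189, 390

rr_2p = response_rate(returned_2p, sent_2p)  # ~0.446
rr_4p = response_rate(returned_4p, sent_4p)  # ~0.485
stat = chi_square_2x2(returned_2p, sent_2p - returned_2p,
                      returned_4p, sent_4p - returned_4p)
print(f"RR 2-page = {rr_2p:.1%}, RR 4-page = {rr_4p:.1%}, chi-square = {stat:.2f}")
```

For small expected cell counts, Fisher's exact test would replace the chi-square statistic, as noted above.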
Logistic regression was used to assess the effects of survey length and pre-notification type on the likelihood of response, adjusted for each other as well as for age (continuous), gender, and race. The first model included main effects only (notification type, survey length, age, gender, and race). Two-way interactions between the survey characteristics and demographics were then assessed, and the second model included a significant age-by-survey-length interaction. Odds ratios with 95% confidence intervals were reported. For the second model, to illustrate how the effect of survey length differed by age, the odds ratio for survey length was calculated at different points across the age range (ages 25, 35, 45, and 55). All analyses were performed using SAS software, version 9.1 (SAS Institute, Cary, NC). A p-value < 0.05 was regarded as statistically significant.
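To illustrate how the second model's interaction term yields age-specific odds ratios for survey length: with a length-by-age interaction, the log odds ratio for 2 vs. 4 pages is linear in age. The coefficients below are made up for illustration (chosen so the OR direction matches the reported pattern) and are not the study's fitted values:

```python
import math

# Illustrative (made-up) coefficients from a logistic model of response with a
# survey-length x age interaction; NOT the fitted values from this study.
# logit(p) = ... + b_pages * I(2-page) + b_inter * I(2-page) * age + ...
b_pages = -1.50    # main effect of the 2-page indicator (log-odds at age 0)
b_inter = 0.0273   # interaction: change in the log odds ratio per year of age

def or_2_vs_4_pages(age: float) -> float:
    """Odds ratio of responding to the 2-page vs. 4-page survey at a given age."""
    return math.exp(b_pages + b_inter * age)

for age in (25, 35, 45, 55):
    print(f"age {age}: OR (2 vs. 4 pages) = {or_2_vs_4_pages(age):.2f}")
# With these coefficients the OR rises toward 1 with age, mirroring the finding
# that only younger adults favored the 4-page version.
```

The same arithmetic is what "calculating the odds ratio at ages 25, 35, 45, and 55" amounts to in the fitted model.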
3.1. Sample characteristics
[Table: Age, gender, and race distribution of the sample population, by 2 × 2 factorial design condition and age group (25-29, 30-39, 40-49, 50-59, 60-65)]
3.2. Response rates, response times, and missing data totals
[Table: Response rates overall and by design characteristics (2 × 2 factorial design), gender, race, and age group (25-29, 30-39, 40-49, 50-59, 60-65)]
3.3. Logistic regression analysis
[Table: Logistic regression results, odds ratios comparing likelihood of response. Main-effects model: notification type (postcard vs. letter), survey length (2 vs. 4 pages), age (1-year increase), sex (female vs. male), race (white vs. non-white; white vs. other/unknown). The model including the survey length-by-age interaction additionally reports the odds ratio for 2 vs. 4 pages evaluated at ages 25, 35, 45, and 55.]
Our study evaluated the relative effects of two key factors shown to affect participation in mailed surveys: questionnaire length and prenotification. Counter to the overall conclusions of the meta-analysis recently conducted by Edwards and colleagues, but consistent with selected prior research [25–28], we did not find a significant main effect of questionnaire length on response. We did find, however, a significant and potentially important interaction between length and age, whereby younger individuals were more likely to respond to a longer (4-page) survey than a shorter (2-page) survey. The reasons underlying this observation are unclear. As posited by some [12, 13], it may be that shorter questionnaires lack the sense of importance and comprehensiveness required for them to be perceived as worth completing relative to longer versions. The fact that we observed a higher response to the longer 4-page survey only among the younger population suggests that this phenomenon may be even more acute in this group. Conversely, it may be that older respondents are relatively immune to variations in questionnaire length, as ample evidence has shown survey response rates to be highest among older citizens, irrespective of survey type. Finally, it is possible our observed questionnaire length by age interaction manifests itself only at the low end of the questionnaire length spectrum (4 pages and under). Future research in this area should attempt to replicate these findings and use a study design better suited to identifying the mechanisms at work.
Our observed lack of significant differences in response to the letter versus postcard prenotification is consistent with similar findings in the telephone survey literature [18, 19]. To our knowledge, there are no studies comparing these two forms in the mailed survey literature. Given the observed equivalence in the likelihood of response to the two forms, the cost savings associated with utilizing a postcard as the vehicle for prenotification versus a letter suggest that the former represents a viable option for investigators facing constraints on their financial resources. In this study, it cost $0.44 more to mail the letter than the postcard (including the cost of labor for preparing the mailings and postage fees). For the purpose of this exercise we assume the cost of printing and supplies to be similar across the two modalities and as such do not include them in the comparison. Applied to our entire study, this would have represented a total savings of $170.63 accrued to the investigator, or about $0.87 per completion, postcard vs. letter. Despite the potential cost savings, future researchers wishing to use a postcard should be mindful of the evidence that respondents remember less of what was conveyed in a postcard than in a letter [19, 20]. Finally, there were no differences across conditions with respect to item non-response or time (in days after mailing) to survey response among responders.
Certain elements of our study may limit the generalizability of our findings. First, our study design called for the use of book and pen incentives. There is rather strong evidence that even nonmonetary inducements such as these increase the likelihood of response. Therefore, the absolute response rates observed across conditions may be inflated. However, because the book and pen were offered to everyone, the validity of our between-group comparisons should not be compromised. Second, our two forms of prenotification differed not only in the medium chosen (viz., letter vs. postcard) but also in the presence of a personal salutation to the prospective respondent. Specifically, the letter contained a personal salutation and the principal investigator's signature. As Edwards and colleagues have shown, the presence of a personal salutation may be enough to increase the likelihood of response. As such, our letter versus postcard comparison may not fully represent an "apples to apples" comparison, and our findings may be confounded by this fact. Finally, there may be a concern about the relative lack of racial/ethnic diversity of the Olmsted County population and the generalizability of the findings to other populations. However, the distributions of socioeconomic characteristics are very similar to those of U.S. whites generally, except for the percentage of the population employed in health-related services and the corresponding increase in the proportion with college or advanced degrees. Historically, there have been relatively few persons of color or Hispanic ethnicity, but, like many urban centers, Olmsted County is experiencing rapid changes in its racial/ethnic composition, suggesting that this may be less of an issue than in the past.
This was the first study to formally evaluate questionnaire length and prenotification in a full factorial design using multiple indicators of survey quality (i.e., response rates, time to respond, and item missing data totals). In this population-based mailed survey study, we found that none of our measures of survey quality, including response rates, varied overall by length of survey or pre-notification type. Differences in response rates by questionnaire length were, however, seen among young adults, who were more likely to respond to the 4-page than the 2-page questionnaire. This study suggests that the shortest survey does not necessarily provide the best option for increased response rates and survey quality. This finding, coupled with the potential for reduced accuracy of the measurement process brought about through shortening a questionnaire, suggests that future researchers hoping to increase participation in their mail survey-based investigations should be cautious in their efforts to reduce survey length. In addition, prenotification via postcard might bring about significant cost savings over the use of letters with very little detriment to overall participation.
List of abbreviations
FGID: functional gastrointestinal disorders
REP: Rochester Epidemiology Project
Talley-BDQ: Talley Bowel Disease Questionnaire
SSC: somatic symptom checklist
This study was funded by Natural History and Co-morbidities in Chronic Constipation: A Population based Study (INDUS Takeda 91292027, Talley), and was made possible by the Rochester Epidemiology Project (RO1 AR030582 from the National Institute of Arthritis and Musculoskeletal and Skin Diseases). Dr. Rey was supported by grant BA08/90038 from the Carlos III Institute, Ministry of Health, Spain.
- Berk ML, Schur CL, Feldman J: Twenty-five years of health surveys: does more data mean better data? Health Aff (Millwood). 2007, 26 (6): 1599-1611. 10.1377/hlthaff.26.6.1599.
- Curtin R, Presser S, Singer E: Changes in Telephone Survey Nonresponse over the Past Quarter Century. Public Opin Q. 2005, 69 (1): 87-98. 10.1093/poq/nfi002.
- de Leeuw E, de Heer W: Trends in household survey nonresponse: A longitudinal and international comparison. Survey Nonresponse. Edited by: Groves R, Dillman D, Eltinge J, Little R. 2002, New York: Wiley, 41-54.
- Groves R, Couper M: Nonresponse in household interview surveys. 1998, New York: John Wiley & Sons.
- Hox J, de Leeuw E: A comparison of nonresponse in mail, telephone, and face-to-face surveys: Applying multilevel modeling to meta-analysis. Quality and Quantity. 1994, 28 (4): 329-344. 10.1007/BF01097014.
- Steeh C, Kirgis N, Cannon B, DeWitt B: Are they really as bad as they seem? Nonresponse rates at the end of the twentieth century. J Off Stat. 2001, 17: 227-247.
- Groves R, Fowler F, Couper M, Lepkowski J, Singer E, Tourangeau R: Survey Methodology. 2004, New York: Wiley.
- Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S: Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009, (3): MR000008.
- Dillman D: Mail and telephone surveys: The total design method. 1978, New York: Wiley.
- Yammarino FJ, Skinner SJ, Childers TL: Understanding mail survey response behavior: a meta-analysis. Public Opin Q. 1991, 55 (4): 613-639. 10.1086/269284.
- Jepson C, Asch DA, Hershey JC, Ubel PA: In a mailed physician survey, questionnaire length had a threshold effect on response rate. J Clin Epidemiol. 2005, 58 (1): 103-105. 10.1016/j.jclinepi.2004.06.004.
- Eslick GD, Howell SC: Questionnaires and postal research: more than just high response rates. Sex Transm Infect. 2001, 77 (2): 148. 10.1136/sti.77.2.148.
- Mond JM, Rodgers B, Hay PJ, Owen C, Beumont PJ: Mode of delivery, but not questionnaire length, affected response in an epidemiological study of eating-disordered behavior. J Clin Epidemiol. 2004, 57 (11): 1167-1171. 10.1016/j.jclinepi.2004.02.017.
- Asch DA, Jedrziewski MK, Christakis NA: Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997, 50 (10): 1129-1136. 10.1016/S0895-4356(97)00126-1.
- Champion D, Sear A: Questionnaire response rates: A methodological analysis. Social Forces. 1969, 47: 335-339. 10.2307/2575033.
- Goldstein L, Friedman H: A case for double postcards in surveys. Journal of Advertising Research. 1975, 15: 43-47.
- de Leeuw E, Callegaro M, Hox J, Korendijk E, Lensvelt-Mulders G: The Influence of Advance Letters on Response in Telephone Surveys: A Meta-Analysis. Public Opin Q. 2007, 71 (3): 413-443. 10.1093/poq/nfm014.
- Hembroff LA, Rusz D, Rafferty A, McGee H, Ehrlich N: The Cost-Effectiveness of Alternative Advance Mailings in a Telephone Survey. Public Opin Q. 2005, 69 (2): 232-245. 10.1093/poq/nfi021.
- Richardson A: Prenotification: does size matter? International Field Directors and Technologies Conference, Del Ray Beach, FL. 2009.
- Iredell H, Shaw T, Howat P, James R, Granich J: Introductory postcards: do they increase response rate in a telephone survey of older persons? Health Educ Res. 2004, 19 (2): 159-164. 10.1093/her/cyg015.
- Rey E, Locke GR, Jung HK, Malhotra A, Choung RS, Beebe TJ, Schleck CD, Zinsmeister AR, Talley NJ: Measurement of abdominal symptoms by validated questionnaire: a three month recall time frame as recommended by Rome III is not superior to a one year recall time frame. Aliment Pharmacol Ther. 2010.
- Melton LJ: History of the Rochester Epidemiology Project. Mayo Clin Proc. 1996, 71 (3): 266-274. 10.4065/71.3.266.
- Jacobsen SJ, Xia Z, Campion ME, Darby CH, Plevak MF, Seltman KD, Melton LJ: Potential effect of authorization bias on medical record research. Mayo Clin Proc. 1999, 74 (4): 330-338. 10.4065/74.4.330.
- Talley NJ, Phillips SF, Melton J, Wiltgen C, Zinsmeister AR: A patient questionnaire to identify bowel disease. Ann Intern Med. 1989, 111 (8): 671-674.
- Edwards P, Roberts I, Sandercock P, Frost C: Follow-up by mail in clinical trials: does questionnaire length matter? Control Clin Trials. 2004, 25 (1): 31-52. 10.1016/j.cct.2003.08.013.
- Nakash RA, Hutton JL, Jorstad-Stein EC, Gates S, Lamb SE: Maximising response to postal questionnaires--a systematic review of randomised trials in health research. BMC Med Res Methodol. 2006, 6: 5. 10.1186/1471-2288-6-5.
- Ronckers C, Land C, Hayes R, Verduijn P, van Leeuwen F: Factors impacting questionnaire response in a Dutch retrospective cohort study. Ann Epidemiol. 2004, 14 (1): 66-72. 10.1016/S1047-2797(03)00123-6.
- Subar AF, Ziegler RG, Thompson FE, Johnson CC, Weissfeld JL, Reding D, Kavounis KH, Hayes RB: Is shorter always better? Relative importance of questionnaire length and cognitive ease on response rates and data quality for two dietary questionnaires. Am J Epidemiol. 2001, 153 (4): 404-409. 10.1093/aje/153.4.404.
- Elliott MN, Edwards C, Angeles J, Hambarsoomians K, Hays RD: Patterns of unit and item nonresponse in the CAHPS Hospital Survey. Health Serv Res. 2005, 40 (6 Pt 2): 2096-2119. 10.1111/j.1475-6773.2005.00476.x.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/10/50/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.