Effect of paper quality on the response rate to a postal survey: A randomised controlled trial. [ISRCTN32032031]
© Clark et al; licensee BioMed Central Ltd. 2001
Received: 24 September 2001
Accepted: 17 December 2001
Published: 17 December 2001
Response rates to surveys are declining and this threatens the validity and generalisability of their findings. We wanted to determine whether paper quality influences the response rate to postal surveys.
A postal questionnaire was sent to all members of the British Society of Gynaecological Endoscopy (BSGE). Recipients were randomised to receiving the questionnaire printed on standard quality paper or high quality paper.
The response rate was 43/195 (22%) among recipients of high quality paper and 57/194 (29%) among recipients of standard quality paper (relative rate of response 0.75, 95% CI 0.33–1.05, p = 0.1).
The use of high quality paper did not increase response rates to a questionnaire survey of gynaecologists affiliated to an endoscopic society.
Postal surveys are commonly used in medical research. Response rates to surveys are declining and this threatens the validity and generalisability of their findings. It is therefore important that strategies are developed to reverse this trend. [2, 3] We hypothesised that the quality of paper on which a postal questionnaire is printed may affect response rates: a recipient may look upon a questionnaire printed on high quality paper more approvingly, increasing the chance of a response. To test the effectiveness of this strategy, we conducted a randomised controlled trial, as part of a survey of gynaecologists, to determine whether high paper quality increases the response rate to questionnaires.
All gynaecologists identified from the British Society of Gynaecological Endoscopy (BSGE) database of members were sent a questionnaire with a covering letter and prepaid response envelope in April 2000. The questionnaire sought views about current and future research priorities in gynaecological endoscopy. Recipients were randomised to receive the questionnaire and covering letter printed on standard quality white paper or high quality white paper. High quality paper was defined as a weight of 100 gsm (grams per square metre) and standard quality paper as a weight of 80 gsm. The participants were not informed of the randomisation to paper quality. The randomisation sequence was computer generated and group allocation was concealed from the participants throughout the study. No reminders were sent. Based on the response rate from a recent gynaecological survey, we assumed that provision of high quality paper would increase the proportion of responders by 15 percentage points, from 45% to 60%. This gave the sample 80% power to detect a statistically significant difference at alpha = 0.05. Relative response rates were determined and statistical significance was tested for a difference in proportions. The trial results were reported according to the CONSORT guidelines.
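The planning and analysis figures above can be reproduced with standard normal-approximation formulas for two proportions. The sketch below is illustrative code written for this summary, not part of the original study; the function names are ours. It computes the per-group sample size implied by the stated assumptions (45% versus 60% response, alpha = 0.05, 80% power), and the relative response rate and two-proportion z-test p-value from the observed counts (43/195 versus 57/194).

```python
import math

# Per-group sample size for comparing two proportions (normal approximation),
# using the study's planning figures: 45% vs 60% response, two-sided alpha = 0.05,
# 80% power.
def n_per_group(p1, p2, z_alpha=1.959964, z_beta=0.841621):
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Relative response rate and two-sided p-value from a pooled two-proportion
# z-test, applied to the reported counts.
def rr_and_p(r1, n1, r2, n2):
    p1, p2 = r1 / n1, r2 / n2
    rr = p1 / p2
    pooled = (r1 + r2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return rr, p_value

print(n_per_group(0.45, 0.60))        # roughly 173 per group
print(rr_and_p(43, 195, 57, 194))     # relative rate about 0.75, p about 0.1
```

Under these assumptions the required sample of roughly 173 per group is consistent with the 195 and 194 questionnaires actually mailed, and the observed relative rate (about 0.75) and p-value (about 0.1) match the reported results.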
The use of high quality paper did not increase response rates to a questionnaire survey of gynaecologists affiliated to an endoscopic society. The low response rate in our survey may have resulted from the content of our questionnaire: it enquired about research issues, so those with an active interest in research were more likely to respond. However, any such selection biases should be minimised by the randomisation process and therefore do not affect the internal validity of our findings. The low response does, however, limit the external validity, or generalisability, of our findings. Our power assumptions were not borne out by the response rates, and some may argue that the apparent lack of an effect was due to an inadequate sample. However, this would not explain the trend towards a lower response rate in the group allocated high quality paper. It may be that the 20 gsm difference in paper weight between the two groups was too small for recipients of 'high' quality paper to distinguish it readily from their general day-to-day paperwork. It is also possible that our definition of paper quality by weight alone was inadequate, and that other features of stationery quality, such as colour intensity, laid paper and watermarking, should have been used.
Postal surveys are widely used because they represent a cost effective method of obtaining information from large numbers of geographically disparate medical professionals about their attributes, behaviours, attitudes and beliefs. It is of concern that response rates are declining, and there is therefore a need to develop effective strategies, beyond the content of the questionnaire itself, to counter this trend. Data from primary and secondary research have indicated that prenotifying recipients, personalising questionnaires and providing follow-up letters improve response rates. [7–9] Other potentially useful techniques include altering the colour of questionnaires, obtaining sponsorship from academic institutions, including return envelopes and using monetary and non-monetary incentives. [7–9] In contrast, provision of pens, the use of covering letters, assurances of anonymity and stating deadlines do not increase rates of return. Studies have reported conflicting findings regarding the effect of "help the researcher" type appeals in covering letters [8, 9] and the provision of return postage, [9, 10] although the type of return postage provided appears to influence response. To our knowledge, the effect of paper quality on response rates to postal surveys has not been previously tested in a randomised controlled trial.
Given the lack of effectiveness shown in our study and the additional cost of higher quality paper (€66 versus €43 for five reams (2,500 sheets) of A4 from local National Health Service suppliers, an increase of approximately 50%), investigators should carefully consider the use of this particular strategy to improve response rates. If quality differences are marginal, there may not be a substantial improvement in response rates.
We thank Tracy Bingham, Amy Godwin, Jan Godwin, Christine Lyons, Anthony Morrison and Ian West for their help in mailing the questionnaire.
Contributors: KSK generated the concept for the paper with input from TJC. TJC generated the randomisation sequence, collected the responses and analysed the data. TJC wrote the manuscript with comments from KSK and JKG. JKG is the guarantor.
Funding: University of Birmingham Interdisciplinary Research Fund and the Birmingham Women's Hospital Research and Development Programme.
- McAvoy BR, Kaner EF: General practice postal surveys: a questionnaire too far? BMJ. 1996, 313: 732-733.
- Spry VM, Hovell MF, Sallis JG, Hofsteter CR, Elder JP, Molgaard CA: Recruiting survey respondents to mailed surveys: controlled trials of incentives and prompts. Am J Epidemiol. 1989, 130 (1): 166-72.
- Clark TJ, Khan KS, Gupta JK: Provision of pen along with questionnaire does not increase the response rate to a postal survey: A randomised controlled trial. J Epidemiol Community Health. 2001, 55: 595-596. 10.1136/jech.55.8.595.
- Clark TJ, Daniels J, Khan KS, Gupta JK: Hysterectomy with bilateral salpingo-oophorectomy: A survey of gynecological practice. Acta Obstet Gynecol Scand. 2001, 80 (1): 62-4. 10.1034/j.1600-0412.2001.800112.x.
- Moher D, Schulz KF, Altman DG, for the CONSORT Group: The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001, 357: 1191-1194. 10.1016/S0140-6736(00)04337-3.
- Dillman DA, Sinclair MD, Clark JR: Effects of questionnaire length, respondent-friendly design, and a difficult question on response rates for occupant-addressed census mail surveys. Public Opinion Quarterly. 1993, 57: 289-304. 10.1086/269376.
- Fox RJ, Crask MR, Jonghoon K: Mail survey response rate. A meta-analysis of selected techniques for inducing response. Public Opinion Quarterly. 1988, 52: 467-491. 10.1086/269125.
- Yu J, Cooper H: A quantitative review of research design effects on response rates to questionnaires. Journal of Marketing Research. 1983, 20: 36-44.
- Yammarino FJ, Skinner SJ, Childers TL: Understanding mail survey response behavior. A meta-analysis. Public Opinion Quarterly. 1993, 55: 613-639. 10.1086/269284.
- Armstrong JS, Lusk EJ: Return postage in mail surveys. A meta-analysis. Public Opinion Quarterly. 1987, 51: 233-248. 10.1086/269031.
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/1/12/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original URL.