
Response is increased using postal rather than electronic questionnaires – new results from an updated Cochrane Systematic Review

Abstract

Background

A decade ago, paper questionnaires were more common in epidemiology than those administered online, but increasing Internet access may have changed this. Researchers planning to use a self-administered questionnaire should know whether response rates to questionnaires administered electronically differ from those of questionnaires administered by post. We analysed trials included in a recently updated Cochrane Review to answer this question.

Methods

We exported data from randomised controlled trials included in three comparisons in the Cochrane Review that had evaluated hypotheses relevant to our research objective, and imported them into Stata for a series of meta-analyses not conducted in the Cochrane Review. We pooled odds ratios for response using random-effects meta-analyses. We explored causes of heterogeneity among study results using subgroup analyses. We assessed evidence for reporting bias using Harbord’s modified test for small-study effects.

Results

Twenty-seven trials (66,118 participants) evaluated the effect on response of an electronic questionnaire compared with postal. Results were heterogeneous (I-squared = 98%). There was evidence for biased (greater) effect estimates in studies at high risk of bias; a synthesis of studies at low risk of bias indicated that response was increased using postal questionnaires (OR = 1.43; 95% CI 1.08–1.89). Ten trials (39,523 participants) evaluated the effect of providing a choice of mode (postal or electronic) compared with an electronic questionnaire only. Response was increased with a choice of mode (OR = 1.63; 95% CI 1.18–2.26). Eight trials (20,909 participants) evaluated the effect of a choice of mode (electronic or postal) compared with a postal questionnaire only. There was no evidence for an effect on response of a choice of mode compared with postal only (OR = 0.94; 95% CI 0.86–1.02).

Conclusions

Postal questionnaires should be used in preference to, or offered in addition to, electronic modes.


Introduction

Rationale

When collecting information from large, geographically dispersed populations, a self-administered questionnaire is usually the only financially viable option [1]. Non-response to questionnaires reduces the effective sample size, which reduces study power and may introduce bias in study results [2]. The Cochrane Methodology Review of methods to increase response to self-administered questionnaires has provided a much-used scientific evidence base for effective data collection by questionnaire since the publication of the first version of the review in 2003, which focused on postal questionnaires [3].

A decade ago, paper-and-pencil administration of questionnaires in epidemiological studies was twenty times more common than electronic administration [4], but increased Internet access and decreasing volumes of mailed letters suggest that electronic administration has gained favour [5,6,7]. Researchers planning to collect data from participants using a self-administered questionnaire need to know how the proportion of participants responding to a questionnaire administered electronically compares with the proportion responding to one administered by post. We conducted further analyses of the trials included in the recently updated Cochrane Review [8] to answer this question.

Objective

To assess whether response rates to questionnaires administered electronically differ from those of questionnaires administered by post.

Methods

Data sources/measurement

We exported data from the randomised controlled trials included in the updated Cochrane Review [8] from RevMan and imported them into Stata for a series of meta-analyses not conducted in the Cochrane Review.

Comparisons

We focused on data from trials included in three comparisons in the Cochrane Review that had evaluated hypotheses relevant to our research objective:

  1. Postal vs. electronic questionnaire (Cochrane Comparison 81).

  2. Electronic questionnaire only vs. choice (postal or electronic) (Cochrane Comparison 84).

  3. Choice (electronic or postal) vs. postal questionnaire only (Cochrane Comparison 82).

These comparisons assess, respectively: response to questionnaires administered by post compared with questionnaires administered electronically; response to a questionnaire administered electronically compared with response when a postal response option is also offered; and response when an electronic response option is also offered compared with response to a questionnaire administered by post only.

Data items

Outcome measures

The data obtained from each trial comprised, for each arm, the number of participants randomised and the number who returned a completed or partially completed questionnaire after all mailings (for trials including a postal questionnaire) or who submitted a completed or partially completed online questionnaire after all contacts (for trials including an electronic questionnaire).
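
To make this structure concrete, the counts extracted from each trial can be arranged as one row per trial holding the four cell counts needed to compute an odds ratio for response. The following Stata sketch uses hypothetical variable names and invented counts for illustration only; it is not the review’s dataset:

    * Hypothetical per-trial 2x2 layout (names and counts invented):
    clear
    input str10 study year str7 rob resp_post nonresp_post resp_elec nonresp_elec
    "Trial A" 2009 "low"  240 160 180 220
    "Trial B" 2015 "high" 310  90 250 150
    end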

Other variables

Additional data were extracted on the:

  • Year of publication of the study.

  • Risk of bias in each included study (a judgement of high, low, or unclear); we assessed the overall risk of bias in each study using the Cochrane Collaboration’s tool [9].

Effect measures and synthesis

For each of the three comparisons listed above, we pooled the odds ratios for response in each included study in a random-effects meta-analysis (to allow for heterogeneity of effect estimates between studies) using the metan command in Stata [10]. This command also produced a forest plot (a visual display of the results of the individual studies and syntheses) for each comparison. We quantified any heterogeneity using the I-squared statistic, which describes the percentage of the variability in effect estimates that is due to heterogeneity [11].
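
As an illustration of how such a synthesis might be coded (a minimal sketch using the hypothetical variable names above; the review’s actual analysis scripts are not reproduced here), metan takes the four cell counts per trial and reports the pooled odds ratio together with a forest plot and the I-squared statistic:

    * DerSimonian-Laird random-effects meta-analysis of odds ratios
    * for response; postal arm counts first, then electronic arm counts.
    metan resp_post nonresp_post resp_elec nonresp_elec, ///
        or random label(namevar=study)

With the arms ordered as above, a pooled odds ratio greater than 1 indicates greater odds of response with postal administration.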

Subgroup analyses

We explored possible causes of heterogeneity among study results by conducting subgroup analyses according to two study-level factors: year of study publication, and risk of bias. We used a statistical test of homogeneity of the pooled effects in subgroups to assess evidence for subgroup differences: Cochran’s Q test, in which the Q statistic is distributed as a chi-square statistic with k-1 degrees of freedom, k being the number of subgroups (as formulated below). If there was evidence for subgroup differences provided by the test of homogeneity, we chose the ‘best estimate of effect’ as the estimate from the subgroup of studies at low risk of bias, or the subgroup of studies published after 2012. If there was no evidence for subgroup differences, we based our best estimate of effect on the synthesis of all studies.
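
One standard formulation of this test (our notation, not taken from the review): if the k subgroups yield pooled log odds ratios \hat{\theta}_j with standard errors s_j, then

    Q = \sum_{j=1}^{k} w_j (\hat{\theta}_j - \bar{\theta})^2,
    \qquad w_j = \frac{1}{s_j^2},
    \qquad \bar{\theta} = \frac{\sum_{j=1}^{k} w_j \hat{\theta}_j}{\sum_{j=1}^{k} w_j},

and Q is referred to a chi-square distribution with k - 1 degrees of freedom under the null hypothesis of equal subgroup effects. In Stata, adding the by() option to the metan call sketched above (e.g., by(rob)) produces the subgroup syntheses and a test of this kind.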

Year of study publication

From 2012, household access to a computer exceeded 40% [5]. As the odds ratios for response to questionnaires administered electronically may be associated with household access to a computer, we analysed trial results in two subgroups (studies published before 2012 and studies published after 2012), using the year of publication as an approximation of the year of study conduct.

Risk of bias

The odds ratios for response estimated in the included studies may be associated with trial quality [12, 13]. For this reason, we analysed trial results in two subgroups: trials judged to be at low risk of bias and trials judged to be at high risk of bias.

Reporting bias assessment

We assessed evidence for reporting bias using Harbord’s modified test for small-study effects, implemented in Stata using the metabias command [14]. This test maintains better control of the false-positive rate than the test proposed by Egger et al. [14].
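
A sketch of this assessment, again using the hypothetical variable names from the earlier example and assuming the four-cell-count syntax of the updated metabias command [14] for tests designed for binary outcomes:

    * Harbord's modified test for small-study effects; the four
    * variables are the 2x2 cell counts for each trial.
    metabias resp_post nonresp_post resp_elec nonresp_elec, harbord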

Results

Study characteristics

Thirty-five studies [15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49] reported 45 independent trials included in one or more of the three comparisons (Table 1). The studies were conducted in the US (n = 20), Europe (n = 13), and Australasia (n = 2). The studies included between 133 and 12,734 participants and were published between 2001 and 2020. Eight studies were judged to be at high risk of bias [16, 19, 33, 34, 42, 43, 45, 46].

Table 1 Characteristics of 35 included studies

Results of syntheses

Comparison 1 - Postal vs. electronic questionnaire

Twenty-seven trials (66,118 participants) evaluated the effect on questionnaire response of postal administration compared with electronic [15,16,17,18,19,20, 23,24,25,26,27,28, 31,32,33,34,35,36, 38,39,40,41, 43, 44, 46,47,48]. The odds of response were 76% greater (OR 1.76; 95% CI 1.34 to 2.32) with a postal questionnaire than with an electronic one (Fig. 1). There was considerable heterogeneity between the trial results (I-squared = 98%), but most of the studies showed greater response with postal questionnaires than with electronic questionnaires. The high I-squared is due to differences in the size of the benefit for postal questionnaires, rather than to an even spread of results between those favouring postal and those favouring electronic questionnaires.

Fig. 1: Effect on response of mode of administration

Comparison 2 - electronic questionnaire only vs. choice (postal or electronic)

Ten trials (39,523 participants) evaluated the effect on questionnaire response of providing a choice of response mode (postal or electronic) compared with an electronic questionnaire only [20, 21, 27, 29, 30, 35, 37, 40, 42, 45]. The odds of response were increased by over half when providing a choice of response mode (OR 1.63; 95% CI 1.18 to 2.26; Fig. 2). There was considerable heterogeneity between the trial results (I-squared = 97.1%), but again most of the studies favoured giving people the choice of response mode rather than an electronic questionnaire only. The high I-squared is due to differences in the size of the benefit for choice, rather than to an even spread of results between those favouring choice and those favouring electronic only.

Fig. 2: Effect on response of choice of response mode compared with electronic only

Comparison 3 - choice (electronic or postal) vs. postal only

Eight trials (20,909 participants) evaluated the effect of providing a choice of response mode (electronic or postal) compared with postal response only [20, 22, 27, 29, 34, 35, 40, 49]. There was no evidence for an effect on response of providing a choice (OR 0.94; 95% CI 0.86 to 1.02; Fig. 3). There was moderate heterogeneity among the trial results (I-squared = 50.9%).

Fig. 3: Effect on response of choice of response mode compared with postal only

Results of subgroup analyses

Table 2 presents the results of subgroup analyses according to the two study-level factors (forest plots of these subgroup analyses are included in supplementary figures).

Table 2 Results of subgroup analyses according to two study-level factors

Comparison 1 - postal vs. electronic questionnaire

Year of publication

A third of the studies were published before 2012 [15,16,17, 23, 24, 33, 35, 40, 47, 48]. In this subgroup the odds of response were 85% greater (OR 1.85; 95% CI 1.12 to 3.06) with a postal questionnaire compared with an electronic one. In the subgroup of studies published after 2012 the effect was smaller (OR 1.70; 95% CI 1.19 to 2.43), consistent with our concern (see Year of study publication above) that higher household access to a computer from 2012 may have increased preference for electronic questionnaires. However, the statistical test of homogeneity of the pooled effects in these two subgroups was not significant (p = 0.788), indicating no evidence from these studies for different effects by year of study (Supplementary Fig. 1a).

Risk of bias

Seven of the trials [16, 17, 26, 33, 34, 43, 46] were judged to be at high risk of bias; for these trials the odds of response were more than tripled (OR 3.24; 95% CI 1.68 to 6.25) using a postal questionnaire compared with an electronic one. There was considerable heterogeneity between the trial results (I-squared = 99%).

When only the 20 trials deemed to be at low risk of bias were synthesised, the odds of response were increased by two-fifths (OR 1.43; 95% CI 1.08 to 1.89). There was also considerable heterogeneity between these trial results (I-squared = 96.8%).

The statistical test of homogeneity of the pooled effects in these two subgroups (p = 0.025) provides some evidence for greater effect estimates in studies at high risk of bias (Supplementary Fig. 1b). Our best estimate of the effect on response of mode of administration is therefore from the synthesis of the studies at low risk of bias (OR 1.43; 95% CI 1.08 to 1.89). The overall result was thus confounded by risk of bias, but this did not explain the between-study heterogeneity.

Comparison 2 - electronic questionnaire only vs. choice (postal or electronic)

Year of study

Half of the studies were published before 2012 [35, 40, 42, 45]. In this subgroup there was no evidence for an effect on response of providing a postal response option (OR 1.22; 95% CI 0.93 to 1.61). In the subgroup of studies published after 2012 there was evidence for an effect on response of providing a postal response option (OR 2.02; 95% CI 1.30 to 3.13). The statistical test of homogeneity of the pooled effects in these two subgroups (p = 0.057) provided some evidence for different effects by year of study (Supplementary Fig. 2a). This apparent preference for a postal response option in studies published after 2012 ran counter to our concern (see Year of study publication above) that higher household access to a computer from 2012 would increase preference for electronic questionnaires. There was considerable heterogeneity between the trial results (I-squared = 98.2%), but most of the studies favoured giving people the choice of response mode rather than an electronic questionnaire only; the high I-squared is due to differences in the size of the benefit for choice, rather than to an even spread of results between those favouring choice and those favouring electronic only.

Risk of bias

Two of the trials were judged to be at high risk of bias [42, 45]. There was no evidence for an effect on response of a postal option in these studies (OR 1.08; 95% CI 0.43 to 2.71). When only the eight trials deemed to be at low risk of bias were synthesised, there was evidence that the odds of response were increased when providing a postal response option (OR 1.77; 95% CI 1.23 to 2.55). There was considerable heterogeneity between these trial results (I-squared = 97.7%). The statistical test of homogeneity of the pooled effects in these two subgroups (p = 0.326) provides no evidence for different effects by risk of bias (Supplementary Fig. 2b). Our best estimate of the effect on response of providing a postal response option is therefore from the synthesis of all of these studies (OR 1.63; 95% CI 1.18 to 2.26).

Comparison 3 - choice (electronic or postal) vs. postal questionnaire only

Year of study

In the subgroup of studies published before 2012 there was very weak evidence that the odds of response were lower with an electronic option (OR 0.85; 95% CI 0.73 to 0.98), whereas in studies published after 2012 there was no evidence for a difference between an electronic option and postal only, perhaps because electronic methods had become more acceptable with increased computer access. The results in both subgroups were more homogeneous (I-squared = 48.5% and 7.0%, respectively). The statistical test of homogeneity of the pooled effects in these two subgroups (p = 0.04) provides some evidence for different effects by year of study (Supplementary Fig. 3a). If we consider the most recent trials to better represent the situation today (i.e., greater access to computers than before 2012), then our best estimate of the effect on response of providing an electronic response option is from the synthesis of the studies published after 2012 (OR 1.01; 95% CI 0.93 to 1.08), i.e., no evidence for an effect.

Risk of bias

There was one study at high risk of bias [34]. Its results were entirely consistent with those of the seven studies at low risk of bias (test of homogeneity of the pooled effects in these two subgroups: p = 0.454; Supplementary Fig. 3b).

Results of assessments of evidence for reporting bias

Comparison 1 - postal vs. electronic questionnaire

There was no evidence for small study effects (Harbord’s modified test p = 0.148).

Comparison 2 - electronic questionnaire only vs. choice (postal or electronic)

There was no evidence for small study effects (Harbord’s modified test p = 0.841).

Comparison 3 - choice (electronic or postal) vs. postal questionnaire only

There was no evidence for small study effects (Harbord’s modified test p = 0.139).

Discussion

General interpretation of the results in the context of other evidence

This study has shown that response to a postal questionnaire is more likely than response to an electronic questionnaire. It has also shown that response is more likely when providing the option for postal response with an electronic questionnaire. It has further shown that providing an electronic response option with a postal questionnaire has no effect on response. Response is thus increased using postal rather than electronic questionnaires.

A previous meta-analysis of 43 mixed-mode surveys from 1996 to 2006 also found paper and postal administration produced greater response than electronic administration [50]. Our result that providing an electronic response option to postal administration does not increase response is consistent with a previous meta-analysis of randomised trials that found that mailed surveys that incorporate a concurrent Web option have significantly lower response rates than those that do not [51].

We suggest two possible reasons for these results:

  • Paper questionnaires are more accessible than electronic questionnaires.

Although access to the Internet increased over the period during which the studies included in this study were conducted [5, 52], a ‘digital divide’ [53] persists in many populations where completion of a paper questionnaire may be possible, but completion of an electronic one may not.

  • Paper questionnaires are more personal than electronic questionnaires.

Personalised materials have been shown to increase response [54]. If participants perceive a paper questionnaire with a return envelope to be more ‘personal’ than a request to go to a website to answer some questions, we should expect a higher response with paper.

Strengths and limitations

The main strength of this study is that our results are based on syntheses of the results of 45 randomised controlled trials spanning two decades, most of which were judged to be at low risk of bias.

There was, however, considerable heterogeneity between the results of the included studies. Our subgroup analyses did not identify any causes of heterogeneity among study results, but they did reveal confounding of the pooled result for postal versus electronic questionnaires. The unexplained heterogeneity means that we cannot be confident about the magnitude of the effects on response using postal rather than electronic questionnaires. However, from inspection of the forest plots we can be confident about the direction of these effects.

The evidence included in this review addresses ‘unit’ non-response only (i.e., whether a questionnaire is returned at all). ‘Item’ response (i.e., completion of individual questions) may be greater with electronic methods, but this was not addressed in this review and requires investigation in the future.

We assessed evidence for reporting bias using Harbord’s modified test for small-study effects and found no evidence for bias. This test may not be reliable given the substantial heterogeneity between the results of the included trials [55].

Due to the nature of this study (secondary analysis of a published review), there is no pre-registered protocol for the subgroup analyses provided in this study.

Implications for practice, policy, and future research

These results will help researchers and healthcare providers to improve data collection from study participants and patients, helping to maintain study power and reduce bias due to missing data in research studies. In addition to the methods already known to be effective in increasing questionnaire response [8, 56], postal questionnaires should be used in preference to, or offered in addition to, electronic modes, as this will help to increase the proportion of participants who respond. It should be noted, however, that the evidence upon which this recommendation is based is from studies published between 2001 and 2020, and this may change in the future as access to the Internet increases and more people become ‘tech-savvy’. Furthermore, we consider the certainty of the evidence provided in this study to be “Moderate”, due to the unexplained heterogeneity between the results of the included studies.

Future research

Evidence on effective data collection in low- and middle-income settings is needed. Research centres such as LSHTM can embed studies within trials (SWATs) in their research in these settings to help to increase the evidence base [57].

Participation rates for epidemiologic studies have been declining [58]. Our study has presented evidence that postal questionnaires are preferable to electronic questionnaires to improve participation, but it does not tell us why. Research is still needed to advance sociological and psychological theories of participation in data collection procedures [59].

Electronic administration provides benefits for researchers over paper administration which have not been addressed by this study: a well-designed Web questionnaire can control skip patterns, check for allowable values, ranges, and response consistencies, and include instructions and explanations about why a question is being asked [60]. These options could help to improve the completeness and quality of self-administered data collection, maintaining study power, reducing the risk of bias in study results, and saving study resources. Further research into the cost-effectiveness of electronic administration compared with postal administration in different settings will be needed to inform practice [61].

Data availability

Data extracted from included studies will be available in the forthcoming update on the Cochrane Library.

References

  1. Armstrong BK, White E, Saracci R. Principles of exposure measurement in Epidemiology. New York: Oxford University Press; 1992.

  2. Greenland S. Response and follow-up bias in cohort studies. Am J Epidemiol. 1977;106:184–7.

3. Edwards PJ, Roberts IG, Clarke MJ, DiGuiseppi C, Wentz R, Kwan I, Cooper R, Felix L, Pratap S. Methods to increase response rates to postal questionnaires. Cochrane Database Syst Rev. 2007;(2):MR000008. https://doi.org/10.1002/14651858.MR000008.pub3.

4. van Gelder MMHJ, Bretveld RW, Roeleveld N. Web-based questionnaires: the future in epidemiology? Am J Epidemiol. 2010;172:1292–8.

  5. Alsop T. Share of households with a computer at home worldwide from 2005 to 2019 https://www.statista.com/statistics/748551/worldwide-households-with-computer/ [Accessed 4 May 2024].

  6. Mazareanu E. United States Postal Service - mail volume. https://www.statista.com/statistics/320234/mail-volume-of-the-usps/#:text=United%20States%20Postal%20Service%20%2D%20mail%20volume%202004%2D2020&text=After%20reaching%20a%20peak%20of,to%20just%20129.2%20billion%20units [Accessed 4 May 2024].

  7. BBC. What is happening to the Royal Mail? http://news.bbc.co.uk/2/hi/business/8304722.stm [Accessed 4 May 2024].

  8. Edwards PJ, Roberts I, Clarke MJ, DiGuiseppi C, Woolf B, Perkins C. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2023;11(11):MR000008. https://doi.org/10.1002/14651858.MR000008.pub5.

  9. Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA, Cochrane Bias Methods Group; Cochrane Statistical Methods Group. The Cochrane collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928. https://doi.org/10.1136/bmj.d5928.

10. Harris RJ, Deeks JJ, Altman DG, Bradburn MJ, Harbord RM, Sterne JAC. metan: fixed- and random-effects meta-analysis. Stata J. 2008;8(1):3–28. https://doi.org/10.1177/1536867X0800800102.

  11. Higgins JPT, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327:557–60.

12. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias: dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995;273(5):408–12.

  13. Sterne JA, Jüni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in ‘meta-epidemiological’ research. Stat Med. 2002;21(11):1513-24. https://doi.org/10.1002/sim.1184. PMID: 12111917.

  14. Harbord RM, Harris RJ, Sterne JA. Updated tests for small-study effects in meta-analyses. Stata J. 2009;9(2):197–210.

  15. Akl EA, Maroun N, Klocke RA, Montori V, Schünemann HJ. Electronic mail was not better than postal mail for surveying residents and faculty. J Clin Epidemiol. 2005;58(4):425–9.

16. Basnov M, Kongsved SM, Bech P, Hjollund NH. Reliability of short form-36 in an Internet- and a pen-and-paper version. Inform Health Soc Care. 2009;34(1):53–8. https://doi.org/10.1080/17538150902779527. PMID: 19306199.

  17. Bech M, Kristensen MB. Differential response rates in postal and web-based surveys among older respondents. Surv Res Methods. 2009;3(1):1–6.

  18. Beebe TJ, Jacobson RM, Jenkins SM, Lackore KA, Rutten LJF. Testing the impact of mixed-Mode designs (mail and web) and multiple contact attempts within Mode (Mail or web) on Clinician Survey Response. Health Serv Res. 2018;53(Suppl 1):3070–83. Epub 2018 Jan 22. PMID: 29355920; PMCID: PMC6056581.

  19. Bergeson SC, Gray J, Ehrmantraut LA, Laibson T, Hays RD. Comparing Web-based with Mail Survey Administration of the Consumer Assessment of Healthcare Providers and Systems (CAHPS R) Clinician and Group Survey. Primary health care: open access. 2013;3.

  20. Bjertnaes O, Iversen HH, Skrivarhaug T. A randomized comparison of three data collection models for the measurement of parent experiences with diabetes outpatient care. BMC Med Res Methodol. 2018;18(1):95. https://doi.org/10.1186/s12874-018-0557-z. PMID: 30236067; PMCID: PMC6149010.

  21. Bray I, Noble S, Robinson R, Molloy L, Tilling K. Mode of delivery affected questionnaire response rates in a birth cohort study. J Clin Epidemiol. 2017;81:64–71.

  22. Brøgger J, Nystad W, Cappelen I, Bakke P. No increase in response rate by adding a web response option to a postal population survey: a randomized trial. J Med Internet Res. 2007;9(5):e40.

  23. Clark M, Rogers M, Foster A, Dvorchak F, Saadeh F, Weaver J, et al. A randomized trial of the impact of survey design characteristics on response rates among nursing home providers. Eval Health Prof. 2011;34(4):464–86.

  24. Cobanoglu C, Moreo PJ, Warde B. A comparison of mail, fax and web-based Survey methods. Int J Market Res. 2001;43(4):1–15.

  25. Fluss E, Bond CM, Jones GT, Macfarlane GJ. The effect of an internet option and single-sided printing format to increase the response rate to a population-based study: a randomized controlled trial. BMC Med Res Methodol 2014;14.

  26. Fowler FJ Jr, Cosenza C, Cripps LA, Edgman-Levitan S, Cleary PD. The effect of administration mode on CAHPS survey response rates and results: a comparison of mail and web-based approaches. Health Serv Res. 2019;54(3):714–21.

  27. Hardigan PC, Succar CT, Fleisher JM. An analysis of response rate and economic costs between mail and web-based surveys among practicing dentists: a randomized trial. J Community Health. 2012;37(2):383–94.

  28. Hardigan PC, Popovici I, Carvajal MJ. Response rate, response time, and economic costs of survey research: a randomized trial of practicing pharmacists. Res Social Administrative Pharm. 2016;12(1):141–8.

  29. Hohwu L, Lyshol H, Gissler M, Jonsson SH, Petzold M, Obel C. Web-based Versus Traditional Paper questionnaires: a mixed-Mode Survey with a nordic perspective. J Med Internet Res 2013;15(8).

  30. Iversen HH, Holmboe O, Bjertnaes O. Patient-reported experiences with general practitioners: a randomised study of mail and web-based approaches following a national survey. BMJ open. 2020;10(10):e036533.

  31. Jacob RT, Jacob B. Prenotification, incentives, and Survey Modality: an experimental test of methods to increase Survey Response Rates of School principals. J Res Educational Eff. 2012;5(4):401–18.

  32. Lagerros YT, Sandin S, Bexelius C, Litton JE, Löf M. Estimating physical activity using a cell phone questionnaire sent by means of short message service (SMS): a randomized population-based study. Eur J Epidemiol. 2012;27(7):561–6.

  33. Leece P, Bhandari M, Sprague S, Swiontkowski MF, Schemitsch EH, Tornetta P, et al. Internet versus mailed questionnaires: a randomized comparison (2). J Med Internet Res. 2004;6(3):26–33.

34. Mauz E, Hoffmann R, Houben R, Krause L, Kamtsiuris P, Gößwald A. Mode equivalence of health indicators between data collection modes and mixed-mode survey designs in population-based health interview surveys for children and adolescents: methodological study. J Med Internet Res. 2018;20(3):e64. https://doi.org/10.2196/jmir.7802. PMID: 29506967; PMCID: PMC5859740.

  35. Millar MM, Dillman DA. Improving response to web and mixed-Mode surveys. Pub Opin Q. 2011;75(2):249–69.

  36. Millar MM, Elena JW, Gallicchio L, Edwards SL, Carter ME, Herget KA, Sweeney C. The feasibility of web surveys for obtaining patient-reported outcomes from cancer survivors: a randomized experiment comparing survey modes and brochure enclosures. BMC Med Res Methodol. 2019;19(1):208.

  37. Murphy CC, Craddock Lee SJ, Geiger AM, Cox JV, Ahn C, Nair R, Gerber DE, Halm EA, McCallister K, Skinner CS. A randomized trial of mail and email recruitment strategies for a physician survey on clinical trial accrual. BMC Med Res Methodol. 2020;20(1):123.

  38. Reinisch JF, Yu DC, Li WY. Getting a Valid Survey Response From 662 Plastic Surgeons in the 21st Century. Ann Plast Surg. 2016;76(1):3–5. https://doi.org/10.1097/SAP.0000000000000546. PMID: 26418779.

  39. Sakshaug JW, Vicari B, Couper MP. Paper, e-mail, or both? Effects of contact mode on participation in a web survey of establishments. Social Sci Comput Rev. 2019;37(6):750–65.

  40. Schmuhl P, Van Duker H, Gurley KL, Webster A, Olson LM. Reaching emergency medical services providers: is one survey mode better than another? Prehospital Emerg Care. 2010;14(3):361–9.

  41. Schwartzenberger J, Presson A, Lyle A, O’Farrell A, Tyser AR. Remote Collection of patient-reported outcomes following outpatient hand surgery: a Randomized Trial of Telephone, Mail, and E-Mail. J Hand Surg. 2017;42(9):693–9.

  42. Scott A, Jeon SH, Joyce CM, Humphreys JS, Kalb G, Witt J et al. A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors. BMC Med Res Methodol 2011;11.

43. Sebo P, Maisonneuve H, Cerutti B, Fournier JP, Senn N, Haller DM. Rates, delays, and completeness of general practitioners’ responses to a postal versus web-based survey: a randomized trial. J Med Internet Res. 2017;19(3).

  44. Taylor S, Ferguson C, Peng F, Schoeneich M, Picard RW. Use of In-Game rewards to Motivate Daily Self-Report Compliance: Randomized Controlled Trial. J Med Internet Res. 2019;21(1):e11683. https://doi.org/10.2196/11683. PMID: 30609986; PMCID: PMC6682282.

  45. van den Berg MH, Overbeek A, van der Pal HJ, Versluys AB, Bresters D, van Leeuwen FE et al. Using web-based and paper-based questionnaires for Collecting Data on Fertility Issues among Female Childhood Cancer survivors: differences in response characteristics. J Med Internet Res 2011;13(3).

  46. Weaver L, Beebe TJ, Rockwood T. The impact of survey mode on the response rate in a survey of the factors that influence Minnesota physicians’ disclosure practices. BMC Med Res Methodol. 2019;19(1):73.

  47. Whitehead L. Methodological issues in internet-mediated research: a randomized comparison of internet versus mailed questionnaires. J Med Internet Res. 2011;13(4):e109.

  48. Yetter G, Capaccioli K. Differences in responses to web and paper surveys among school professionals. Behav Res Methods. 2010;42(1):266–72.

  49. Ziegenfuss JY, Beebe TJ, Rey E, Schleck C, Locke GR 3rd, Talley NJ. Internet option in a mail survey: more harm than good? Epidemiology. 2010;21(4):585–6. https://doi.org/10.1097/EDE.0b013e3181e09657.

  50. Shih TH, Fan X. Response Rates and Mode preferences in Web-Mail mixed-Mode surveys: a Meta-analysis. Int J Internet Sci. 2007;2(1):59–82.

  51. Medway RL, Fulton J. When more gets you less - a Meta-analysis of the effect of concurrent web options on Mail Survey Response Rates. Pub Opin Q. 2012;76(4):733–46.

  52. Petrosyan A. Global internet access rate 2005–2022. https://www.statista.com/statistics/209096/share-of-internet-users-worldwide-by-market-maturity/#:text=As%20of%202022%2C%2066%20percent,access%20rate%20was%2066%20percent [Accessed 4 May 2024].

  53. Vestberg H. How can we bring 2.6 billion people online to bridge the digital divide? World Economic Forum Annual Meeting; Jan 2024. https://www.weforum.org/agenda/2024/01/digital-divide-internet-access-online-fwa/ [Accessed 4 May 2024].

  54. Scott P, Edwards P. Personally addressed hand-signed letters increase questionnaire response: a meta-analysis of randomised controlled trials. BMC Health Serv Res. 2006;6:111. https://doi.org/10.1186/1472-6963-6-111.

  55. Harbord RM, Egger M, Sterne JA. A modified test for small-study effects in meta-analyses of controlled trials with binary endpoints. Stat Med. 2006;25(20):3443-57. https://doi.org/10.1002/sim.2380. PMID: 16345038.

  56. Edwards P, Cooper R, Roberts I, Frost C. Meta-analysis of randomised trials of monetary incentives and response to mailed questionnaires. J Epidemiol Community Health. 2005;59(11):987–99. https://doi.org/10.1136/jech.2005.034397.

  57. Treweek S, Bevan S, Bower P, Campbell M, Christie J, Clarke M, et al. Trial Forge Guidance 1: what is a study within a trial (SWAT)? Trials. 2018;19(1):139. https://doi.org/10.1186/s13063-018-2535-5.

  58. Galea S, Tracy M. Participation rates in epidemiologic studies. Ann Epidemiol. 2007;17(9):643–53.

  59. Dillman DA. Towards survey response rate theories that no longer pass each other like strangers in the night. In: Brenner PS, editor. Understanding survey methodology - sociological theory and applications. Switzerland: Springer Nature; 2020. pp. 15–44.

  60. Griffin DH, Fischer DP, Morgan MT. Testing an Internet Response Option for the American Community Survey. Washington: U.S. Bureau of the Census, 2001. https://www.census.gov/library/working-papers/2001/acs/2001_Griffin_01.html [Accessed 27 October 2023].

  61. Sinclair M, O’Toole J, Malawaraarachchi M, et al. Comparison of response rates and cost-effectiveness for a community-based survey: postal, internet and telephone modes with generic or personalised recruitment approaches. BMC Med Res Methodol. 2012;12:132. https://doi.org/10.1186/1471-2288-12-132.

Acknowledgements

Not applicable.

Funding

This work was supported by the NIHR Evidence Synthesis Programme [Grant NIHR133238]. Conduct of the work was entirely independent of the funder.

Author information

Contributions

PE independently screened the search results and the full-text reports, and extracted data from eligible studies. He assisted with data entry into RevMan 5. He exported data into Stata and conducted the meta-analyses and other statistical analyses. He drafted the manuscript. PE is guarantor for the paper. CP ran the electronic searches, independently screened the search results and the full-text reports, and extracted data from eligible studies. CP assisted with data entry into RevMan 5, and commented on the manuscript.

Corresponding author

Correspondence to Phil Edwards.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Edwards, P., Perkins, C. Response is increased using postal rather than electronic questionnaires – new results from an updated Cochrane Systematic Review. BMC Med Res Methodol 24, 209 (2024). https://doi.org/10.1186/s12874-024-02332-0
