- Research article
- Open Access
Response rate of patient reported outcomes: the delivery method matters
BMC Medical Research Methodology volume 21, Article number: 220 (2021)
Patient Reported Outcomes (PROs) are subjective outcomes of disease and/or treatment in clinical research. For effective evaluations of PROs, high response rates are crucial. This study assessed the impact of the delivery method on the patients’ response rate.
A cohort of patients with a unilateral vestibular schwannoma (a condition with substantial impact on quality of life, requiring prolonged follow-up) was assigned to three delivery methods: email, regular mail, and hybrid. Patients were matched for age and time since the last visit to the outpatient clinic. The primary outcome was the response rate; determinants other than the delivery method were age, education, and time since the last consultation. In addition, the effect of a second reminder by telephone was evaluated.
In total, 602 patients participated in this study. The response rates for delivery by email, hybrid, and mail were 45, 58, and 60%, respectively. After a reminder by telephone, the response rates increased to 62, 67, and 64%, respectively. A lower response rate was associated with a lower level of education and a longer time interval since the last outpatient clinic visit.
The response rate for PRO varies by delivery method. PRO surveys by regular mail yield the highest response rate, followed by hybrid and email delivery methods. Hybrid delivery combines good response rates with the ease of digitally returned questionnaires.
Patient Reported Outcomes (PROs) are increasingly used both for scientific purposes and in clinical practice. PROs measure patients’ perceived symptoms, functioning, and health-related quality of life. The use of PROs in research improves understanding of the patient’s perspective on the disease, its sequelae, and therapy. In addition, using PROs in clinical practice may improve patient-clinician communication and enhance patient outcomes [2, 3]. However, the implementation of PROs in routine practice can be challenging due to technological and workflow barriers.
One such barrier can be the response rate. A low response rate can introduce selection bias and reduce the outcomes’ external validity. In general, response rates can be improved by several methods, including monetary incentives, shorter questionnaires, reminders, personally addressed invitations, and the delivery method [5,6,7,8]. Delivery by email is increasingly used, as both the distribution and the digital data entry of the answers save costs. However, delivery by regular mail has appeared to provide better response rates over the years. Research performed in the medical context has shown that clinicians’ response rates to mail and email are similar, or slightly favor mail [9, 10]. A hybrid delivery method using both mail and email might be better than either email or mail alone. Research on delivery methods and patients’ response rates is scarce and often performed with small sample sizes. These studies, published between 2014 and 2017, showed that mail delivery results in higher response rates than email delivery [12,13,14]. However, digital literacy has rapidly increased in recent years. For example, in Europe 87% of people aged 16–74 years had used the internet in the last 3 months in 2019, compared to 75% in 2013 and 57% in 2007. As a result, patients’ response to email may have increased too. This study assessed three different delivery methods for PRO measures in a large cohort of patients with unilateral vestibular schwannoma.
This study was part of a larger study on long-term outcomes of vestibular schwannoma management. Vestibular schwannoma is a benign, usually not life-threatening intracranial tumor, causing symptoms such as hearing loss, tinnitus, and balance problems due to pressure on adjacent structures, and as such may have considerable impact on quality of life. A small majority of these tumors is non-progressive and in these cases active surveillance during an extended follow-up period is usually the management option of choice. In progressive tumors, surgery or radiotherapy is performed to prevent future complications such as brain stem compression or elevated intracranial pressure. After an active intervention, prolonged active surveillance ensues in these patients too, in order to identify possible recurrences.
Patients who participated in a survey study in 2014 were re-approached for participation in a survey between May and September 2020. Both studies were performed at the Leiden University Medical Center, an expert referral center for vestibular schwannoma in the Netherlands. All patients were diagnosed with a unilateral vestibular schwannoma (VS) between 2003 and 2014. Patients with bilateral VS, other skull base pathologies, or insufficient proficiency in the Dutch language to complete the questionnaires were excluded.
Several PRO measures that are used in routine vestibular schwannoma care in our hospital were collected in this study. Patients received a generic health-related quality of life (HRQL) questionnaire, the Short Form 36 (SF-36), and a disease-specific HRQL questionnaire, the Penn Acoustic Neuroma Quality-of-Life Scale (PANQOL) [17, 18]. In addition, patients were asked to complete the dizziness handicap inventory (DHI), the medical outcomes study cognitive functioning scale (MOS-CFS), the decision regret scale, and the iMTA productivity cost questionnaire (iPCQ) [19,20,21]. Combined, patients were asked to answer 117 questions.
Three different delivery methods were used: email, regular mail, and a hybrid of the two. These three methods were chosen because they represent the modern delivery method (email), the gold standard so far (mail), and an intermediate (hybrid) method that combines the conventional approach of mail with the advantage of digital data entry. Patients in the email group received an email invitation with a link to a digital informed consent form. After providing consent, patients were directed to the digital questionnaires. Patients in the hybrid group were invited by regular mail, with a letter including a unique code and a link to the digital informed consent form and the questionnaires. The regular mail group received an informed consent form, the printed questionnaires, and a pre-paid return envelope. After 2 weeks, patients received a first personally addressed reminder by email (email group) or mail (hybrid and regular mail groups). After another 2 weeks, all non-responders were called once by telephone as a second reminder. This telephone call was made by a researcher, not the treating physician. In all groups, patients could request a different delivery method. Responders were defined as patients who completed the informed consent form and opened the questionnaire.
Before the introduction of electronic patient records in 2011, patients’ email addresses were not registered during the first visit to the hospital. Therefore, an email address was available for only a minority of the patients, making randomization impossible. Patients for whom an email address was registered were assigned to the email group. Patients for whom no email address was available were randomly assigned to either the regular mail or the hybrid delivery group. Two factors, age and time since the last visit, were expected to differ between the groups with and without an email address, since most patients without an email address were diagnosed before 2011. To avoid confounding of the effect of the delivery method on the response rate by these two factors, we matched patients in all groups for age (< 45 yrs.; 46–50 yrs.; …; 81–85 yrs.; > 85 yrs.) and time since the last visit (< 5 yrs.; 5–10 yrs.; > 10 yrs.), as shown in Fig. 1.
The frequencies of categorical variables and the means of numerical variables were calculated. Demographics of responders and non-responders were compared. Next, because patients could switch delivery methods, three analyses were performed. First, a stringent analysis was performed in which switchers were considered non-responders. Second, an intention to treat analysis was conducted in which patients were analyzed in their predefined delivery method. Third, an as treated analysis was performed in which patients who switched between delivery methods were analyzed in the method they actually used. The outcome was the response rate per group, which was analyzed using a chi-squared test. We also assessed the effect of the second, telephone call reminder with a chi-squared test. In addition, the effects of age, sex, education level, time elapsed since the last visit (in years), and delivery method on the response rate were analyzed using logistic regression with response as the dependent variable. The independent variables were selected based on their reported effect on response rates in previous literature [8, 14, 22]. Furthermore, interactions between independent variables were checked and, when relevant, included in the model. Multicollinearity was checked by calculating the variance inflation factor (VIF), and goodness of fit was verified with a Hosmer-Lemeshow test and a model chi-squared test. A minimum sample size of 387 was required based on a power calculation for the primary outcome, which used the difference in response rates in previous research (effect size w = 0.2, α = 0.05, 1-β = 0.95).
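For readers who want to reproduce a sample-size calculation of this kind (a chi-squared test with effect size w = 0.2, α = 0.05, power 0.95, and 2 degrees of freedom for three groups), the sketch below is an illustrative pure-Python reimplementation, not the authors' actual computation. It relies on the fact that the central chi-squared survival function has a closed form for even degrees of freedom, and expresses the noncentral distribution as a Poisson mixture of central ones; with these inputs it should land close to the reported minimum of 387, up to the rounding conventions of the software used.

```python
import math

def chi2_sf_even(x, df):
    """Survival function of a central chi-squared variable (even df only)."""
    k = df // 2
    term, s = 1.0, 0.0
    for i in range(k):            # s = sum_{i<k} (x/2)^i / i!
        s += term
        term *= (x / 2) / (i + 1)
    return math.exp(-x / 2) * s

def ncx2_sf(x, df, lam, terms=100):
    """Noncentral chi-squared survival function, as a Poisson mixture."""
    w = math.exp(-lam / 2)        # Poisson(lam/2) probability at j = 0
    total = 0.0
    for j in range(terms):
        total += w * chi2_sf_even(x, df + 2 * j)
        w *= (lam / 2) / (j + 1)  # next Poisson weight
    return total

def required_n(w=0.2, alpha=0.05, power=0.95, df=2):
    """Smallest n for which the chi-squared test reaches the requested power."""
    crit = -2 * math.log(alpha)   # chi2 critical value; this closed form is exact for df = 2
    n = 1
    while ncx2_sf(crit, df, n * w * w) < power:
        n += 1                    # noncentrality grows as n * w^2
    return n

print(required_n())
```

The loop searches n directly because the noncentrality parameter scales linearly with the sample size (λ = n·w²).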
All statistical analyses were performed in SPSS version 26 (Armonk, NY: IBM Corp). A p-value < 0.05 was considered statistically significant. Demographic information was available from a previous 2014 study, so there were no missing data for any demographic variables.
In total, 602 patients were approached, of whom 45 (7%) refused participation, 170 (28%) did not respond, and 387 (64%) responded, as shown in Fig. 1. Baseline characteristics of the patients in the three groups are shown in Table 1. As expected, the matching variables age and time elapsed since the last visit were equally distributed across the groups. The proportion of patients with a low level of education was higher in the mail group. Patients with a lower educational level, aged between 50 and 59 years or > 80 years, or with > 5 years since the last visit were more often non-responders (Table 2).
Only 15 patients (2%) completed fewer than 80% of the total number of questions. Most incomplete responders in the email (N = 5) and hybrid (N = 6) groups seemed to have started the questionnaires and stopped at some point, without skipping items. In the mail group, incomplete responders (N = 4) skipped some questions. Because of the low number of incomplete responders, statistical analysis of differences in item or PRO level response rates or differences per PRO questionnaire could not be reliably performed.
Furthermore, 98 patients (16%) made use of the option to request a different delivery method: 74 (76%) preferred to receive the questionnaire by regular mail and 24 (24%) preferred to complete the questionnaire electronically (by email).
Figure 2 shows the results of the three performed analyses. In the stringent analysis, mail delivery resulted in statistically significantly better response rates than email and hybrid delivery (57% versus 37 and 38%, respectively; χ2, p < 0.001). In the intention to treat analysis, in which patients who switched delivery method were included, the response rates for patients allocated to delivery by email, hybrid, and regular mail were 45, 58, and 60%, respectively (χ2, p < 0.001).
The requests for a different delivery method resulted in a decrease in email (−0.5%; N = −1) and hybrid delivery (−26%; N = −53), and an increase in mail delivery (+28%; N = +54), as shown in Table 1. The response rate for the actual delivery method, shown in the as treated analysis, was 42% by email, 51% by hybrid, and 66% by mail (χ2, p < 0.001).
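The chi-squared comparisons above can be computed directly from a delivery-method-by-response contingency table. The sketch below is illustrative only: the counts are hypothetical (the per-group denominators are not given here), chosen merely to yield response proportions resembling those reported.

```python
import math

def chi2_independence(table):
    """Pearson chi-squared statistic for a 2 x k contingency table.

    `table` is a list of rows (here: responders, non-responders),
    each row holding the counts per delivery method.
    """
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    # sum of (observed - expected)^2 / expected over all cells
    return sum(
        (obs - rt * ct / n) ** 2 / (rt * ct / n)
        for r, rt in zip(table, row_tot)
        for obs, ct in zip(r, col_tot)
    )

# Hypothetical counts: columns = email, hybrid, mail; rows = responders, non-responders.
observed = [[90, 116, 120],
            [110, 84, 80]]
chi2 = chi2_independence(observed)
p = math.exp(-chi2 / 2)          # chi-squared survival function, exact for df = (2-1)*(3-1) = 2
print(round(chi2, 2), round(p, 4))  # prints: 10.69 0.0048
```

With three groups and a binary outcome the test has 2 degrees of freedom, for which the p-value reduces to the simple closed form used above.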
Reminder by telephone
After the first reminder by either email or regular mail, 248 patients (41%) had still not responded and received a reminder by telephone call. Nearly half of these initial non-responders (N = 123) answered the telephone, of whom 48% (N = 59) participated after this telephone call, 36% (N = 45) did not respond despite saying during the call that they would, and 15% (N = 19) declined participation. The demographics of these groups are shown in Table 3. The response rates in the intention to treat analysis rose to 62, 67, and 64% for email, hybrid, and mail, respectively (χ2, p = 0.65). In the as treated analysis the final response rates were 60, 62, and 69%, respectively (χ2, p = 0.09).
The results of the logistic regression are shown in Table 4. The stringent, intention to treat and as treated models met the model assumptions and goodness of fit tests. All models showed that the probability of responding was lower in the email delivery group. The hybrid delivery was also associated with a lower response rate in the stringent and the as treated models.
A low education level was a confounding factor in all models. Age and sex were not associated with the response rate, except that patients aged 60–69 years in the stringent model were more likely to respond.
The interaction between the time since the last visit and the delivery method was close to statistical significance in the intention to treat and as treated analyses. In the stringent analysis, this interaction was statistically significant, meaning that patients whose last visit to the hospital was less than 5 years ago tended to have different response rates per delivery method than patients whose last visit was longer ago. In the mail delivery group, the response rate decreased with increasing time since the last visit. In the other groups, this effect was not observed, as shown in Fig. 3. Other interactions (i.e., between age, sex, education level, and delivery method) were not statistically significant (p-values above a lenient threshold of 0.2) and were not included in the models.
This study suggests that email delivery might result in a lower response rate compared to delivery by regular mail or hybrid delivery. Even when patients could choose their preferred delivery method, the response rate per email remained lower than mail or hybrid delivery.
The low response rate of email delivery is consistent with prior studies on patient response [12, 14]. This is somewhat surprising, as one might expect patients’ digital literacy to have increased with the growing digitalization of the patient journey in hospitals. Compared to other studies, we found smaller differences between the delivery methods, despite the older average age of patients in this study. An older population might be less familiar with the internet or email, but in the Netherlands, 87% of the elderly (> 65 years) had internet access and 72% used email in 2019. In the subgroup of 65–75 years (which comprises approximately half of our study population), these percentages are even higher: 95% internet access and 83% use of email.
Sex and education level could also act as confounding factors on the response rate or interact with the delivery method. For example, in healthcare-related research amongst patients, an effect of sex is not consistently observed [24, 25]. In this study too, sex did not seem to affect the response rate or vary the response rate by delivery method (i.e., no significant interaction with delivery method). The level of education did have a significant impact on response rates, as patients with a low level of education were less likely to be responders in this study (Table 4), which is consistent with a previous report. However, the effect of the delivery method on the response rate did not vary by education level. Finally, the time since the last clinic visit appeared to affect the association between delivery method and response rate, as we observed a decreasing response rate with increasing time since the last visit, but only in the mail delivery group (Fig. 3). This effect might be comparable to the decreasing response rates with increasing follow-up periods reported in long-term follow-up studies; however, it is unclear why this effect was only seen after mail delivery.
Although regular mail delivery had the highest response rate, there are some logistic disadvantages. To use the PROs, surveys on paper need to be digitized, which is time-consuming and error-prone. This is especially cumbersome when PROs are used in a clinical context, and feedback is expected during clinical consultation. In this light, the results of hybrid delivery are noteworthy since the response rate is close to regular mail delivery, but the PROs are completed and returned electronically. In practice, using a hybrid system could reduce the workload of digitizing PRO outcomes, with comparable response rates to surveys by mail.
In addition, a telephone call reminder can further increase response rates. In the current study, 48% of initial non-responders did respond after being reminded by a telephone call. However, the advantage of this higher response rate should be weighed against the time investment needed.
There are some inherent limitations to this study. First, it was impossible to perform a randomized trial because an email address was not available for all eligible patients. Although the missing email addresses were caused by a different registration system in the hospital, we cannot be entirely sure that the differences between the groups are purely random. Second, the study participants were probably inclined to participate in a research survey because they had already participated in a previous study in 2014. This committed population may therefore have increased response rates. Conversely, a decreased response rate may have been caused by the prolonged time interval between the survey and the last consultation, which applied to a number of participants and was associated with a lower probability of responding in this study. Last, the PRO response rates found in this cross-sectional research setting may not be representative of PRO response rates in a clinical setting, in which PRO measures are typically collected shortly before or after a clinical consultation and serve a more direct clinical purpose. However, patient preferences with regard to the survey delivery method are probably equally applicable to both settings.
When using PRO measures, the response rate is an essential factor to consider. Various factors have been identified that influence the response rate, such as personally addressed invitations, shorter questionnaires, and financial incentives [7, 27, 28]. In the current study, all invitations were personally addressed, but no financial incentives or differences in questionnaire length were applied. In addition, we found that a reminder by letter and/or telephone call may be a particularly important factor in increasing the response rate of patients, which is in agreement with previous reports on health studies. Finally, this study suggests that two other factors are important for patients’ response rates: the initial delivery method and the ability to choose the desired delivery method.
The effectiveness of the increasing use of PROs in healthcare stands or falls with patients completing and returning the questionnaires. The response rate can be influenced by several factors, and the current study suggests that the route of survey delivery is an important one. Regular mail delivery seemed to perform better than email delivery in our study population, but it is more time-consuming, both in distribution and in subsequent digitization. Therefore, a hybrid delivery method, in which patients receive a letter by regular mail with a code to access the survey digitally, might be the best of both worlds.
Availability of data and materials
The dataset generated and analysed during the current study is available in the DANS/EASY repository: https://doi.org/10.17026/dans-xak-tvya
Abbreviations
PRO: Patient Reported Outcome
HRQL: Health-Related Quality of Life
SF-36: Short Form 36
PANQOL: Penn Acoustic Neuroma Quality-of-Life Scale
DHI: Dizziness Handicap Inventory
MOS-CFS: Medical Outcomes Study Cognitive Functioning Scale
iPCQ: iMTA Productivity Cost Questionnaire
References
1. Basch E, Abernethy AP, Mullins CD, Reeve BB, Smith ML, Coons SJ, et al. Recommendations for incorporating patient-reported outcomes into clinical comparative effectiveness research in adult oncology. J Clin Oncol. 2012;30(34):4249–55.
2. Basch E. Patient-reported outcomes — harnessing patients’ voices to improve clinical care. N Engl J Med. 2017;376(2):105–8.
3. Kotronoulas G, Kearney N, Maguire R, Harrow A, Di Domenico D, Croy S, et al. What is the value of the routine use of patient-reported outcome measures toward improvement of patient outcomes, processes of care, and health service outcomes in cancer care? A systematic review of controlled trials. J Clin Oncol. 2014;32(14):1480–501.
4. Johnson TP. Response rates and nonresponse errors in surveys. JAMA. 2012;307(17):1805.
5. Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, et al. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009;(3):MR000008. https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.MR000008.pub4/full.
6. Ancker JS, Edwards A, Nosal S, Hauser D, Mauer E, Kaushal R. Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system. BMC Med Inform Decis Mak. 2017;17(1):36.
7. Nakash RA, Hutton JL, Jørstad-Stein EC, Gates S, Lamb SE. Maximising response to postal questionnaires – a systematic review of randomised trials in health research. BMC Med Res Methodol. 2006;6(1):5.
8. Shih T-H, Fan X. Comparing response rates in e-mail and paper surveys: a meta-analysis. Educ Res Rev. 2009;4(1):26–40.
9. Weaver L, Beebe TJ, Rockwood T. The impact of survey mode on the response rate in a survey of the factors that influence Minnesota physicians’ disclosure practices. BMC Med Res Methodol. 2019;19(1):73.
10. Hardigan PC, Popovici I, Carvajal MJ. Response rate, response time, and economic costs of survey research: a randomized trial of practicing pharmacists. Res Soc Adm Pharm. 2016;12(1):141–8.
11. Beebe TJ, Jacobson RM, Jenkins SM, Lackore KA, Rutten LJF. Testing the impact of mixed-mode designs (mail and web) and multiple contact attempts within mode (mail or web) on clinician survey response. Health Serv Res. 2018;53:3070–83.
12. Palmen LN, Schrier JCM, Scholten R, Jansen JHW, Koëter S. Is it too early to move to full electronic PROM data collection? Foot Ankle Surg. 2016;22(1):46–9.
13. Feigelson HS, McMullen CK, Madrid S, Sterrett AT, Powers JD, Blum-Barnett E, et al. Optimizing patient-reported outcome and risk factor reporting from cancer survivors: a randomized trial of four different survey methods among colorectal cancer survivors. J Cancer Surviv. 2017;11(3):393–400.
14. Nota SPFT, Strooker JA, Ring D. Differences in response rates between mail, e-mail, and telephone follow-up in hand surgery research. Hand. 2014;9(4):504–10.
15. Eurostat. Individual internet use. https://ec.europa.eu/eurostat/data/database; 2019.
16. Soulier G, Van Leeuwen BM, Putter H, Jansen JC, Malessy MJA, Van Benthem PPG, et al. Quality of life in 807 patients with vestibular schwannoma: comparing treatment modalities. Otolaryngol Head Neck Surg. 2017;157(1):92–8.
17. Ware JE Jr, Sherbourne CD. The MOS 36-item short-form health survey (SF-36). I. Conceptual framework and item selection. Med Care. 1992;30(6):473–83.
18. Shaffer BT, Cohen MS, Bigelow DC, Ruckenstein MJ. Validation of a disease-specific quality-of-life instrument for acoustic neuroma. Laryngoscope. 2010;120(8):1646–54.
19. Jacobson GP, Newman CW. The development of the dizziness handicap inventory. Arch Otolaryngol Head Neck Surg. 1990;116(4):424–7.
20. Bouwmans C, Krol M, Severens H, Koopmanschap M, Brouwer W, Roijen LH-V. The iMTA productivity cost questionnaire. Value Health. 2015;18(6):753–8.
21. Stewart A, Ware J, Sherbourne C, Wells K. Psychological distress/well-being and cognitive functioning measures. In: Stewart A, Ware J, editors. Measuring functioning and well-being: the Medical Outcomes Study approach. Durham: Duke University Press; 1992. p. 102–42.
22. Matthews FE, Chatfield M, Freeman C, McCracken C, Brayne C, Cfas M. Attrition and bias in the MRC cognitive function and ageing study: an epidemiological investigation. BMC Public Health. 2004;4(1):12.
23. Statistics Netherlands. Internet; access and use. https://opendata.cbs.nl/statline/#/CBS/nl/dataset/83429NED/table?fromstatweb; 2019.
24. Van Loon AJM, Tijhuis M, Picavet HSJ, Surtees PG, Ormel J. Survey non-response in the Netherlands: effects on prevalence estimates and associations. Ann Epidemiol. 2003;13(2):105–10.
25. Polk A, Rasmussen JV, Brorson S, Olsen BS. Reliability of patient-reported functional outcome in a joint replacement registry. Acta Orthop. 2013;84(1):12–7.
26. Westenberg RF, Nierich J, Lans J, Garg R, Eberlin KR, Chen NC. What factors are associated with response rates for long-term follow-up questionnaire studies in hand surgery? Clin Orthop Relat Res. 2020;478(12):2889–98.
27. Edwards P. Increasing response rates to postal questionnaires: systematic review. BMJ. 2002;324(7347):1183.
28. Vangeest JB, Johnson TP, Welch VL. Methodologies for improving response rates in surveys of physicians. Eval Health Prof. 2007;30(4):303–21.
Funding
The appointment of the first author (ON) is funded by a strategic fund of the Leiden University Medical Center. The funding body had no role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript.
Ethics approval and consent to participate
Participants have provided written informed consent. Formal approval of the ethics committee was not required under Dutch law according to the Medical Ethics Committee of the Leiden University Medical Center. The committee did approve the data handling and privacy review of this study (N19.112).
Competing interests
The authors declare that they have no competing interests.
Cite this article
Neve, O.M., van Benthem, P.P.G., Stiggelbout, A.M. et al. Response rate of patient reported outcomes: the delivery method matters. BMC Med Res Methodol 21, 220 (2021). https://doi.org/10.1186/s12874-021-01419-2
Keywords
- Response rate
- Delivery method
- Patient-reported outcome
- Vestibular schwannoma