- Research article
- Open Access
Improving postal survey response using behavioural science: a nested randomised control trial
BMC Medical Research Methodology volume 21, Article number: 280 (2021)
Systematic reviews have identified effective strategies for increasing postal response rates to questionnaires; however, most studies have isolated single techniques, testing the effect of each one individually. Despite providing insight into explanatory mechanisms, this approach lacks ecological validity, given that multiple techniques are often combined in routine practice.
We used a two-armed parallel randomised controlled trial (n = 2702), nested within a cross-sectional health survey study, to evaluate whether using a pragmatic combination of behavioural science and evidence-based techniques (e.g., personalisation, social norms messaging) in a study invitation letter increased response to the survey, when compared with a standard invitation letter. Participants and outcome assessors were blinded to group assignment. We tested this in a sample of women testing positive for human papillomavirus (HPV) at cervical cancer screening in England.
Overall, 646 participants responded to the survey (response rate [RR] = 23.9%). Logistic regression revealed higher odds of response in the intervention arm (n = 357/1353, RR = 26.4%) compared with the control arm (n = 289/1349, RR = 21.4%), while adjusting for age, deprivation, clinical site, and clinical test result (aOR = 1.30, 95% CI: 1.09–1.55).
Applying easy-to-implement behavioural science and evidence-based methods to routine invitation letters improved postal response to a health-related survey, after adjusting for demographic characteristics. Our findings provide support for the pragmatic adoption of combined techniques in routine research to increase response to postal surveys.
ISRCTN, ISRCTN15113095. Registered 7 May 2019 – retrospectively registered.
One of the most common data collection methods used in health research is the provision of postal questionnaires, especially when seeking information from large geographically dispersed populations. Postal response rates are considered an important indicator of study quality as they can act as a metric of sample representativeness [1, 2]. Sufficiently high response rates help to reduce some forms of non-response bias, maximise sample size, and minimise research costs [2, 3]. However, adequate response rates are increasingly difficult to obtain, with declining rates of participation in health research observed over time, worldwide [4,5,6].
Systematic reviews have identified effective strategies to increase postal response rates in randomised controlled trials [1, 7,8,9,10,11]. Providing incentives (money, gifts, or prize draws), pre-notifying participants, incorporating university sponsorship, using personalised messages, and sending reminders or a second copy of a questionnaire to non-respondents, have all been shown to increase participant response [1, 8, 12,13,14]. The design of a questionnaire (content, length, format) can also be altered to achieve differential effects [1, 7, 9]. However, variable effects have been found for observational studies.
Dillman’s Tailored Design Method, established in the 1970s, has been one of the most common frameworks employed to design research surveys and optimise response rate [15, 16]. More recently, theoretically driven behavioural science techniques have been tested in empirical studies in an attempt to improve participant engagement and response [17,18,19,20,21]. Behavioural frameworks can act as tools for guiding and implementing applied techniques to inform content and design of written materials, such as letters and postal packaging. The ‘MINDSPACE’ Report, for example, contains a behavioural science checklist which outlines nine influences on behaviour which can be targeted in routine communications: (i) messenger (we are heavily influenced by who communicates information); (ii) incentives (responses to incentives are shaped by predictable mental shortcuts); (iii) norms (we are strongly influenced by what others do); (iv) default (we tend to follow pre-set options); (v) salience (our attention is drawn to what is novel and seems relevant); (vi) priming (we are influenced by sub-conscious cues); (vii) affect (emotional associations shape our actions); (viii) commitment (we seek to be consistent with public promises and reciprocate acts); and (ix) ego (we act in ways that make us feel better about ourselves). MINDSPACE and other behavioural frameworks are often implemented within research teams and government agencies [21,22,23]. However, there is a paucity of evidence relating to their efficacy for improving survey response in health research.
Furthermore, most studies aimed at modifying postal response rates have isolated single techniques, testing the effect of each one individually [1, 7,8,9,10,11]. Though these studies provide insight into specific explanatory mechanisms, they lack ecological validity when compared with routine practice, where multiple techniques are often combined. Also, more generally, the behavioural science literature suggests combining relevant behaviour change techniques to maximise effect sizes [24, 25].
The aim of this nested randomised controlled trial (RCT) was to evaluate whether using a pragmatic combination of behavioural science and evidence-based techniques in a study invitation letter increased response rate to a health-related survey, when compared with a standard invitation letter. We hypothesised that the intervention letter would increase participant return of the postal survey.
A two-armed parallel RCT was nested in a cross-sectional psychological survey study of women attending cervical cancer screening in England. The nested RCT aimed to test whether an invitation letter, informed by behavioural science and evidence-based techniques (intervention), increased participant response to a survey, when compared with a standard invitation letter (control).
Participants, recruitment, and trial setting
Women aged 24 to 66, who had tested HPV-positive with normal cytology at cervical cancer screening for the first, second, or third consecutive time, were recruited through two large National Health Service (NHS) clinical sites in England (NHS North London and NHS Greater Manchester). Participants who completed a survey and mailed it back to University College London (UCL) were classified as respondents, while those who did not return a mailed survey were classified as non-respondents. Recruitment occurred between 17.04.2019 and 24.01.2020.
Simple randomisation of participants was applied in a 1:1 ratio.
Trial arm allocation sequence was determined using a computer-generated random number table, which ensured concealment of the allocation sequence until the moment of trial arm assignment.
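The trial used the Research Randomizer tool to generate the sequence; purely as an illustration (the function name and seed below are hypothetical), simple 1:1 randomisation reduces to an independent, equal-probability draw for each participant:

```python
import random

def simple_randomise(n_participants, seed=None):
    """Simple (unrestricted) 1:1 randomisation: each participant is
    independently assigned to one of two arms with equal probability."""
    rng = random.Random(seed)  # seeding makes the sequence reproducible
    return [rng.choice(["intervention", "control"]) for _ in range(n_participants)]

allocation = simple_randomise(2702, seed=2019)
```

Because each draw is independent, simple randomisation yields arm sizes that are only approximately equal, consistent with the 1353 vs. 1349 split observed in the trial.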
Randomisation was applied by external researchers who were employed within each NHS trust to implement recruitment procedures. These external researchers organised the mailing of the surveys in each of the trial arms.
The researchers who implemented randomisation procedures were blinded to the study objectives. Although participants were exposed to the invitation letter they received, they were unaware that they were part of the nested trial. The data analyst was blinded to group allocation until after statistical analyses had been performed.
Research staff, who were external to the core team, assessed potential participants for eligibility and implemented the recruitment and randomisation procedures at the two recruitment sites. Eligible participants were allocated a unique study identifier by the external researchers, which was used to link pseudonymised survey and clinical data. Names and home addresses of eligible participants by group allocation were uploaded to a secure printing and mailing company (Docmail Ltd) who printed and mailed out the invitation packs (cover letter, information sheet, survey, and pre-paid return envelope). See Supplementary File 3 for the survey used. Potential participants had to return their completed survey to UCL using a pre-paid envelope. To maximise response rate, a reminder pack with the same documents (including the same cover letter) was mailed three weeks later. Some data were recorded directly from clinical records and transferred to UCL for all potential participants, including age, screening test result, NHS site, and Index of Multiple Deprivation score and Quintile (IMD; a multidimensional marker of area-level deprivation based on residential postcode, with quintiles based on national distributions).
Participants in the intervention and control groups received the same questionnaire pack and information sheet; however, the cover letter which enclosed these documents differed. The intervention letter (see Fig. 1) employed a combination of techniques expected to improve response rate based on systematic review evidence [8, 9] and the applied behavioural science literature (MINDSPACE). MINDSPACE was chosen as the behavioural science framework to guide intervention letter design because it is commonly used within UK research and policy settings and is comprehensive, bringing together several behavioural science theories in an applied format. Table 1 provides a summary of the techniques used in the intervention letter.
In contrast, the control letter (see Fig. 2) was chosen to replicate the standard wording suggested by the Health Research Authority (HRA), which is the regulatory body for research in NHS England.
The content used in both the intervention and control letters was drafted by a behavioural scientist (EM) and then discussed and iterated as part of a stakeholder engagement panel until there was consensus. The stakeholder panel consisted of eight individuals from a range of backgrounds including academia, policy, clinical practice, third sector, and patient and public representatives. The design of the other study materials (information sheet, survey) was pragmatically informed by standard practice recommended for NHS clinical studies, in line with our HRA ethical approvals and recommendations.
The outcome was return of the survey (yes/no) within 3 months of the date of estimated screening test result delivery, which was the timeframe specified in the study protocol for the primary study.
Demographic and clinical covariates
Covariates included demographic and clinical variables, which were prespecified due to their known or anticipated relationship with response rate [29,30,31,32]. Continuous covariate variables included age (years) and IMD Score (multidimensional marker of area-level deprivation based on residential postcode). Categorical covariates included NHS site, IMD Quintile, and screening test result (first HPV+/normal test result; or second or third consecutive HPV+/normal result at 12-month follow-up screen). The covariates were available for all participants (responders and non-responders) through access to clinical health records.
As this study was a nested trial, sample size estimates were based on the primary cross-sectional study, where the total sample size approached was 2702 women. Assuming participants were randomised equally (i.e., 1351 participants in each trial arm) and a baseline response rate of 21% (based on similar research), the sample size for this study provided 80% power, at a 5% two-sided significance level, to detect a between-group difference in response of at least 4.5%.
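These assumptions (control response 21%, detectable difference 4.5 percentage points, 80% power, 5% two-sided alpha) can be checked against the standard normal-approximation formula for comparing two independent proportions. The sketch below is illustrative only (the function name is assumed); different formula variants give per-arm figures near or slightly above the 1351 participants available per arm:

```python
import math
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm to detect a difference
    between two independent proportions p1 and p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power (1 - beta)
    p_bar = (p1 + p2) / 2                      # pooled proportion under H0
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

n = n_per_arm(0.21, 0.255)  # ≈1383 per arm with this textbook variant
```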
Statistical analyses were performed using Stata v15 and a p-value < 0.05 was considered statistically significant. Demographic characteristics were assessed descriptively and reported for the whole sample, and for responders and non-responders.
In the univariate analysis, logistic regression was used to ascertain whether survey response (yes/no) differed between the intervention letter vs. control letter. Logistic regression also tested whether survey response (yes/no) differed between clinical test result (1st vs. 2nd or 3rd consecutive HPV-positive with normal cytology result) and NHS site (North West London vs. Greater Manchester). Logistic regression was likewise used to assess the extent to which survey response was associated with age and IMD score (both continuous).
Multivariate logistic regression was performed to assess whether survey response (yes/no) differed between the intervention vs. control letter, while adjusting for age, IMD score, NHS site, and test result.
Data completeness was > 95% for all variables except IMD score and IMD Quintile (94%). We used multiple imputation, with five imputations, to account for the missing IMD data; the imputation model included the primary outcome and socio-demographic factors, which we assumed included all predictors of missingness. For variables with > 95% completeness, missing values were not imputed and were treated as missing in the analysis. The final models were derived by fitting a regression model including all confounders, and estimates were combined using Rubin’s rules. Sensitivity analysis was conducted comparing the complete-case dataset with the multiply imputed dataset, to check for differences in the results; there were no substantive differences. Results are presented using imputed data.
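Rubin’s rules combine a quantity estimated in each imputed dataset by averaging the point estimates and adding the between-imputation spread (with a 1/m correction) to the average within-imputation variance. A minimal sketch, with illustrative numbers that are not study data:

```python
from statistics import mean, variance

def pool_rubin(estimates, variances):
    """Pool m per-imputation estimates and their variances via Rubin's rules:
    the pooled estimate is the mean; total variance = within-imputation
    variance + (1 + 1/m) * between-imputation variance."""
    m = len(estimates)
    q_bar = mean(estimates)    # pooled point estimate
    u_bar = mean(variances)    # average within-imputation variance
    b = variance(estimates)    # between-imputation (sample) variance
    total_var = u_bar + (1 + 1 / m) * b
    return q_bar, total_var

# Illustrative log-odds estimates from m = 5 imputations (not study data)
q, v = pool_rubin([0.26, 0.27, 0.25, 0.26, 0.27], [0.008] * 5)
```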
Ethical approvals and trial registration
HRA approval was granted on 09.01.2019 (Research Ethics Committee reference: 18/EM/0227 and Confidentiality Advisory Group reference: 18/CAG/0118). Cervical Screening Research Advisory Committee approval was granted on 15.03.2019 (ODR1819_005). Further details can be found on the ISRCTN clinical registration site (https://doi.org/10.1186/ISRCTN15113095).
In total, 2702 individuals were invited to take part and mailed a survey; 1353 were randomised to the intervention and 1349 to the control arm. The mean age of the population was 37.5 years and the majority lived in the two most deprived IMD Quintiles in England (n = 1431, 56.2%; Quintiles 1 and 2). Around three quarters of participants were recruited through NHS Greater Manchester (n = 2090, 77.4%) and most had received their first HPV-positive with normal cytology screening result (n = 2202, 81.5%). Baseline characteristics were similar between the two randomised groups, with slight differences observed for some IMD quintiles (see Table 2).
Overall, 646 participants returned the completed survey, generating a response rate of 23.9% (intervention: n = 357, 26.4%; control: n = 289, 21.4%). Figure 3 displays a flow diagram of the recruitment process.
Table 3 presents demographic characteristics for responders (n = 646) and non-responders (n = 2056). Supplementary File 4 presents a table of demographic characteristics stratified by intervention vs. control group for responders.
Response in the intervention vs. control group (n = 2702)
Univariate analysis revealed higher odds of survey response in the intervention group (21.4 and 26.4% for the control and intervention group, respectively; OR 1.32, 95% CI: 1.10–1.57); those with lower IMD scores (less deprived; OR 0.99, CI: 0.99–1.00); and those with a 2nd or 3rd consecutive test result (22.9 and 28.2% for 1st and 2nd or 3rd result, respectively; OR 1.32, CI: 1.06–1.64). Participants who were older displayed higher odds of response (OR 1.01, CI: 1.00–1.02).
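The unadjusted intervention effect can be recovered from the reported counts. The sketch below (function name assumed) computes the crude odds ratio with a Wald confidence interval and closely matches the reported univariate OR of 1.32 (95% CI: 1.10–1.57); small discrepancies reflect rounding and model-based estimation:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table (a/b = responders/non-responders
    in the intervention arm; c/d = same in the control arm), with a
    Wald 95% confidence interval computed on the log-odds scale."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Intervention: 357 responders of 1353; control: 289 of 1349
or_, lo, hi = odds_ratio_ci(357, 1353 - 357, 289, 1349 - 289)
```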
In the fully adjusted analyses, results were similar to the univariate analyses. We found significantly increased odds of returning a survey in the intervention group when compared with the control (aOR 1.30, CI: 1.09–1.55), in those with lower IMD scores (less deprived; aOR 0.99, CI: 0.99–1.00), and those who had received a 2nd or 3rd consecutive test result (aOR 1.29, CI: 1.04–1.61).
See Table 4 for an overview of the results.
Almost all postal questionnaire studies incorporate an invitation or cover letter. We found that applying behavioural science and evidence-based methods to routine invitation letters improved postal response to a health-related survey, after adjusting for demographic and clinical characteristics. As survey participation rates continue to decline worldwide [4,5,6], our findings provide support for the pragmatic and cost-effective adoption of combined techniques in routine research to increase postal response rates.
Consistent with previous systematic reviews evaluating the application of individual techniques, we found that combining several techniques positively influenced postal response rate [1, 7,8,9,10]. The magnitude of effect observed in our study (adjusted odds ratio of 1.30 in favour of the intervention) is larger than that found for some isolated techniques which similarly carry low or minimal financial costs, such as adopting personalisation or use of non-monetary incentives (odds ratios of 1.16 and 1.13, respectively [2, 9]). However, this is not the case when compared with all cost-effective isolated techniques, such as mentioning an obligation to respond or the use of university sponsorship, which demonstrate similar or slightly larger effects (odds ratios of 1.61 and 1.32, respectively [8, 9]). Furthermore, our approach appeared to yield a lower effect size than certain more financially expensive or resource-intensive strategies, such as providing monetary incentives, use of recorded mailed delivery, and pre-notifying participants (odds ratios of 1.87–1.99, 1.76–2.04, and 1.45–1.50, respectively [8, 9]).
Ultimately, however, findings which are based on isolated techniques cannot act as a direct comparator to our study. This is partly due to differences in the content used in the control arms of studies and variations in adjustments for confounders and contexts. For example, in our study, the control and intervention letters both utilised some techniques which have been shown to increase participant response, such as providing assurance of confidentiality and a conditional incentive of financial payment for participation in an interview [8, 9]. Similarly, we provided a second copy of our questionnaire at follow-up which has been shown to improve response. Utilising these evidence-based techniques in our control letter mirrors standard research practice; however, this differs from several previous studies which avoid using techniques in control conditions or do not report control conditions. It is therefore possible that the effect sizes observed in our study could be subject to ceiling effects or reflective of additive effects. Conversely, our study questionnaire (sent to all participants) asked about a sensitive health topic and our study information sheet explicitly stated that participants could opt out, which have both been found to reduce the odds of response [8, 9]. Hence, overall, these heterogeneities in methodology and study contexts prohibit comparative conclusions relative to the previous literature.
Using a combination of techniques in our study also introduces the possibility of interaction and/or moderation effects between individual techniques, which we could not measure or test. Two or more techniques implemented in tandem may have led to differential impacts on response rate, when compared with the same techniques used in isolation. Hence, a core limitation of our pragmatic approach is that we are unable to determine optimal combinations of techniques and, similarly, whether certain combinations may have reduced response or counteracted positive effects. Further investigation is needed to test the magnitude of effects using different combinations of techniques (e.g., through adopting a factorial RCT design) and to assess the impact of potential interactions.
Response rate is known to be influenced by sociodemographic factors such as age, sex, educational attainment, ethnicity, marital status, and deprivation [5, 31, 32, 35, 36]. Living in a less deprived area (lower index of multiple deprivation score) yielded a small statistically significant effect size in favour of returning a survey in our study (adjusted odds ratio of 0.99), but we observed no effect for age in our adjusted analyses. We did not test for interaction effects between area-level deprivation and response to the survey, as this was not part of our planned analysis and due to the likelihood of issues with statistical power. Also, we did not have data on other important sociodemographic variables like ethnicity and education. Hence, even though our intervention increased survey response overall whilst adjusting for some demographic factors, we cannot rule out that bias remains for particular sociodemographic groups.
Improving response rates in survey-based studies remains a priority for health and epidemiological research. It is hoped that the gains yielded from better sample representativeness and lower non-response bias should ultimately translate into improved public and patient outcomes. Implementation of behavioural science techniques in routine research practice may offer a low-cost solution for generating higher response rates and thus enhancing quality of care.
Our study carries several limitations. Although our intervention was found to increase response to a health survey, the overall response rate remained low (23.9%). This may reflect selection bias in our sample, which could lead to an over- or underestimation of the intervention effect when compared with the general population. Furthermore, our target population only included women attending cervical screening, limiting the applicability of our findings to more general health contexts and to men. Some research has indicated that women are more likely to respond to research studies than men [31, 36]; therefore, it is possible that there may also be moderation effects for gender in interventions targeting response rates. We also only recruited through two clinical sites in England which, although covering large geographical regions, may affect the generalisability of our findings, especially when compared with other cultural or sociodemographic contexts. Lastly, as this was a nested trial within a cross-sectional survey study, our target sample size was based on the primary cross-sectional study; the sample size calculation reported in this RCT was post-hoc. Although we were appropriately powered for the main analysis, we were unable to test for potentially relevant interaction or moderation effects due to the likelihood of being underpowered.
Using a combination of easy-to-implement behavioural science and evidence-based techniques in a study invitation letter increased participant response to a health survey. The major benefit of this pragmatic approach was the absence of substantive additional research costs, like providing financial incentives or additional follow-up mailing strategies. Further research is needed to investigate the optimal combinations of techniques for increasing postal response.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Abbreviations
- CONSORT: Consolidated Standards of Reporting Trials
- IMD: Index of Multiple Deprivation
- ISRCTN: International Standard Randomised Controlled Trial Number
- MINDSPACE: Messenger; Incentive; Norms; Default; Salience; Priming; Affect; Commitment; Ego
- NHS: National Health Service
- OR/aOR: Odds Ratio/Adjusted Odds Ratio
- TIDieR: Template for Intervention Description and Replication
- UCL: University College London
Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, et al. Increasing response rates to postal questionnaires: systematic review. BMJ. 2002;324(7347):1183.
Stoop I, Billiet J, Koch A, Fitzgerald R. Improving survey response: lessons learned from the European social survey. Revista Espanola de Investigaciones Sociologicas. 2012;1:166–70.
McColl E, Jacoby A, Thomas L, Soutter J, Bamford C, Steen N, et al. Design and use of questionnaires: a review of best practice applicable to surveys of health service staff and patients. Health Technol Assess. 2001;5(31):1–256.
Galea S, Tracy M. Participation rates in epidemiologic studies. Ann Epidemiol. 2007;17(9):643–53.
Mölenberg FJM, de Vries C, Burdorf A, van Lenthe FJ. A framework for exploring non-response patterns over time in health surveys. BMC Med Res Methodol. 2021;21(1):37.
Morton SM, Bandara DK, Robinson EM, Carr PE. In the 21st century, what is an acceptable response rate? Aust N Z J Public Health. 2012;36(2):106–8.
Blumenberg C, Barros AJD. Response rate differences between web and alternative data collection methods for public health research: a systematic review of the literature. Int J Public Health. 2018;63(6):765–73.
Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, et al. Methods to increase response rates to postal questionnaires. Cochrane Database Syst Rev. 2007;(2):MR000008.
Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, et al. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009;(3):MR000008.
Nakash RA, Hutton JL, Jørstad-Stein EC, Gates S, Lamb SE. Maximising response to postal questionnaires--a systematic review of randomised trials in health research. BMC Med Res Methodol. 2006;6:5.
van Gelder M, Vlenterie R, IntHout J, Engelen L, Vrieling A, van de Belt TH. Most response-inducing strategies do not increase participation in observational studies: a systematic review and meta-analysis. J Clin Epidemiol. 2018;99:1–13.
Cunningham-Burley R, Roche J, Fairhurst C, Cockayne S, Hewitt C, Iles-Smith H, et al. Enclosing a pen to improve response rate to postal questionnaire: an embedded randomised controlled trial. F1000Res. 2020;9:577.
Barra M, Simonsen TB, Dahl FA. Pre-contact by telephone increases response rates to postal questionnaires in a population of stroke patients: an open ended randomized controlled trial. BMC Health Serv Res. 2016;16(1):506.
Juszczak E, Hewer O, Partlett C, Hurd M, Bari V, Bowler U, et al. Evaluation of the effectiveness of an incentive strategy on the questionnaire response rate in parents of premature babies: a randomised controlled study within a trial (SWAT) nested within SIFT. Trials. 2021;22(1):554.
Dillman D. Mail and telephone surveys: the Total design method. New York: John Wiley; 1978.
Dillman D. Mail and internet surveys: the tailored design method: Wiley; 2000.
Gold N, Durlik C, Sanders JG, Thompson K, Chadborn T. Applying behavioural science to increase uptake of the NHS Health check: a randomised controlled trial of gain- and loss-framed messaging in the national patient information leaflet. BMC Public Health. 2019;19(1):1519.
Yokum D, Lauffenburger JC, Ghazinouri R, Choudhry NK. Letters designed with behavioural science increase influenza vaccination in Medicare beneficiaries. Nat Hum Behav. 2018;2(10):743–9.
Sweeney M, John P, Sanders M, Wright H, Makinson L. Applying behavioural science to the annual electoral canvass in England: evidence from a large-scale randomised controlled trial. Elect Stud. 2021;70:102277.
Sallis A, Bunten A, Bonus A, James A, Chadborn T, Berry D. The effectiveness of an enhanced invitation letter on uptake of National Health Service Health Checks in primary care: a pragmatic quasi-randomised controlled trial. BMC Fam Pract. 2016;17(1):35.
Goulao B, Duncan A, Floate R, Clarkson J, Ramsay C. Three behavior change theory–informed randomized studies within a trial to improve response rates to trial postal questionnaires. J Clin Epidemiol. 2020;122:35–41.
Dolan P, Hallsworth M, Halpern D, King D, Metcalfe R, Vlaev I. Influencing behaviour: the mindspace way. J Econ Psychol. 2012;33(1):264–77.
Michie S, West R. Behaviour change theory and evidence: a presentation to government. Health Psychol Rev. 2013;7(1):1–22.
Kok G, Gottlieb NH, Peters G-JY, Mullen PD, Parcel GS, Ruiter RAC, et al. A taxonomy of behaviour change methods: an intervention mapping approach. Health Psychol Rev. 2016;10(3):297–312.
Michie S, Johnston M. Theories and techniques of behaviour change: developing a cumulative science of behaviour change. Health Psychol Rev. 2012;6(1):1–6.
Social Psychology Network. Research Randomizer. Available from: https://www.randomizer.org/.
UK Government. National Statistics: English indices of deprivation 2019. 2019. Available from: https://www.gov.uk/government/statistics/english-indices-of-deprivation-2019.
NHS Health Research Authority. Informing participants and seeking consent. 2019. Available from: https://www.hra.nhs.uk/planning-and-improving-research/best-practice/informing-participants-and-seeking-consent/.
McBride E, Marlow LAV, Forster AS, Ridout D, Kitchener H, Patnick J, et al. Anxiety and distress following receipt of results from routine HPV primary testing in cervical screening: the psychological impact of primary screening (PIPS) study. Int J Cancer. 2020;146(8):2113–21.
McBride E, Marlow LAV, Chilcot J, Moss-Morris R, Waller J. Distinct illness representation profiles are associated with anxiety in women testing positive for human papillomavirus. Ann Behav Med. 2021.
Lindén-Boström M, Persson C. A selective follow-up study on a public health survey. Eur J Pub Health. 2013;23(1):152–7.
Tolonen H, Helakorpi S, Talala K, Helasoja V, Martelin T, Prättälä R. 25-year trends and socio-demographic differences in response rates: Finnish adult health behaviour survey. Eur J Epidemiol. 2006;21(6):409–15.
Brat R. Inference for Proportions: Comparing Two Independent Samples. Available from: https://www.stat.ubc.ca/~rollin/stats/ssize/b2.html.
Rubin DB. Multiple imputation for nonresponse in surveys. Canada: John Wiley & Sons Inc; 1987.
Søgaard AJ, Selmer R, Bjertness E, Thelle D. The Oslo Health study: the impact of self-selection in a large, population-based survey. Int J Equity Health. 2004;3(1):3.
Martikainen P, Laaksonen M, Piha K, Lallukka T. Does survey non-response bias the association between occupational social class and health? Scand J Public Health. 2007;35(2):212–5.
Booker QS, Austin JD, Balasubramanian BA. Survey strategies to increase participant response rates in primary care research studies. Fam Pract. 2021;38(5):699–702.
We would like to thank the NHS clinical managers and staff at the clinical sites who helped us gain HRA approvals and recruit participants. Thank you to Ruth Stubbs, Louise Cadman, Imogen Pinnell, Rona Moss-Morris, and the study Patient and Public Representatives for their feedback on the study materials. Thanks to Lauren Rockliffe and Hanna Skrobanski who helped with participant recruitment and implemented randomisation procedures. Finally, thank you to the individuals who kindly gave up their time to participate.
This study is funded by the National Institute for Health Research (NIHR) as part of a fellowship awarded to Emily McBride (DRF-2017-10-105); the views expressed in this paper are not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care. The NIHR evaluated and funded the research proposed in the fellowship application but had no other role in the study design, management, or evaluation of outputs.
Jo Waller and Laura Marlow are funded by Cancer Research UK (C7492/A17219). Robert Kerrison is also supported by a Cancer Research UK Population Research Fellowship (C68512/A28209). Cancer Research UK did not fund or have any role in this study.
Ethics approval and consent to participate
Health Research Authority approval was granted on 09.01.2019 (Research Ethics Committee reference: 18/EM/0227 and Confidentiality Advisory Group reference: 18/CAG/0118). Cervical Screening Research Advisory Committee approval was granted on 15.03.2019 (ODR1819_005). In line with our ethical approvals, including Section 251 of the NHS Act 2006 approval (enabling the common law duty of confidentiality to be temporarily lifted), consent was implied through completion of a survey mailed to UCL (no separate written or verbal consent was taken).
Consent for publication
Not applicable.

Competing interests
No conflicts of interest to declare.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
CONSORT 2010 checklist of information to include when reporting a randomised trial*.
The TIDieR (Template for Intervention Description and Replication) Checklist*.
Survey used in the study.
Demographic characteristics for responders by intervention vs. control group (N = 646).
Cite this article
McBride, E., Mase, H., Kerrison, R.S. et al. Improving postal survey response using behavioural science: a nested randomised control trial. BMC Med Res Methodol 21, 280 (2021). https://doi.org/10.1186/s12874-021-01476-7
Keywords
- Behavioural science
- Postal response