Open Access
Incentive delivery timing and follow-up survey completion in a prospective cohort study of injured children: a randomized experiment comparing prepaid and postpaid incentives
BMC Medical Research Methodology volume 21, Article number: 233 (2021)
Retaining participants over time is a frequent challenge in research studies evaluating long-term health outcomes. This study’s objective was to compare the impact of prepaid and postpaid incentives on response to a six-month follow-up survey.
We conducted an experiment to compare response between participants randomized to receive either prepaid or postpaid cash card incentives within a multisite study of children under 15 years of age who were hospitalized for a serious, severe, or critical injury. Participants were parents or guardians of enrolled children. The primary outcome was survey response. We also examined whether demographic characteristics were associated with response and whether incentive timing influenced the relationship between demographic characteristics and response. We evaluated whether incentive timing was associated with the number of calls needed for contact.
The study enrolled 427 children, and parents of 420 children were included in this analysis. Follow-up survey response did not differ by assigned treatment arm: 68.1% of parents responded with the prepaid incentive and 66.7% with the postpaid incentive. Likelihood of response varied by demographics. Spanish-speaking parents and parents with lower income and lower educational attainment were less likely to respond. Parents of Hispanic/Latino children and children with Medicaid insurance were also less likely to respond. We found no relationship between the assigned incentive treatment and the demographics of respondents compared to non-respondents.
Prepaid and postpaid incentives can obtain similar participation in longitudinal pediatric critical care outcomes research. Incentives alone do not ensure retention of all demographic subgroups. Strategies for improving representation of hard-to-reach populations are needed to address health disparities and ensure the generalizability of study results.
Background
Prospective studies evaluating health outcomes over time depend on successful completion of follow-up assessments by enrolled participants. Longitudinal studies that rely on recontacting participants to collect additional data require more from participants than one-time cross-sectional studies. Repeated outcome measurement places an additional burden on participants and may contribute to higher levels of non-response [1, 2]. Minimizing non-response at follow-up is especially critical for ensuring continued representation of enrolled participants from hard-to-reach populations. Minimizing participant attrition in longitudinal studies preserves statistical power, demographic representation, and study validity and integrity [3,4,5,6,7,8].
Incentives are frequently used to encourage participants to complete follow-up assessments. Previous studies have evaluated how strategies such as incentives and contact methods affect retention in health and epidemiological research, including cohort studies and randomized trials [9,10,11,12]. Findings from this research suggest that monetary incentives are effective, but nonmonetary incentives are not. Additionally, increasing the dollar amount of incentives has improved retention. Systematic reviews have shown that monetary incentives improve retention in randomized trials [9, 13] and prospective cohort studies.
Although types of incentive and incentive amounts have been studied, minimal attention has been given to incentive timing, i.e., whether incentives are postpaid or prepaid, in health outcomes research. Prepaid incentives are commonly used by social scientists and public opinion researchers. Prepaid incentives resulted in higher response rates than the promise of a postpaid incentive in cross-sectional [14,15,16,17] and longitudinal [18,19,20] survey research. It is less clear whether prepaid incentives are similarly effective in health outcomes studies. In prospective clinical research, postpaid incentives are more commonly used. One study that compared the effect of prepaid and postpaid incentives on retention in a randomized trial found inconsistent effects. Few investigators have examined whether prepaid incentives are more effective at retaining cohort participants for follow-up data collection. The existing literature offers no conclusive guidance on whether prepaid incentives improve retention in health outcomes research.
Survey researchers have also examined whether incentive timing influences the demographic composition of respondents. In some cases, prepaid incentives improved demographic representation in cross-sectional surveys [22, 23]. In other instances, prepaid incentives skewed the representativeness of participants [24, 25]. Demographic characteristics such as lower socioeconomic status have been associated with higher attrition in prior cohort studies. These studies support the need to evaluate the impact of incentive timing on the demographics of retained participants in prospective health research. Prepaid incentives may also encourage faster response than postpaid incentives, which can shorten the data collection period or reduce the number of contact attempts [27, 28]. Outcomes such as response speed have received less attention than response rates. Factors affecting response speed in longitudinal health outcomes research have not been examined.
The purpose of this study was to determine whether prepaid incentives result in higher retention than postpaid incentives in a longitudinal study of injured children. We conducted a randomized experiment to evaluate the timing of incentive delivery on parents’ response to a six-month follow-up survey. We hypothesized that a prepaid incentive would result in a higher survey response rate compared to the promise of a postpaid incentive. A secondary aim examined whether a prepaid incentive would reduce the required number of contact attempts needed to reach participants. We also assessed whether parent and child demographic characteristics were associated with retention and evaluated whether prepaid incentives aid in the retention of a demographically representative sample.
Materials and methods
Study design and participants
We conducted a parallel, 1:1 randomized trial (“experiment”) nested within a prospective cohort study (“cohort study”) to assess the effect of incentive timing on participant response to a follow-up survey. This experiment was embedded within the “Assessment of Functional Outcomes and Health-Related Quality of Life after Pediatric Trauma Study.” The aim of the cohort study was to identify factors associated with injured children’s functional status at hospital discharge and the relationship of functional status at discharge with six-month functional status and health-related quality of life. Functional status, the ability to perform activities of daily living, was measured across six domains: mental status, sensory, communication, motor, feeding, and respiratory function. The cohort study was conducted at seven sites in the United States from March 2018 to February 2020. The Institutional Review Board at the University of Utah approved this study through a central mechanism (Approval #00105435). Additional details about the children’s injuries and the survey measures used in this study have been previously reported.
Children under 15 years of age who were treated for a serious, severe, or critical injury to one or more major body regions (head, thorax, abdomen, spine, or extremity) were eligible for enrollment in the cohort study. Patients with major burn injuries were excluded, as were children whose parents or guardians (hereafter referred to as parents) did not speak English or Spanish. Eligible children were enrolled at seven hospital sites, all level 1 pediatric trauma centers within the National Institutes of Health-funded Collaborative Pediatric Critical Care Research Network (CPCCRN). The participant of interest for this experimental study was the parent who provided consent for the child’s participation. Parents received all follow-up communications, were asked to complete the survey, and were assigned to an experimental arm.
Recruitment, follow-up protocol, and randomization
At each hospital (study site), research coordinators reviewed the daily census to identify eligible children. The cohort study sampling strategy was designed to promote equal enrollment of patients with isolated injuries in each body region. We set a goal to enroll 50 patients per study site per year, with 70% comprised of children with one injured body region and 30% children with multiple injured body regions. Every three months, we adjusted enrollment across sites to maintain balanced enrollment in each category. The original goal was to enroll up to 840 patients into the cohort study. Statistical power was calculated to detect a difference in the proportion of parents who completed the survey between experimental arms. We initially planned to perform interim monitoring of experimental results after 210, 420, and 630 participants had completed follow-up. If interim analyses indicated one incentive type to be superior to the other, we planned to stop randomization and proceed with the superior method for subsequently enrolled participants. The cohort study ended early due to funding after enrolling 427 children.
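A power calculation for a difference in completion proportions can be sketched with the standard normal-approximation formula for two proportions. The planning values below (65% vs. 75% response) are hypothetical illustrations, since the protocol's assumed effect size is not reported here.

```python
import math

def n_per_arm(p1, p2):
    """Approximate per-arm sample size for a two-sided test of two
    proportions at alpha = 0.05 with 80% power (normal approximation)."""
    z_alpha = 1.959964  # standard normal quantile for 1 - 0.05/2
    z_beta = 0.841621   # standard normal quantile for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical planning values: detect an increase from 65% to 75% response
print(n_per_arm(0.65, 0.75))  # 329 per arm
```

Under these illustrative inputs, the planned interim looks at 210, 420, and 630 completed follow-ups would bracket the required sample size.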
Research coordinators obtained written informed consent and collected baseline data using medical records and standardized questionnaires administered to parents at discharge. Six months after hospital discharge, the parent who signed the consent form was asked to complete a telephone survey about the child’s current functional status and health-related quality of life. The Pediatrics Clinical Trials Office at the University of Utah made all contacts associated with the follow-up survey. These contacts consisted of a reminder letter at three months, a second reminder letter one week before the telephone survey, and a text message reminder one day before the telephone survey. To collect the survey data, at least three telephone call attempts were made on different days and times of day according to preferences parents specified at enrollment. If these attempts were unsuccessful, the study team attempted to reach a designated alternate contact to confirm the parent’s availability and contact information. If these contacts were also unsuccessful, the parent was emailed a link to an abbreviated web version of the survey two weeks after the last call attempt. If the parent did not respond to the web survey within another two weeks, medical records were reviewed to determine if six-month outcomes could be assessed from this source.
The contact protocol and materials for both experimental arms were the same except for the six-month reminder letter, which contained the intervention. This letter was either accompanied by a cash card (prepaid incentive arm) or informed the parent that a cash card would be sent after survey completion (postpaid incentive arm). After enrollment, parents were randomly assigned to one of the two incentive arms. We stratified incentive assignments within each study site and by injury type using a pre-generated randomization sequence that was created by the study biostatistician using statistical software. The sequence was concealed from all study staff except for the central research coordinator overseeing incentive delivery. The randomization sequence was reset daily by IT staff and parents were automatically assigned to a study arm as they enrolled in the study using the REDCap (Research Electronic Data Capture) platform . The study site that enrolled the participant did not know which incentive arm was assigned. Because they participated in the contact protocols and delivery of the incentives, the interviewers who administered the surveys were aware of parents’ incentive assignments.
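One common way to implement the stratified 1:1 assignment described above is permuted-block randomization within each site-by-injury-type stratum. The sketch below is illustrative only: the paper states that the biostatistician pre-generated a stratified sequence, but the block size, seed, and algorithm here are assumptions.

```python
import random

def stratified_sequence(strata, n_blocks, block_size=4, seed=2018):
    """Pre-generate a 1:1 allocation sequence per stratum using randomly
    permuted blocks, keeping arms balanced as participants enroll."""
    rng = random.Random(seed)  # fixed seed so the sequence is reproducible
    sequences = {}
    for stratum in strata:
        seq = []
        for _ in range(n_blocks):
            block = ["prepaid", "postpaid"] * (block_size // 2)
            rng.shuffle(block)  # randomize arm order within each block
            seq.extend(block)
        sequences[stratum] = seq
    return sequences

# Seven sites crossed with single- vs. multiple-region injury strata
strata = [(site, injury) for site in range(1, 8)
          for injury in ("single", "multiple")]
seqs = stratified_sequence(strata, n_blocks=10)
```

Each newly enrolled parent would then take the next unused assignment from their stratum's sequence, mirroring the automatic allocation the study implemented in REDCap.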
Intervention, outcomes, and additional variables
The experimental intervention was incentive timing, categorized as a prepaid or postpaid incentive. Parents either (1) received a US$50 cash card in advance of the follow-up survey or (2) were informed that they would receive the cash card after completing the survey. Similar to a debit card, the cash card could be used at any retailer.
The primary outcome of this experiment was six-month survey completion, categorized as completed or not completed. A telephone survey response was classified as complete if sufficient information was obtained for scoring at least four of the five instruments included in the survey. The web survey was considered complete if it contained enough information for scoring three of the four instruments. A survey was classified as not completed if the parent did not respond to any of the survey requests or did not complete enough items to reach the completion thresholds. The secondary outcome was the number of call attempts needed to reach participants, as recorded by the interviewers making the telephone calls.
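The completion thresholds above can be stated as a simple rule. The function below is an illustrative paraphrase, not code from the study.

```python
def classify_completion(mode: str, instruments_scorable: int) -> str:
    """Apply the completion rule: a telephone survey needs at least 4 of 5
    instruments scorable; the abbreviated web survey needs at least 3 of 4."""
    threshold = {"telephone": 4, "web": 3}[mode]
    return "completed" if instruments_scorable >= threshold else "not completed"

print(classify_completion("telephone", 4))  # completed
print(classify_completion("web", 2))        # not completed
```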
Another secondary aim was to assess whether demographic characteristics were associated with the primary outcome of survey completion. Available demographic variables included parental educational attainment and the parent’s preferred language (English or Spanish), household income and household size, and the race, ethnicity, age, sex, and insurance status of the child. These variables were obtained from a baseline survey.
Statistical analysis
We calculated descriptive statistics for patient and parent demographics by each experimental arm and within the entire sample. Categorical measures were summarized with counts and percentages. Continuous measures were summarized using medians and the 25th and 75th percentile values to account for non-normal distributions. We calculated the proportion of parents who completed the survey by experimental arm. To test for differences in survey completion by experimental arm, we performed logistic regression models predicting survey completion, controlling for incentive assignment and study site. To test for demographic differences in survey completion, we performed logistic regression models predicting survey completion, controlling for the demographic variable and study site. In a supplemental analysis, we assessed the effect of the interaction of each demographic variable with incentive type on likelihood of survey response. Significance was defined at p<0.05. Analyses were performed using SAS version 9.4 (SAS Institute, Cary, NC).
Results
Across all study sites, 835 children were assessed for eligibility in the cohort study, 654 met inclusion criteria, 493 were approached for consent, and 428 consented to participate (see Flow Diagram, Additional file 1). One patient was excluded due to the absence of a qualifying injury, resulting in a final sample of 427 children and their consenting parents. To evaluate the effect of incentive timing, we limited the analyses to the 420 parents who were randomly assigned to an experimental arm. The 420 randomized parents were analyzed according to their originally assigned treatment arm (prepaid incentive n=204, postpaid incentive n=216).
Most parents reported a household size of three or four individuals (54.5%; Table 1). The largest share of parents had a high school education or less (35.7%), and 15.7% reported an annual household income of less than $15,000. Only 3.6% of parents were surveyed in Spanish. The median age of the injured children was 7.2 years; 36.9% were female, 11.2% were Hispanic, and 64.8% were white. A summary of the children’s injury characteristics is included in Additional file 2.
We obtained survey responses from 67.4% (283/420) of the parents. Survey completion did not differ based on incentive timing: the response rate was 68.1% with the prepaid incentive and 66.7% with the postpaid incentive (p=0.61, Table 2). A median of two telephone calls was needed to reach parents. The number of telephone calls needed to reach parents also did not differ based on when the incentive was provided (p=0.22). Regardless of the incentive offered, the largest share of parents was reached on the first call (39.1% for the prepaid incentive, 49.6% for the postpaid incentive).
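As a consistency check on these figures, the arm-level response counts can be back-calculated from the reported denominators and percentages. The counts of 139 and 144 are inferred rather than stated in the text, and the odds ratio below is unadjusted (the reported p-value came from a model that also controlled for study site).

```python
prepaid_n, postpaid_n = 204, 216
prepaid_resp, postpaid_resp = 139, 144  # inferred from 68.1% and 66.7%

# The inferred counts reproduce the overall response of 283/420 (67.4%)
assert prepaid_resp + postpaid_resp == 283

print(round(100 * prepaid_resp / prepaid_n, 1))    # 68.1
print(round(100 * postpaid_resp / postpaid_n, 1))  # 66.7

# Unadjusted odds ratio for response, prepaid vs. postpaid
odds_ratio = (prepaid_resp / (prepaid_n - prepaid_resp)) / (
    postpaid_resp / (postpaid_n - postpaid_resp))
print(round(odds_ratio, 2))  # 1.07
```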
Survey response varied by demographic characteristics (Table 3). Hispanic or Latino children comprised 6.4% of respondents and 21.2% of non-respondents (p<0.001). Spanish-speaking parents were less likely than English speakers to complete the survey (0.4% of respondents compared to 10.2% of non-respondents; p<0.001). Children with Medicaid were less likely to be represented among the responses (37.8% of respondents compared to 63.5% of non-respondents; p<0.001). Children whose parents had a high school diploma or less education were under-represented among respondents (27.6%) compared to non-respondents (52.6%; p<0.001). Fewer respondents than non-respondents were from families earning less than $15,000 per year (11.7% vs. 24.1%; p<0.001). Parents reporting household sizes of five or more individuals were underrepresented among respondents (34.6% of respondents compared to 48.2% of non-respondents; p=0.013). The interaction of each demographic variable with incentive assignment did not predict survey completion (Additional file 3). Completion patterns by demographics were similar regardless of incentive timing.
Discussion
Collecting long-term health outcomes data after discharge is time-consuming and costly. The resources and optimal approaches for conducting longitudinal follow-up are not well-defined. In this study, we evaluated whether prepaid incentives retained more parents than postpaid incentives in a study of injured children. Contrary to our hypothesis, incentive timing did not influence the likelihood of survey completion. This result departs from research showing that prepaid incentives were more effective than postpaid incentives for improving cross-sectional survey completion [15,16,17] and longitudinal survey retention [18,19,20].
Several explanations may account for our results. Evidence in support of prepaid incentives comes primarily from cross-sectional survey research [15, 17]. Incentives are used in longitudinal surveys, but less is known about the optimal use of incentives to reduce attrition in these studies. Even fewer studies have addressed how incentive timing affects retention in prospective cohort studies. Incentive timing may not predict retention in longitudinal health outcomes research. Systematic reviews show that retention in health studies improves as more response-inducing strategies are incorporated [9, 11]. Strategies other than incentives include multiple reminders, varied modes of contact, and sending a second copy of a paper questionnaire to non-respondents.
The effect of incentive timing on retention may also vary based on the subject matter of a survey or across different target populations. A survey’s subject matter predicts cross-sectional survey participation. Survey response is higher when questions are interesting to the participants. In the current study, we asked parents about their children’s functional status and quality of life after injury. This subject matter is relevant to parents because of the family burden associated with a child’s long-term impairment. The relevance of the content may have affected survey completion more than incentive timing. In a similar study, parents of injured children expressed gratitude for follow-up calls evaluating their child’s status, supporting interest in this subject.
The cash card we used as an incentive could also account for our results. The effect of prepaid incentives can depend on the currency offered. A prepaid cash incentive retained more participants than a prepaid gift card in a longitudinal survey of recent high school graduates. A prepaid cash card produced a lower response rate than a prepaid check in a survey of physicians. The incentive dollar amount is also relevant to the timing of delivery. Small, prepaid cash incentives were associated with more responses than larger, postpaid incentives. A smaller prepaid incentive could have produced different results in our study. The magnitude of an incentive’s effect on survey response also depends on the survey mode. Many studies that obtained higher response with prepaid incentives used mailed paper surveys [15, 16]. The effect of prepaid incentives on response in telephone surveys has been smaller.
We also assessed whether incentive timing led to differences in the demographic composition of respondents and non-respondents. Prepaid incentives improved demographic representation in some cross-sectional studies [22, 23], but decreased representation in others [24, 25]. We found no demographic differences between the respondents and non-respondents based on incentive timing. The association of incentive timing with responses and demographic representation in survey research is an evolving area of investigation.
We observed that some demographic characteristics were associated with follow-up completion, including child ethnicity and insurance status, household income, and parental education. Others have noted the challenge researchers have in recruiting participants from all racial and ethnic groups and diverse socioeconomic backgrounds. Methods to improve participation among underrepresented populations must be tailored to address the multiple barriers these populations face when participating in research. Prior research suggests barriers to participation include mistrust of medical research, language barriers, and demands and inconveniences of participation [38, 39]. Community-based, tailored, and personalized recruitment efforts may facilitate continued engagement with underrepresented populations [39,40,41]. More research is needed to identify specific strategies that ensure demographic representation in health outcomes research and the generalizability of its findings.
We anticipated that a prepaid incentive would encourage parents to answer the study’s telephone calls and reduce the need for multiple call attempts. Prepaid incentives can reduce the level of effort required to obtain follow-up responses [27, 28]. In this study, the number of calls placed did not vary by incentive timing. We contacted most parents on the first or second attempt regardless of incentive type. Contact on the initial attempts may reflect parents’ interest in the study or the impact of the reminder letter.
Unlike prior incentive timing experiments, the respondents in our study were proxies providing information about the enrolled children. Parent-proxy reporting of children’s quality of life or other health outcomes is frequently used in pediatrics. Compared to other medical specialties, pediatrics is more family-oriented, and parents play a larger role in healthcare decision-making. These unique circumstances may require different retention methods than studies that use direct reports from the participant.
It is difficult to retain participants in prospective cohort studies. This experiment provides guidance for designing future longitudinal studies of critically ill and injured children. Our results show that parents in longitudinal pediatric critical care studies can be retained with either a prepaid or postpaid incentive. Postpaid incentives are more commonly used in health outcomes research. In our experience, postpaid incentives are less likely to be restricted by organizational accounting policies. Postpaid incentives may be more suitable when these restrictions exist. Prepaid incentives must be carefully considered because of cost. Prepaid cash incentives are less expensive when offered in smaller amounts than postpaid incentives. Recipients of prepaid incentives delivered as a check are unlikely to cash them if not participating in the study, making them more cost-effective. Prepaid incentives can also establish goodwill and trust. Prepaid incentives do not require additional follow-up contact for delivery. Our results suggest that incentives can support retention with either timing, giving researchers options for how to incorporate them.
This study has several limitations. First, although we obtained several measures of demographic characteristics, additional parental characteristics may account for group differences. The demographic characteristics predicting follow-up completion in our study were similar to those in other cohort studies and longitudinal surveys [48,49,50], suggesting relevant measures were included. Second, this study was conducted with a sample of children treated at level 1 pediatric trauma centers for a serious or greater injury. These results may not generalize to studies of children with less severe injuries or those treated in other care settings. The results may also not apply to adults or patients with other conditions. Our findings should be confirmed in other populations to evaluate generalizability. Third, our experiment did not include a no-incentive condition. Without this baseline for comparison, we could not assess how incentives encouraged survey completion regardless of timing. Because the study was closed to enrollment earlier than anticipated, we were unable to reach our originally targeted sample size.
Conclusions
This study assessed whether a prepaid incentive could improve retention in a longitudinal health outcomes study. Our approach provides a framework to apply and evaluate other response-inducing techniques from survey research in prospective studies. Because incentive timing did not affect retention in this study of injured children, researchers can choose either option for similar studies. Additional investigation is needed to identify methods that improve participation among underrepresented socioeconomic and ethnic subgroups. Without adequate representation, the conclusions drawn from health outcomes research may miss insights that are critical for addressing health disparities.
Availability of data and materials
The datasets analyzed in the current study will be made available in a public-use data repository.
Abbreviations
CPCCRN: Collaborative Pediatric Critical Care Research Network
FSS: Functional Status Scale
REDCap: Research Electronic Data Capture
References
National Research Council. Nonresponse in social science surveys: a research agenda; 2013. https://doi.org/10.17226/18293. https://www.nap.edu/catalog/18293/nonresponse-in-social-science-surveys-a-research-agenda. Accessed 18 Jun 2019.
Galea S, Tracy M. Participation rates in epidemiologic studies. Ann Epidemiol. 2007;17(9):643–53.
Marcellus L. Are we missing anything? Pursuing research on attrition. Can J Nurs Res. 2004;36(3):82–98.
Gustavson K, von Soest T, Karevold E, Røysamb E. Attrition and generalizability in longitudinal studies: findings from a 15-year population-based study and a Monte Carlo simulation study. BMC Public Health. 2012;12:e918.
McDonald B, Haardoerfer R, Windle M, Goodman M, Berg C. Implications of attrition in a longitudinal web-based survey: an examination of college students participating in a tobacco use study. JMIR Public Health Surveill. 2017;3(4):e73.
Brilleman SL, Pachana NA, Dobson AJ. The impact of attrition on the representativeness of cohort studies of older people. BMC Med Res Methodol. 2010;10:71.
Watson N, Wooden M. Identifying factors affecting longitudinal survey response. In: Lynn P, editor. Methodology of longitudinal surveys. West Sussex: Wiley; 2009. p. 157–82.
Satherley N, Milojev P, Greaves LM, Huang Y, Osborne D, Bulbulia J, et al. Demographic and psychological predictors of panel attrition: evidence from the New Zealand attitudes and values study. PLoS ONE. 2015;10(3):e0121950.
Brueton VC, Tierney JF, Stenning S, Meredith S, Harding S, Nazareth I, et al. Strategies to improve retention in randomised trials: A Cochrane systematic review and meta-analysis. BMJ Open. 2014;4(2):e003821.
Teague S, Youssef GJ, Macdonald JA, Sciberras E, Shatte A, Fuller-Tyszkiewicz M, et al. Retention strategies in longitudinal cohort studies: A systematic review and meta-analysis. BMC Med Res Methodol. 2018;18(1):151.
Booker CL, Harding S, Benzeval M. A systematic review of the effect of retention methods in population-based cohort studies. BMC Public Health. 2011;11(1):249.
Robinson KA, Dinglas VD, Sukrithan V, Yalamanchilli R, Mendez-Tellez PA, Dennison-Himmelfarb C, et al. Updated systematic review identifies substantial number of retention strategies: using more strategies retains more study participants. J Clin Epidemiol. 2015;68(12):1481–7.
Morgan AJ, Rapee RM, Bayer JK. Increasing response rates to follow-up questionnaires in health intervention research: randomized controlled trial of a gift card prize incentive. Clin Trials. 2017;14(4):381–6.
Mercer A, Caporaso A, Cantor D, Townsend R. How much gets you how much? Monetary incentives and response rates in household surveys. Public Opin Q. 2015;79(1):105–29.
Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009;(3):MR000008. https://doi.org/10.1002/14651858.MR000008.pub4.
Church AH. Estimating the effect of incentives on mail survey response rates: a meta-analysis. Public Opin Q. 1993;57(1):62–79.
Singer E, Ye C. The use and effects of incentives in surveys. Ann Am Acad Pol Soc Sci. 2013;645(1):112–41.
Fumagalli L, Laurie H, Lynn P. Experiments with methods to reduce attrition in longitudinal surveys. J R Stat Soc Ser A Stat Soc. 2013;176(2):499–519.
Laurie H, Lynn P. The use of respondent incentives on longitudinal surveys. In: Lynn P, editor. Methodology of longitudinal surveys. West Sussex: Wiley; 2009. p. 205–33.
Kretschmer S, Muller G. The wave 6 NEPS adult incentive experiment. Methoden Daten Anal. 2017;11(1):7–28.
Young B, Bedford L, das Nair R, Gallant S, Littleford R, Robertson JFR, et al. Unconditional and conditional monetary incentives to increase response to mailed questionnaires: a randomized controlled study within a trial (SWAT). J Eval Clin Pract. 2020;26(3):893–902.
LaRose R, Tsai H-YS. Completion rates and non-response error in online surveys: Comparing sweepstakes and pre-paid cash incentives in studies of online behavior. Comput Human Behav. 2014;34:110–9.
Lesser VM, Dillman DA, Carlson J, Lorenz F, Mason R, Willits F. Quantifying the influence of incentives on mail survey response rates and their effects on nonresponse error. Atlanta: Annual meeting of the American Statistical Association; 2001.
Petrolia DR, Bhattacharjee S. Revisiting incentive effects: evidence from a random-sample mail survey on consumer preferences for fuel ethanol. Public Opin Q. 2009;73(3):537–50.
Parsons NL, Manierre MJ. Investigating the relationship among prepaid token incentives, response rates, and nonresponse bias in a web survey. Field Method. 2014;26(2):191–204.
Teixeira R, Queiroga AC, Freitas AI, Lorthe E, Santos AC, Moreira C, et al. Completeness of retention data and determinants of attrition in birth cohorts of very preterm infants: a systematic review. Front Pediatr. 2021;9:529733.
Lipps O. Effects of different incentives on attrition and fieldwork effort in telephone household panel surveys. Surv Res Methods. 2010;4:81–90.
Becker R, Glauser D. Are prepaid monetary incentives sufficient for reducing panel attrition and optimizing the response rate? An experiment in the context of a multi-wave panel with a sequential mixed-mode design. Bull Methodol Sociol. 2018;139(1):74–95.
Pollack MM, Holubkov R, Glass P, Dean JM, Meert KL, Zimmerman J, et al. Functional status scale: new pediatric outcome measure. Pediatrics. 2009;124(1):e18–28.
Burd RS, Jensen AR, VanBuren JM, et al. Factors Associated With Functional Impairment After Pediatric Injury. JAMA Surg. 2021;156(8):e212058. https://doi.org/10.1001/jamasurg.2021.2058.
Harris PA, Taylor R, Minor BL, Elliott V, Fernandez M, O’Neal L, McLeod L, Delacqua G, Delacqua F, Kirby J, et al. The REDCap consortium: Building an international community of software platform partners. J Biomed Inform. 2019;95:103208.
Watson RS, Choong K, Colville G, Crow S, Dervan LA, Hopkins RO, et al. Life after critical illness in children-toward an understanding of pediatric post-intensive care syndrome. J Pediatr. 2018;198:16–24.
Pinto NP, Rhinesmith EW, Kim TY, Ladner PH, Pollack MM. Long-term function after pediatric critical illness: results from the Survivor Outcomes Study. Pediatr Crit Care Med. 2017;18(3):e122–e30.
Becker R, Möser S, Glauser D. Cash vs. vouchers vs. gifts in web surveys of a mature panel study––main effects in a long-term incentives experiment across three panel waves. Soc Sci Res. 2019;81:221–34.
Pace LE, Lee YS, Tung N, Hamilton JG, Gabriel C, Raja SC, et al. Comparison of up-front cash cards and checks as incentives for participation in a clinician survey: a study within a trial. BMC Med Res Methodol. 2020;20(1):210.
Avdeyeva OA, Matland RE. An experimental test of mail surveys as a tool for social inquiry in Russia. Int J Public Opin Res. 2012;25(2):173–94.
Natale JE, Lebet R, Joseph JG, Ulysse C, Ascenzi J, Wypij D, et al. Racial and ethnic disparities in parental refusal of consent in a large, multisite pediatric critical care clinical trial. J Pediatr. 2017;184:204–8.
Cui Z, Truesdale KP, Robinson TN, Pemberton V, French SA, Escarfuller J, et al. Recruitment strategies for predominantly low-income, multi-racial/ethnic children and parents to 3-year community-based intervention trials: Childhood obesity prevention and treatment research (coptr) consortium. Trials. 2019;20(1):296.
George S, Duran N, Norris K. A systematic review of barriers and facilitators to minority research participation among African Americans, Latinos, Asian Americans, and Pacific Islanders. Am J Public Health. 2014;104(2):e16–31.
Jang M, Vorderstrasse A. Socioeconomic status and racial or ethnic differences in participation: Web-based survey. JMIR Res Protoc. 2019;8(4):e11865.
Kaiser BL, Thomas GR, Bowers BJ. A case study of engaging hard-to-reach participants in the research process: community advisors on research design and strategies (CARDS)®. Res Nurs Health. 2017;40(1):70–9.
Paskett ED, Reeves KW, McLaughlin JM, Katz ML, McAlearney AS, Ruffin MT, et al. Recruitment of minority and underserved populations in the united states: The centers for population health and health disparities experience. Contemporary Clinical Trials. 2008;29(6):847–61.
Varni JW, Limbers CA, Burwinkle TM. Parent proxy-report of their children’s health-related quality of life: An analysis of 13,878 parents’ reliability and validity across age subgroups using the PedsQL 4.0 generic core scales. Health Qual Life Outcomes. 2007;5:2.
Schor EL. Family pediatrics: report of the task force on the family. Pediatrics. 2003;111(6 Pt 2):1541–71.
Hunt JR, White E. Retaining and tracking cohort study members. Epidemiol Rev. 1998;20(1):57–70.
Wiant K, Geisen E, Creel D, Willis G, Freedman A, de Moor J, et al. Risks and rewards of using prepaid vs. Postpaid incentive checks on a survey of physicians. BMC Med Res Methodol. 2018;18(1):104.
Dillman DA, Smyth JD, Christian LM. Internet, phone, mail, and mixed-mode surveys: the tailored design method. 4th ed. Hoboken: Wiley; 2014.
Patel MX, Doku V, Tennakoon L. Challenges in recruitment of research participants. Adv Psychiatr Treat. 2003;9(3):229–38.
Wu V, Abo-Sido N, Espinola JA, Tierney CN, Tedesco KT, Sullivan AF, et al. Predictors of successful telephone follow-up in a multicenter study of infants with severe bronchiolitis. Ann Epidemiol. 2017;27(7):454–8.
Young AF, Powers JR, Bell SL. Attrition in longitudinal studies: Who do you lose? Aust N Z J Public Health. 2006;30(4):353–61.
Acknowledgements
We thank the research coordinators and trauma administrative teams at each study site for their assistance with data acquisition.
Funding
This work was supported in part by the following cooperative agreements from the Eunice Kennedy Shriver National Institute of Child Health and Human Development Collaborative Pediatric Critical Care Research Network (CPCCRN), National Institutes of Health, Department of Health and Human Services [grant numbers UG1HD050096, UG1HD049981, UG1HD049983, UG1HD063108, UG1HD083171, UG1HD083170, UG1HD083166, and U01HD049934], with additional support from the National Center for Advancing Translational Sciences, National Institutes of Health [grants U24TR001597 and UL1TR002538]. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Ethics approval and consent to participate
The Institutional Review Board at the University of Utah approved this study through a central mechanism (Approval #00105435). Participants provided written informed consent to participate in the study. This study was conducted in accordance with relevant guidelines and regulations.
Competing interests
The authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Millar, M.M., Olson, L.M., VanBuren, J.M. et al. Incentive delivery timing and follow-up survey completion in a prospective cohort study of injured children: a randomized experiment comparing prepaid and postpaid incentives. BMC Med Res Methodol 21, 233 (2021). https://doi.org/10.1186/s12874-021-01421-8
Keywords
- Cohort studies
- Surveys and questionnaires
- Patient selection
- Random allocation