Do health care institutions value research? A mixed methods study of barriers and facilitators to methodological rigor in pediatric randomized trials
© Hamm et al.; licensee BioMed Central Ltd. 2012
Received: 10 February 2012
Accepted: 15 October 2012
Published: 18 October 2012
Pediatric randomized controlled trials (RCTs) are susceptible to a high risk of bias. We examined the barriers and facilitators that pediatric trialists face in the design and conduct of unbiased trials.
We used a mixed methods design, with semi-structured interviews building upon the results of a quantitative survey. We surveyed Canadian (n=253) and international (n=600) pediatric trialists regarding their knowledge and awareness of bias and their perceived barriers and facilitators in conducting clinical trials. We then interviewed 13 participants from different subspecialties and geographic locations to gain a more detailed description of how their experiences and attitudes towards research interacted with trial design and conduct.
The survey response rate was 23.0% (186/807). 68.1% of respondents agreed that bias is a problem in pediatric RCTs and 72.0% felt that there is sufficient evidence to support changing some aspects of how trials are conducted. Knowledge related to bias was variable, with inconsistent awareness of study design features that may introduce bias into a study. Interview participants highlighted a lack of formal training in research methods, a negative research culture, and the pragmatics of trial conduct as barriers. Facilitators included contact with knowledgeable and supportive colleagues and infrastructure for research.
A lack of awareness of bias and negative attitudes towards research present significant barriers in terms of conducting methodologically rigorous pediatric RCTs. Knowledge translation efforts must focus on these issues to ensure the relevance and validity of trial results.
Keywords: Clinical trials as topic; Risk of bias; Pediatrics; Mixed methods
“We as an institution, as a profession, don’t actually sell research as being an important thing that we do in hospitals. And it should be.”
There is a growing body of literature documenting the methodological limitations of published randomized controlled trials (RCTs) in pediatrics [1–7]. Of particular concern is the evidence that RCTs in child health are susceptible to a high risk of bias, increasing the likelihood that reported treatment benefits and/or harms are being exaggerated [8–10]. In order to ensure clinical relevance and to prevent unnecessary and wasteful research, it is crucial that measures are taken to maximize the internal validity of studies that are conducted [11, 12]. The global investment in research is enormous, with funding of $100 billion annually, plus the time and effort committed by the researchers, clinicians, and children and families. When participants agree to take part in a trial, they expect that the study will be conducted and reported to the highest standard to accurately answer the research question. When this expectation is met, trial results are important in providing children with the best possible treatment; however, when biased research is conducted instead, research dollars and professionals’ time are wasted, and the children’s contributions are unavailing.
Evidence describing the negative impact of bias on RCTs and how to minimize it is available [13–22], as is research on a number of specific challenges inherent in conducting RCTs in pediatrics [23–27], such as recruitment and consent procedures. However, the research-practice gap regarding methodological rigor in this population has not yet been addressed.
As the first step in the development of a knowledge translation strategy to address the reduction of bias in pediatric RCTs, the objective of this study was to determine and describe the barriers and facilitators that pediatric trialists face in the design and conduct of unbiased trials, with an emphasis on the Canadian context. Quantitative survey and qualitative interview data were collected to gain a broad perspective of the problem and researchers’ experiences.
We used an explanatory mixed methods design, with semi-structured interviews building upon the results of a quantitative survey. We connected data from the two phases to provide detailed descriptions of the barriers and facilitators pediatric trialists face in designing and conducting studies with high internal validity. We obtained ethical approval from the Health Research Ethics Board at the University of Alberta.
Tools used for survey development
Risk of Bias tool
Used to assess the internal validity of RCTs. It comprises seven domains supported by empirical evidence: sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessors, incomplete outcome data, selective outcome reporting, and “other” sources of bias.
BARRIERS scale
Widely used to identify general barriers to research utilization, particularly in nursing. Barriers are categorized into factors related to the individual, setting, research, and presentation [30, 32].
Cabana et al. framework
Developed by Cabana et al. as part of an evaluation of barriers to physicians’ adoption of clinical practice guidelines. This framework includes 10 major factors, grouped into categories related to knowledge, attitudes, and behaviour.
Twelve methodologists and clinicians evaluated the questionnaire for appropriateness and accuracy of content, and five national and international pediatric trialists completed pilot testing for clarity and ease of use; we made revisions based on their feedback. The survey included 23 questions, and pilot testing indicated that it would take approximately 15 minutes to complete. We developed items to determine: 1) researcher knowledge and awareness of bias; and 2) perceived barriers and facilitators in conducting clinical trials.
Due to a low response rate using the sample described above (154/644; 23.9%), we expanded the survey population to recruit participants from the membership of the Maternal Infant Child and Youth Research Network (MICYRN), a Canadian network linking investigators from 17 academic health centres involved in pediatric clinical research. We identified potential respondents through a publicly available network inventory maintained by MICYRN (http://www.micyrn.ca/Networks.html) and invited all individuals listed as network contacts via email to participate in both the survey and the interview portion of the study (n=163). The survey included an item asking whether respondents would be willing to be contacted for an interview, and if so, to provide their name and preferred means of initial contact. As a result of problems with access to SurveyMonkey, we administered this wave via REDCap, an alternate secure, online application for managing surveys.
Due to low participation rates from survey respondents, we augmented recruitment with members of MICYRN with trial experience and referrals from participants and established pediatric trialists. We used purposive sampling based upon pediatric subspecialty and geographic location, aiming to reach saturation, which typically occurs around 12 participants. Interviews followed a semi-structured format built upon the results of the survey and focused on participants’ experiences and attitudes towards conducting pediatric research and how these interacted with the appropriate design and conduct of methodologically sound trials. Each participant was sent an electronic consent form that they signed and returned via fax or email prior to the interview. Interviews lasted 30 to 60 minutes and were conducted by telephone by the lead author between April and July 2011. All interviews were recorded and transcribed verbatim.
We analyzed survey data descriptively, using means and standard deviations, medians and interquartile ranges (IQR), or proportions where appropriate. We used content analysis to code interviews, identifying categories in the data and patterns in beliefs and values that could help explain the potential for bias in pediatric RCTs. Coding was conducted by the lead author in consultation with the rest of the study team. We conducted qualitative data collection and analysis concurrently, following an iterative process. We integrated the survey and interview data at the data interpretation phase, using the method of connecting data. We used Stata and NVivo to manage quantitative and qualitative data, respectively.
Table: Demographics of survey population. Rows covered total returned surveys; professional time spent on research-related activities; involvement in RCTs (median number of trials, IQR) as a principal investigator and as a member of the study team; discipline trained in (developmental, psychosocial, and learning problems; mental health or psychiatry; endocrinology and nutrition; emergency medicine or critical care; hematology or oncology; allergy and immunology; general pediatrics or family medicine); geographic region (including Australia and New Zealand); and setting of employment (including university or academic centre).
Table: Characteristics of interview participants. Rows covered source of recruitment (including referral from participants/established trialists); discipline(s) trained in (including medicine and research); and geographic region of participant.
Interview themes and relevance to risk of bias
- Little formal training in research methods, therefore bias is likely due to a lack of knowledge of how it is introduced.
Clinical care vs. clinical research
- Decisions made clinically rather than per the trial design can lead to protocol deviations, e.g. interference with randomization sequence.
- Research is often viewed negatively in the clinical setting, leading to little value placed on following the trial protocol when it deviates from usual care.
- Demands on time and space can put research at a low priority and tasks may not be done according to protocol, e.g. ensuring safeguards are in place to maintain blinding.
- Budget constraints can preclude hiring external methodological expertise when it is needed; ethics requirements for methodology are inconsistent, leaving protocols subject to change.
- Blinding parents is challenging; investigators are less willing to inconvenience families with strict protocols; fewer trials have meant less competition to develop the best methodology.
- The trial will be more successful when the investigators take responsibility for generating support and ensuring rigor.
- Researcher understanding of the clinical setting facilitates the acceptance of research methods by the practitioners.
Cohesive study team
- Consulting experienced trialists and methodologists contributes to a more rigorous and well thought out study, in terms of both validity and feasibility.
- Protected research time and dedicated research staff facilitate trial design and conduct.
- Checks on the science facilitate high quality, e.g., reliable review processes and guidance from trusted third parties.
Survey findings indicated that 68.1% of respondents agree that bias is a problem in pediatric RCTs and 72.0% reported that they felt there was sufficient evidence to support the need to change some aspects of how RCTs are conducted. However, knowledge of bias among respondents was variable. There was no consistency in responses to questions which asked the respondent to rate the degree to which they agreed that a study design factor would introduce bias into a study. Identification of specific biases was strongest for sequence generation, blinding, and selective outcome reporting, while there was more uncertainty surrounding identification of problems with allocation concealment, incomplete outcome data, and “other sources of bias” (see Additional file 2). Despite this range of awareness of issues relevant to bias, 94.2% of respondents felt confident in their ability to evaluate the quality of published trials.
Perspectives on individual-level factors
Probably we don’t look at, we don’t know all the bias that can be, that can happen in a trial because we don’t check, we don’t believe there’s bias. We may miss some, we may forget some, and then do not report the bias because we don’t know it exists.
I’ve kind of learned on the job, which is why I’m not fully confident that I have all the skills.
Because there’s almost zero research training in the clinical curriculum for most clinicians these days. Like there’s almost nothing in the med school program, there’s almost nothing in the rehab program – there really needs to be somebody on the protocol who’s got a little bit more training.
Well you know it’s often when people go to write up a protocol, either they’re not totally aware of how this whole bias thing works and to them, you know the fact that you randomize people by the day of the week they present, that sounds good enough.
So you really have to take the time to engage people and be the one that’s proactive, engaging them. Because they’re busy, they might not even know what your study is unless you’re the change agent that really goes out there and talks to them about it and gets them motivated about why you think it’s important.
Conversely, a sense of ownership can contribute to a rigorous study design. Actively taking responsibility for the direction of the trial was seen as an opportunity for the investigators to generate enthusiasm, gain support, and educate colleagues about the rationale for rigorous methodology and how it impacts the ability to accurately answer the research question.
“Listen to what [your colleagues] need to execute the study so that when you develop your protocol, you’ve built that into the approach. Or, if you couldn’t, you’ve at least had that dialogue with them about how scientifically you can’t be as flexible as might be ideal… so that they at least understand the rationale.”
While 93.0% of survey respondents demonstrated an interest in learning about and staying current with literature describing and analyzing research methods, only 50.3% felt that they were able to do so due to other constraints. Logistical issues such as meeting institutional requirements (29.2%) and having sufficient staff (30.4%) were identified as challenges, while access to knowledgeable colleagues (92.8%) was identified as the most significant facilitator (see Additional file 2).
Perspectives on institution-level factors
I think that other people view [research] as kind of a thorn in their side. It’s something they play along with if they have to and the division head tells them they have to.
You work separately or in parallel and not necessarily the team as much, and I think that’s part of the challenge. You view the study as important, they view the results as important, but they don’t want to go through the pain of finding out the results because it impacts on what they do clinically.
Where we get into the biggest problems is if we take a person who’s very knowledgeable and very confident in how to care for patients with [disease], so they’re experts and masters in clinical care, and they just assume that that carries over into being an expert in clinical research.
So because we get donated funds, a fairly large amount of donated funds proportionally speaking in [disease], we’re able to support the personnel to perform the trials.
We have the help of the research institute and you can have a person for any kind of question or any kind of design that can help, and we have access to those kinds of resources.
I think the fact that [research network] is there enables you to think of multi-centre RCTs, whereas if it wasn’t there, you’d kind of have to go and find things from scratch. But by existing, it brings people together with shared interests and I think that that is a huge asset when it comes to even the thought of designing a multi-centre RCT. It’s like you want to design one for [research network].
[Research institute] has a great model where they require any grant that’s going out for external funding to be reviewed by three people from inside the institution.
“I think a lot of investigators really have a hard time separating what decision they would make clinically from what decision they would make as part of a trial… because the feeling is I want to be convenient to the family, and I really know this stuff because I’m an expert in this clinical area, and I don’t think they realize that there’s a pretty clear demarcation between what you do in clinical care and what you do as research.”
The degree to which institutional barriers were perceived as a threat to research varied according to the size of the site of the respondent. A clear distinction was noted between larger research-intensive institutions and other sites in which researchers struggled due to a lack of infrastructure. Trialists from the former viewed the conduct of research much more positively and generally felt that they had the necessary resources available to conduct rigorous trials, while those from the latter reported greater levels of difficulty positioning their research as an important part of the clinical landscape.
“Space, resources, and training for research assistants, research managers, graduate students… there’s all sorts of hurdles and headaches around those things that I think most established clinical research programs… already know how to make the system work.”
Similar to the survey results, cohesive study teams with positive working relationships were reported as the most significant facilitator to conducting rigorous trials. At the institutional level, this often included combining the expertise of experienced trialists, methodologists, and the staff that would be responsible for implementing the trial; however, this integration was more common at sites with more support for research. Positive relationships were also mentioned in the context of subspecialties, with productive research networks across sites enabling researchers to benefit from a collective expertise, as well as facilitating study-specific elements such as the conduct of multi-centre trials. A final facilitator, which was more prominent in certain institutions than others, was a reliable internal review process. Respondents viewed this as a major asset when it was available, but many felt that existing processes were fragmented and inconsistent.
Perspectives on policy-level factors
We all tend to want to make the budget as small as we can to increase our chances to actually get it funded and the reality is that some trials really require the full-time effort of somebody who’s got a lot of experience, and therefore comes with a price tag. And it can be hard to make the argument to ensure that you’ve got funding, right? So I think that’s where you start cutting other corners, and you don’t have the data quality, and at the end of the day, you maybe don’t have the rigorous, homerun kind of trial that you had envisioned.
(regarding ethics review at multiple sites) Most of the problem is to ask for revisions and they are not consistent one between the others. So you can have a question in one and the other one… wants a different answer.
Specific biases and pediatric-specific challenges
With respect to specific biases, survey and interview respondents most frequently reported challenges with blinding, including the cost of providing a placebo, difficulties in blinding non-pharmacological interventions, and blinding all relevant parties, including parents. Other issues mentioned included difficulty obtaining adequate follow-up in settings without an established clinician-patient relationship, parental resistance to randomization, and group imbalances due to small sample sizes.
Bias is a recognized concern among pediatric trialists; however, they may lack the knowledge, willingness, or resources to address it properly. Internal validity did not emerge as a primary concern in the analyses, being overshadowed by issues related to the pragmatics of running a trial and the generalizability of the results. While these issues warrant a great deal of consideration, it is crucial that studies be methodologically rigorous from the outset, as this is a prerequisite for generalizability.
The major barriers to minimizing risk of bias in trials were related to awareness and environment. With little emphasis on research methodology in clinical curricula, many investigators are not adequately prepared to design trials with high levels of internal validity or to recognize and attend to issues as they arise. The existing ad hoc training system very likely contributes to an emphasis on certain areas and a deficit in others, as demonstrated by the disproportionate focus by respondents on issues related to external validity (i.e., generalizability of study results), despite being questioned on issues relevant to internal validity (i.e., avoiding bias through methodologically rigorous design). Additionally, the predominantly negative attitudes surrounding the research process reinforce the acceptance of sub-optimal RCTs. While research findings may be valued, more effort is required to ensure that the importance of high quality research is recognized at all stages, and by all stakeholders. Pediatric oncology is often cited as a model in developing an environment that fosters research. Available infrastructure and consistency in study protocols have resulted in the successful integration of research and clinical care, leading to marked improvements in survival and other outcomes [24, 36, 37]. By embracing research as a critical component of providing best care, rather than viewing it as an imposition, investigators and clinicians in oncology have shown that setting a standard for conducting rigorous trials is an achievable goal with tangible benefits and impressive health outcomes.
Positive relationships that support the development of an interest in research are particularly relevant in an environment in which most training is dependent on mentorship and reinforcement from experienced trialists. Within this context, clinician-scientists have a key role in bridging the gap between the worlds of research and clinical practice. Combining knowledge of proper methodology with an appreciation for the demands of the clinical setting will increase the likelihood of producing both valid and realistic trials. Research networks such as the Canadian Critical Care Trials Group (CCCTG) and Pediatric Emergency Research Canada (PERC) have been quite successful in using this strategy, facilitating high quality trials by promoting a positive culture of research, providing access to individuals with expertise, and offering support and collegiality [38, 39].
With a solid evidence base demonstrating the gaps in methodological quality in pediatric RCTs [1–10], the research agenda must now focus on knowledge translation. Using barriers and facilitators identified by the target end-users, it will be important to develop tailored strategies to overcome the gap between what is known about methodological processes and how trials are designed and conducted in practice. This is one of the stated aims of StaR Child Health, an international initiative dedicated to improving the quality of pediatric clinical research.
Strengths and limitations
An advantage of this study is that it combines the breadth of survey responses with the depth of interview responses, allowing for a detailed picture of the barriers and facilitators pediatric trialists face in the conduct of methodologically rigorous trials. While the response rate to the survey was low, bringing into question the representativeness of the sample, it was in line with evidence that both electronic surveys and physician surveys are associated with low response rates. However, respondents represented a wide range of pediatric specialties, training backgrounds, and geographies, helping to give shape to the subsequent interviews. While respondents may have represented researchers with a higher level of interest in methodology, the survey responses were used to shape the interviews, for which researchers with trial experience were of particular interest because they could describe the barriers and facilitators in pediatric research from first-hand knowledge. The recruitment of additional survey participants from the membership of MICYRN slightly changed the balance of geographical representation; however, the Canadian context was weighted heavily throughout the study, so our emphasis was unchanged. The response rate to requests for interview participation was also low, but with our expanded recruitment strategy we were able to achieve saturation. The interviews emphasized the Canadian context and can therefore be used to inform future developments within the national health care and research framework, as well as provide considerations relevant to other settings.
Clinical research is inherently challenging, but these results can be used to focus efforts on improving the validity of trials that are conducted. The evidence is clear that improvement is necessary in pediatric RCTs and a substantial body of knowledge has accumulated around how to minimize bias. Before trials can improve, though, awareness of bias and attitudes towards research must be addressed. Research must be reframed as a valuable component of health care education, practice, and decision-making.
Abbreviations: CCCTG: Canadian Critical Care Trials Group; MICYRN: Maternal Infant Child and Youth Research Network; PERC: Pediatric Emergency Research Canada; RCT: Randomized controlled trial.
MPH was supported by a KT Canada Fellowship Award from Knowledge Translation Canada. SDS was supported by a New Investigator Award from the Canadian Institutes of Health Research and a Population Health Investigator Award from the Alberta Heritage Foundation for Medical Research. The funders played no role in the study design; in the collection, analysis, and interpretation of data; in writing the manuscript; or in the decision to submit the manuscript for publication.
- Moss RL, Henry MCW, Dimmitt RA, Rangel S, Geraghty N, Skarsgard ED: The role of prospective randomized clinical trials in pediatric surgery: state of the art? J Pediatr Surg. 2001, 36: 1182-1186. 10.1053/jpsu.2001.25749.
- Welk B, Afshar K, MacNeily AE: Randomized controlled trials in pediatric urology: room for improvement. J Urol. 2006, 176: 306-310. 10.1016/S0022-5347(06)00560-X.
- Dulai SK, Slobogean BLT, Beauchamp RD, Mulpuri K: A quality assessment of randomized clinical trials in pediatric orthopaedics. J Pediatr Orthop. 2007, 27: 573-581. 10.1097/bpo.0b013e3180621f3e.
- Uman LS, Chambers CT, McGrath PJ, Kisely S, Matthews D, Hayton K: Assessing the quality of randomized controlled trials examining psychological interventions for pediatric procedural pain: recommendations for quality improvement. J Pediatr Psychol. 2010, 35: 693-703. 10.1093/jpepsy/jsp104.
- Nor Aripin KNB, Choonara I, Sammons HM: A systematic review of paediatric randomised controlled drug trials published in 2007. Arch Dis Child. 2010, 95: 469-473. 10.1136/adc.2009.173591.
- Thomson D, Hartling L, Cohen E, Vandermeer B, Tjosvold L, Klassen TP: Controlled trials in children: quantity, methodological quality and descriptive characteristics of pediatric controlled trials published 1948–2006. PLoS One. 2010, 5: e13106. 10.1371/journal.pone.0013106.
- DeMauro SB, Giaccone A, Kirpalani H, Schmidt B: Quality of reporting of neonatal and infant trials in high-impact journals. Pediatrics. 2011, 128: e639.
- Hartling L, Ospina M, Liang Y, Dryden DM, Hooton N, Seida JK, Klassen TP: Risk of bias versus quality assessment of randomised controlled trials: cross sectional study. BMJ. 2009, 339: b4012. 10.1136/bmj.b4012.
- Crocetti MT, Amin DD, Scherer R: Assessment of risk of bias among pediatric randomized controlled trials. Pediatrics. 2010, 126: 298-305. 10.1542/peds.2009-3121.
- Hamm MP, Hartling L, Milne A, Tjosvold L, Vandermeer B, Thomson D, Curtis S, Klassen TP: A descriptive analysis of a representative sample of pediatric randomized controlled trials published in 2007. BMC Pediatr. 2010, 10: 96. 10.1186/1471-2431-10-96.
- Chalmers I, Glasziou P: Avoidable waste in the production and reporting of research evidence. Lancet. 2009, 374: 86-89. 10.1016/S0140-6736(09)60329-9.
- Altman DG: The scandal of poor medical research. BMJ. 1994, 308: 283-284. 10.1136/bmj.308.6924.283.
- Als-Nielsen B, Gluud LL, Gluud C: Methodological quality and treatment effects in randomised trials: a review of six empirical studies. 12th Cochrane Colloquium. 2004, Ottawa, Ontario, Canada: The Cochrane Collaboration, Oct 2–6.
- Pildal J, Hrobjartsson A, Jorgensen KJ, Hilden J, Altman DG, Gotzsche PC: Impact of allocation concealment on conclusions drawn from meta-analyses of randomized trials. Int J Epidemiol. 2007, 36: 847-857. 10.1093/ije/dym087.
- Abraha I, Duca PG, Montedori A: Empirical evidence of bias: modified intention to treat analysis of randomised trials affects estimates of intervention efficacy. Z Evid Fortbild Qual Gesundhwes. 2008, 102 (Suppl VI): 9.
- Von Elm E, Rollin A, Blumle A, Senessie C, Low N, Egger M: Selective reporting of outcomes of drug trials? Comparison of study protocols and published articles. 14th Cochrane Colloquium. 2006, Dublin, Ireland: The Cochrane Collaboration, Oct 23–26.
- Dwan K, Altman DG, Amaiz JA, Bloom J, Chan AW, Cronin E, Decullier E, Easterbrook PJ, Von Elm E, Gamble C, Ghersi D, Ioannidis JP, Simes J, Williamson PR: Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One. 2008, 3: e3081. 10.1371/journal.pone.0003081.
- Bassler D, Ferreira-Gonzalez I, Briel M, Cook DJ, Devereaux PJ, Heels-Ansdell D, Kirpalani H, Meade MO, Montori VM, Rozenberg A, Schunemann HJ, Guyatt GH: Systematic reviewers neglect bias that results from trials stopped early for benefit. J Clin Epidemiol. 2007, 60: 869-873. 10.1016/j.jclinepi.2006.12.006.
- Montori VM, Devereaux PJ, Adhikari NK, Burns KE, Eggert CH, Briel M, Lacchetti C, Leung TW, Darling E, Bryant DM, Bucher HC, Schunemann HJ, Meade MO, Cook DJ, Erwin PJ, Sood A, Sood R, Lo B, Thompson CA, Zhou Q, Mills E, Guyatt GH: Randomized trials stopped early for benefit: a systematic review. JAMA. 2005, 294: 2203-2209. 10.1001/jama.294.17.2203.
- Bekelman JE, Li Y, Gross CP: Scope and impact of financial conflicts of interest in biomedical research: a systematic review. JAMA. 2003, 298: 454-465.
- Lexchin J, Bero LA, Djulbegovic B, Clark O: Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ. 2003, 326: 1167-1170. 10.1136/bmj.326.7400.1167.
- Sismondo S: Pharmaceutical company funding and its consequences: a qualitative systematic review. Contemp Clin Trials. 2008, 29: 109-113. 10.1016/j.cct.2007.08.001.
- Caldwell PHY, Butow PN, Craig JC: Pediatricians’ attitudes toward randomized controlled trials involving children. J Pediatr. 2002, 141: 798-803. 10.1067/mpd.2002.129173.
- Cohen E, Shaul RZ: Beyond the therapeutic orphan: children and clinical trials. Pediatr Health. 2008, 2: 151-159.
- Rheims S, Cucherat M, Arzimanoglou A, Ryvlin P: Greater response to placebo in children than in adults: a systematic review and meta-analysis in drug-resistant partial epilepsy. PLoS Med. 2008, 5: e166. 10.1371/journal.pmed.0050166.
- Tishler CL, Reiss NS: Pediatric drug-trial recruitment: enticement without coercion. Pediatrics. 2011, 127: 949-954. 10.1542/peds.2010-2585.
- Ballard HO, Shook LA, Desai NS, Anand KJ: Neonatal research and the validity of informed consent obtained in the perinatal period. J Perinatol. 2004, 24: 409-415. 10.1038/sj.jp.7211142.
- Creswell JW, Plano Clark VL: Designing and conducting mixed methods research. 2011, Los Angeles, CA: Sage Publications, 2nd ed.
- Higgins JPT, Green S: Cochrane handbook for systematic reviews of interventions. Version 5.1.0. 2011, The Cochrane Collaboration. Available from: www.cochrane-handbook.org.
- Funk SG, Champagne MT, Wiese RA, Tornquist EM: BARRIERS: the barriers to research utilization scale. Appl Nurs Res. 1991, 4: 39-45. 10.1016/S0897-1897(05)80052-7.
- Cabana MD, Rand CS, Powe NR: Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999, 282: 1458-1465. 10.1001/jama.282.15.1458.
- Kajermo KN, Boström AM, Thompson DS, Hutchinson AM, Estabrooks CA, Wallin L: The BARRIERS scale – the barriers to research utilization scale: a systematic review. Implement Sci. 2010, 5: 32. 10.1186/1748-5908-5-32.
- Guest G, Bunce A, Johnson L: How many interviews are enough? An experiment with data saturation and variability. Field Methods. 2006, 18: 59-82. 10.1177/1525822X05279903.
- Hsieh HF, Shannon SE: Three approaches to qualitative content analysis. Qual Health Res. 2005, 15: 1277-1288. 10.1177/1049732305276687.
- Morse J, Niehaus L: Mixed methods design: principles and procedures. 2009, Walnut Creek, CA: Left Coast Press.
- Caldwell PHY, Murphy SB, Butow PN, Craig JC: Clinical trials in children. Lancet. 2004, 364: 803-811. 10.1016/S0140-6736(04)16942-0.
- Unguru Y: The successful integration of research and care: how pediatric oncology became the subspecialty in which research defines the standard of care. Pediatr Blood Cancer. 2011, 56: 1019-1025. 10.1002/pbc.22976.
- Marshall JC, Cook DJ, Canadian Critical Care Trials Group: Investigator-led clinical research consortia: the Canadian Critical Care Trials Group. Crit Care Med. 2009, 37 (1 Suppl): S165-S172.
- Klassen TP, Acworth J, Bialy L, Black K, Chamberlain JM, Cheng N, Dalziel S, Fernandes RM, Fitzpatrick E, Johnson DW, Kuppermann N, Macias CG, Newton M, Osmond MH, Plint A, Valerio P, Waisman Y, PERN: Pediatric emergency research networks: a global initiative in pediatric emergency medicine. Pediatr Emerg Care. 2010, 26: 541-543. 10.1097/PEC.0b013e3181e5bec1.
- Hartling L, Wittmeier KDM, van der Lee JH, Klassen TP, Craig JC, Offringa M, StaR Child Health group: StaR Child Health: developing evidence-based guidance for the design, conduct, and reporting of pediatric trials. Clin Pharmacol Ther. 2011, 90: 727-731. 10.1038/clpt.2011.212.
- Wilson PM, Petticrew M, Calnan M, Nazareth I: Effects of a financial incentive on health researchers’ response to an online survey: a randomized controlled trial. J Med Internet Res. 2010, 12: e13. 10.2196/jmir.1251.
- VanGeest JB, Johnson TP, Welch VL: Methodologies for improving response rates in surveys of physicians. Eval Health Prof. 2007, 30: 303-321. 10.1177/0163278707307899.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/12/158/prepub