
Development of a factorial survey for use in an international study examining clinicians’ likelihood to support the decision to initiate invasive long-term ventilation for a child (the TechChild study)



The decision to initiate invasive long-term ventilation for a child with complex medical needs can be extremely challenging. TechChild is a research programme that aims to explore the liminal space between initial consideration of such technology dependence and the final decision. This paper presents a best practice example of developing a novel application of the factorial survey method to identify the main influencing factors at this critical juncture in a child’s care.


We developed a within-subjects design factorial survey. In phase one (design) we defined the survey goal (dependent variable, mode and sample). We defined and constructed the factors and factor levels (independent variables) using previous qualitative research and the existing scientific literature, and further refined them based on feedback from expert clinicians and a statistician. In phase two (pretesting), we subjected the survey tool to several iterations (cognitive interviewing, face validity testing, statistical review, usability testing). In phase three (piloting), testing focused on feasibility with members of the target population (n = 18). Ethical approval was obtained from the then host institution’s Health Sciences Ethics Committee.


Initial refinement of factors was guided by the literature and interviews with clinicians, with factors grouped into four broad categories: Clinical, Child and Family, Organisational, and Professional characteristics. Extensive iterative consultation with clinical and statistical experts, including analysis of cognitive interviews, identified best practice regarding the appropriate inclusion and ordering of clinical content, the cognitive load and number of factors, and language suited to an international audience. The pilot study confirmed the feasibility of the survey. The final survey comprised a 43-item online tool including two age-based sets of clinical vignettes, eight of which were randomly presented to each participant from a total vignette population of 480.


This paper explains the processes involved in developing a factorial survey for the online environment that is internationally appropriate, relevant, and useful for researching an increasingly important subject in modern healthcare. It provides a framework for applying a factorial survey approach in wider health research, making this underutilised method more accessible.



In recent decades, Paediatric Intensive Care Unit (PICU) mortality rates have decreased [1,2,3]. Concurrently, children with increasingly complex medical conditions are surviving and, accordingly, post-PICU morbidity rates have increased [4, 5]. This has prompted a growing focus on bioethical discussions around issues such as survivability thresholds, quality of life, autonomy, and other ways in which the decision to initiate life-sustaining technologies (such as invasive long-term ventilation (ILTV)) impacts the child, their family and healthcare professionals [6, 7].

One of the most challenging issues in PICU care remains ILTV in children living with a range of complex medical needs. Long-term ventilation (LTV) is one of the most well-established forms of life-sustaining medical technology dependence and dominates the research literature in this area [8]. While there has been an increase in children receiving non-invasive long-term ventilation (NI-LTV), the number of children initiated on ILTV has either remained static or decreased [9, 10]. Situations in which ILTV is considered for a child living with complex medical needs are frequently the most medically challenging cases for clinicians to navigate with the child and family. The decision to initiate (or not initiate) ILTV can be an extremely challenging one for all involved [6, 11].

While the evidence-base quantifying the extent of increases in medical technology dependence has expanded, few studies have examined the liminal space between beginning to consider the initiation of technology dependence (such as ILTV) and the final decision being agreed upon [12, 13]. Findings from studies examining family and child participation in such decisions have highlighted that communication barriers, issues of trust, and a perceived lack of transparency create additional challenges for families during this difficult time [11]. Given the context of dynamic advances in medical technology, the potential for moral distress in clinicians is also an area of research coming to the fore [14].

In TechChild, we address the critical issues arising from the application of advances in life-sustaining technology in paediatric medicine. This research programme increases insight into what influences the decision to initiate long-term technology dependence to sustain a child’s life and will develop a theory to explain the initiation of technology dependence in the context of diverse health, legal, and socio-political systems. The initial phase of TechChild involved interviews with clinicians (e.g., doctors, nurses, other multidisciplinary team members, bioethicists) (n = 78) across several international hospital sites. This in-depth phenomenological investigation explored clinicians’ experiences with these children and their families during this decision-making period [6]. In the second phase of TechChild (the focus of the current paper), the investigation shifted to examine the main influences on the decision to support (or not) the initiation of ILTV. Although rarely used in the clinical environment, the factorial survey technique has the capacity to address the goal of this phase of the project: alongside providing wide reach in terms of sample, it adequately accommodates the complex and nuanced factors involved in the decision-making process. The factorial survey set out in this paper is unique in that it was conducted internationally and is, to our knowledge, the first to be undertaken on a critical care topic in paediatrics.


The primary aim of this article is to provide a narrative summary of the preparatory work undertaken to enable this next phase of TechChild. The secondary aims are to outline, as a best practice example, our approach to the (1) design, (2) pretesting and (3) piloting of a factorial survey to identify the main factors that influence the decision to support initiation of ILTV. We detail the process of stakeholder-informed refinement of survey content and the steps taken to ensure both validity and functionality in the current online environment, adapting existing literature on this topic.

Rationale for selecting a factorial survey-based approach

The factorial survey approach is well suited to interrogating the factors that determine the clinical decision to support (or not support) the initiation of technology dependence. It allows random yet systematic manipulation of survey content, so that the data collected from each participant are individually enriched while the risks of unobserved heterogeneity and collinearity are minimised [15]. To briefly summarise its core structure, the factorial survey is a type of experimental vignette-based methodology [16] which takes the form of an ‘experiment within a survey’ [17, 18]. By identifying factors within a parameterised, controlled vignette, interchangeable levels of each factor can be randomly introduced, allowing the researcher to present many iterations of the core vignette, differentiated according to the random incorporation of factor levels. This is shown in Fig. 1. The within-subjects design used in this study allows multiple responses to be collected from each respondent and analysed in a more experimental manner than is possible with a standard survey [17, 19].

Fig. 1

Overview of basic components of a factorial survey

In developing the TechChild factorial survey, we followed the conventional phases of survey development: phase one (design), phase two (pretesting) and phase three (piloting) [20, 21]. The work within these phases was further guided by the factorial survey literature [22,23,24]. Phase one formally defined the survey goal and its cognate dependent variable, as well as the appropriate mode and sample. As required by the factorial survey approach, the independent variables were then defined and constructed as factors and factor levels. These factors were refined based on feedback from expert clinicians working with ILTV and from a statistician. Thereafter, the standardised vignette text was established and a total vignette population constructed, with determination of the number of vignette sets and vignettes per respondent required. All methods were performed in accordance with the relevant guidelines and, where relevant, these are referenced throughout the text.

On completion of phase one, the survey tool underwent several iterations before it was ready for use in data collection. Phases two and three were also guided by established survey pretesting checklists [25, 26] as well as additional checks identified in the factorial survey literature [22, 27, 28]. The study received approval from the then host institution’s Health Sciences Ethics Committee (Reference number: 190202). The content of the final survey was also reviewed by the data protection office and deemed low risk. Figure 2 below illustrates these stages of development.

Fig. 2

Factorial survey development in the TechChild research programme (2021)

Construction of the vignette population

Factors initially identified for inclusion were drawn from (1) the available peer-reviewed literature and (2) additional qualitative research [15, 23, 29]. Reviews of the literature (including published TechChild work [8, 12]) identified pertinent categories under which factors were grouped. Factors that emerged from the experiential interviews with clinicians (n = 78) in the first phase of the TechChild project (April 2020–November 2020) were also reviewed, classified and refined alongside those identified from the literature. The rationale for each factor was established in this two-pronged, evidence-based way. This iterative process was a time- and resource-intensive exercise, taking 3 months (January–March 2021). The factors identified for potential inclusion in the survey were mapped out for presentation to the research team and discussed at weekly team meetings. Additional dedicated team meetings were held at each stage of the vignette development.

Many experiences recalled by interviewees concerned the progression to invasive LTV (via tracheostomy) from NI-LTV. It was clear from the interviews with clinicians that decision-making regarding the transition from NI-LTV to ILTV was often highly complex, and many factors and factor levels for the survey were identified from these experiences. Accordingly, the scope of the factorial survey was narrowed to the initiation of ILTV via tracheostomy in children with complex medical needs. Both the interviews and the available literature pointed to this type of scenario ranking highly as a source of difficulty in clinical decision-making regarding the initiation of technology dependence [4, 30, 31].

As outlined above, extensive discussion led to the identification of factors with the potential to be included in the survey. The research team conducted a comprehensive review of the literature and interviews relating to each factor to identify evidence-based rationales supporting its inclusion. The included factors were placed under one of the following categories: Child characteristics; Clinical characteristics; Family characteristics; Organisational characteristics. An initial draft of the survey was generated from this evidence-based work. Three age-based versions of the survey were generated to represent conditions across the lifespan of a child (infant, middle childhood and adolescent). The content of each survey comprised (1) vignettes (a table of factors and the vignette text), (2) a 10-point Likert response question (the dependent variable (DV)) and (3) participant demographics.

An initial table of factors and factor levels was established for the three separate surveys. Also referred to as the vignette universe [32], the vignette population is the Cartesian product of the factor levels, i.e. the complete set of possible vignette permutations; the initial number for each survey is set out in Table 1.
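To make the Cartesian product concrete, the vignette universe can be enumerated directly. The sketch below is illustrative only: the factor names and levels are placeholders, not the study’s actual content.

```python
# Illustrative sketch: a vignette universe as the Cartesian product of
# factor levels. Factor names and levels are hypothetical placeholders.
from itertools import product

factors = {
    "age": ["6 months", "15 years"],
    "diagnosis": ["condition A", "condition B", "condition C"],
    "prognosis": ["stable", "deteriorating"],
    "parental_view": ["agreement", "disagreement"],
}

# Each vignette is one combination of exactly one level per factor.
vignette_universe = [
    dict(zip(factors, combo)) for combo in product(*factors.values())
]

# The universe size is the product of the level counts: 2 * 3 * 2 * 2 = 24.
print(len(vignette_universe))
```

Adding a factor, or a level to an existing factor, multiplies the universe size accordingly, which is why factor refinement (described below) so strongly shapes the final vignette population.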

Table 1 Initial IV (factors) population based on refinement of factor generation from literature and interviews

Pretesting content assessment

Stage one: Panel of clinical experts. Initial factors and factor levels were reviewed and critiqued by an international panel of three clinical experts who had previously engaged in the wider project as clinical consultants. All members of this expert panel had extensive clinical experience of working with children who require medical technology to sustain life. Factors were assessed for clarity, relevance, and appropriateness, with refinements made based on the panel’s combined feedback.

The clinicians were asked to comment on each factor (and factor level) in terms of clarity, relevance and appropriateness, with an emphasis on face and external validity. Where a clinician suggested a factor should be modified or deleted, they provided a rationale and, where appropriate, alternative suggestions. Incorporating initial guidance from a statistical consultant, the research team examined the revised content. This review led to several broad revisions, summarised in Table 2.

Table 2 Overview of refinements required in the TechChild survey following review by panel of clinical experts

Based on the feedback from the clinical experts, the content was reduced to two surveys and the levels of factors as well as the standard vignette text for both surveys were refined (Table 3):

Table 3 Summary of practitioner feedback-based factor refinement of each survey

Stage two: Cognitive interview-style assessments. Additional pretesting measures were considered essential to the validation of the survey. Cognitive interview-style survey assessments were completed with clinicians (n = 3) by a member of the research team. These consultative interviews were based on Tourangeau’s four-stage model of cognitive processing [33], further guided by established frameworks and guidelines [34, 35], and adapted to the nuances of a factorial survey. Regarding the latter point, the standard vignette content and repeated response question across all vignettes led the team to take a more discursive vignette-by-vignette review, as opposed to the standard item-by-item examination observed in cognitive interviews of standard surveys. The interview protocol included observation checks, general questions to encourage think-aloud feedback, and scripted yet flexible probes that were utilised where appropriate.

Two members of the previous expert panel as well as an additional clinical expert completed an interview. An important component of cognitive interviewing is the identification of comprehension differences and, given the international nature of the TechChild project, this was of particular importance to the team. Interviews were conducted remotely via the Zoom Meetings platform (San Jose, CA: Zoom Video Communications Inc.), and the experts were sent instructions in advance. At the beginning of each interview, the interviewer explained how the review would proceed. The interviewer shared the screen and made notes as the interview progressed and the clinical expert considered each question. The interview was conducted as a consultation and no identifying or personal information was included in observations and notes. After three interviews, comments and suggestions were reviewed by the team and, where appropriate, the survey was amended.

Overall, the vignette format and content were reviewed favourably by all clinical experts. In the infant survey, five factors remained unchanged, two factors underwent modification (rewording or removal of one level) and one factor was deleted. In the adolescent survey, five factors remained unchanged, three factors required minor modification and, again, one factor was deleted. It was agreed to remove the possibility of the diagnosis factor level ‘Rett Syndrome’ appearing alongside the factor ‘Adolescent’s expressed opinion’. The removal of this combination reduced the overall adolescent vignette population from 384 to 288 possible combinations.
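Excluding a clinically implausible pairing of factor levels amounts to pruning the Cartesian product. The sketch below illustrates this with hypothetical factors and levels; the counts it produces are for the toy example only and do not reproduce the study’s 384/288 figures.

```python
# Illustrative sketch of pruning implausible factor combinations from a
# vignette universe. Factors, levels and counts are hypothetical.
from itertools import product

factors = {
    "diagnosis": ["Rett Syndrome", "condition B", "condition C", "condition D"],
    "expressed_opinion": ["wants ILTV", "does not want ILTV", "opinion unknown"],
    "prognosis": ["stable", "deteriorating"],
}

universe = [dict(zip(factors, c)) for c in product(*factors.values())]  # 4*3*2 = 24

def plausible(vignette):
    # Exclude pairings judged implausible by an expert panel, e.g. a
    # diagnosis appearing alongside an explicitly expressed opinion.
    return not (vignette["diagnosis"] == "Rett Syndrome"
                and vignette["expressed_opinion"] != "opinion unknown")

pruned = [v for v in universe if plausible(v)]
print(len(universe), len(pruned))  # 24 before pruning, 20 after
```

Pruning before survey programming ensures the randomiser can never present an excluded combination to a participant.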

In terms of the number of vignettes presented per participant, Sauer and colleagues [36] recommend limiting presentation to fewer than 20 vignettes and no more than 11 factors per participant to avoid cognitive overload, tiredness, boredom and/or inconsistent responses. Considering both the sensitive nature of the topic and the number of factors included, eight vignettes (four from each age group) were presented during cognitive interviewing, and this quantity was considered appropriate by the clinical expert interviewees. Based on the interview feedback, minor modifications were made to the vignette text to enhance flow, and the Likert scale options were simplified (with the assent of the statistician) to enhance ease of response. Areas requiring amendment were categorised using Drennan’s cognitive interview field guide as a framework [37] (see Table 4).
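The within-subjects allocation of eight vignettes per participant (four from each age-based set) can be sketched as a random draw without replacement from each set. The vignette IDs and set sizes below are placeholders for illustration.

```python
# Illustrative sketch of the within-subjects allocation: each respondent
# receives eight randomly drawn vignettes, four from each age-based set.
# IDs and set sizes here are placeholders, not the study's content.
import random

infant_set = [f"infant_{i}" for i in range(192)]          # placeholder IDs
adolescent_set = [f"adolescent_{i}" for i in range(288)]  # placeholder IDs

def deck_for(respondent_seed):
    """Draw a respondent's deck of 4 + 4 vignettes without replacement."""
    rng = random.Random(respondent_seed)  # seeded per respondent, reproducible
    return rng.sample(infant_set, 4) + rng.sample(adolescent_set, 4)

deck = deck_for(2021)
print(len(deck))  # 8
```

Sampling without replacement within each set guarantees that no respondent sees the same vignette twice, while randomisation across respondents spreads responses over the whole vignette population.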

Table 4 Overview of survey amendments required based on cognitive interview feedback (adapted from Drennan field guide [37])

Stage three: Statistical review. CW reviewed the content and associated questions informed by feedback from the cognitive interviews. The two age-based surveys were retained; one factor (age) was reduced to two levels. A simplified 4-point Likert scale was also deemed most appropriate to encourage clearer decision-making by respondents [38]. No changes were made to the questions on the demographic profile of the participants. The changes were reviewed and confirmed by the clinical experts and the researchers. A summary of the revised Cartesian product of survey factors is set out in Table 5.

Table 5 Summary of cognitive interview-based factor refinement of each survey

Face validity

The entire vignette population for each survey was generated using Python (Python 3.9; Python Software Foundation, 2021). The vignettes were generated in this way to bypass the randomising function of Qualtrics and to ensure that every vignette was reviewed. Each vignette was assessed by two reviewers (a clinician and an academic) (n = 576).
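Generating the whole population outside the survey platform can be done by substituting every factor-level combination into the standard vignette template. The sketch below follows that idea; the template wording and factors are hypothetical, not those used in the TechChild survey.

```python
# Illustrative sketch of rendering every vignette in a population to fixed
# text for exhaustive face-validity review. Template wording and factors
# are hypothetical placeholders.
from itertools import product

TEMPLATE = (
    "A {age} child with {diagnosis} and a {prognosis} prognosis is being "
    "considered for ILTV via tracheostomy. The parents are in {parental_view}."
)

factors = {
    "age": ["6-month-old", "15-year-old"],
    "diagnosis": ["condition A", "condition B"],
    "prognosis": ["stable", "deteriorating"],
    "parental_view": ["agreement", "disagreement"],
}

vignettes = [
    TEMPLATE.format(**dict(zip(factors, combo)))
    for combo in product(*factors.values())
]

# Printing (or writing to file) the full set lets reviewers read every
# permutation, rather than only those the survey randomiser happens to show.
for number, text in enumerate(vignettes, start=1):
    print(f"{number:03d}: {text}")
```

Because the text is generated from the same factor table used to programme the survey, reviewers see exactly the combinations that participants could later be shown.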

Software, usability, and accessibility testing

The survey was set up and programmed using the Qualtrics platform (Qualtrics, Provo, UT). More detailed guiding information regarding this set-up is provided in supplementary file 1. The survey was then assessed using the Qualtrics accessibility checklist, and the team consulted with the Disability Office to confirm the survey passed accessibility standards. The survey content and format were amended where appropriate, for example the removal of a progress bar and the use of an accessibility-compliant font. Some accessibility improvements were not possible due to the nature of the survey design; for example, inclusion of a back button was not compatible with the randomiser function.


Pilot study

An online international pilot study was completed in September 2021 to assess feasibility and identify any issues that could negatively impact data collection. The pilot was completed with a purposive convenience sample drawn from the target population. All qualified clinical health professionals with experience of working with children at the time of the initiation of technology dependence were eligible. The survey link, which included a participant information leaflet (PIL) and informed consent form, was distributed via a gatekeeper, who worked as a nurse specialist in a large university hospital, to healthcare professional colleagues working in a PICU environment. The survey was also snowballed from this group (n = 18). As advised by the TechChild consultant statistician, data collection for the pilot study continued until sufficient data were gathered to assess the feasibility of the survey as well as the appropriateness of the data format for analysis. This version of the survey included an optional open comment box after each item, and pilot participants were also invited to comment on the survey itself.

Of the 18 participants enrolled in the pilot study, 13 (72%) provided a response to at least seven of the eight vignettes. Twelve of the 18 participants (67%) achieved a vignette completion rate of ≥ 90%, suggesting no signal indicative of vignette saturation (i.e., the number of vignettes was appropriate). Feedback from the gatekeeper suggested that the most likely reason for an incomplete response was that the individual started the survey at work and was interrupted. The median time taken to complete the survey (among those with a ≥ 90% completion rate) was 10 minutes (mean = 18.2 minutes, SD = 17.6), indicating that the instruction to participants that the survey would take approximately 10–15 minutes was accurate. Completion times did not indicate any specific issues. The demographic characteristics of participants who completed this section of the study (n = 12) are set out in Table 6.

Table 6 Demographic characteristics of participants in the pilot study

Seven of the 12 participants who completed the survey recorded comments; only one commented on all of the vignettes. Thus, whilst a forced response option on the comments section might increase the contextual information gathered on individual vignettes, our concern was that it could also adversely affect completion rates: participants who chose not to comment might, given the open-ended nature of the comment question, leave the study rather than complete a section they did not want to answer.

Most comments focused on the participant’s response to a particular vignette rather than on any issues with the survey, highlighting the value of including an optional comment box in the final survey. Some participants used the comment box to summarise the pertinent aspects of the vignette, and others took the opportunity to explicitly set out the rationale for their response:

“Poor prognosis but family on board” (Respondent 9)

“I think you need to take the adolescent's opinion into account” (Respondent 4)

Where a need for additional information was indicated (n = 2), there were different opinions on what additional information might be useful. Further information on quality of life or social environment as well as additional clinical information were noted on individual vignettes:

“Need to explore how LTV will change quality of life, for the better or not” (Respondent 3)

“I think to fully decide on this I would want to have more information regarding the child’s development including physical function and cognitive function” (Respondent 9)

One respondent noted that, after completing the demographic section, the importance of a family’s religious beliefs came to mind. Religious beliefs and many other important contributors were considered for inclusion by the research team. However, given the complexities and nuances of these issues, the factor “Parental agreement/disagreement” was included partly because such individual family circumstances were often cited as reasons for parental disagreement with the team.

Only two participants commented on the survey itself. These comments concerned minor issues with functionality (e.g., more than one option could be selected on some demographic questions) that were resolved. To ensure the pilot achieved its objectives, it was assessed against existing pilot checklists [39].

The content of the final survey for distribution is set out in Table 7. This comprises the table of factors that were randomly interchanged for each participant in the main study along with the standard vignette text, response question and comment box following each vignette and the demographic questions.

Table 7 Content of final survey ready for distribution


Traditional survey methods were not considered sufficient to identify the greatest influences on a clinician’s decision to support, or not support, the initiation of ILTV. In this methodological paper we adapted existing methodology for online use internationally with healthcare professionals who care for children at the time when ILTV initiation is being considered. Each stage of the factorial survey development and validation process has been set out, resulting in a field-ready tool that is feasible, appropriate, ethical and relevant.

This article contributes to the factorial survey literature by informing researchers of the practical steps involved when developing their own factorial survey in the healthcare area.

The development of, and pretesting approach to, a survey depend on the individual needs of the study. Whilst some aspects of a factorial survey are more complex (such as the interchangeable factors randomly presented to participants), other aspects of the design are easier to assess (for example, the use of the same background vignette text and response question across vignettes). In the context of complex care medicine, the development and finalisation of factors and factor levels was extremely time-consuming compared with the other aspects of the survey development. In the current study, a great deal of consultation, discussion and subsequent refinement of the initial list of factors was required to produce a meaningful vignette population that is clinically relevant yet does not cognitively overburden the participant or lead to the use of heuristics [32, 36]. Each factor was considered both independently and relative to the other factors. The decision to include each factor level and exclude others was a painstaking process, for example the ages and diagnoses chosen, the consideration of novel therapies, and the family cultural/social characteristics.

Conversely, other aspects of pretesting were perhaps less burdensome than in other survey studies. There is debate in the literature on the appropriate sample size for cognitive interviews: some studies suggest numbers similar to those in pilot studies are ideal, whilst others question this approach in terms of feasibility as well as contribution [26, 39]. In reality, there is no consensus on optimal sample size, and critical appraisal by an experienced research team is required to determine an appropriate approach [37]. Similarly, the pilot study’s design, specific to a factorial survey, meant that informative analysis of small-scale data would be limited. Thus, the purpose of the pilot in our study was primarily to examine the feasibility and appropriateness of the survey, in addition to establishing that the data format extracted from Qualtrics (Qualtrics, Provo, UT) was suitable for the required analysis. This was particularly important given the complex set-up of the factorial survey design.

Limitations and future directions for research

The factorial survey is a valuable tool in that it allows the flexibility to examine a multitude of factors in different ways. However, the nature of the method also means that some formal tests of validity and reliability recommended in survey development, such as inter-rater reliability, test-retest and internal consistency, were not feasible, either because of the design or the sample, or were inappropriate given the nature of the construct under examination.

This flexibility also means that the design features of factorial surveys can differ substantially across studies, which makes a given factorial survey challenging to appraise against others. The approach is particularly challenging for researchers without a statistical or software background. The fast-paced development of survey administration tools such as Qualtrics (Qualtrics, Provo, UT) and REDCap (Research Electronic Data Capture) has limited the contribution of even relatively recent scientific papers on factorial survey methodology in terms of design, development and procedure. Some researchers have developed their own programming methods to address the design limitations of standard software tools [40], a challenge for researchers who do not possess programming or software skills. Indeed, as noted throughout this paper, every stage of the development of the survey was time-, skill- and resource-intensive for the TechChild team, and this approach may therefore not be feasible for researchers with fewer resources and less support. Finally, this paper addresses many validity issues relevant to our current project but acknowledges that generalisability is limited to other factorial surveys with similar design features.


Developing a factorial survey for use in the paediatric critical care setting is novel. This paper explains the processes involved in developing a factorial survey for the online environment that is appropriate, relevant, and useful on a subject of growing importance in modern healthcare. The approach is potentially suitable for use in other healthcare settings where decisions are made about sensitive issues. More in-depth information regarding the design, development and validity of different factorial survey designs is needed to support researchers in determining the needs of their studies. The inclusion of more pretesting information in published studies improves the ethical standards and design quality of a survey, thereby protecting participants as well as increasing confidence and trust in the research process.

Availability of data and materials

The datasets generated and analysed during the current study are not publicly available because the broader TechChild research programme is still in progress. Where possible, these data will become available from the corresponding author on reasonable request.





Abbreviations

ILTV: Invasive long-term ventilation

NI-LTV: Non-invasive long-term ventilation

PICU: Paediatric intensive care unit


  1. Burns J, Sellers D, Meyer E, Lewis-Newby M, Truog R. Epidemiology of death in the PICU at five U.S. teaching hospitals. Crit Care Med. 2014;9(42):8.


  2. Matsumoto N, Hatachi T, Inata Y, Shimizu Y, M T. Long-term mortality and functional outcome after prolonged paediatric intensive care unit stay. Eur J Pediatr. 2019;178:6.


  3. Moynihan K, Alexander P, Schlapbach L, Millar J, Jacobe S, Ravindranathan H, et al. Epidemiology of childhood death in Australian and New Zealand intensive care units. Intensive Care Med. 2019;45:10.


  4. Pavone M, Verrillo E, Onofri A, Caggiano S, Chiarini Testa MB, Cutrera R. Characteristics and outcomes in children on long-term mechanical ventilation: the experience of a pediatric tertiary center in Rome. Ital J Pediatr. 2020;46(1):12.


  5. Pollack M, Banks R, Holubkov R, Meert K. Long-term outcome of PICU patients discharged with new, functional status morbidity. Pediatr Crit Care Med. 2021;22(1):27–39.


  6. Alexander D, Quirke M, Doyle C, Hill K, Masterson K, Brenner M. The meaning given to bioethics as a source of support by physicians who care for children who require long-term ventilation. Qual Health Res. 2022;32(6):916–28.

  7. Murphy Salem S, Graham R. Chronic illness in pediatric critical care. Front Pediatr. 2021;9:686206.

  8. Brenner M, Alexander D, Quirke M, Eustace-Cook J, Leroy P, Berry J, et al. A systematic concept analysis of ‘technology dependent’: challenging the terminology. Eur J Pediatr. 2021;180(1):1–12.

    Article  PubMed  Google Scholar 

  9. Walsh A, Furlong M, Mc Nally P, O'Reilly R, Javadpour S, Cox DW. Pediatric invasive long-term ventilation-a 10-year review. Pediatr Pulmonol. 2021;56(10):3410–6.

    Article  PubMed  Google Scholar 

  10. McDougall C, Adderley R, Wensley D, Seear M. Long-term ventilation in children: longitudinal trends and outcomes. Arch Dis Child. 2013;98(9):660–5.

    Article  PubMed  Google Scholar 

  11. Edwards J, Panitch H, Nelson J, Miller R, Morris M. Decisions for long-term ventilation for children: perspectives of family members. Ann Am Thorac Soc. 2020;17(1):72–80.

    Article  PubMed  PubMed Central  Google Scholar 

  12. Alexander D, Quirke M, Berry J, Eustace-Cook J, Leroy P, Masterson K, et al. Initiating technology dependence to sustain a child’s life: a systematic review of reasons. J Med Ethics. 2021;0:8.

    Google Scholar 

  13. Alexander D, Eustace-Cook J, Brenner M. Approaches to the initiation of life-sustaining technology in children: a scoping review of changes over time. J Child Health Care. 2021;25(4):509–22.

    Article  PubMed  Google Scholar 

  14. Brindley PG. Psychological burnout and the intensive care practitioner: a practical and candid review for those who care. J Intensive Care Soc. 2017;18(4):270–5.

    Article  PubMed  PubMed Central  Google Scholar 

  15. Jasso G. Factorial survey methods for studying beliefs and judgments. Sociol Methods Res. 2006;34(3):334–423.

    Article  Google Scholar 

  16. Sheringham J, Kuhn I, Burt J. The use of experimental vignette studies to identify drivers of variations in the delivery of health care: a scoping review. BMC Med Res Methodol. 2021;21(1):81.

    Article  PubMed  PubMed Central  Google Scholar 

  17. McDonald P. How factorial survey analysis improves our understanding of employer preferences. Swiss J Sociol. 2019;45(2):24.

    Article  Google Scholar 

  18. Aguinis H, Bradley K. Best practice recommendations for designing and implementing experimental vignette methodology studies. Organ Res Methods. 2014;17(4):351–71.

    Article  Google Scholar 

  19. Dülmer H. The factorial survey: design selection and its impact on reliability and internal validity. Sociol Methods Res. 2016;45(2):304–47.

    Article  Google Scholar 

  20. US Government Accountability Office. GAO learning Centre: pretesting survey participant manual. USA: US Government; 2013. Accessed 15 Jun 2021

    Google Scholar 

  21. Wolf C, Joye D, Smith TW, Fu Y. The SAGE handbook of survey methodology. London: SAGE Publications limited; 2016.

    Book  Google Scholar 

  22. Brenner M, Drennan J, Treacy MP, Fealy GM. An exploration of the practice of restricting a child's movement in hospital: a factorial survey. J Clin Nurs. 2015;24(9–10):1189–98.

    Article  PubMed  Google Scholar 

  23. Atzmüller C, Steiner PM. Experimental vignette studies in survey research. Methodology. 2010;6(3):128–38.

    Article  Google Scholar 

  24. Taylor J, Lauder W, Moy M, Corlett J. Practitioner assessments of ‘good enough’ parenting: factorial survey. J Clin Nurs. 2009;18(8):1180–9.

    Article  PubMed  Google Scholar 

  25. Artino A, Durning S, Sklar D. Guidelines for reporting survey-based research submitted to academic medicine. Acad Med. 2018;93(3):337–40.

    Article  PubMed  Google Scholar 

  26. US Census Bureau. In: Bureau UC, editor. Census Bureau Standard: Pretesting questionnaires and related materials for surveys and censuses; 2003. Accessed 3 Jun 2021.

    Google Scholar 

  27. McElhinney H, Taylor BJ, Sinclair M. Judgements of health and social care professionals on a child protection referral of an unborn baby: factorial survey. Child Abuse Negl. 2021;114:104978.

    Article  Google Scholar 

  28. Sattler S, Escande A, Racine E, Göritz AS. Public stigma toward people with drug addiction: a factorial survey. J Stud Alcohol Drugs. 2017;78(3):415–25.

    Article  PubMed  Google Scholar 

  29. Wallander L. 25 years of factorial surveys in sociology: a review. Soc Sci Res. 2009;38(3):505–20.

    Article  Google Scholar 

  30. Flanagan F, Healy F. Tracheostomy decision making: from placement to decannulation. Semin Fetal Neonatal Med. 2019;24(5):101037.

    Article  PubMed  Google Scholar 

  31. Gergin O, Adil EA, Kawai K, Watters K, Moritz E, Rahbar R. Indications of pediatric tracheostomy over the last 30 years: has anything changed? Int J Pediatr Otorhinolaryngol. 2016;87:144–7.

    Article  PubMed  Google Scholar 

  32. Sauer C, Auspurg K, Hinz T, Liebig S, Schupp J. Methods effects in Factorial surveys: an analysis of respondents’ comments, interviewers’ assessments, and response behavior. Berlin: German Socio-Economic Panel Study (SOEP); 2014. Contract No.: 629/2014

    Google Scholar 

  33. Tourangeau R. Cognitive science and survey methods: a cognitive perspective. In: Jabine T, Straf M, Tanur J, Tourangeau R, editors. Cognitive aspects of survey design: building a bridge between disciplines. Washington, DC: National Academies Press; 1984. p. 73–100.

    Google Scholar 

  34. Drennan J. Cognitive interviewing: verbal data in the design and pretesting of questionnaires. J Adv Nurs. 2003;42(1):57–63.

    Article  PubMed  Google Scholar 

  35. Boeije H, Willis G. The cognitive interviewing reporting framework (CIRF): towards the harmonization of cognitive testing reports. Methodology. 2013;9(3):87–95.

    Article  Google Scholar 

  36. Sauer C, Auspurg K, Hinz T, Liebig S. The application of factorial surveys in general population samples: the effects of respondent age and education on response times and response consistency. Surv Res Methods. 2011;5(3):14.

    Google Scholar 

  37. Drennan J. Using cognitive interviewing in health care research. In: Curtis E, Drennan J, editors. Quantitative Health Research: issues and methods. Maidenhead: McGraw-Hill Education; 2013. p. 277–92.

    Google Scholar 

  38. Abdul-Rahman Barakji F. Scales, Forced Choice. Allen, M. (Editor) The SAGE Encyclopedia of Communication Research Methods. 2017 [ebook]. Thousand Oaks.

  39. Willis G. Questionnaire Pretesting. In: Wolf C, Joye D, Smith T, Fu Y, editors. The SAGE Handbook of Survey Methodology. London: SAGE Publications Ltd; 2016. p. 359–81.

    Chapter  Google Scholar 

  40. Witry M, St Marie B, Viyyuri B, Windschitl P. Factors influencing judgments to consult prescription monitoring programs: a factorial survey experiment. Pain Manag Nurs. 2020;21(1):48–56.

    Article  PubMed  Google Scholar 

Download references


The authors would like to thank the survey participants and to acknowledge the Qualtrics customer support team for their assistance to the TechChild research team in setting up the factorial survey on the platform.


This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 803051).

Author information

Authors and Affiliations



MB conceptualised the TechChild research programme within which this study sits; she designed the survey, performed the literature-based clinical review and protocol of factors (upon which the rationale for inclusion of factors in the survey was based), face-checked the vignette population, analysed and interpreted data from each phase of testing, and was a major contributor to each development phase of the survey and to writing the manuscript. MQ designed the survey, coordinated data collection, set up the platform software, analysed and interpreted the feedback and data from each phase of the study, and was a major contributor to each development phase of the survey and to writing the manuscript. DA designed the survey, face-validity checked the vignettes, completed usability testing, and was a major contributor to each development phase of the survey and to writing the paper. JG contributed to the set-up of the software platform and the Python software, face-checked vignettes, and was a major contributor to writing the manuscript. JB, PL, LP and KM assessed, refined and/or revised the clinical factors, completed cognitive interviewing of the factorial survey, and contributed to writing the paper. KM also face-checked the final vignette population. CW contributed to the design, statistical approach, and database design and set-up. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Maria Brenner.

Ethics declarations

Ethics approval and informed consent to participate

The study received approval from the Trinity College Dublin Faculty of Health Sciences Ethics Committee (reference number 190202), the host institution at the time of data collection for the pilot study. The content of the final survey was also reviewed by the data protection office and deemed low risk. All participants were provided with a project summary, a participant information leaflet (PIL), and an informed consent form informing those interested in participating of their rights. As the study was anonymous at source, continuation with the survey and submission via Qualtrics was taken as informed consent. All methods were performed in accordance with the relevant guidelines and, where relevant, these are referenced throughout the text.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Quirke, M.B., Alexander, D., Masterson, K. et al. Development of a factorial survey for use in an international study examining clinicians’ likelihood to support the decision to initiate invasive long-term ventilation for a child (the TechChild study). BMC Med Res Methodol 22, 198 (2022).
