
Creating and administering video vignettes for a study examining the communication of diagnostic uncertainty: methodological insights to improve accessibility for researchers and participants

Abstract

Background

Studying clinician-patient communication can be challenging, particularly when research seeks to explore cause-and-effect relationships. Video vignettes – hypothetical yet realistic scenarios – offer advantages over traditional observational approaches by enabling standardisation and manipulation of a clinician-patient encounter for assessment by participants. While published guidelines outline stages to create valid video vignette studies, constructing high-quality vignettes which are accessible to a wide range of participants and feasible to produce within time and budget constraints remains challenging. Here, we outline our methods in creating valid video vignettes to study the communication of diagnostic uncertainty. We aim to provide practically useful recommendations for future researchers, and to prompt further reflection on accessibility issues in video vignette methodology.

Methods

We produced four video vignettes for use in an online study examining the communication of diagnostic uncertainty. We followed established guidelines for vignette production, with specific consideration of how these might be applied pragmatically to save time and resources. Scripts were pilot-tested with 15 laypeople, and videos with 14 laypeople; pilot-testing involved both quantitative and qualitative analysis.

Results and discussion

We demonstrate the usefulness of existing guidelines, while also determining that vignette production need not necessarily be expensive or time-consuming to be valid. Our vignettes were filmed using an iPhone camera, and featured a physician rather than a professional actor; nonetheless, pilot-testing found them to be internally and externally valid for experimental use. We thus propose that if care is taken in initial script development and if pragmatic choices are made regarding filming techniques and pilot-testing, researchers can produce valid vignettes within reasonable time and budget constraints. We also suggest that existing research fails to critically examine the potential benefits and harms of online video vignette methodology, and propose that further research should consider how it can be adapted to be inclusive of those from underserved backgrounds.

Conclusions

Researchers creating video vignette studies can adapt the video vignette development process to suit time and budget constraints, and to make best use of available technology. Online methods may be harnessed to increase participant accessibility, but future research should explore more inclusive vignette design.


Introduction

Various approaches exist for the study of doctor-patient communication [1, 2]. Observational studies of real doctor-patient interactions are not always feasible. Observing sensitive or emotive communication may be logistically and ethically challenging [3]. Moreover, observations do not allow for controlled manipulation of variables: they can explore correlations between communication behaviours and different outcome measures, but they rarely explore causation [2, 4].

Vignette studies provide a useful alternative. A vignette is a “short, carefully constructed description of a person, object, or situation, representing a systematic combination of characteristics” [5]. Hypothetical yet realistic scenarios are shown to participants, who are then invited to respond [6]. Responses reveal participants’ beliefs, attitudes, judgments, knowledge, or intended behaviours with respect to the vignette context [5]. In experimental vignette studies, controlled modification of key variables (while keeping the remaining content of the vignettes constant) enables researchers to infer causal relationships [7, 8]. Manipulating one aspect of communication in isolation allows for greater standardisation compared with observational studies of real consultations [7, 9].

‘Analogue patients’ (APs) are often used in healthcare communication studies. APs watch or read vignettes depicting an interaction with a healthcare professional, and imagine themselves in the position of the patient [10,11,12,13,14]. Vignette studies using APs can also be helpful in overcoming ceiling effects, which occur in studies using real patients who are unwilling to criticise their own doctors [9, 15]. This may be particularly important when emotive measures (e.g. trust) are being examined – social desirability effects might result in real patients feeling pressured to give their own doctors higher ratings, positively skewing results.

The vignettes themselves can be presented using a range of modalities: written text (a narrative or a script), cartoons, pictures or videos [5, 7]. Written vignettes have been used to study doctor-patient communication [10, 16], but have been criticised for potential low external validity [17]. Video vignettes may facilitate better participant engagement, and increasingly have been used to study health communication [3]. Creating valid video vignettes is not, however, a straightforward process: many diverse factors must be considered, from developing verbal manipulations in a script to determining which camera angles to use.

Until the last decade, there was little evidence-based guidance or practical instruction for researchers developing their own video vignettes; even recently published vignette studies often fail to clearly report various methodological stages [3, 6, 7]. To provide practical guidance for researchers, Hillen et al. published recommendations on how to create valid video vignettes [3]. They suggested five phases: deciding if video vignettes are appropriate; developing a script; developing valid manipulations; converting the script to video and finally administering the videos. Other papers describe in detail the development of video vignettes, in healthcare research [18,19,20,21,22] as well as other areas [23,24,25].

Although such publications have provided researchers with guidance, creating and implementing video vignettes can still be a “daunting task”, not least due to the cost and logistics involved in producing realistic videos [3]. Additionally, researchers planning video vignette studies must consider diversity and inclusion. Increasingly, online methods are recognised as helpful in delivering video vignette studies. If carefully designed, they present an opportunity to increase accessibility for participants from underrepresented groups; if not, they risk excluding participants and producing unrepresentative results. Ultimately, online video vignettes are a potentially valuable method for studying doctor-patient communication, but only if they are accessible both to researchers and to participants from a range of backgrounds.

Here, we outline our application of Hillen’s guidance to create video vignettes for a study examining the communication of diagnostic uncertainty. We detail our methodology with the intention of helping other researchers develop video vignettes. In reflecting upon our methodological choices, we provide insights into how video vignette studies can be more accessible: both to researchers (by demonstrating that high-quality video vignettes can be produced within resource-limited environments), and to participants (by suggesting ways in which the online delivery of vignettes can be adapted to be more inclusive to those from underserved groups).

Study context and aims

Background and wider research programme

Uncertainty is inherent to medicine – particularly in the diagnostic process [26,27,28] – yet issues surrounding the communication of diagnostic uncertainty to patients remain relatively underexplored. Although the General Medical Council (GMC) recommends that doctors explain to patients when they are uncertain about a diagnosis, it does not provide detail on how this might be done [29]. The study described here is part of a wider multidisciplinary programme of research, examining the practical, legal and ethical issues surrounding how diagnoses are formed, communicated and recorded.

As part of this research, we initially conducted two systematic reviews examining the communication of diagnostic uncertainty in primary care [30] and acute secondary care [31]. These demonstrated that research is limited by a lack of consensus on how diagnostic uncertainty is defined or measured, and found evidence for variation in how diagnostic uncertainty is communicated to patients in practice.

The vignette element of this research (Communication Of Diagnostic Uncertainty Study [CODUS]) involved two stages: 1) an initial study involving doctors using written vignettes (CODUS 1), and 2) a video vignette study involving patients (CODUS 2). Figure 1 provides an overview of these.

Fig. 1 Overview of CODUS 1 and CODUS 2 studies

The methods and results of CODUS 1 are detailed elsewhere [32]. In this first study, we found significant variation in the communication of diagnostic uncertainty: some doctors went into detail about the uncertainty surrounding the diagnosis, while others did not explicitly acknowledge uncertainty at all. Participants described various and often conflicting justifications for their behaviours. Notably, we found doctors had differing opinions on the impact that communicating diagnostic uncertainty might have on their patients: some felt it might have a negative impact on the therapeutic relationship or patient anxiety, while others felt the reverse.

Rationale behind the current video vignette study (CODUS 2)

In light of these results, a second study (CODUS 2) aimed to examine the effects on patients of communicating diagnostic uncertainty in two varying clinical scenarios (see Fig. 2 for study design). For this, we developed four video vignettes (two depicting a headache scenario, and two for a change in bowel habit scenario). For each scenario, we developed one vignette depicting high communicated diagnostic uncertainty, and one depicting low communicated diagnostic uncertainty. All other aspects of communication were kept constant between the different conditions – the study aimed to isolate the communication of diagnostic uncertainty and investigate its impact on patients.

Fig. 2 CODUS 2 study design

Methods and pilot-testing results

Here we outline our methods in developing the video vignettes, with emphasis on the steps we took in producing high-quality vignettes despite time and budget constraints. Figure 3 provides an overview of the actions we took in developing the video vignettes against the research phases proposed by Hillen et al. [3] The development of the scripts and the pilot-testing took place from February 2022 to October 2022; the main study data collection took place from December 2022 to March 2023.

Fig. 3 Phases of creating video vignettes and actions taken in our study (adapted from van Vliet et al. (2013)) [18]

Stage 1: deciding if using video vignettes is appropriate

In observational studies of real consultations, specific communication behaviours cannot be isolated and manipulated. In contrast, vignette methodology permits controlled manipulation of certain elements (such as the degree to which diagnostic uncertainty is communicated). As such, for our research questions, vignette methodology offers an advantage over observational studies of real consultations.

The use of APs avoids ethical issues which might be associated with using real patients [3]. There is a theoretical concern – with some limited supporting evidence – that communicating uncertainty might have a negative impact on patient trust, satisfaction and perception of doctor competence [10, 33,34,35]. Using APs allows manipulation of the communication of diagnostic uncertainty without harming real patients.

Vignette methodology has some limitations. Vignettes may never be identical to real consultations: although APs can effectively put themselves in the position of the patient in the vignettes [9, 36], this is unlikely to be completely equivalent to a real patient responding to the communication. Doctors often tailor their communication to the specific patient – in vignette studies there can be no such adaptation of communication content, nor can there be any discussion between the AP and the doctor.

Considering these strengths and limitations, we concluded that vignettes would be the optimal methodology for addressing our research questions.

Stages 2 and 3: developing a script with valid manipulations

We developed four scripts: high and low communicated diagnostic uncertainty for both the ‘change in bowel habit’ vignette (V1A and V1B), and for the ‘headache’ vignette (V2A and V2B) (see Table 1).

Table 1 Initial scripts

Doctor monologue vs. conversation

Existing studies vary in the use of doctor monologue [10] vs. scripted doctor-patient conversation [18, 19, 37]. Including both patient and doctor may create a more naturalistic vignette, but may negatively impact the external validity (as APs may find it more difficult to realistically imagine themselves in the role of the patient). Some studies indicate that this is more challenging if the vignette depicts a patient with characteristics noticeably different from the viewer’s own (e.g. a different age or gender) [3, 17]. This is theoretically grounded in the similarity-identification hypothesis: the notion that identification is increased by similarity between audience members and characters [38].

Although this idea is intuitively compelling, it currently lacks conclusive empirical support: “the empirical evidence regarding the similarity hypothesis is mixed and combined with the strong theoretical and intuitive appeal of this hypothesis a more definitive investigation is needed” [38]. In a systematic review on narratives used to convey health messages, a few studies reported a higher persuasiveness when characters in the narrative were similar to those watching it, but most found no differences [39]. A more recent study examining the effect of gender in vignette studies found no effect of gender congruence on self-reported video engagement [40].

This choice was discussed at two patient and public involvement (PPI) group meetings; attendees felt that using a doctor-only monologue might make it easier for participants to imagine themselves in the patient’s position.

Developing an introduction

Vignette studies often contain an introduction to familiarise participants and provide background information. It may be written, an audio voiceover, or a video sequence using an actor introducing themselves as the patient [41].

A study comparing the use of a written vs. an audiovisual introduction demonstrated greater cardiovascular response when watching the latter, but did not find any differences in self-reported engagement or in perceived realism [41]. This study also assessed the impact of showing participants a conversation between a doctor and a patient vs. a doctor monologue. Notably, participants who had an audiovisual introduction and who watched the doctor monologue version of the vignette had lower emotional engagement than those given a written introduction. The authors concluded that “researchers who do not want to show the patient at all during the video-vignette consultations should consider using a written introduction” [41].

We therefore developed written introductions. We used lay terms to increase comprehensibility, and included background information including symptoms experienced, patient location, and a clear timeline.

Standard script development

We developed a standard script for each scenario, using common elements from CODUS 1 transcripts to enhance ecological validity. Most responses followed a similar structure: initial introduction and reassessment of the clinical situation, explanation of the investigation results, discussion about the likely diagnosis with a suggested plan, and safety-netting. Two researchers – CC (a doctor working in internal medicine) and TH (an anthropologist) – read through the transcripts and noted common phrases, such as “your investigations are very reassuring”, to use verbatim in the CODUS 2 standard scripts. We scripted the whole vignette, leaving no space for ad-libbed portions (to ensure that the only difference between the conditions would be the communication of diagnostic uncertainty).

Developing manipulations

The standard scripts were then manipulated to create high vs low communicated uncertainty conditions.

Drawing directly from CODUS 1 transcripts, we created a table with quotations demonstrating high vs low communicated diagnostic uncertainty. Again, by using examples of what doctors had actually said, we aimed to make the scripts as ecologically valid as possible. As the communication of diagnostic uncertainty is not a simple construct, we chose to vary multiple verbal segments. Our ‘high uncertainty’ scripts accumulated verbal segments from multiple doctors. We discussed and iterated these scripts, aiming to balance realism with manipulation success: we aimed to make manipulations which were distinguishable, without descending into caricature [3, 42].

The resulting high communicated uncertainty scripts were longer because of increased discussion. Length discrepancies have been noted in other vignette studies [18, 19]. Following others, we decided not to correct for length differences because they reflect real consultations. Explicitly explaining diagnostic uncertainty would likely take more time than not disclosing it, so keeping the length discrepancy is more realistic [43].

To isolate the impact of delivering information content about diagnostic uncertainty to patients, we only manipulated verbal elements; non-verbal communication (e.g. eye contact, body position, expressions) was kept as similar as possible between the videos. We note that it is impossible to entirely separate non-verbal and verbal communication – for example, the tone in which information is imparted and the speaker’s body language will naturally be somewhat influenced by the information itself [3]. Therefore, we emulated Gehenne et al.’s approach, aiming to keep the non-verbal behaviour broadly similar between different vignettes, but congruent with the content of the consultation [19].

Refining scripts using expert opinion

Consulting relevant experts can help to establish realism at the script development phase [3]. We shared our scripts with a consultant gastroenterologist and a consultant neurologist. Small changes were made in response to their feedback – for example, the wording of the V1B script was altered to include terms the gastroenterology consultant commonly uses in explaining an IBS diagnosis. Both experts felt that the scripts were medically accurate and believable.

Pilot-testing scripts

Video vignette studies frequently use pilot-testing but vary in the extensiveness of the process [19, 44]. Of note, published guidelines do not stipulate how many pilot participants are required. Our approach balanced the usefulness of feedback with the potential logistical challenges of repeated or extensive pilot-testing.

We undertook pilot-testing with a convenience sample of 15 participants who met the inclusion/exclusion criteria for the main study (mean age 34.8 years, range 19–67 years). They were laypeople without medical expertise, and were given all the scripts to read in a randomised order. Script pilot-testing focused on internal validity: were the manipulations sufficiently distinct in their communication of diagnostic uncertainty?

The communication of diagnostic uncertainty is a complex construct [45], which lacks a universal definition or validated tools for its measurement [46]. We were thus unable to replicate other vignette studies which have used validated multi-item instruments to test internal validity, as no such instruments exist for this construct [19]. Instead, we emulated a study which examined the communication of prognostic uncertainty, which used a single item to test manipulation success [47]. We asked participants how explicit the discussion of uncertainty surrounding the diagnosis was, using an 11-point scale (from 0 “not at all” to 10 “very”). For both sets of scripts, the high uncertainty communication scripts were perceived as displaying significantly greater explicit communication of diagnostic uncertainty (p < 0.001) (Table 2).

Table 2 Script pilot-testing (two-tailed Wilcoxon Signed-Rank test, 0.05 significance level)
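For readers wishing to reproduce this style of manipulation check, the paired comparison can be run with a standard non-parametric test. The sketch below uses SciPy; the ratings shown are hypothetical illustrations, not our pilot data.

```python
from scipy.stats import wilcoxon

# Hypothetical 0-10 explicitness ratings from 15 pilot-testers.
# Paired design: each person rated both the high- and low-uncertainty script.
high = [8, 9, 7, 8, 10, 9, 8, 7, 9, 8, 9, 10, 8, 7, 9]
low = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2, 4, 3, 2, 1, 3]

# Two-tailed Wilcoxon Signed-Rank test, as reported in Table 2.
stat, p = wilcoxon(high, low, alternative="two-sided")
print(f"W = {stat}, p = {p:.5f}")
```

Because every hypothetical pair differs in the same direction, the test returns a very small p-value; with real pilot data the same call reports whether the manipulation was perceived as intended.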

Additionally, we garnered general feedback on the scripts, including comments on realism and understandability. Small changes were made in response, for example to reduce jargon.

Stage 4: Converting the scripted consultations to video

Filming video vignettes can be prohibitively expensive and time-consuming, particularly if using professional film crews or consulting script advisors. Moreover, these costs can rapidly increase if pilot-testing of videos produces unsatisfactory results, necessitating amendments and re-shooting. Our decisions regarding filming and pilot-testing the videos were shaped by these realities.

Filming the vignettes: camera angle and production

Various camera angles have been used in existing video vignette studies, for example showing only what the patient sees (facing the doctor), or alternating between the patient and the doctor [3]. Some studies have suggested that alternating camera angles are preferable to increase perceived realism and emotional engagement [19, 41].

Since the COVID-19 pandemic, there has been a significant increase in telemedicine, including the use of video consultations [48]. We chose a doctor-only camera angle (the doctor addressed the camera as though it were the patient) to replicate these increasingly common video consultations. As participants would likely complete the study online using their own devices, we hoped that the similarity to a video consultation would enable them to better imagine themselves in the position of a patient.

We filmed the vignettes using an iPhone, a tripod and a teleprompter application. Multiple takes of each vignette version were filmed. We chose not to pilot-test different takes due to time constraints – we instead selected the best take for each vignette condition and used these in pilot-testing (see below). These were chosen based on global assessments by the author team, taking into account the overall quality, naturalism of the delivery and consistency of affect and tone across the different vignette conditions.

Each vignette was filmed as one continuous take to avoid any transitions between clips which may have been distracting. This approach was more appropriate within our resource constraints as we did not contract a professional recording/editing team.

Use of actors

Some existing video vignette studies have used actors, while others have used real clinicians. As Hillen et al. discuss, there are potential advantages and disadvantages to both: real clinicians may be more naturalistic and adept with medical terminology, while actors may be more comfortable in front of the camera and better able to deliver a consistent style [3].

Based upon a previous vignette study in which pilot participants found an actor more realistic [18], we initially chose to employ a professional actor to play the doctor. The actor had previously worked with medical students in communication skills teaching. All four vignettes were filmed over the course of one day. Small amendments to make the script flow more naturally were made in response to feedback from the actor.

Unfortunately, early informal pilot-testing with a convenience sample of 10 laypeople suggested that the realism of the videos produced from this first day of filming was inadequate. These pilot-testers were shown the vignettes and asked for general feedback on their realism: they reported the actor to be unnatural in their tone and non-verbal communication. We subsequently reshot the vignettes using a medical doctor. We showed videos to the same sample of pilot-testers (without disclosing that this was a real doctor rather than an actor), and again asked them for general feedback on realism. We particularly asked them to compare the re-shot vignettes with the original versions. They universally preferred the videos with the doctor, describing them as more realistic. Subsequently we proceeded with these in the formal pilot-testing phase (see below).

As our experience shows, amending scripts and reshooting videos in response to feedback can be essential to vignette validity. We urge researchers to account for possible reshooting costs when planning studies, to avoid having to choose between compromised validity and excessive costs.

Pilot-testing videos

We pilot-tested videos with a convenience sample of 14 laypeople who met the proposed inclusion/exclusion criteria for the main study (mean age 34.9 years, range 19–69 years). To replicate the conditions of the main study, each participant was shown the introductory text before watching either videos V1A and V1B or V2A and V2B. Each pilot-tester thus watched two videos, randomised to watch the A or B video first.
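The order randomisation described above can be implemented as a simple balanced allocation. This is an illustrative sketch, not our study software; the participant identifiers and seed are arbitrary.

```python
import random


def allocate_orders(participant_ids, seed=0):
    """Assign each pilot-tester to watch video A or B first,
    keeping the two presentation orders balanced."""
    rng = random.Random(seed)  # fixed seed for a reproducible allocation
    ids = list(participant_ids)
    rng.shuffle(ids)
    # First half of the shuffled list watches A first; the rest watch B first.
    half = len(ids) // 2
    return {pid: ("A-first" if i < half else "B-first")
            for i, pid in enumerate(ids)}


orders = allocate_orders(range(1, 15))  # 14 video pilot-testers
```

With 14 participants this yields seven in each order, so any carryover effect of watching A before B is counterbalanced across the sample.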

To assess both internal and external validity, we used simple numerical scales which have been used in pilot-testing in previous healthcare communication video vignette studies [49].

  1. Internal validity testing (manipulation check): Participants rated on an 11-point scale how explicitly the doctor discussed any uncertainty surrounding the diagnosis, from 0 “not at all” to 10 “very explicitly”. For both sets of videos, the high uncertainty communication videos were perceived as displaying significantly greater explicit communication of diagnostic uncertainty (p < 0.05) (Table 3).

  2. External validity testing (realism): Participants rated on an 11-point scale how believable the doctor was, and how believable the content was, from “not at all” to “very”. They also rated on an 11-point scale to what extent they were able to imagine themselves in the position of the patient, ranging from “not at all” to “very strongly”. Participants rated the realism and their ability to imagine themselves as the patient highly (Table 4). Qualitative feedback was positive: participants praised the realism of the doctor character and the quality of the recordings.

Table 3 Video pilot-testing internal validity (two-tailed Wilcoxon Signed-Rank test, 0.05 significance level)
Table 4 Video pilot-testing external validity testing

As pilot-testing results were positive, no further changes were made to the videos.

Additional file 1: Appendix 1 displays the final scripts side-by-side to clearly demonstrate differences between them. Copies of the videos can be supplied on request.

Stage 5: Administering the videos

Choosing viewers

Previous vignette studies examining health communication have used healthy volunteers as analogue patients [14, 50,51,52,53,54], and evidence suggests acceptable external validity of using healthy volunteers as APs [36]. One study demonstrated no difference in engagement with vignettes between disease-naïve and actual patients after age-matching, suggesting no difference in ecological validity for studies using disease-naïve volunteers vs patients [55].

Accordingly, we chose to recruit ‘healthy’ volunteers (that is, members of the general public as opposed to people from specific patient groups) as APs. Moreover, recruiting sufficient participants to ensure that the study was adequately powered was important, and we felt this more achievable with healthy volunteers. We aimed to recruit a diverse sample of participants regarding age, gender and ethnicity: anyone aged 18 or over, currently living in the UK was eligible. Participants were not compensated for their participation. We excluded medical doctors/students because we felt their medical knowledge and experience might influence results and produce conclusions less generalisable to the wider population.

Online setting

We administered the videos using Thiscovery, an online platform developed by The Healthcare Improvement Studies (THIS) Institute. Participants watched the videos and completed questionnaires on their own electronic devices, without researcher supervision. To mitigate external distractions and influences, we instructed participants to watch the videos on their own, at a time when they were unlikely to be disturbed. Although participants could access the study on mobile phones, we advised them to use a larger screen if possible (ideally a computer/laptop or tablet) to make the experience more immersive.

Number of videos per viewer

Previous studies have varied in the number of vignettes watched per participant: from one [13, 14, 56, 57], to two [50, 58, 59], to four [47].

Our study was a randomised crossover trial, in which participants sequentially watched either V1A and V1B, or V2A and V2B (Fig. 2). As all our participants watched two videos, they were able to directly compare them and indicate a preference in communication style. Each participant acted as their own control, increasing power: crossover trials require lower sample sizes than parallel-group trials to meet the same criteria in terms of type I and type II error risks [60,61,62]. It is, however, important to acknowledge that within-subjects designs may artificially inflate effects: in real healthcare settings, patients are very unlikely to experience two such similar consultations in this way.
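The power advantage of the crossover design can be illustrated with a back-of-the-envelope normal-approximation calculation. This is a sketch, not our actual power calculation; the effect size and within-person correlation below are illustrative assumptions.

```python
from scipy.stats import norm


def n_parallel(d, alpha=0.05, power=0.8):
    """Participants per group for a two-arm parallel design
    (normal approximation for a standardised effect size d)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z / d) ** 2


def n_crossover(d, rho, alpha=0.05, power=0.8):
    """Total participants for a two-period crossover, where rho is the
    within-person correlation between the two condition ratings."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (1 - rho) * (z / d) ** 2


d = 0.5    # assumed standardised effect size (illustrative)
rho = 0.5  # assumed within-person correlation (illustrative)
print(f"parallel, per group: {n_parallel(d):.0f}")
print(f"crossover, total:    {n_crossover(d, rho):.0f}")
```

Under these assumptions the crossover design needs far fewer participants in total than the parallel design, because each participant's second rating is correlated with their first; the advantage grows as the within-person correlation increases.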

Carryover effects – when the effect of the first treatment continues until the next period and alters the effect of the next treatment – can be a problem for crossover trials [61]. In our study, there was a risk that watching the first video may prime participants to think differently about the second video. This is a problem if the carryover effect from watching video A first differs from the carryover effect from watching video B first. Here, watching the high communication video first might prime participants to focus on uncertainty more closely than watching the low communication video first.

To combat carryover effects, crossover studies often include ‘washout periods’. Some vignette studies have used these (e.g. a distraction task involving looking at an aquarium while listening to classical music) [47]. Despite limited evidence that such distraction tasks reduce carryover, we included one given the potential benefit and the lack of obvious harms (even if ineffective). We designed a task which did not involve mental arithmetic, so as not to bias against certain groups (e.g. those less educated or less confident with numbers). Between watching videos, participants were presented with three pairs of photographs, and were asked to choose their preferred i. place for a picnic, ii. place for a walk, and iii. place to enjoy the view.

Informing/debriefing participants

To avoid providing participants with cues which might change how they respond, participants were blinded to the study hypotheses. In the participant information sheet the aims of the study were described as examining the effects that different types of doctor communication might have on patients. We did not mention the concept of ‘diagnostic uncertainty’ prior to participation. After watching both videos and completing the questionnaires, we provided participants with a debriefing statement explaining the study aims and hypotheses.

Discussion

We have outlined our application of published guidelines to produce four video vignettes used to study the communication of diagnostic uncertainty. We wish to emphasise two aspects of our methodology: firstly, the creation of high-quality vignettes in a cost-effective manner and, secondly, the use of an online platform. We argue that these points are particularly important in making video vignette methodology more accessible – both to researchers, and to diverse patient populations. Below, we draw from our experience to make recommendations for researchers designing video vignette studies.

A pragmatic approach to producing videos

The creation of video vignettes can be time-consuming and costly, particularly if filming involves professional actors and/or film crews. These costs can quickly multiply if reshooting is required following pilot testing. Thus, although video vignettes are a useful tool in studying healthcare communication, they may appear inaccessible to researchers with limited resources.

This paper demonstrates that producing and using video vignettes need not necessarily be expensive and time-consuming. We filmed our videos with readily available and user-friendly technology, and we pursued a ‘video consultation’ style; this eliminated the need for extensive technical knowledge or equipment. We also carefully developed scripts and took a pragmatic approach to pilot-testing, saving both pilot testers’ and researchers’ time.

Approach to script development and pilot-testing

Existing vignette studies have taken a variety of approaches to pilot-testing: some have only pilot-tested scripts, others have tested scripts and videos, and some have not reported any formal pilot-testing [3]. There is wide variation in the number of pilot participants – ranging from ten in one study [44], to 116 laypeople and 46 cancer patients in another [19]. Notably, published guidelines on video vignette methodology do not state how extensive pilot-testing needs to be, and little research has specifically addressed this question.

Recruitment can be challenging in health communication research, and pilot-testing with large numbers may not be feasible. Furthermore, pilot-testing with more participants than is necessary raises ethical issues – we should avoid using participants’ time unless their involvement will positively impact the study.

We achieved good results with relatively small pilot numbers: 15 script pilot-testers and 14 video pilot-testers. The final videos were internally valid (the manipulations in uncertainty communication were perceived by participants as intended by the research team), and externally valid (they were rated as realistic and participants reported that they were able to adequately imagine themselves in the position of the patients).
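The internal-validity (manipulation) check described above can be illustrated with a simple sketch: did pilot-testers, on average, perceive more uncertainty communication in the ‘high’ video than in the ‘low’ video? The rating scale, threshold and numbers below are invented for illustration and are not our pilot data:

```python
from statistics import mean

def manipulation_passed(high_ratings, low_ratings, min_gap=1.0):
    """Return True if the 'high uncertainty communication' video was
    rated, on average, at least `min_gap` scale points higher on
    perceived uncertainty than the 'low' video (hypothetical criterion)."""
    return mean(high_ratings) - mean(low_ratings) >= min_gap

# Hypothetical 1-7 perceived-uncertainty ratings from pilot-testers:
high = [6, 5, 7, 6, 5]
low = [3, 2, 4, 3, 2]
print(manipulation_passed(high, low))
```

In practice such a check would be complemented by qualitative feedback, as in our pilot-testing, rather than relying on a numeric gap alone.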

These positive pilot-testing results may reflect the steps we took early in the design process to ensure validity. Other studies have taken various approaches to initial script development: some have based scripts on real consultations (recordings [11, 63] or direct real-time observations [22]), while others have drawn on experts’ input [19]. A strength of our approach was the use of data from CODUS 1 to develop the scripts. This – combined with expert input – helped us to create initial scripts that were reflective of real patient-doctor communication.

While the optimal number of pilot-testers is unclear, it is likely that the more steps taken early in the development of vignettes, the less extensive the pilot-testing needs to be. Our results suggest that if care is taken early in the development of the vignette scripts to maximise ecological validity, it is possible to produce high quality vignettes with relatively modest pilot-testing. This approach has dual cost and time-saving potential: first, researchers do not need to recruit excessive numbers of pilot testers; second, resulting videos are more likely to be valid, therefore reducing the likelihood of needing to re-shoot.

Filming considerations

Existing studies have used professional film crews, actors and script-writing experts, with some filming over several days with multiple cameras to produce the videos [22, 23, 64]. Such extensive processes, although commendable, may not be accessible for researchers with smaller budgets or less time. Importantly, our results show that they are not necessary to produce valid vignettes.

We filmed our vignettes using a single camera angle on an iPhone; we did not employ a professional camera crew. Our pilot-testing demonstrates that good quality videos can be produced with relatively minimal equipment and without input from filming experts. Moreover, APs are increasingly familiar with video consultations, so less formal videos shot with a single (face-on) camera angle may in fact be more realistic.

Although it may have been more naturalistic for the doctor to have memorised the text (as opposed to reading from the teleprompter), we had limited time and we wanted to ensure that there was no deviation from the scripts we had already pilot-tested. We were unable to find any published data comparing the realism of memorisation vs reading from a teleprompter, but this is something future researchers could consider if time permits reliable memorisation of the scripts.

Despite suggestions that a professional actor might be more realistic, our pilot-testing found otherwise. Costs may be saved by using volunteers (for example, from the research team or their contacts) in place of professional actors. Alternatively, researchers who decide to use an actor might find it beneficial to perform a screen test with preliminary pilot-testing before employing them for the entire series, to avoid unnecessary costs.

Accessibility and diversity considerations

Online vignette studies may increase accessibility for participants, but current research largely overlooks accessibility considerations. With increasing use of online methods [65], it is unsurprising that many studies have delivered video vignettes online [11, 20, 56, 66]. However, many vignette studies fail to report the study setting (the viewing location and its arrangement – for example, whether participants took part entirely online by watching vignettes on their own devices, or whether they attended an in-person viewing). Of those that do, few justify these choices [3], and even fewer critically reflect on their inclusion or accessibility implications [67]. This reflects the lack of research considering how online video vignette studies may be more or less accessible to participants from different groups.

Below we evaluate the limited existing research on accessibility and online methods, and urge researchers to consider how they might account for the needs of underrepresented groups (for example, those with hearing or visual impairment) in vignette studies. For instance, working with stakeholder groups to co-design different accessibility-related conditions, and using online platforms to recruit larger samples, may increase inclusivity [67].

Diversity considerations in online recruitment and vignette delivery

Although online technologies can widen participation in research, they can also create barriers to participation by favouring those with good digital literacy and access [65, 68, 69]. Similarly, although social media recruitment may be effective, consideration of representation is needed: evidence suggests that it might yield a less demographically diverse sample [70]. It is notable that most of the literature focuses on research methods like online surveys or interviews [69, 71]; critical conversations around the ethics of online vignette studies are needed.

Video vignettes and accessibility

When creating our videos we consulted the literature on accessibility. Our choices reflect attempts to balance varying participant needs with logistical constraints and the need to ensure validity of results.

For video vignette studies, alternative ways of consuming video content – such as subtitles or closed captions, or a written alternative which can be read in Braille or with a screen-reader – could make the research more inclusive [68]. One study in Belgium reported use of subtitles [20], and another recommended piloting with community stakeholders familiar with accessibility concerns [67]. There is, however, some concern that changing the modality of vignettes may change how participants interact with them, making interpretation of results challenging. For example, in our study, enabling a rewind/replay function and subtitles would have allowed those who are hard of hearing to engage better with the content. We did not do this because of concerns that watching the video multiple times (something which cannot be done in real consultations), or reading the words on screen, might alter the interpretation of the content of the consultation. Such differences would make it difficult to compare results between those who listened to the consultation and those who read captions.

Such concerns are not, however, evidence-based. A recent study investigating the impact of vignette modality (written, audio and video) showed no effect on engagement, recall, trust, satisfaction and anxiety [40]; this suggests that changing the modality of vignettes depending on participant need (e.g., adding subtitles) may not have any detrimental impact on research validity. In fact, data quality may actually be enhanced if researchers can design for inclusion by providing alternative forms of the same vignette without compromising validity, as wider groups may be included [68]. Nonetheless, further research is needed to critically explore these issues.

Overall, there is limited conclusive guidance on how to reduce barriers to participation in vignette studies for those from underserved groups – those from minority backgrounds or who have disabilities like dyslexia, visual/hearing impairment and learning disabilities [67]. Further research into how inclusive research design (specifically, exploration of whether alterations to the mode of the vignette influences outcomes) might influence results is needed.

Conclusion

Four video vignettes manipulating the communication of diagnostic uncertainty were created and validated for experimental use. Our reflections provide practically useful recommendations of how to make video vignettes more accessible both to researchers and to participants from a range of backgrounds.

We propose that it is possible to produce high quality vignettes without an overly complex or expensive development procedure, potentially increasing accessibility for researchers with budget/time constraints. We highlight the potential benefits of online methods in improving accessibility for participants but suggest a need to acknowledge and explore how to reduce the barriers to participation in online vignette studies for those from underserved groups.

Availability of data and materials

Copies of the scripts developed in this study can be found in Additional file 1: Appendix 1. Copies of the final video vignettes produced can be provided on request (please contact the corresponding author).

Notes

  1. The findings of the main study are being prepared for publication in a separate manuscript. This paper is intended to detail the development of the vignettes and the decisions around the study design, rather than report the results of the main study itself.

  2. In a separate study, we have also been exploring real consultations to gain a greater understanding of the whole diagnostic process; our observations in three UK hospitals corroborate our CODUS 1 findings that there is significant variation in the way diagnostic uncertainty is communicated.

  3. We did not formally assess or measure non-verbal behaviour as part of our pilot-testing, instead relying on global assessments by the author team (the authors reviewing different takes, and selecting those where they felt the non-verbal communication was similar across vignettes). As part of the review process, both anonymous reviewers suggested that we could have shown muted versions of the vignettes to naive pilot-testers, for example using a nonverbal immediacy scale and comparing it across conditions to ensure there were no significant differences in nonverbal behaviours. We thank them for this suggestion, and although this is not an approach we used in this study, we would encourage researchers to consider it in future vignette studies.

References

  1. Cox C, Fritz Z. What is in the toolkit (and what are the tools)? How to approach the study of doctor–patient communication. Postgrad Med J. 2023;99(1172):631–8.

  2. Tarbi EC, Blanch-Hartigan D, van Vliet LM, Gramling R, Tulsky JA, Sanders JJ. Toward a basic science of communication in serious illness. Patient Educ Couns. 2022;105(7):1963–9.

  3. Hillen MA, van Vliet LM, de Haes HC, et al. Developing and administering scripted video vignettes for experimental research of patient–provider communication. Patient Educ Couns. 2013;91(3):295–309.

  4. Hall JA. Some observations on provider–patient communication research. Patient Educ Couns. 2003;50(1):9–12.

  5. Atzmüller C, Steiner PM. Experimental Vignette Studies in Survey Research. Methodology. 2010;6(3):128–38.

  6. Tremblay D, Turcotte A, Touati N, et al. Development and use of research vignettes to collect qualitative data from healthcare professionals: A scoping review. BMJ Open. 2022;12(1):e057095.

  7. Sheringham J, Kuhn I, Burt J. The use of experimental vignette studies to identify drivers of variations in the delivery of health care: a scoping review. BMC Med Res Methodol. 2021;21(1):1–17.

  8. Cook TD, Campbell DT, Shadish W. Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin; 2002.

  9. Van Vliet LM, Van Der Wall E, Albada A, et al. The validity of using analogue patients in practitioner–patient communication research: systematic review and meta-analysis. J Gen Intern Med. 2012;27(11):1528–43.

  10. Bhise V, Meyer AN, Menon S, et al. Patient perspectives on how physicians communicate diagnostic uncertainty: an experimental vignette study. Int J Qual Health Care. 2018;30(1):2–8.

  11. Blanch-Hartigan D, van Eeden M, Verdam MG, et al. Effects of communication about uncertainty and oncologist gender on the physician-patient relationship. Patient Educ Couns. 2019;102(9):1613–20.

  12. Cousin G, Schmid Mast M, Jaunin-Stalder N. When physician-expressed uncertainty leads to patient dissatisfaction: A gender study. Med Educ. 2013;47:923–31. https://doi.org/10.1111/medu.12237.

  13. Cuevas AG, O’Brien K, Saha S. Can patient-centered communication reduce the effects of medical mistrust on patients’ decision making? Health Psychol. 2019;38(4):325.

  14. Medendorp N, Visser L, Hillen M, et al. How oncologists’ communication improves (analogue) patients’ recall of information. A randomized video-vignettes study. Patient Educ Couns. 2017;100(7):1338–44.

  15. Dowsett S, Saul J, Butow P, et al. Communication styles in the cancer consultation: preferences for a patient-centred approach. Psychooncology. 2000;9(2):147–56.

  16. Zwaanswijk M, Tates K, van Dulmen S, et al. Communicating with child patients in pediatric oncology consultations: a vignette study on child patients’, parents’, and survivors’ communication preferences. Psychooncology. 2011;20(3):269–77.

  17. Hughes R, Huby M. The application of vignettes in social and nursing research. J Adv Nurs. 2002;37(4):382–6.

  18. Van Vliet LM, Hillen MA, van der Wall E, et al. How to create and administer scripted video-vignettes in an experimental study on disclosure of a palliative breast cancer diagnosis. Patient Educ Couns. 2013;91(1):56–64.

  19. Gehenne L, Christophe V, Eveno C, et al. Creating scripted video-vignettes in an experimental study on two empathic processes in oncology: Reflections on our experience. Patient Educ Couns. 2021;104(3):654–62.

  20. Ceuterick M, Bracke P, Van Canegem T, et al. Assessing provider bias in general practitioners’ assessment and referral of depressive Patients with different migration backgrounds: Methodological insights on the use of a video-vignette study. Community Ment Health J. 2020;56(8):1457–72.

  21. Forth FA, Hammerle F, König J, et al. The COPE-Trial—Communicating prognosis to parents in the neonatal ICU: Optimistic vs. PEssimistic: study protocol for a randomized controlled crossover trial using two different scripted video vignettes to explore communication preferences of parents of preterm infants. Trials. 2021;22(1):1–21.

  22. Labrie N, Van Dulmen S, Kersten MJ, et al. Effective information provision about the side effects of treatment for malignant lymphoma: Protocol of a randomized controlled trial using video vignettes. JMIR RES Protoc. 2019;8(5): e12453.

  23. Chen N, Hsu CH, Pearce PL. Developing video vignettes for tourism research: protocol and quality indicators. J Travel Res. 2022;61(8):1828–47.

  24. Piwowar V, Barth VL, Ophardt D, et al. Evidence-based scripted videos on handling student misbehavior: The development and evaluation of video cases for teacher education. Prof Dev Educ. 2018;44(3):369–84.

  25. Liyanapathirana NS, Samkin G, Low M, et al. Developing written and video vignettes for ethical decision-making research. New Zealand Journal of Applied Business Research. 2016;14(2):29–41.

  26. Simpkin AL, Schwartzstein RM. Tolerating uncertainty—the next medical revolution? New Engl J Med. 2016;375(18):1713–5.

  27. Alam R, Cheraghi-Sohi S, Panagioti M, et al. Managing diagnostic uncertainty in primary care: A systematic critical review. BMC Fam Pract. 2017;18:79. https://doi.org/10.1186/s12875-017-0650-0.

  28. O’Riordan M, Dahinden A, Akturk Z. Dealing with uncertainty in general practice: an essential skill for the general practitioner. Qual Prim Care. 2011;19:175–81.

  29. General Medical Council. Guidance on professional standards and ethics for doctors: Decision making and consent. Manchester: General Medical Council; 2020.

  30. Cox C, Miller B, Kuhn I, et al. Diagnostic uncertainty in primary care: What is known about its communication, and what are the associated ethical issues? Fam Pract. 2021. https://doi.org/10.17863/CAM.65222.

  31. Hart J, Cox C, Kuhn I, et al. Communicating diagnostic uncertainty in the acute and emergency medical setting: A systematic review and ethical analysis of the empirical literature. Acute Med. 2021;20(3):204–18.

  32. Cox C, Hatfield T, Fritz Z. How and why do doctors communicate diagnostic uncertainty: an experimental vignette study. Health Communication [Submitted]. 2023.

  33. Johnson CG, Levenkron JC, Suchman AL, et al. Does physician uncertainty affect patient satisfaction? J Gen Intern Med. 1988;3:144–9. https://doi.org/10.1007/BF02596120.

  34. Ogden J, Fuks K, Gardner M, Johnson S, McLean M, Martin P, Shah R. Doctors expressions of uncertainty and patient confidence. Patient Educ Couns. 2002;48(2):171–6.

  35. Politi MC, Clark MA, Ombao H, et al. Communicating uncertainty can lead to less decision satisfaction: A necessary cost of involving patients in shared decision making? Health Expect. 2011;14:84–91. https://doi.org/10.1111/j.1369-7625.2010.00626.x.

  36. Blanch-Hartigan D, Hall JA, Krupat E, et al. Can naive viewers put themselves in the patients’ shoes?: reliability and validity of the analogue patient methodology. Med Care. 2013;51(3):e16–21.

  37. Hillen M, De Haes H, Stalpers L, et al. How can communication by oncologists enhance patients’ trust? An experimental study. Ann Oncol. 2014;25(4):896–901.

  38. Cohen J, Weimann-Saks D, Mazor-Tregerman M. Does character similarity increase identification and persuasion? Media Psychol. 2018;21(3):506–28.

  39. De Graaf A, Sanders J, Hoeken H. Characteristics of narrative interventions and health effects: A review of the content, form, and context of narratives in health-related narrative persuasion research. Rev Commun Res. 2016;4:88–131.

  40. Visser LN, van der Velden NC, Smets EM, van der Lelie S, Nieuwenbroek E, van Vliet LM, Hillen MA. Methodological choices in experimental research on medical communication using vignettes: The impact of gender congruence and vignette modality. Patient Educ Couns. 2022;105(6):1634–41.

  41. Visser LN, Bol N, Hillen MA, et al. Studying medical communication with video vignettes: a randomized study on how variations in video-vignette introduction format and camera focus influence analogue patients’ engagement. BMC Med Res Methodol. 2018;18(1):1–12.

  42. Heverly MA, Fitt DX, Newman FL. Constructing case vignettes for evaluating clinical judgment: an empirical model. Eval Program Plan. 1984;7(1):45–55.

  43. Hillen MA, de Haes HC, Verdam MG, et al. Does source of patient recruitment affect the impact of communication on trust? Patient Educ Couns. 2014;95(2):226–30.

  44. Swenson SL, Buell S, Zettler P, et al. Patient-centered communication. J Gen Intern Med. 2004;19(11):1069–79.

  45. Han PKJ. Conceptual, Methodological, and Ethical Problems in Communicating Uncertainty in Clinical Evidence. Med Care Res Rev. 2013;70:14–36. https://doi.org/10.1177/1077558712459361.

  46. Bhise V, Rajan SS, Sittig DF, et al. Defining and Measuring Diagnostic Uncertainty in Medicine: A Systematic Review. J Gen Intern Med. 2018;33:103–15. https://doi.org/10.1007/s11606-017-4164-1.

  47. Mori M, Fujimori M, van Vliet LM, et al. Explicit prognostic disclosure to Asian women with breast cancer: A randomized, scripted video-vignette study (J-SUPPORT1601). Cancer. 2019;125(19):3320–9.

  48. Ahmed S, Sanghvi K, Yeo D. Telemedicine takes centre stage during COVID-19 pandemic. BMJ Innovations. 2020;6(4):1–3.

  49. van Osch M, van Dulmen S, van Vliet L, et al. Specifying the effects of physician’s communication on patients’ outcomes: A randomised controlled trial. Patient Educ Couns. 2017;100(8):1482–9.

  50. Mori M, Fujimori M, Hamano J, et al. Which Physicians’ Behaviors on Death Pronouncement Affect Family-Perceived Physician Compassion? A Randomized, Scripted Video-Vignette Study. J Pain Symptom Manage. 2018;55(2):189–97.e4.

  51. Sep MS, van Osch M, van Vliet LM, et al. The power of clinicians’ affective communication: how reassurance about non-abandonment can reduce patients’ physiological arousal and increase information recall in bad news consultations. An experimental study using analogue patients. Patient Educ Couns. 2014;95(1):45–52.

  52. Nishioka M, Okuyama T, Uchida M, et al. What is the appropriate communication style for family members confronting difficult surrogate decision-making in palliative care?: A randomized video vignette study in medical staff with working experiences of clinical oncology. Jpn J Clin Oncol. 2019;49(1):48–56.

  53. van Osch M, Sep M, van Vliet LM, et al. Reducing patients’ anxiety and uncertainty, and improving recall in bad news consultations. Health Psychol. 2014;33(11):1382.

  54. Visser LN, Hillen MA, Verdam MG, et al. Assessing engagement while viewing video vignettes; validation of the Video Engagement Scale (VES). Patient Educ Couns. 2016;99(2):227–35.

  55. Visser LN, Tollenaar MS, Bosch JA, et al. Analogue patients’ self-reported engagement and psychophysiological arousal in a video-vignettes design: Patients versus disease-naïve individuals. Patient Educ Couns. 2016;99(10):1724–32.

  56. Hillen MA, de Haes HC, van Tienhoven G, et al. All eyes on the patient: the influence of oncologists’ nonverbal communication on breast cancer patients’ trust. Breast Cancer Res Treat. 2015;153(1):161–71.

  57. Zwingmann J, Baile WF, Schmier JW, et al. Effects of patient-centered communication on anxiety, negative affect, and trust in the physician in delivering a cancer diagnosis: A randomized, experimental study. Cancer. 2017;123(16):3167–75.

  58. McKinstry B. Do patients wish to be involved in decision making in the consultation? A cross sectional survey with video vignettes. BMJ. 2000;321(7265):867–71.

  59. Rhondali W, Perez-Cruz P, Hui D, et al. Patient–physician communication about code status preferences: A randomized controlled trial. Cancer. 2013;119(11):2067–73.

  60. Putt ME, Chinchilli VM. Nonparametric approaches to the analysis of crossover studies. Stat Sci. 2004;19(4):712–9.

  61. Lim C-Y, In J. Considerations for crossover design in clinical study. Korean J Anesthesiol. 2021;74(4):293.

  62. Wellek S, Blettner M. On the proper use of the crossover design in clinical trials: part 18 of a series on evaluation of scientific publications. Dtsch Arztebl Int. 2012;109(15):276.

  63. Burt J, Abel G, Elmore N, et al. Understanding negative feedback from South Asian patients: an experimental vignette study. BMJ Open. 2016;6(9):e011256.

  64. Saha S, Beach MC. The impact of patient-centered communication on patients’ decision making and evaluations of physicians: a randomized study using video vignettes. Patient Educ Couns. 2011;84(3):386–92.

  65. Bailey J, Mann S, Wayal S, Hunter R, Free C, Abraham C, Murray E. Sexual health promotion for young people delivered via digital media: a scoping review. Public Health Res. 2015;3(13):1–119.

  66. Medendorp NM, Hillen MA, Visser LN, et al. A randomized experimental study to test the effects of discussing uncertainty during cancer genetic counseling: different strategies, different outcomes? Eur J Hum Genet. 2021;29(5):789–99.

  67. McInroy LB, Beer OW. Adapting vignettes for internet-based research: eliciting realistic responses to the digital milieu. Int J Soc Res Methodol. 2022;25(3):335–47.

  68. Carter SM, Shih P, Williams J, et al. Conducting qualitative research online: Challenges and solutions. Patient-Patient-Centered Outcomes Res. 2021;14(6):711–8.

  69. Davies L, LeClair KL, Bagley P, et al. Face-to-face compared with online collected accounts of health and illness experiences: A scoping review. Qual Health Res. 2020;30(13):2092–102.

  70. Benedict C, Hahn AL, Diefenbach MA, et al. Recruitment via social media: advantages and potential biases. Digit Health. 2019;5:2055207619867223.

  71. Thunberg S, Arnell L. Pioneering the use of technologies in qualitative research–A research review of the use of digital interviews. Int J Soc Res Methodol. 2022;25(6):757–68.

Acknowledgements

We would like to thank the whole Thiscovery Team for their role in delivering the online vignettes. We would also like to thank Jenni Burt for her incredibly helpful comments on an earlier draft of the paper.

Funding

This research was funded in whole, or in part, by the Wellcome Trust 208213/Z/17/Z. For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. All authors are based in The Healthcare Improvement Studies Institute (THIS Institute), University of Cambridge. THIS Institute is supported by the Health Foundation, an independent charity committed to bringing about better health and healthcare for people in the UK. Caitriona Cox, lead researcher, is a National Institute for Health Research (NIHR) academic clinical fellow.

The views expressed in this article are those of the authors and not necessarily those of the NHS, the NIHR, the Health Foundation or the Wellcome Trust. The study received approval from the University of Cambridge Psychology Research Ethics Committee.

Author information

Contributions

CC and ZF conceptualised the paper and developed the methodologies. CC, ZF and TH were involved in the development of the vignettes, in their pilot-testing and in the analysis of the results. JM was responsible for the development and implementation of the online delivery of the vignettes. CC produced the first draft of the paper. TH, ZF and JM contributed to the review and editing of the paper. The final draft was contributed to by all authors, and is the result of their close collaboration.

Corresponding author

Correspondence to Caitríona Cox.

Ethics declarations

Ethics approval and consent to participate

All experimental protocols were reviewed and approved by the University of Cambridge Psychology Research Ethics Committee [reference: PRE.2022.065], and all methods were carried out in accordance with relevant guidelines and regulations. Pilot participants provided informed consent to participate.

Consent for publication

Not applicable – the manuscript does not include information or images that could lead to the identification of study participants.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Appendix 1.

Vignette scripts and introductory text.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Cox, C., Hatfield, T., Moxey, J. et al. Creating and administering video vignettes for a study examining the communication of diagnostic uncertainty: methodological insights to improve accessibility for researchers and participants. BMC Med Res Methodol 23, 296 (2023). https://doi.org/10.1186/s12874-023-02072-7
