GOSCEs are a more cost- and resource-effective [20, 21] variation on the traditional objective structured clinical examination (OSCE), a standard component of undergraduate and graduate medical education curricula that is used in many healthcare fields, including midwifery [22], physical and occupational therapy [23], nursing [24], and research [22, 25]. In a GOSCE, learners move in groups through stations. At each station, one group member leads the interaction with the standardized patient (SP), a professional actor trained to play a patient, while the others observe [26, 27]. After the encounter, there is an opportunity for the SP's perspective, peer debriefing, and feedback from a faculty observer, which strengthens the GOSCE's value as a formative assessment tool [20, 21, 28, 29]. This sequence of concrete experience (with the SP) followed by reflective observation (with the group and faculty member) follows David Kolb's experiential learning theory [30].
Two experienced medical educators with expertise in performance-based training and assessment designed the quality improvement GOSCE experience with input from an IRB expert, a geriatrician, and a PhD researcher specializing in recruitment and retention in clinical trials, in consultation with leaders of community organizations for older adults. The scenarios for the simulated recruitment of older adults were: 1) a Black woman with hearing impairment; 2) a white woman and her family member, both present, with differing views on research participation; and 3) a Black man with concerns about participating given the history of racism in medical research. Cases were developed based on a literature review of facilitators of and barriers to clinical trial recruitment and a focus group of five research staff who regularly recruit older adults to clinical trials. Four SPs (one per case, plus a second SP for one case) were recruited from the pool of SPs employed by the medical school; they received 4 h of training (2 h on case portrayal, 2 h on checklist completion) from a physician with experience in performance-based assessment. Each checklist item was scored on a 3-point scale (not done, partly done, and well done), with a behavioral anchor describing each point: for example, "did not discuss risks"; "discussed risks BUT did not check for understanding or questions"; "discussed risks AND checked for understanding or questions." The checklist follows the standard model used in medical education at our site over the past 15 years and has good reliability and validity [31]. The competencies we aimed to assess were: building trust and rapport with the participant; assessing understanding and capacity to consent; and presenting information. Cases were shared with experts in clinical trial recruitment and modified until expert consensus on the cases was reached.
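To make the rating scheme concrete, the sketch below encodes the example item as a behaviorally anchored 3-point scale. This is our illustration, not the study's instrument; the 0/1/2 numeric coding and the competency label are assumptions made for demonstration.

```python
# Illustrative sketch only: one way to encode a behaviorally anchored
# 3-point checklist item. The anchors paraphrase the "risks" example in
# the text; the 0/1/2 coding is an assumption, not the study's convention.
RISK_ITEM = {
    "competency": "presenting information",
    "anchors": {
        0: "Not done: did not discuss risks",
        1: "Partly done: discussed risks BUT did not check for understanding or questions",
        2: "Well done: discussed risks AND checked for understanding or questions",
    },
}

def anchor_for(item: dict, rating: int) -> str:
    """Return the behavioral anchor describing a numeric SP rating."""
    return item["anchors"][rating]

print(anchor_for(RISK_ITEM, 1))  # prints the "partly done" anchor
```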
In this GOSCE, the convenience sample comprised 45 research staff at a large, urban hospital, including both staff who routinely recruit older adults to clinical trials and staff involved in recruitment training and strategy. The GOSCE was conducted five times, each time with 9 participants, at a total cost of 800 US dollars per GOSCE. Participants had been recruiters for less than two years. Conducted on a virtual conferencing platform, this 2-h GOSCE included three sections: 1) a 30-min orientation to the GOSCE, with discussion of challenges and facilitators in obtaining consent and a model for effective recruitment; 2) three 20-min GOSCE stations with SPs, each allowing 10 min for the interview and 10 min for immediate feedback; and 3) a 25-min group debrief to review experiences, highlight best practices, and help integrate new approaches into behavioral repertoires. After the initial orientation, research staff rotated in groups of three through the three stations, each with an SP whom they needed to recruit to the "trial." An observing faculty member provided immediate feedback after this simulated recruitment effort and led a debrief with all three members of the group. During this debrief, the SP completed an 11-item communication checklist evaluating the learner on a series of behaviorally anchored communication skills, then joined the debrief to share additional feedback based on the checklist. Each learner thus had opportunities for both active and observational learning. After these encounters, the groups came back together with all the faculty observers for a group discussion and debrief. A handout on best practices, developed by the team of medical educators and the researcher with experience in clinical recruitment, was shared for participants to use in their future work.
The program was evaluated by research staff through a 36-question survey including retrospective pre-post items (i.e., learners completed the survey after the GOSCE, rating their skills as they were before and after the GOSCE) that used 3- and 4-point Likert scales (e.g., not at all skilled to very skilled; low educational value to high educational value). Free-text questions about experience with the cases and lessons learned from the training were also included. Questions addressed: 1) self-assessed change in skill after the workshop; 2) new discoveries in recruitment; and 3) overall educational value. The survey was adapted from a standard educational evaluation survey used in simulation training at our institution to elicit acceptability, relevance, and change in knowledge and attitudes. Program effectiveness was evaluated at Level 1 (Reaction) and Level 2 (Learning) of Kirkpatrick's training evaluation model [32].
The survey was distributed through an anonymous link after the GOSCE. None of the faculty members involved in the cases and feedback sessions supervised any of the GOSCE participants. Descriptive statistics were computed for survey items, and a nonparametric sign test for median differences was applied to changes in self-reported skills. Free-text responses were assessed using qualitative thematic analysis.
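For readers unfamiliar with the sign test on retrospective pre-post Likert items, the sketch below shows one standard way to compute it. The data are hypothetical and the SciPy call is our choice of implementation, not the study's actual analysis code.

```python
# Illustrative sketch of an exact nonparametric sign test on retrospective
# pre-post Likert ratings. Data are hypothetical; scipy.stats.binomtest is
# one common implementation choice (SciPy >= 1.7).
from scipy.stats import binomtest

# Hypothetical self-rated skill on a 4-point scale
# (1 = not at all skilled, 4 = very skilled).
pre  = [2, 2, 3, 1, 2, 3, 2, 2, 1, 3]
post = [3, 3, 3, 2, 4, 3, 3, 2, 2, 4]

diffs = [b - a for a, b in zip(pre, post)]
n_pos = sum(d > 0 for d in diffs)  # rated themselves higher after the GOSCE
n_neg = sum(d < 0 for d in diffs)  # rated themselves lower after the GOSCE
n = n_pos + n_neg                  # the sign test drops ties (d == 0)

# Under H0 (no median change), positive and negative signs are equally
# likely, so the count of positive signs is Binomial(n, 0.5).
result = binomtest(n_pos, n, p=0.5, alternative="two-sided")
print(f"positives={n_pos}, negatives={n_neg}, ties={len(diffs) - n}, "
      f"p={result.pvalue:.4f}")
```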