Case study: We Can Move
We draw on the evaluation of We Can Move (WCM) in Gloucestershire, England [30] to provide an example of how REM has been applied. We Can Move brought together multiple organisations and sectors from across the county (e.g., local government authorities, NHS trusts, clinical commissioning groups, voluntary and community sector organisations, communities, and the public) to increase opportunities for the local population to be physically active. It achieved this by adopting a systems approach. A core aim of WCM was to influence key organisations and leaders to mobilise assets across Gloucestershire in order to re-design and influence how the system works (i.e., that which impacts population physical activity). The programme was co-ordinated and facilitated by Active Gloucestershire [31]. The National Institute for Health Research Applied Research Collaboration West (NIHR ARC West) was commissioned to evaluate WCM between April 2019 and April 2021. Ethical approval for this evaluation was granted by the Faculty of Health Sciences, University of Bristol (Ref. 91145).
Ripple effects mapping and the wider evaluation
The lead researcher (JN) was embedded within Active Gloucestershire for 1 day per week, which enabled them to develop a thorough understanding of WCM and its intended aims [32]. It was through this immersive process and conversations with the programme team that REM was identified as a potentially useful and feasible method. REM was initially piloted in one specific project, before being applied to the wider WCM programme in December 2019. It is important to note that REM was one method situated within the larger evaluation of WCM.
Ripple effects mapping
Preparation
Ripple Effects Mapping is a participatory method and data are collected through stakeholder workshops. The list below provides an overview of the preparatory work that was required. Further details are available in Online Supplement I.
- Planning the content of the REM workshop
- Deciding on a preferred format (face-to-face, online, or blended)
- Planning the logistical aspects of the workshop
- Planning for additional data collection
- On-the-day preparation activities
Stakeholder recruitment
Chazdon et al. [25] recommend that direct (e.g., implementation staff and participants) and indirect (i.e., those influenced as a by-product of the intervention) stakeholders are recruited. However, given that the core implementation team alone involved almost 20 people, a pragmatic decision was taken to deliver the initial workshop solely for this group. The group included employees of Active Gloucestershire (the organisation facilitating WCM) and key collaborators (e.g., local government authority and clinical commissioning groups). Further workshops were run separately for specific projects, and wider stakeholders were invited to attend these. For example, a separate REM session was delivered with a group of community members who were integral to one specific project within WCM.
The initial ripple effects mapping workshop
This section is presented in two parts to describe what happened a) during the initial REM workshop and b) following the workshop.
During the initial workshop
The initial workshop was delivered over 2.5 h. The majority of time was allocated to mapping the WCM impacts. Two researchers were present, one facilitated the workshop and the other made observational notes. See Online Supplement I for further information.
Presentation (20 min): The background to REM, the rationale for its use, and an example of an REM output were presented to the group.
Outline the REM process (10 min): The facilitator presented an overview of the process for the REM workshop.
REM activity (2 h): Participants divided into smaller, self-selected groups. Groups typically included three to five people, all of whom were familiar with the particular project / area of work being discussed. All participants worked on more than one project. There was sufficient time within the two-hour activity for each sub-group to work on two to three REM outputs, with each output corresponding to the respective project / area of work. The facilitator guided the group through the two-hour activity.
The first 10–15 min were allotted to team-based discussions, underpinned by several of the appreciative inquiry principles suggested by Chazdon et al. [25]. The purpose of these discussions was to discover what participants considered to be successful within WCM, or their best experiences as part of WCM. They were encouraged to think about the relationships between stakeholders in the system, as well as their own projects and areas of work. They were then asked to think about what made these achievements or impacts possible, and whether these achievements came about in an expected or unexpected manner. Group members had these conversations in pairs. This activity was a gentle introduction to thinking about the ripple effects of their work, and the people and organisations that they may have had an impact upon. Data were not formally recorded at this point.
For the second stage of the workshop (mapping the impacts, 90 min approx.), each sub-group was provided with a large sheet of paper with a timeline drawn on it. The timeline is a notable addition to the approach of Chazdon et al. [25] as it allows the evaluation to understand the length of time required for certain impacts to arise (Online Supplement II). Here, timelines spanned from April 2018 (when WCM began) to December 2019 (when the first workshop was delivered). Participants were asked to reflect upon their work throughout that period, and to note key activities or actions against the timeline. They were then asked to think about the impacts that followed on from these activities or actions. Arrows were drawn between activities and impact(s) to illustrate the “ripple effect.” One participant acted as a scribe to visualise the REM output.
The third stage (reflecting on the impacts), which was delivered concurrently with stage two, involved participants further reflecting on their activities and impacts. They were asked to consider the following: 1) who has been impacted (e.g., community members, organisations, system leaders); 2) how many people have been impacted; 3) whether there have been any associated financial implications (e.g., further funding generated); 4) whether the impacts were intended or unintended; 5) what else may have contributed to these impacts; 6) whether their work links with wider WCM work or that of other organisations; and 7) whether any recurring trends can be observed across their REM output. Further detail was then added to the REM output. The facilitator moved between the groups to provide assistance where required and to ask probing questions about the REM outputs. Figure 2 provides an example REM output.
The fourth stage (most and least significant changes, 10 min approx.) involved participants identifying the most and least significant changes in their REM outputs. It was important to reflect on the least significant changes because these may denote actions and activities which required considerable time and resource but subsequently led to little meaningful impact. When time permitted, participants discussed why these activities led to negligible impacts.
The last stage of the workshop (group feedback and learning, 10 min approx.) was for the group to reflect upon REM as a process. The group were asked questions such as: a) what have you learnt about your work from the REM outputs? b) who else could be involved in these REM workshops in the future? and c) having reflected upon your work and its associated impact, do you believe you are focusing on the right things?
Following the initial workshop
The WCM core team created 12 REM outputs which covered various elements of the WCM programme. Immediately after the workshop, the researcher took a picture of all outputs to create a digital record. Each REM output was then systematically inputted into an online software package, Vensim (Fig. 3). The researcher contacted the WCM team if hand-written text was not legible to avoid misinterpretation. Outputs required between 30 and 60 min each to input into the Vensim software.
Follow-up ripple effects mapping workshops
Chazdon et al. [25] suggest that REM is used to evaluate an intervention or project once its implementation is complete, with several published examples available [26,27,28]. However, we considered this problematic for two reasons. First, we believed that it might lead to an overly positive representation of programme activities and impacts. Second, we wanted to understand how systems approaches adapt in response to changes in the system. In our view, completing the REM output in a single workshop at the end of implementation would limit the potential for these adaptive and emergent properties of the system to be captured.
During the follow-up workshops
We planned three follow-up REM workshops in April 2020, July 2020 and November 2020. These followed an abridged and simplified version of the initial REM workshop process, which was adopted for all follow-up workshops. As the groups became more familiar with the REM method, the time required to complete the follow-up workshops reduced. Each workshop lasted between 60 and 90 min depending on the volume of activity and impact that had occurred since the previous workshop.
Follow-up workshops were completed online using Microsoft Teams, as we were unable to meet face-to-face due to COVID-19 mitigation measures. Workshops were organised with the people who had been involved in creating each REM output (i.e., the sub-groups) rather than the whole WCM team (as in the initial workshop). These sub-groups were able to update multiple REM outputs within the allotted time of the follow-up sessions.
Preparing for the online workshops: The researcher familiarised themselves with the REM output (i.e., Vensim file) prior to the workshop and created a series of questions to ask the group about their REM output. The questions addressed three aims: 1) to seek further clarification on previous impacts and activities; 2) to update previous impacts and activities; and 3) to understand new impacts and activities that had not previously been discussed. If impacts and activities had ceased, then the researcher asked why this happened. These questions sought to avoid the REM outputs solely focusing on positive impacts and activities.
Approximately 2 weeks before the online workshop, the researcher contacted participants via email to explain what the workshop would consist of and to ask them to prepare for the workshop. They were also sent a copy of the electronic REM output to assist their preparation and to ensure that the output reflected the previous workshop discussion. This preparation was important in ensuring the online workshop was efficient.
During the online follow-up workshop: The researcher commenced the workshop by stating its aims and asking the group whether they consented to the workshop being recorded. Video-conferencing software (Microsoft Teams) allowed the video, as well as the audio, to be recorded; this option was preferable to audio-only because the researcher could see which element of the REM output the discussion related to. The researcher’s role in the workshop was two-fold. First, they guided the group through the series of questions related to their REM output and ensured that all members had an opportunity to contribute to the discussion. Probing questions were also used to elicit further information. Second, the researcher captured the responses of the group and added these to the REM output in Vensim (screen sharing was enabled to allow the group to see the REM output being updated). The researcher did not need to capture all information given that the session was being recorded. Throughout the online workshops, the researcher continuously fed back their interpretation of what was said to the group to ensure the accuracy of the REM output.
Following the workshops
The researcher watched the workshop recording and refined the REM output in Vensim, adding further information where required. On several occasions, the researcher recontacted participants to seek clarification on the information included in the REM output. The detail within these outputs developed over time, as can be seen in the example in the results (Fig. 5), which was created over five iterations.
Analysis of the ripple effects mapping outputs
Chazdon et al. [25] recommend that a deductive content analysis is applied, underpinned by the Community Capitals Framework. However, we opted for a largely inductive content analysis to explore the patterns within the REM data rather than coding them against a predetermined framework. Our justification was that an inductive approach would better capture the complexity of a systems approach. To do this, we used two sequential processes: 1) identification of “impact pathways”, and 2) a content analysis of the impact pathways. Analysis was undertaken after the final iterations of the REM outputs were completed.
The research team immersed themselves in the data to identify impact pathways, i.e., chains of actions, activities and impacts within the REM output. Figure 4 provides a simple example of two impact pathways. The impact pathways predominantly served to facilitate the content analysis: coding became easier once the pathways had been identified, because the REM data could be coded in the context of the relevant pathway(s). The identification of impact pathways was completed in Microsoft PowerPoint, and we used different coloured boxes to demarcate the various pathways within the REM outputs. A PDF of the REM output with the finalised impact pathways was created and imported into NVivo 12.
The data within the impact pathways were then systematically coded in NVivo 12 using content analysis [33]. All data were subject to coding, and we coded one impact pathway at a time. Where similar data were found in the output (e.g., a similar type of activity or impact), we applied a previous code – this enabled us to start building up a numerical, as well as descriptive, overview of the data. More than one code could be applied to a data extract. After coding three or four impact pathways, we began organising the codes into preliminary themes (i.e., clusters of codes which help to describe the phenomenon being observed). The data within subsequent impact pathways were coded against these preliminary themes; however, where data did not fit these themes, new codes were created.
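The way coding one pathway at a time builds a numerical as well as descriptive overview can be sketched as a simple tallying step. In the sketch below, the pathways, data extracts, and code labels are entirely invented for illustration; the actual analysis was conducted in NVivo 12, not in code.

```python
from collections import Counter

# Hypothetical coded impact pathways: each data extract may carry
# more than one code, mirroring the multi-coding described above.
coded_pathways = {
    "pathway_1": [
        ("trained volunteers", ["capacity building"]),
        ("secured small grant", ["funding generated", "partnership working"]),
    ],
    "pathway_2": [
        ("council adopted activity plan", ["policy change", "partnership working"]),
    ],
}

# Tally codes across pathways: reapplying a previous code to similar
# data is what builds the numerical overview alongside the descriptive one.
code_counts = Counter()
for extracts in coded_pathways.values():
    for _extract, codes in extracts:
        code_counts.update(codes)

print(code_counts["partnership working"])  # → 2
```

Because every extract is coded and similar data reuse existing codes, the resulting counts can directly answer quantitative questions such as how often a given type of impact recurred across the programme.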
Whilst much of the analysis was inductive, we had a set of specific questions that we aimed to answer. These included the estimated reach of the projects and the programme, the type of people and organisations involved in WCM, the length of time for impacts and activities to occur, and the financial implications of certain impacts and activities. As such, we created themes that related to these questions and coded data accordingly. This process allowed us to provide quantitative answers to these particular research questions / foci.