
Table 3 Value related to the features/qualities of process evaluation knowledge

From: What do we want to get out of this? A critical interpretive synthesis of the value of process evaluations, with a practical planning framework

Process evaluation variables | Potential impacts on the value of process evaluation knowledge (sub-columns: Credibility, Accuracy, Completeness)

What is evaluated?

Arguments that process evaluations should be standardised to include set components and enable easier cross-study comparison [1, 5, 24, 57, 58]

Potential for incorrect conclusions to be drawn when insufficient or incorrect processes/participants are included [1, 31]

Not taking temporal dimensions into account risks inaccurate interpretation of findings [59]

Arguments that process evaluations which conceptualise context, mechanisms of action, and implementation as uni-dimensional, static, and linear may lead to inaccurate conclusions [40, 46, 59,60,61]

Potential for sampled participants/sites to all have had similar experiences, so that findings do not reflect the experiences of the whole sample [62]

Arguments for all process evaluations including certain ‘essential’ components [4, 24, 57]

Arguments against ‘tick-box’ approach to deciding on components [63]

Arguments for stakeholder involvement in selection of processes and participants [1, 44, 64]; potential to miss information through solely basing choices on researcher views [64, 65]

Importance of including outcome evaluation processes as well as intervention processes [12, 66,67,68]

Arguments that meaningful interpretation of findings requires analysis of all processes [69, 70]

Potential for researchers to only be directed to ‘showcase’ sites [33]

Problems using qualitative findings from small numbers of sites to make universal changes to interventions [10]

Arguments that process evaluation methods should take account of changes over time, including evolving context [63], intervention teething problems [38, 71], learning curve effects [55], and continuation of the intervention beyond the trial [4]

Debate between using logic models [1] and more complex theoretical models [63, 72,73,74] to theorise interventions

Advocacy for using a complex systems perspective to take into account the broader systems in which interventions take place [75]

Debates about how fidelity should be conceptualised [1, 76, 77]

Potential to gain richer understanding through aspects often not investigated, including impact by interaction and emergence [33] and relational dynamics [61]

How are processes evaluated?

Doubt from trialists over the credibility of qualitative findings [43], qualitative findings not being properly integrated [78], issues judging whether qualitative or quantitative data are more reliable [79]

Difficulties applying nuanced and diverse qualitative findings to interventions developed as uniform in an RCT [10]

Potential for rapid qualitative methods to preserve depth of analysis while also providing timely actionable findings [80]

Some qualitative approaches felt to have stronger explanatory capability than others, such as ethnography [34], and the use of theoretical explanatory frameworks [55]

Speculative links between factors identified qualitatively and outcomes may not be accurate [68]

Potential misleading findings from post-hoc analyses [81, 82]

Data collection tools being unable to capture different eventualities of what actually happened [41]

Ability of methods to uncover the unknown [11, 36, 46, 65, 67]

Qualitative process evaluations being designed to be subservient to trials [71], avoiding looking for problems [43], framing questions around researchers’ rather than participants’ concerns [83], being undertaken as separate studies [71]

Challenges of developing tools to capture all aspects of tailored flexible interventions [41]

Practical conduct


Bias introduced during participant recruitment, such as selective gatekeeping [26] and overrepresentation of engaged participants [32, 71, 84]

Intervention staff collecting data may introduce bias [1, 40, 48, 71, 82]

Routine practice data being incomplete or of poor quality [12, 40]

Low interrater reliability [85], inconsistency between researchers covering different sites [41]

Participants may be more willing to honestly express concerns if researchers are separate from the trial [38, 43, 72]

Potential for socially desirable narratives [67, 86], recall bias [48, 87], memory limitations [59], inattentive responding [59], and intentional false reporting [59]

Analysis of qualitative data with knowledge of outcomes may bias interpretation [13, 88] and result in data dredging [81]

Participants as co-evaluators can strengthen evaluation through gaining richer information [89]

Qualitative data analysis without knowledge of outcomes may prevent useful exploration of unexpected outcomes [10, 13]

Participants not returning accurate or timely data, in particular due to lack of motivation in control sites [41]

Dissemination


Limited discussion of quality, validity, and credibility in publications [9, 40, 63, 90]

Sometimes not published [1, 78, 91], with no justification of why some elements were published over others [71]

Process evaluation publications divorced from outcome publications [9, 12, 54, 63, 78, 92]; lengthy time periods between publications [12]