
Table 1 Evaluating generic implementation strategies

From: Pragmatic trials and implementation science: grounds for divorce?

Step one. Regard randomised trials as ‘case studies’ evaluating just one out of a kaleidoscope of different configurations in which a knowledge transfer scheme may be implemented. Deepen the case study by using multiple methods – examine not only the trial outcomes but also the underlying interpretative processes which may account for intended and unintended outcomes.

Step two. Change the unit of analysis from the programme to the programme theory. Complex interventions are enormously difficult to duplicate but programme theories are readily transportable; the same ideas recur over and again in the world of programme planning. Interrogating these generic programme theories – rather than specific interventions – opens the door to more generalisable findings.

Step three. All knowledge transfer initiatives have mixed fortunes. Change the emphasis in outcome analysis from the measurement of net effects to the inspection of heterogeneity of treatment effects (HTE), including instances of programme failure.
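
By way of illustration only (this sketch is not part of the original table), the contrast between a net effect and heterogeneity of treatment effects can be shown in a few lines of Python. The data are simulated, and the context labels and effect sizes are invented for the example.

```python
# Minimal sketch: net effect vs. heterogeneity of treatment effects (HTE).
# All data, context labels, and effect sizes below are simulated assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "context": rng.choice(["site_A", "site_B"], n),
})
# Simulate an outcome whose treatment effect differs by context:
# +0.5 in site_A, -0.2 in site_B (the programme 'fails' in B).
effect = np.where(df["context"] == "site_A", 0.5, -0.2)
df["outcome"] = effect * df["treated"] + rng.normal(0, 1, n)

# Net effect: the single averaged contrast a conventional trial reports.
ate = (df.loc[df.treated == 1, "outcome"].mean()
       - df.loc[df.treated == 0, "outcome"].mean())
print(f"Net (average) effect: {ate:.2f}")

# HTE: the same contrast inspected within each context, exposing
# where the programme works and where it fails.
for ctx, grp in df.groupby("context"):
    sub = (grp.loc[grp.treated == 1, "outcome"].mean()
           - grp.loc[grp.treated == 0, "outcome"].mean())
    print(f"Effect in {ctx}: {sub:.2f}")
```

The net effect averages a success in one context with a failure in the other, which is precisely the information that inspecting HTE recovers.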

Step four. Use ‘within-case’ analysis to raise and test hypotheses on why the implementation theory works only for some practitioners. Use ‘cross-case’ analysis to raise and test hypotheses on why the implementation theory works only in some institutional contexts. Repeat indefinitely.
