Within the context of the ever-growing expectation that medical research become translational, that is, that it inform ‘real’ practice in ‘real’ medical settings (clinics, hospitals and other facilities), the search for ‘practices’ that ‘work’ is a serious one. Of course, this quest can be hampered by the inexorable pursuit of demonstrable clinical effects of interventions founded upon gold standard methods. This invariably means that clinical effects demonstrated through the rigours of randomised controlled trials (RCTs) maintain a mesmerising grip on the definitions and descriptors of what counts as ‘high quality’ research and on its often-lofty claims to truth. And yet when it comes to, for example, nutritional RCTs, and reviews of RCTs, investigating malnutrition in acutely unwell, multi-morbid elderly inpatients (in this case, those with hip fracture), the relevance and applicability of findings to patients and those who treat them are often found wanting. Poor understanding and application of malnutrition screening and diagnostic measures by those with limited or misdirected nutritional knowledge have led to the under-diagnosis or misdiagnosis of malnutrition in highly cited RCTs and in systematic reviews of these [1]. In the presence of these confounders, nutrition intervention studies in hip fracture routinely rely on highly constrained research environments in order to demonstrate a clinical effect of highly controlled, often one-dimensional interventions, despite the recognised need for these patients to receive ‘comprehensive’ (that is, multidisciplinary and multimodal) care [2]. Further, the predisposition of these patients to multiple comorbidities and complications dictates the exclusion of those perhaps most likely to benefit from interventions, in order to demonstrate cause and effect in the purest sense. A review of RCTs in any select elderly inpatient population prone to comorbidity, regardless of disease or intervention type, will invariably demonstrate substantial selection (or recruitment) bias; and, at least in the case of nutritional studies, the recruitment of younger, more homogeneous, and less morbid patients may mask the effect of nutritional interventions on outcomes. It is therefore not surprising that RCTs, and reviews of these, have failed to define consistent and adequate evidence to guide bedside nutritional care in patients with acute hip fracture [1].
This raises a further significant concern for patient care. RCTs clearly provide the bulk of the ‘evidence’ for evidence-based medicine or EBM (we distinguish this term from evidence-based practice, or EBP, later). Williams and Garner [3] suggest that EBM has become a stick with which clinicians are beaten by policy makers (they also implicate academics here). Hotopf [4] is more withering in his criticism, arguing that RCTs are more often than not designed to address either the wrong question or questions so narrow that the solutions provided are of minimal help to the clinician, thereby limiting the capacity for quality care and appropriate treatment. Though Hotopf is not in favour of abandoning RCTs altogether, he argues that, to be of practical use, they need to be expanded along pragmatic lines [4]. We discuss the value of pragmatic trials and their relationship with action research (AR) later, but are quick to note that both RCTs and pragmatically focused studies have a rightful place within the spectrum of research. For now, however, suffice it to say that the time appears right for clinical interventions to be informed by a broader church. To this end, this paper presents a case for AR as both intervention and research method. Not only is this consistent with the purer purposes of AR (social change and the improvement of practice), it also offers a way for medical researchers and practitioners to be more closely aligned in the pursuit of improved quality of health care. As part of the case we advance here, we provide evidence of such an approach undertaken in a hospital-based clinical environment specialising in the treatment of hip fracture.
Some history and the central tenets of AR
Later we include a detailed account of how AR was used in this study. For now, however, a potted history of AR seems appropriate, especially as that history connects with broader social concerns as well as with the clinical realities of modern health care. Detailed accounts of the structural procedures, complete with diagrammatic representations of the cyclical nature of the AR process, are plentiful [5–7]. This paper is more concerned with how AR can be regarded as a legitimate clinical intervention and how it may be usefully applied in the broadest translational sense.
We begin by tracing AR from its early beginnings and original intentions to its more recent manifestations in health-based research. This brief chronicle locates the roots of AR in a democratic form of social practice and alludes to its shift towards a technical process of improving professional practice with little or no social intent. We acknowledge that this type of evaluative work serves a purpose, though it is generally regarded as a deviation from the social mission of AR. At a time when public service (including public health) is framed by a neoliberal discourse, with its emphasis on the individual, competition, and market-driven policy, we are concerned that AR could be narrowly used as a simple analytical or even benignly reflective tool. We would argue, however, that in clinical and medical settings this is not only inadequate; it would squander the opportunity AR offers to observe, improve, and evaluate routine clinical practice as it happens, to bring about change through its cyclical protocol, and to engage both patients and those involved in their care in this process.
Though some trace the origins of AR back to John Dewey’s work in the 1930s [8], this tends to focus more on what Schön (1983) might call reflection-on-action [9]. This epistemological lineage is not insignificant; however, Kurt Lewin is generally credited with coining the term action research [10]. He came to prominence in the 1940s for work which attempted to bring together “the experimental approach of social science with programs of social action in response to major social issues of the day” (p.29) [11]. In particular, Lewin saw AR as having great potential to improve the position and wellbeing of minority groups in post-war America, and his work on ‘race relations’ (a term that now seems somewhat anachronistic) across the country is a testament to his commitment to social betterment [10]. His skill in setting up projects that involved deep levels of consultation through the now well-recognised steps and spiralled progressions of AR was a major achievement of the time and is widely acknowledged today. He was well recognised for work with groups in which, through a ‘planning – fact-finding – execution’ sequence, behaviour changes could be brought about to the group’s advantage, whether in food consumption practices or in group solidarity and factory workers’ rights in an increasingly industrialised and urbanised America. The connection of this history to health care may not be immediately obvious. However, malnourished elderly hip fracture inpatients may be considered stranded skeletons hiding in hospital closets, quietly waiting to be rescued [12]. In other words, this group tends to be somewhat marginalised within the context of health care, and to that end the applicability of Lewin’s work to this field appears justified.
There is no doubt that this democratised approach to social inquiry, as advocated by Lewin and others at the time, challenged the scientific orthodoxy of the day, and in many respects, despite its broader acceptance in contemporary research circles, it probably still does.
AR in medicine, health and clinical practice
It is not difficult to see how AR might apply to patient care in clinical settings. As Vallenga et al. suggest, AR is “suitable because it is a form of research enabling practitioners and consumers to participate in the development of knowledge which they themselves will subsequently use or will be used in their care” (p.81) [8]. In other words, the conditions of and for clinical care are best observed, developed, implemented, refined and evaluated through the collaborative and cyclical reflective structures shared between practitioner teams and the patient. What is especially important about Vallenga et al.’s position here is that it frames the idea of ‘practice’ in a particular way. Practice, as it is being used here, is very much about the co-construction of knowledge that can be recruited to effect changes in care. In other words, it is ‘care’ that is the practice, not the technical components of the procedures that bring it about. This approach clearly brings the patient into focus and further highlights the need for those who care for them to work together dynamically across systems, processes, and interventions to procure effective multidisciplinary and multimodal care. So, as McTaggart describes, practice is not some automated procedure, repetitive task, or the implementation of a technique (or even a policy for that matter) [13]. Hence, when we talk about AR as a way to improve practice, we caution against being lulled into a liberal discourse of autonomy, which tends to undermine any sense of responsibility. Habermas offers a term that is perhaps more fitting: ‘mature autonomy’ [14]. As McTaggart points out, this conjoining of autonomy and responsibility fits more closely with the idea of practice as a social endeavour, and in the case of this study that responsibility was both to and with patients as a way of improving and reporting quality care within the constraints of routine clinical practice [13]. As McIntosh suggests, this approach to the improvement of practice requires a shift from evidence-based medicine, which invariably establishes a series of routines and interventions based on clinical trials, to evidence-based practice, which is better informed by pragmatic trials [15]. At the same time, McIntyre would argue that this distinction between evidence-based medicine and evidence-based practice is an example of how the idea of practice should be separated from the idea of institutions [16]. In the case of this study we could argue that medicine (or indeed health) might be perceived as an institution (given its power, this is not too great a stretch of the imagination). Care, on the other hand, is the practice that goes on before, within, and beyond the context of that institution (be it a hospital clinic or a health centre). We explore this relationship later in the paper.
A vehicle for delivering relevant, applicable, and measurable healthcare improvements
As Whitehead, Taket & Smith suggest, AR is increasingly regarded not only as a legitimate approach to research in health and medical settings, but also as a particularly effective process for supporting organisational change [17]. Within clinical settings, changes to organisational practice are the key to improved health care. Indeed, as we have already suggested, clinical or randomised trials can be somewhat insensitive to the nuances of health service delivery, where clinical significance matters far more than statistical significance: the outcomes are patient related and change is demonstrable, however small or large, and judged predominantly on the clinician’s knowledge of the patient [18]. However, as Whitehead, Taket & Smith continue, AR has been slow to catch on in the context of health research, probably for the reasons of perceived weakness to which we alluded earlier [17]. This poses a serious problem since, as Meyer suggests, “barriers to the uptake of the findings of traditional quantitative biomedical research in clinical practice are increasingly being recognised” (p. 178) [19]. As Meyer says, the attraction of AR is that it represents a form of inquiry whereby researchers work with and for people, rather than conducting research on them; in this sense it is a form of democratised research entirely consistent with Lewin’s original premise. In other words, it is the style of the research rather than its methods that is different. This also dispels the myth that AR is confined to, or synonymous with, qualitative research. In this project a number of data-gathering techniques were used throughout the AR process, including various numerical measures, all of which contributed to a broad canvas of patient care, the changes in practice, and the consequences of those changes.
Connections to pragmatic trials
Discussions of pragmatism invariably start with the question “will a proposed intervention work in (so-called) real life?” Hence, where explanatory trials measure symptoms or markers, pragmatic trials are concerned with a range of outcomes centred on the patient’s wellbeing. In other words, pragmatic trials, drawing on a range of data sources, aim to work with patients rather than ‘on them’. As we have already argued, this is entirely consistent with AR. As Patsopoulos [20] points out, rather than distinguishing between explanatory and pragmatic trials, it might be better to see them as points on a continuum; both indispensable but different in what they reveal. Hence the naturalistic (a word sometimes used synonymously with pragmatic, though we are cautious here) setting of a clinic provides opportunities to see what works in practice [20].
It is interesting to note that in the 2013 Australian Government Strategic Review of Health and Medical Research, pragmatic trials (much less AR!) barely warrant a mention [21]. The document acknowledges that a broader range of research activities should be encompassed and that practitioner research should be encouraged. However, these are hardly ringing endorsements of alternative paradigms, and they underline the somewhat tepid enthusiasm for research approaches that sit outside RCT convention. Baker [22] acknowledges that part of the problem is that there is “no immutable formula for successful implementation of innovations” (p. i30). Yet he makes a case that, for example, case study methods are under-utilised as a way of bringing about change in care practices, and that the knowledge created through evidence-based practice solutions has to be built upon the way such solutions can be implemented. In essence he is arguing for research and practice to be far more closely aligned; indeed we might even argue that they are one and the same thing. This is entirely consistent with an AR approach to change.
However, a shift in research paradigms may be emerging. Traditional models have defined RCTs as the gold standard design for evidence generation and relegated pragmatic, qualitative, prognostic or observational studies to the role of pawns in the hierarchy of evidence [23, 24]. Clinicians, funding bodies, academics and publishers may be beginning to detect chinks in the armour of RCTs and, in response to the need to justify research agendas economically and to support evidence-based practice, are starting to recognise the need also to consider studies focused on relevance and applicability rather than overwhelmingly prioritising analyses of cause and effect under highly controlled, artificial conditions [25]. This changing climate is promoting, in line with Lewin’s [10] theory, the unfreezing of accepted norms and an acknowledgement that a variety of study designs should be valued according to their focus on the patient, the question(s) under investigation, and the applicability and relevance of the research to routine clinical practice, rather than continuing to promote highly reductionist or post-positivist research as the standard by which all others are judged [26]. One such illustration is the extension to the CONSORT guidelines for reporting pragmatic clinical trials, which provides a clear mandate for clinicians and researchers to justify pragmatically focused trials as both meaningful and relevant to patients, clinicians, and the broader healthcare community [27].
An AR study: nutritional care in hip fracture
As an example of how pragmatically focused AR can be used in clinical settings, not only to change practice but to develop an evidence base for change, we present here a study conducted from November 2010 to September 2012. The setting is a clinical environment undergoing change to develop best practice whilst incorporating generally accepted conventions of care based on scientific trials. We allude to the results and outcomes only in general terms, as these have been published and presented elsewhere [28–31]. Of equal importance, in our view, are the processes by which care practices were developed and adjusted on the basis of the AR cycles.