Over the last two decades, there has been a significant increase in the availability of Internet interventions for unhealthy alcohol use [1,2,3]. This work is important because unhealthy alcohol use causes significant harm to the individual and to society, and is one of the primary contributors to the modifiable burden of disease globally [4]. Further, the large majority of people who drink in an unhealthy fashion never access treatment, including the receipt of brief interventions in primary healthcare settings [5]. Fortunately, many people with unhealthy alcohol use appear interested in other options to address their alcohol consumption [6], stimulating research on alternate ways to provide access to care [3].
A challenge with evaluating Internet interventions for unhealthy alcohol use, as with interventions for many other modifiable health behaviours, is the need for the recruitment of large numbers of participants. This is because the interventions generally have fairly small effect sizes, thus requiring a large sample to have enough power to test for an impact of the intervention [1, 2]. Further, while randomized controlled trials (RCTs) are a powerful technique to provide evidence as to whether an intervention is effective, each individual trial, no matter its quality, is of limited use as a definitive indicator of whether an intervention works. Multiple trials, ideally from independent research groups, are required to build an adequate evidence base of the effectiveness of any intervention. Once such an evidence base is established, interventions also benefit from continued development to increase their impact, to make them more attractive to the participant, or to establish their efficacy in specialized populations (e.g., those with co-occurring disorders).
Such research is expensive and generally time consuming. It would be extremely valuable to the development of an evidence base of Internet interventions to have a readily available, high quality, and low cost source of participants for research trials. While online advertisements (e.g., on Facebook or through Google AdWords) can be successful, this success can vary over time and advertising costs can accumulate quickly. Other areas of the social sciences have started to rely heavily on crowdsourcing platforms, such as Mechanical Turk (MTurk), for recruiting people to participate in online surveys and in experimental trials that can be conducted online [7, 8]. Much has been written on the strengths and weaknesses of MTurk as a ready source of participants, but there is general agreement that, given the number of people registered as ‘workers’ on MTurk (upwards of 500,000), there is the potential to recruit large numbers of participants at relatively low cost (e.g., US$1 is generally regarded as good pay for completing a 10 min survey).
While there are many published manuscripts reporting on research employing MTurk workers as participants, there is only very limited research to date on whether MTurk might also be a good source of participants for Internet intervention research [9]. Previous studies have demonstrated that it is possible to recruit participants with clinically relevant symptoms through MTurk, such as participants who report drinking in an unhealthy fashion [10]. We sought to establish whether such participants might be willing to engage with Internet interventions and whether these participants can then be followed up over time. The eventual goal was to identify a source of large numbers of participants that could be employed during the development and preliminary evaluation stage of Internet interventions. To this purpose, we conducted an initial RCT employing MTurk workers who reported unhealthy alcohol use as participants and an online brief intervention that had demonstrated efficacy in seven previous randomized trials [11,12,13,14,15,16,17]. The assumption was that the intervention was an active one and that an absence of observed effect of the intervention with MTurk workers could be used as evidence that MTurk was a poor source of participants (and any evidence of impact of the Internet intervention among MTurk worker participants would indicate that this might be a good source of participants for future trials). Briefly, our initial study found that we could quickly register relevant participants for a trial (425 in 3 h), that follow-up rates at 3 months were good (85%), and that there was some limited evidence of impact of the Internet intervention on levels of alcohol consumption among participants [17]. There were also some indications of the limitations of MTurk as a source of participants, as the majority (62%) of participants asked to access the intervention did not do so.
Given the promise and limitations identified in the initial trial, we sought to systematically replicate the Internet intervention trial using MTurk participants. Each of the two additional RCTs incorporated methods to encourage participants to engage with the intervention materials. The experience with recruiting the large number of participants for these trials, and others, is reported elsewhere [9]. The current manuscript reports on the outcomes of these two RCTs and the lessons learned from this experience.