Our study reveals that most (but not all) authors of systematic reviews are using reference management software tools. Despite this, only a small minority report this usage in their published studies. Further, we found no relationship between the choice of reference management software and perceived functionality or usability. Nor was software choice associated with the degree to which usage was reported or not reported in published studies.
Currently, there are more than 28,000 scholarly journals in active publication [10]. Collectively, these journals publish approximately 1.8 million articles per year [10]. Given the trend, across disciplines, towards integrating evidence into daily practice, it is unsurprising that an increasing number of systematic reviews are being funded and produced. Previous research has demonstrated that authors typically search multiple electronic databases to identify studies relevant for systematic reviews [11–14]. Estimates of overlap between these databases have ranged between 8% and 60% [11–14]. Software programs that can facilitate the collection, organization and removal of duplicate studies from database searches are invaluable in the production of large-scale systematic reviews.
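To illustrate the kind of duplicate removal these programs perform, the following is a minimal sketch (not the algorithm of any particular reference manager): records exported from two overlapping databases are keyed on a normalized title plus publication year, so trivially different exports of the same study collapse to a single entry. The record fields and sample titles here are hypothetical.

```python
def normalize(title):
    """Lowercase the title and drop punctuation and whitespace so that
    trivially different exports of the same title compare equal."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records):
    """Keep the first record seen for each (normalized title, year) key."""
    seen = {}
    for rec in records:
        key = (normalize(rec["title"]), rec["year"])
        if key not in seen:
            seen[key] = rec
    return list(seen.values())

# Hypothetical records retrieved from two overlapping databases.
records = [
    {"title": "Aspirin for primary prevention", "year": 2009, "source": "MEDLINE"},
    {"title": "Aspirin for Primary Prevention.", "year": 2009, "source": "EMBASE"},
    {"title": "Statins in older adults", "year": 2010, "source": "MEDLINE"},
]
unique = deduplicate(records)  # the two aspirin records collapse to one
```

An exact-match scheme like this errs on the side of keeping records: any variation the normalization does not capture (a subtitle, an author-supplied abbreviation) leaves a duplicate in the set, which is why real reference managers supplement it with fuzzier matching and manual review.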
The most significant finding from this study was the disparity between stated and reported software use. Few (4.8%) authors who had used reference management software reported this information in their published manuscripts. As a point of contrast, 76.9% of the authors in our sample reported on the statistical software packages used to analyze study results. A recent study on the use of statistical software in health services research likewise found that a significant number of authors included statistical software information in their published manuscripts [15]. This discrepancy in reporting may be the result of an absence of guidelines that specifically address this aspect of reporting. Whereas the International Committee of Medical Journal Editors’ Uniform Requirements for Manuscripts Submitted to Biomedical Journals recommend that authors “specify the computer software” when describing the statistical methods they employed to analyze study data, no mention is made, there or elsewhere, of the importance of reporting on the use of reference management software [16].
Although the PRISMA checklist for the quality reporting of systematic reviews does not specifically suggest that authors report on reference management software usage, the use of such software can affect the number of unique studies identified and reported in a systematic review [17]. A failure to identify duplicate records captured in database searches may result in the over-reporting of redundant studies. Conversely, an over-reliance on reference management software to identify and remove duplicate records from a reference database may cause relevant studies to be overlooked. As such, reference management software can have a real impact on the quality of a completed review.
Whereas numerous studies have been published citing the benefits of reference management and other software programs in the production of systematic reviews, the degree to which these programs have been adopted by the research community is largely unknown [2, 3, 6, 9, 18–22]. In 2007, Senarath published a research letter on the reference management software experiences of 22 researchers in Sri Lanka [8]. Although informative in highlighting the functions and perceived advantages of this software, that work did not examine reporting; to our knowledge, ours is the first study to explore reference management software usage and reporting among authors of systematic reviews.
Our study has caveats and limitations. First, although we achieved a strong response rate of 70% with our email survey, a telephone survey might have yielded a higher response rate; furthermore, researchers who declined to participate may hold different perspectives on reference management software usage than those who participated. Second, as mentioned previously, we limited our study sample to clinical reviews published in ACP Journal Club. This sampling frame may have resulted in the identification of published systematic reviews with a better-than-average quality of reporting. Third, an analysis of systematic reviews produced in languages other than English, or in other disciplines such as education or social work, could yield different findings. Finally, with the exception of one open-ended question, our survey did not seek to identify perceived benefits and deficits of individual software programs.
An overriding caveat to our study, and to its implications for the importance of reporting reference management software usage, is that there is as yet no firm evidence that reference management software improves the quality of completed reviews. That said, the “value of a systematic review depends on what was done, what was found, and the clarity of reporting” [17]. As Moher and colleagues state, “the reporting and conduct of systematic reviews are, by nature, closely intertwined” [17]. As such, a detailed, transparent plan for the identification and selection of component studies, including the use or non-use of reference management software, is an important element of any review protocol. It can reflect the methodological expertise of the authors, and the transparency, reproducibility and, ultimately, the quality of the study that has been produced.