This article has Open Peer Review reports available.
Identifying nurse staffing research in Medline: development and testing of empirically derived search strategies with the PubMed interface
© Simon et al; licensee BioMed Central Ltd. 2010
Received: 3 March 2010
Accepted: 23 August 2010
Published: 23 August 2010
The identification of health services research in databases such as PubMed/Medline is a cumbersome task. This task becomes even more difficult if the field of interest involves the use of diverse methods and data sources, as is the case with nurse staffing research. This type of research investigates the association between nurse staffing parameters and nursing and patient outcomes. A comprehensively developed search strategy may help identify nurse staffing research in PubMed/Medline.
A set of relevant references in PubMed/Medline was identified by means of three systematic reviews. This development set was used to detect candidate free-text and MeSH terms. The frequency of these terms was compared to a random sample from PubMed/Medline in order to identify terms specific to nurse staffing research, which were then used to develop a sensitive, precise and balanced search strategy. To determine their precision, the newly developed search strategies were tested against a) the pool of relevant references extracted from the systematic reviews, b) a reference set identified from an electronic journal screening, and c) a sample from PubMed/Medline. Finally, all newly developed strategies were compared to PubMed's Health Services Research Queries (PubMed's HSR Queries).
The sensitivities of the newly developed search strategies were almost 100% in all of the three test sets applied; precision ranged from 6.1% to 32.0%. PubMed's HSR queries were less sensitive (83.3% to 88.2%) than the new search strategies. Only minor differences in precision were found (5.0% to 32.0%).
As with other literature on health services research, nurse staffing studies are difficult to identify in PubMed/Medline. Depending on the purpose of the search, researchers can choose between high sensitivity or high precision, i.e. retrieval of a large number of references or an increased risk of missing relevant references, respectively. More standardized terminology (e.g. by consistent use of the term "nurse staffing") could improve the precision of future searches in this field. Empirically selected search terms can help to develop effective search strategies. The high consistency between all test sets confirmed the validity of our approach.
PubMed/Medline contains more than 18 million references. The identification of relevant literature in this wide-ranging source is of great importance to researchers, both for remaining up-to-date with the latest developments in a field of interest and for conducting comprehensive literature reviews. "Search filters are collections of search terms intended to capture frequently sought research methods, such as randomized controlled trials, or aspects of health care". While this definition includes methods filters for certain common research methods such as randomized controlled trials (RCTs) [2–5] and systematic reviews [3, 6–9], the identification of relevant literature in fields with less standardized methods, such as health services research, remains a cumbersome task. Furthermore, methods filters need to be complemented with terms for the topic of interest to identify the relevant literature. The development of the topic-specific part of the search strategy usually consists of an arbitrary selection of terms; few studies have been conducted with the aim of systematically identifying this topic-specific part [10–12]. An approach guiding the selection of relevant terms could help researchers develop search strategies in a more objective and systematic manner for both topic- and methods-related searches.
Nurse staffing research investigates the association between nurse staffing parameters and nursing and patient outcomes. The basic question in nurse staffing research is which nurse-to-patient ratios result in high-quality patient care. Although previous research in this field has largely been observational in nature, a wide range of statistical methods and data sources are used, which makes it difficult to identify the relevant literature effectively. Empirically tested search strategies support the identification of literature in an effective and efficient manner [1, 15], and are used in searches conducted in the production of systematic reviews and in the creation of automatic e-mail updates with PubMed's My NCBI.
To date, the development of empirically tested search strategies has focused on identifying certain study types, such as RCTs and systematic reviews. Most research on search filters has tested the developed search strategy against a defined set of references from a hand search (gold standard) or other systematic reviews (quasi-gold standard) [1, 15–24]. An approach that uses a set of relevant references to identify appropriate terms and then tests the developed strategy against several test sets could be used for search strategy development in general, beyond its sole use in the development of methods filters.
In the context of systematic reviews, the number of relevant references on a given topic in a database is a matter of particular interest. An estimate of the number of relevant references in the database could be used for resource planning purposes within the framework of comprehensive systematic reviews.
The aims of this study were to:

• develop search strategies to identify primary publications on nurse staffing research in PubMed/Medline;

• test the search strategies against a set of relevant references from different sources;

• compare the search strategies with PubMed's health services research queries (PubMed HSR Queries);

• estimate the number of relevant nurse staffing references in PubMed/Medline.
Search strategy development
Three search strategies were developed, targeting either the highest sensitivity ('sensitive strategy'), the highest precision ('precise strategy'), or a balance between sensitivity and precision ('balanced strategy'). In the context of search strategy development, sensitivity (or recall) is the number of relevant references retrieved divided by the number of all relevant references. A search with a sensitivity of 1, for instance, retrieves all relevant references, while a search with a sensitivity of 0.5 retrieves half of all relevant references. Precision is the fraction of retrieved references that are relevant. For example, a precision of 0.33 means that a third of all retrieved references are relevant to the topic of interest. Search strategies solely aimed at sensitivity or precision target the extremes of the inverse relationship between these two parameters. A balanced strategy attempts to achieve both aims: high sensitivity without losing too much precision, and vice versa. Balancing is based on the iterative addition and removal of parts of the search strategy to determine a balance between sensitivity and precision; however, this balance is not precisely defined and remains a vague concept.
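The two measures can be restated as a short calculation (a minimal sketch with invented reference IDs, not data from this study):

```python
# Sensitivity (recall) and precision of a literature search, computed
# from sets of reference IDs. The PMIDs below are hypothetical.

def sensitivity(retrieved, relevant):
    """Fraction of all relevant references that the search retrieved."""
    return len(retrieved & relevant) / len(relevant)

def precision(retrieved, relevant):
    """Fraction of retrieved references that are relevant."""
    return len(retrieved & relevant) / len(retrieved)

relevant = {101, 102, 103, 104}              # all relevant references
retrieved = {101, 102, 103, 201, 202, 203}   # what the search returned

print(sensitivity(retrieved, relevant))  # 0.75 -> three of four relevant found
print(precision(retrieved, relevant))    # 0.5  -> half of the retrieval is relevant
```

The inverse relationship described above follows directly: widening the query grows `retrieved`, which can only raise sensitivity while typically diluting precision.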
The employed development process of the search strategy includes four sets of references to define and test the developed strategies. Two sets of references were used for the development of the search strategies, a development and a population set.
The development set was used to identify and evaluate the sensitivity of free-text terms (title, abstract) and Medical Subject Heading (MeSH) terms. The development set consisted of a pool of 78 relevant papers from PubMed/Medline, identified in three relevant systematic reviews investigating the association between nurse staffing and patient outcomes [13, 25, 26]. Systematic reviews have previously been used to identify relevant references for search filter development. Well-conducted systematic reviews employ comprehensive searches in various databases and are often complemented by hand searches. A set of references created by merging relevant references from different systematic reviews can be assumed to represent the total population of relevant references. The selection of systematic reviews was not based on a systematic search but on a priori knowledge of the field. The systematic reviews were selected because, to our knowledge, they employed the most comprehensive searches so far targeting nurse staffing research [13, 25]. Only those studies critically appraised and included in the systematic reviews and available in PubMed/Medline were incorporated in the development set.
A population set consisting of a random sample of PubMed/Medline references was used to compare the frequency of terms with the highest sensitivity from the development set with the frequency in the overall PubMed/Medline population. For the sampling procedure we limited a PubMed/Medline search (using an empty search field) to the last 12 months (12/2007 to 12/2008) and saved the retrieval results as a PMID list. From this list a random sample of 10,000 references was drawn. References of the population set were not screened for relevance and all references were assumed to be not relevant.
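The sampling step can be sketched as follows; the PMID list and the seed are hypothetical stand-ins, since the article does not report the mechanics of the random draw:

```python
import random

def draw_population_sample(pmids, k=10_000, seed=None):
    """Draw a simple random sample (without replacement) from a saved PMID list."""
    rng = random.Random(seed)
    return rng.sample(pmids, k)

# Hypothetical stand-in for the saved PMID list of the 12-month retrieval.
all_pmids = list(range(18_000_000, 18_900_000))
population_set = draw_population_sample(all_pmids, k=10_000, seed=42)
print(len(population_set))  # 10000
```

Because the references are drawn without replacement, each PMID appears at most once in the population set, mirroring a screening sample.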
A text-mining approach was used to identify potentially relevant free-text terms from the development set. The analysis was computed with the tm package in R, a statistical computing language and graphics environment.
Table 1. Prevalence of terms in the development set (n = 78) and population set (n = 10,000)
The text-mining approach applied worked reliably only for single-word terms. MeSH terms often consist of multiple words, including special characters, which led to unexpected results. Due to this technical constraint, a simplified approach was applied to identify the 20 most frequent MeSH terms to be used in the search strategy. Terms were selected on the basis of their frequency in the development set and their relevance to the question.
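The single-word frequency analysis described above can be approximated in a few lines; the abstracts below are invented placeholders for the development set, and the tokenization is a simplification of what the tm package performs:

```python
from collections import Counter
import re

def term_document_frequency(documents):
    """Fraction of documents (titles/abstracts) containing each single-word term."""
    counts = Counter()
    for doc in documents:
        # Count each term at most once per document.
        counts.update(set(re.findall(r"[a-z]+", doc.lower())))
    return {term: n / len(documents) for term, n in counts.items()}

# Hypothetical abstracts standing in for the development set.
development = [
    "nurse staffing levels and patient mortality in acute care",
    "hospital nurse staffing and surgical patient outcomes",
]
freq = term_document_frequency(development)
print(freq["staffing"])   # 1.0 -> present in every development-set abstract
print(freq["mortality"])  # 0.5 -> present in half of them
```

Running the same function over the population set yields the second frequency column needed for the comparison in Table 1.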
Table 2. Empirically derived search strategies to identify nurse staffing research in PubMed

staff[tiab] OR staffing[tiab] OR organizational[tiab] OR skill mix[tiab] OR length of stay[tiab] OR medicare[tiab]
"Nursing Staff, Hospital"[mh]
"Personnel Staffing and Scheduling"[mh]
"Intensive Care Units/manpower"[mh]
"Nursing Administration Research"[mh]
#1 OR #2 OR #3 OR #4 OR #5
"health services administration"[mh]
nurse[tiab] OR nurses[tiab] OR hospitals[tiab] OR nursing[tiab]
#8 OR #9
#6 AND #7 AND #10

"Outcome and Process Assessment (Health Care)"[mh]
"Hospital Units"[mh]
#1 OR #2 OR #3
nurse[tiab] OR nurses[tiab]
"Nursing Staff, Hospital"[mh]
#6 OR #7
#4 AND #5 AND #8 AND #9

"Outcome and Process Assessment (Health Care)"[mh]
"Hospital Units"[mh]
#1 OR #2 OR #3
(nurse[tiab] OR nurses[tiab]) AND staffing[tiab]
"Nursing Staff, Hospital"[mh]
#5 OR #6
#4 AND #7
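The strategies in Table 2 are written as numbered steps (#1, #2, ...) in PubMed's search-history style. To paste a strategy into a single search field, the step references must be expanded; the resolver below is a hypothetical illustration, shown with a shortened strategy that is not one of the published ones:

```python
import re

def expand_strategy(steps):
    """Expand #n references in a stepwise search strategy (given in order)
    into a single PubMed query string."""
    resolved = {}
    for n, line in steps.items():
        # Replace each #k with the already-resolved step k, parenthesised.
        line = re.sub(r"#(\d+)", lambda m: f"({resolved[int(m.group(1))]})", line)
        resolved[n] = line
    return resolved[max(steps)]

# Shortened, hypothetical strategy in the style of Table 2.
steps = {
    1: 'staffing[tiab]',
    2: '"Nursing Staff, Hospital"[mh]',
    3: '#1 OR #2',
    4: 'nurse[tiab] OR nurses[tiab]',
    5: '#3 AND #4',
}
print(expand_strategy(steps))
# ((staffing[tiab]) OR ("Nursing Staff, Hospital"[mh])) AND (nurse[tiab] OR nurses[tiab])
```

Parenthesising every substituted step preserves the Boolean grouping of the original search history regardless of operator precedence.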
Testing the search strategies
The newly developed search strategies (Table 2) were tested against three reference sets: the development, precision, and journal screening sets. All tests were conducted with the PubMed interface. PubMed/Medline was chosen for its free accessibility. The search development and testing were conducted in December 2008.
The development set consisted of 78 relevant references from the three reviews. As we did not screen the retrieved references in PubMed/Medline for relevance, we calculated precision based on the conservative assumption that all additionally retrieved references were not relevant. Retrieval was limited to the time frame of the searches of the systematic reviews (1982 to 2006). This rough approximation results in a downward bias of precision and the number needed to read (NNR) in the development set, as relevant references among the non-screened references were ignored. If these relevant references had been identified, precision and NNR would have improved.
To create the precision set, all search strategies tested (Sensitive, Precise, Balanced, PubMed HSR Sensitive, PubMed HSR Precise) were combined with the OR operator and limited to the time frame between 1982 and 2006. A random sample of 2,195 references was drawn from the 35,708 retrieved records and screened for relevance. This set was used to determine a less biased estimate of the precision of the search strategies and to estimate the overall number of relevant references. This estimation was based on the assumption that a joint search including all search strategies (with sensitivity of up to 1.00) should capture all relevant studies in PubMed/Medline for the given time frame. Under this assumption it is possible to calculate an estimate of the number of relevant references in PubMed/Medline using the precision set. The relevant references of the precision set overlapped with the development set, except for one reference. This overlap is to be expected for two reasons: 1) both sets target the same time frame, and 2) the development set was based on three comprehensive searches, which potentially captured all relevant records in this time frame.
The journal screening set was based on the assessment of all available abstracts of three journals relevant to nurse staffing research (Medical Care, Health Services Research, and Journal of Nursing Administration; issues 2006 to 2008; 1,274 references). The selection of journals was based on the frequency of relevant articles in each journal in the development set. The time frame of the latest search in the systematic reviews and the journal screening overlapped by six months, which resulted in one paper being included in both sets and two papers not being identified by the systematic reviews; we assume this was caused by the delay in full indexing in PubMed/Medline. There was no overlap of references between the journal screening and precision set.
Table 3. Inclusion and exclusion criteria for the precision and journal screening sets

Inclusion criteria:

• Studies investigating the association between staffing (e.g. nurse-to-patient ratio or work hours per patient or patient day) and a) nursing outcomes (e.g. job satisfaction, nurse vacancy rate, nurse turnover rate, nurse retention rate) or b) patient outcomes (e.g. mortality, adverse drug events, nurse quality outcomes, length of stay, patient satisfaction with nursing care)

Exclusion criteria:

• Studies not published in English

• Studies including a target population of outpatients and patients in long-term care facilities

• Studies with no information relevant to nurse staffing policies and strategies

• Studies examining the contributions of advanced practice nurses (nurse practitioners, nurse clinicians, certified nurse midwives, nurse anesthetists)

• Administrative reports and single-hospital studies that did not include control comparisons and did not test an associative hypothesis

• Systematic or non-systematic reviews

• Editorials, letters, non-original research
Performance of the newly developed strategies

Table 4. Sensitivity and precision of the search strategies tested: the three new strategies and PubMed's sensitive and precise HSR Queries, evaluated against the development set (78 relevant references), the precision set (6 relevant references out of a total of 2,195), and the journal screening set (17 relevant references out of a total of 1,274)
Comparison with PubMed's HSR Queries
The newly developed search strategies had a higher sensitivity than PubMed's HSR Queries in all test sets (Table 4). In terms of precision, both sets of strategies performed within the same range in all three test sets.
Overall estimate of relevant references on nurse staffing in PubMed/Medline
The precision set contained a total of 2,195 references, of which 6 were relevant according to the eligibility criteria defined. On the basis of the precision set, an overall number of 97.6 (95% confidence interval: 19.5 to 175.2) relevant references on nurse staffing would be expected for the time frame between 1982 and 2006.
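The arithmetic behind this estimate can be reconstructed as follows. The Wald (normal-approximation) interval is our assumption about how the published bounds were obtained; it reproduces them only approximately:

```python
import math

def estimate_relevant(n_relevant, n_sample, n_total, z=1.96):
    """Project the sample proportion of relevant references onto the full
    retrieval and attach a Wald 95% confidence interval."""
    p = n_relevant / n_sample
    se = math.sqrt(p * (1 - p) / n_sample)
    return n_total * p, n_total * (p - z * se), n_total * (p + z * se)

# 6 relevant references among 2,195 sampled from the 35,708 retrieved records.
point, lower, upper = estimate_relevant(6, 2195, 35708)
print(round(point, 1))                   # 97.6
print(round(lower, 1), round(upper, 1))  # close to the published interval [19.5, 175.2]
```

With only 6 relevant references in the sample, the interval is necessarily wide, which is the limitation discussed below.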
The search strategies developed performed well in terms of sensitivity, with the expected pay-off for precision and vice versa. Depending on the objective of the search, all three strategies are suitable for specific purposes such as the use of sensitive strategies in systematic reviews or e-mail alerts.
All strategies were assessed against three different test sets. For the measurement of sensitivity, the development and journal screening sets produced similar test results, while the results of the precision set showed greater differences. We assume these differences were caused by the insufficient sample size of the precision set. Although a sample of 2,195 references does not appear small, given that only 6 of the sampled references (0.27%) were relevant, it is still too small for a reliable estimate of sensitivity. Precision ranged from 0.2% to 6.1% in the development set, 0.3% to 14.7% in the precision set, and 8.1% to 32.0% in the journal screening set. Although the ranges varied considerably between the test sets, the overall pattern in the comparison of the search strategies remained consistent: the precise strategies performed better than the balanced and sensitive strategies. Conceptually, the precision set was the closest to the true population. Even with 2,195 screened references, this approach lacked the accuracy to differentiate between strategies in terms of sensitivity. However, the precision estimates derived from this set were not hampered by the small sample size and were less biased with respect to the PubMed/Medline population than those from the development and journal screening sets.
The comparison of PubMed's HSR Queries with the newly developed strategies shows advantages for the latter. This favourable assessment was to be expected, given the broader scope of the HSR Queries. It might therefore be more appropriate to regard the performance comparison as a validation of the developed strategies and of the test sets used.
One of the strengths of the precision set is the possibility of inferring to the overall PubMed/Medline population and calculating the expected number of relevant references on the topic of interest. However, it should be taken into account that this estimate is based on the Medline references within PubMed/Medline and therefore ignores a small percentage of non-Medline references. Although limited by wide confidence intervals, the estimate allows an inference about the overall number of relevant papers, which the development and journal screening sets alone do not.
In addition to the aims outlined, the study employed a development process for search strategies with some features that might be useful to search strategy development in general. While the identification of candidate terms and the testing of the strategy against the development set have previously been done in research on methods filters, in our opinion the population and precision sets employed are unique features of this study. These sets allow search strategy developers (1) to select terms that are not only frequently used in relevant publications but also specific to the topic of interest, and (2) to achieve more realistic precision estimates for the PubMed/Medline population. The frequency of terms should not be the sole criterion for selecting a term for a search strategy. For example, the term "patients" is present in 65% of the references in the development set, but also in 77% of the population set, indicating a lack of specificity for the topic of interest. The population set enables the developer to screen out such non-specific terms and to preselect specific terms in order to develop sensitive and precise searches.
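This selection logic can be sketched as a simple filter on document frequencies; only the figures for "patients" come from the text, while the other values and the thresholds are hypothetical:

```python
def specific_terms(dev_freq, pop_freq, min_dev=0.3, max_ratio_pop=0.5):
    """Keep terms that are frequent in the development set but comparatively
    rare in the population set, i.e. specific to the topic of interest."""
    return [t for t, f in dev_freq.items()
            if f >= min_dev and pop_freq.get(t, 0.0) < f * max_ratio_pop]

# Document frequencies per set; "patients" is taken from the article,
# the remaining values are hypothetical illustrations.
dev = {"patients": 0.65, "staffing": 0.55, "nurse": 0.80}
pop = {"patients": 0.77, "staffing": 0.01, "nurse": 0.03}
print(specific_terms(dev, pop))  # ['staffing', 'nurse'] -> "patients" is filtered out
```

"patients" is dropped despite its high development-set frequency because it is even more common in the population sample, exactly the behaviour described above.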
Although the development process described could support the development of performance-oriented search strategies, in general some limitations apply to this study and the generalizability of the process.
We assumed that the selected systematic reviews used for building the development set are the most comprehensive reviews in the topic area. However, we cannot rule out that other reviews containing additional relevant references exist.
The search filters developed require references to be fully indexed in Medline and might not fully capture in-process citations; this applies to many search filters and also limits the use of search strategies as e-mail-update filters.
An untested search strategy without MeSH terms is provided in Additional file 1 (Table S3).
As with other literature on health services research, nurse staffing studies are difficult to identify in PubMed/Medline. Even though sensitive search strategies retrieve nearly all relevant references, the considerable number of non-relevant references retrieved is a burden. Depending on the purpose of the search, researchers can choose between high sensitivity or high precision, i.e. retrieval of a large number of references or an increased risk of missing relevant references, respectively. More standardized terminology (e.g. by consistent use of the term "nurse staffing") could improve the precision of future searches in this field.
The described development process for empirically derived search strategies is a useful - though technically demanding - approach to building performance-oriented strategies. The similar sensitivities of the tested strategies in the development and journal screening sets confirm the validity of this approach. The precision set can be used to provide more realistic precision estimates and to calculate the expected number of relevant references in the PubMed/Medline population.
The authors gratefully acknowledge support by Eugene Yankovskyy (statistics) and Natalie McGauran (editing).
- White VJ, Glanville JM, Lefebvre C, Sheldon TA: A statistical approach to designing search filters to find systematic reviews: objectivity enhances accuracy. Journal of Information Science. 2001, 27: 357-370. 10.1177/016555150102700601.
- Eady AM, Wilczynski NL, Haynes RB: PsycINFO search strategies identified methodologically sound therapy studies and review articles for use by clinicians and researchers. J Clin Epidemiol. 2008, 61: 34-40. 10.1016/j.jclinepi.2006.09.016.
- Wong SSL, Wilczynski NL, Haynes RB: Optimal CINAHL search strategies for identifying therapy studies and review articles. Journal of Nursing Scholarship. 2006, 38: 194-199. 10.1111/j.1547-5069.2006.00100.x.
- Wong SSL, Wilczynski NL, Haynes RB: Developing optimal search strategies for detecting clinically sound treatment studies in EMBASE. J Med Libr Assoc. 2006, 94: 41-47.
- McKibbon K, Wilczynski NL: Retrieving randomized controlled trials from Medline: a comparison of 38 published search filters. Health Info Libr J. 2008, accessed 12 Dec 2008.
- Shojania KG, Bero LA: Taking advantage of the explosion of systematic reviews: an efficient MEDLINE search strategy. Effective Clinical Practice. 2001, 4: 157-162.
- Montori VM, Wilczynski NL, Morgan D, Haynes RB: Optimal search strategies for retrieving systematic reviews from Medline: analytical survey. BMJ. 2005, 330: 68-71. 10.1136/bmj.38336.804167.47.
- Wilczynski NL, Haynes RB, Hedges Team: EMBASE search strategies achieved high sensitivity and specificity for retrieving methodologically sound systematic reviews. Journal of Clinical Epidemiology. 2007, 60: 29-33. 10.1016/j.jclinepi.2006.04.001.
- Boynton J, Glanville J, McDaid D, Lefebvre C: Identifying systematic reviews in MEDLINE: developing an objective approach to search strategy design. Journal of Information Science. 1998, 24: 137-154. 10.1177/016555159802400301.
- Sladek RM, Tieman J, Currow DC: Improving search filter development: a study of palliative care literature. BMC Med Inform Decis Mak. 2007, 7: 18. 10.1186/1472-6947-7-18.
- Kastner M, Wilczynski NL, Walker-Dilks C, McKibbon KA, Haynes B: Age-specific search strategies for Medline. J Med Internet Res. 2006, 8: e25. 10.2196/jmir.8.4.e25.
- Vincent S, Greenley S, Beaven O: Clinical Evidence diagnosis: developing a sensitive search strategy to retrieve diagnostic studies on deep vein thrombosis: a pragmatic approach. Health Info Libr J. 2003, 20: 150-159. 10.1046/j.1365-2532.2003.00427.x.
- Kane RL, Shamliyan T, Mueller C, Duval S, Wilt TJ: Nurse staffing and quality of patient care. Evid Rep Technol Assess (Full Rep). 2007, 1-115.
- Lake ET, Cheung RB: Are patient falls and pressure ulcers sensitive to nurse staffing? West J Nurs Res. 2006, 28: 654-677. 10.1177/0193945906290323.
- Jenkins M: Evaluation of methodological search filters - a review. Health Info Libr J. 2004, 21: 148-163. 10.1111/j.1471-1842.2004.00511.x.
- Glanville JM, Lefebvre C, Miles JNV, Camosso-Stefinovic J: How to identify randomized controlled trials in Medline: ten years on. J Med Libr Assoc. 2006, 94: 130-136.
- Robinson KA, Dickersin K: Development of a highly sensitive search strategy for the retrieval of reports of controlled trials using PubMed. International Journal of Epidemiology. 2002, 31: 150-153. 10.1093/ije/31.1.150.
- Royle PL, Waugh NR: Making literature searches easier: a rapid and sensitive search filter for retrieving randomized controlled trials from PubMed. Diabetic Medicine. 2007, 24: 308-311. 10.1111/j.1464-5491.2007.02046.x.
- Murphy SA: Research methodology search filters: are they effective for locating research for evidence-based veterinary medicine in PubMed? Journal of the Medical Library Association. 2003, 91: 484-489.
- Hopewell S, Clarke M, Lefebvre C, Scherer R: Handsearching versus electronic searching to identify reports of randomized trials. Cochrane Database of Methodology Reviews. 2002, Issue 4.
- Wilczynski NL, Haynes RB: Robustness of empirical search strategies for clinical content in MEDLINE. Proceedings of the AMIA Annual Symposium. 2002, 904-908.
- Haynes RB, Wilczynski N, McKibbon KA, Walker CJ, Sinclair JC: Developing optimal search strategies for detecting clinically sound studies in MEDLINE. Journal of the American Medical Informatics Association. 1994, 1: 447-458.
- Hoogendam A, de Vries Robbe PF, Stalenhoef AF, Overbeke AJ: Evaluation of PubMed filters used for evidence-based searching: validation using relative recall. J Med Libr Assoc. 2009, 97: 186-193. 10.3163/1536-5050.97.3.007.
- Sampson M, Zhang L, Morrison A, Barrowman NJ, Clifford TJ, Platt RW: An alternative to the hand searching gold standard: validating methodological search filters using relative recall. BMC Medical Research Methodology. 2006, 6: 33. 10.1186/1471-2288-6-33.
- Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen (IQWiG): Zusammenhang zwischen Pflegekapazität und Ergebnisqualität in der stationären Versorgung - Eine systematische Übersicht [The association between nursing capacity and quality of outcomes in inpatient care - a systematic review]. 2006, Cologne.
- Numata Y, Schulzer M, van der Wal R, Globerman J, Semeniuk P, Balka E, Fitzgerald JM: Nurse staffing levels and hospital mortality in critical care settings: literature review and meta-analysis. J Adv Nurs. 2006, 55: 435-448. 10.1111/j.1365-2648.2006.03941.x.
- Feinerer I: tm: Text Mining Package. R package version 0.3.3. 2008.
- R Development Core Team: R: A Language and Environment for Statistical Computing. 2008, Vienna: R Foundation for Statistical Computing.
- Wilczynski NL, Haynes RB, Lavis JN, Ramkissoonsingh R, Arnold-Oatley AE: Optimal search strategies for detecting health services research studies in MEDLINE. CMAJ. 2004, 171: 1179-1185.
- Simon M, Hausner E, Ivanova G, Kaiser T: Sensitivity of free text terms and controlled descriptors of methodological search filters to identify "In-Process-Citations" of RCTs in Medline (08ebmP23). Evidenzbasierte Primärversorgung und Pflege, 9. Jahrestagung Deutsches Netzwerk Evidenzbasierte Medizin und Kongress der Deutschen Gesellschaft für Pflegewissenschaft, 22.-23.02.2008, Witten. 2008.
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/10/76/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.