Assessment of the abstract reporting of systematic reviews of dose-response meta-analysis: a literature survey

Background The number of published systematic reviews (SRs) of dose-response meta-analyses (SR-DRMAs) has increased over the past decades. However, the quality of abstract reporting of these SR-DRMAs remains poorly understood. We conducted a literature survey to investigate the abstract reporting of SR-DRMAs.
Methods Medline, Embase, and the Wiley Online Library were searched for eligible SR-DRMAs. The reporting quality of SR-DRMAs was assessed with a modified PRISMA for Abstracts checklist (14 items). We summarized the adherence rate of each item and categorized items as well complied with (adhered to by 80% or more), moderately complied with (50 to 79%), or poorly complied with (less than 50%). We used the total score to reflect abstract quality, and regression analysis was employed to explore potential influencing factors.
Results We included 529 SR-DRMAs. Eight of the 14 items were moderately (3 items) or poorly (5 items) complied with, while only 6 were well complied with. Most of the SR-DRMAs failed to describe the methods for risk of bias assessment (adherence 30.2%, 95% CI: 26.4, 34.4%) or the results of that assessment (48.8%, 95% CI: 44.4, 53.1%). Few SR-DRMAs reported funding (2.3%, 95% CI: 1.2, 3.9%) or registration (0.6%, 95% CI: 0.1, 1.6%) information in the abstract. Multivariable regression analysis suggested that the word count of the abstract [> 250 vs. ≤ 250 (estimated β = 0.31; 95% CI: 0.02, 0.61; P = 0.039)] was positively associated with abstract reporting quality.
Conclusion The abstract reporting of SR-DRMAs is suboptimal, and substantial effort is needed to improve it. A higher word count may benefit abstract reporting. Given that the reporting of an abstract largely depends on the reporting and conduct of the SR-DRMA, review authors should also focus on the completeness of the SR-DRMA itself.
Electronic supplementary material The online version of this article (10.1186/s12874-019-0798-5) contains supplementary material, which is available to authorized users.


Background
Systematic reviews (SRs) and meta-analyses provide powerful evidence for guiding health policies and informed decision-making [1][2][3]. Appropriate reporting of SRs and meta-analyses is thus vital for the effective utilization of high-quality evidence in healthcare. This prompted the development of guidelines and checklists for standardized reporting, such as the well-known Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [4].
The reporting of abstracts of published SRs and meta-analyses has been highlighted in the previous literature [5,6]. An abstract briefly summarizes the contents of a research report, in this case a SR or meta-analysis, allowing users to outline the research evidence, and it is often the only freely accessible information for some users of evidence. Well-reported abstracts are essential for assessing study validity, clarifying the applicability of results, and facilitating the peer-review process [6,7]. Literature surveys on the abstract reporting of SRs, however, have demonstrated that abstract reporting was generally suboptimal, with insufficient completeness of information [8]. Great efforts have been made to improve the quality of abstract reporting for SRs, for example the PRISMA for Abstracts statement released in 2013 as an extension of the PRISMA statement [7].
Dose-response meta-analysis (DRMA) is a meta-analysis that explores the dose-response relationship between a continuous (or discrete) independent variable (e.g. sleep duration) and a dependent variable (e.g. risk of death) [9]. Unlike traditional meta-analysis, DRMA allows investigators to determine whether there are different effects for the presence and absence of an exposure (or intervention), as well as whether the effects vary according to the dose of exposure for a given population (e.g. alcohol intake and risk of all-cause mortality). This matters for decision makers, as a DRMA is expected to contain more information and be of higher clinical value. The abstract reporting of a SR of dose-response meta-analysis (SR-DRMA) is therefore expected to be more informative than that of a traditional meta-analysis. Knowledge of the current state of abstract reporting is useful for further studies and helpful in forming a standard reporting checklist specific to DRMA [10]. Nevertheless, no research has yet investigated the abstract reporting of SR-DRMAs.
We conducted a literature survey of the abstract reporting of SR-DRMAs published from 2011 to 2017, to investigate the reporting quality of the abstracts and to determine potential influential factors for that quality.

Data source and search strategy
This study was reported according to the PRISMA statement [11]. We searched Medline, Embase, and the Wiley Online Library for SR-DRMAs published from 1 January 2011 to 31 December 2015, and then updated the search to 31 July 2017. We limited the time range because few DRMAs were published before 2011. A combination of keywords and index terms related to dose-response meta-analysis, meta-analysis of cohort studies, meta-analysis of prospective studies, meta-analysis of observational studies, and non-linear meta-regression was used after discussion with four core investigators with expertise in literature searching. We did not search the grey literature, and no limitations were placed on language. The full search strategy is provided in Additional file 1.

Eligibility criteria and study selection
We included published aggregate-data (in contrast to individual participant data) SR-DRMAs with binary outcomes in the biomedical field. The definition of SR-DRMAs has been clarified in the background [9]. Traditional meta-regression analysis and survival analysis were not considered as DRMA here. There were no limitations on the population, exposure/intervention, health issues, or study design within each SR-DRMA. We focused on binary outcomes because there were currently very few SR-DRMAs with continuous outcomes available, and their results reporting is more flexible (for example, it allows a zero reference for relative differences and a non-zero reference for absolute differences) [12]. We did not consider unpublished articles or conference abstracts because such publications are generally not peer reviewed.
Study selection was conducted independently by two investigators (XC and LY). We first excluded duplicates using reference management software (EndNote X7), calibrating the decisions of the two investigators with the "notes" and "find duplicates" functions so that records with the same decisions were identified as duplicates. We subsequently reviewed the titles and abstracts of each citation and made a decision regarding its appropriateness for inclusion. Full texts of potentially eligible articles were further assessed by the two investigators independently, and any disagreement was resolved by consensus.

Data extraction
For each SR-DRMA, the following information was extracted: year of publication, region of the first author (by affiliation), number of authors, word count of the abstract, structure of the abstract, journal (name and scope), and funding information. Two screeners (XB and HXH) extracted the data independently and cross-checked the extracted information.

Assessment of abstract reporting
The reporting of abstracts was assessed using PRISMA for Abstracts [7], a checklist pertaining to each section of the abstract, including "title", "objective", "methods", "results", "conclusion", and "other information", with 12 items in total. To make it more suitable for SR-DRMA, we slightly modified the checklist by adding, a priori, two items that most Cochrane systematic reviews contain: (1) methods of combining dose-response data (Item 6); and (2) description and evaluation of the quality (risk of bias) of included studies (Item 8). We also predefined the type of results of main outcomes as linear or dose-specific absolute risks (AR) or relative risks (RR) for the item "results of main outcomes". This modification may have some impact on the total score and data analysis.
The modified PRISMA for Abstracts checklist therefore contains 14 items (Additional file 1: Table S1), with each item corresponding to three response options: 'Yes', 'No', or 'Unclear'. We assigned a score of 1 to an item answered 'Yes' and 0 to one answered 'No' or 'Unclear' [13]. The quality score of an abstract therefore ranges from 0 to 14, and an abstract with a higher score was regarded as better reported.
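The scoring rule can be expressed compactly. The sketch below is illustrative only; the item responses shown are hypothetical, not taken from any assessed abstract:

```python
def quality_score(responses):
    """Total quality score: 1 point per 'Yes', 0 for 'No' or 'Unclear',
    over the 14 items of the modified PRISMA for Abstracts checklist."""
    assert len(responses) == 14, "one response per checklist item"
    return sum(r == 'Yes' for r in responses)

# Hypothetical assessment of a single abstract
example = ['Yes'] * 9 + ['No'] * 3 + ['Unclear'] * 2
print(quality_score(example))  # → 9
```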
Two researchers (XB and HXH) independently assessed the quality of the abstracts using the modified PRISMA for Abstracts checklist. Any disagreement was resolved by consensus after the assessment. Inter-rater agreement was measured using the kappa (κ) statistic [14].
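For reference, the two-rater Cohen's kappa used for this agreement check can be computed as below (a minimal pure-Python sketch; the item-level judgements are hypothetical, not the study's actual assessment data):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters over the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e the agreement expected by chance from the raters' marginal
    label frequencies.
    """
    assert len(rater1) == len(rater2)
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum((c1[lab] / n) * (c2[lab] / n) for lab in set(c1) | set(c2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical item judgements ('Y' = yes, 'N' = no/unclear)
a = ['Y', 'Y', 'N', 'Y', 'N', 'Y', 'Y', 'N', 'Y', 'Y']
b = ['Y', 'Y', 'N', 'Y', 'Y', 'Y', 'Y', 'N', 'Y', 'Y']
print(round(cohens_kappa(a, b), 3))  # → 0.737
```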

Data analysis
Descriptive statistics were used to summarize the basic characteristics. In detail, we used the median and quartiles to describe the overall score distribution, and frequencies and proportions for categorical data. The adherence rate (AR = n/N × 100%) and its 95% confidence interval (CI) were used to reflect the degree of compliance with each item, where n is the number of SR-DRMAs adhering to the requirement of a given item and N is the total number of SR-DRMAs included. We divided the adherence rate of each item into three levels: well complied with (met by 80% or more), moderately complied with (met by 50 to 79%), and poorly complied with (met by less than 50%) [15]. It should be noted that this division is arbitrary.
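As a concrete illustration, an adherence rate with a 95% CI can be computed as follows. This is only a sketch: the interval method used in the study is not stated, so a Wilson score interval is assumed here, and n = 160 is back-calculated from the reported 30.2% adherence for the risk-of-bias methods item, so the resulting interval may differ slightly from the published one:

```python
import math

def adherence_rate(n, N, z=1.959964):
    """Adherence rate n/N (as a percentage) with an assumed 95% Wilson score CI.

    n: number of SR-DRMAs complying with the item
    N: total number of SR-DRMAs included
    """
    p = n / N
    denom = 1 + z**2 / N
    centre = (p + z**2 / (2 * N)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / N + z**2 / (4 * N**2))
    return 100 * p, 100 * (centre - half), 100 * (centre + half)

# Illustrative: 160 of 529 abstracts describe the risk-of-bias methods
ar, lo, hi = adherence_rate(160, 529)
print(f"AR = {ar:.1f}% (95% CI: {lo:.1f}, {hi:.1f}%)")
```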
We used the total score to reflect the quality of the reporting; this is reasonable since it has been widely applied in similar research [16,17]. To investigate potential factors related to the quality of abstract reporting, we pre-specified the following five variables to regress against the total quality score: year of publication, region of the first author (Asia-Pacific, Europe, America), number of authors, word count of the abstract (> 250 vs. ≤ 250), and funding (yes vs. no). We chose these factors because we aimed to see whether the quality of abstract reporting increased over the years, differed by region, or was improved by more authors, more words, or financial support. Previous literature has suggested that these variables may influence abstract reporting [18,19].
We used weighted least squares linear regression to model the relationship between the five variables and the total score, accounting for potential heteroscedasticity in the parameter estimation [20]. We employed a robust variance that treats each journal as a cluster to address correlations in the reporting quality of SR-DRMAs published in the same journal. A generalized estimating equation (GEE) model was fitted as a sensitivity analysis. Data analyses were performed with Stata statistical software (version 12.0, College Station, TX), and P < 0.05 was treated as statistically significant.
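The analyses were run in Stata, but the core estimator can be sketched in a few lines. The implementation below is an assumed, simplified version: a plain cluster-robust sandwich estimator without Stata's finite-sample corrections, applied to simulated toy data rather than the study data:

```python
import numpy as np

def wls_cluster_robust(X, y, w, clusters):
    """Weighted least squares with a cluster-robust (sandwich) variance.

    X: (n, p) design matrix (include an intercept column yourself)
    y: (n,) outcome (e.g. total quality score)
    w: (n,) observation weights
    clusters: (n,) cluster labels (e.g. journal of publication)
    Returns the coefficients and their cluster-robust standard errors.
    """
    W = np.diag(w)
    bread = np.linalg.inv(X.T @ W @ X)
    beta = bread @ X.T @ W @ y
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):
        m = clusters == g
        s = X[m].T @ (w[m] * resid[m])  # per-cluster score vector
        meat += np.outer(s, s)
    cov = bread @ meat @ bread
    return beta, np.sqrt(np.diag(cov))

# Toy data: does a long abstract (> 250 words = 1) predict the score?
rng = np.random.default_rng(0)
n = 120
long_abs = rng.integers(0, 2, n)                    # hypothetical predictor
score = 7 + 0.5 * long_abs + rng.normal(0, 1.5, n)  # simulated quality score
X = np.column_stack([np.ones(n), long_abs])
beta, se = wls_cluster_robust(X, score, np.ones(n), rng.integers(0, 30, n))
print(beta, se)
```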

Results
The initial literature search retrieved 7061 records. After removing 1765 duplicates and 3990 clearly irrelevant records, the full texts of the remaining 1306 records were assessed. Among these 1306 records, 776 were excluded for the following reasons: not a dose-response meta-analysis (n = 596), no binary outcome (n = 38), editorial comment or conference abstract (n = 59), meta-regression analysis (n = 40), methodology study (n = 17), published outside the time range (n = 11), meta-analysis contained within an original study (n = 14), individual participant data (n = 10), or survival data (n = 1). Finally, a total of 529 SR-DRMAs were included in this cross-sectional analysis (Fig. 1).

General characteristics
The 529 SR-DRMAs were published in 174 different academic journals, most of them specialist (disease-specific) journals (n = 365, 69.0%, 95% CI: 64.9, 72.9%), followed by general journals (n = 119, 22.5%). Among the funded SR-DRMAs (n = 337), 336 were supported by government and one by a company.

Adherence rate of each reporting item
The two authors achieved reasonable consistency in assessing the abstract reporting, with each item having a κ value over 0.75 and an overall κ of 0.95 (Additional file 1: Table S2). Overall, 6 of the 14 items were well complied with, 3 were moderately complied with, and 5 were poorly complied with. The adherence rate of each item of the PRISMA checklist is listed in Fig. 2.
The title section contains one item, presented as "identify the report as a systematic review, dose-response meta-analysis, or both". This was complied with by most of the SR-DRMAs (AR = 98.5%). For other information, only 2.3% of the SR-DRMAs reported funding information (AR = 2.3%, 95% CI: 1.2, 3.9%) and 0.6% reported registration information (AR = 0.6%, 95% CI: 0.1, 1.6%) in the abstract (Table 2).

Discussion
In this article, we conducted a literature survey on the reporting of abstracts of published SR-DRMAs and found that abstract reporting was suboptimal. The limitations of abstract reporting mainly lay in the methods, results, and other information (e.g. registration) sections. In particular, we observed that most of the SR-DRMAs failed to describe the methods for risk of bias assessment and the results of that assessment, and few SR-DRMAs reported funding and registration information in the abstract.
Our regression analysis revealed that year of publication was adversely associated with the quality of abstract reporting, while the word count of the abstract was positively associated with it. We also observed an unstable, adverse relationship between the number of authors, financial support, and the reporting quality of the abstract. Consistent results suggested no obvious relationship between the region of the first author and the quality of abstract reporting. A potential reason for the relationship between word count and reporting quality is that authors with more space can describe the results more fully. In contrast, under a word count limit, review authors may remove content that they consider less important and, as a result, make the abstract less informative. Academic journals may consider raising the word count limits of the abstract, especially those limiting abstracts to fewer than 250 words.
The methods and results reporting are particularly important for a well-organized abstract, which summarizes the design, conduct, analysis, and findings of the research. In our survey, the reporting of methods and results was worrisome. There may be some connection between the abstract reporting and the full-text reporting, because an abstract depends on the full text it summarizes. In our previous survey, we observed that the reporting of methods and results in the full texts of SR-DRMAs was uninformative [21] (Additional file 1: Figure S1). These findings highlight that the reporting of the abstract may partly reflect the quality of full-text reporting, and thus review authors should also focus on the completeness of the SR-DRMA itself. For a systematic review, risk of bias assessment is an essential part, and review authors should report such information in both the abstract and the full text. Only a small proportion (less than one third) of SR-DRMAs described the methods used to assess the risk of bias of included studies in the abstract, while a higher proportion (about one half) described the results of the risk of bias assessment. Interestingly, more SR-DRMAs reported the combined effect sizes (99.05%) than the combining methods (78.45%). We hypothesize that review authors tend to focus on the results rather than the methods. In our survey, very few SR-DRMAs reported financial and registration information in the abstract, although most of the SR-DRMAs reported this information in the full text (Additional file 1: Figure S1). Such information is important for decision makers and systematic review producers. Previous literature has demonstrated that substantial financial bias may exaggerate the efficacy of a clinical trial while concealing its harms [22]. In many academic journals, funding and registration information are required in the full text but are not mandatory for the abstract.
We recommend that SR-DRMA authors and academic journals diligently report such information in the abstract.
In this survey, we did not observe an obvious improvement in the reporting quality of SR-DRMAs over the years from 2011 to 2017, even though PRISMA for Abstracts was released in 2013 [7]. This finding is similar to previous research that investigated the abstract reporting of randomized controlled trials [23]. In that review, Chhapola et al. used the Consolidated Standards of Reporting Trials (CONSORT) for Abstracts extension to assess abstract reporting, and their research showed an insignificant change before and after the publication of the CONSORT abstract guideline [23].

Fig. 2 The adherence rate of each item of the abstract. The adherence rate indicates the proportion of SR-DRMAs meeting the requirement of the item

Fig. 3 The distribution of the total quality score. The X-axis is the total quality score and the Y-axis is the number of SR-DRMAs with that score

There were several strengths of the current research. This is the first literature survey on the abstract reporting of published SR-DRMAs. We included almost all of the SR-DRMAs published during the past 7 years in the analysis. Our findings have direct relevance for systematic reviewers conducting SR-DRMAs and for developers of abstract reporting guidelines. Moreover, to ensure the validity of the quality assessment, we estimated the inter-rater agreement for each item of the judgment; the results suggested good consistency between the two assessors. We also employed weighted least squares regression with a robust variance to achieve credible parameter estimation. Sensitivity analyses showed that most of the results were stable.
Several limitations should be highlighted. The major limitation of this survey is that we used a modified PRISMA for Abstracts checklist, with two additional items, to assess the reporting quality. Such an arbitrary modification may have some influence on the total quality score and the regression analysis, and the credibility and validity of the modified checklist need to be verified.
Second, we only assessed SR-DRMAs of aggregate data with binary outcomes; our findings may not apply to SR-DRMAs based on individual participant data or those with continuous outcomes. Third, some of the results of our regression analysis were unstable. For example, the relationships between the number of authors, financial support, and the reporting quality of the abstract were inconsistent in the sensitivity analysis. These results should be treated with caution.

Conclusions
The abstract reporting of SR-DRMAs is suboptimal. Substantial effort is needed to improve the reporting, especially of the methods and results. A higher word count may benefit abstract reporting, and at least 250 words are recommended for SR-DRMAs. Given that the reporting of an abstract largely depends on the reporting and conduct of the SR-DRMA, review authors should also focus on the completeness of the SR-DRMA itself.