The endorsement of general and artificial intelligence reporting guidelines in radiological journals: a meta-research study

Abstract

Background

Complete reporting is essential for clinical research. However, the extent to which radiological journals endorse reporting guidelines remains unclear. Moreover, as a field that extensively utilizes artificial intelligence (AI), radiology would benefit from the adoption of both general and AI-specific reporting guidelines to enhance the quality and transparency of its research. This study aims to investigate the endorsement of general reporting guidelines and of those for AI applications in medical imaging in radiological journals, and to explore associated journal characteristic variables.

Methods

This meta-research study screened journals in the Radiology, Nuclear Medicine & Medical Imaging category of the Science Citation Index Expanded in the 2022 Journal Citation Reports, and excluded journals that did not publish original research, were not published in English, or lacked instructions for authors. The endorsement of fifteen general reporting guidelines and ten AI reporting guidelines was rated using a five-level tool: “active strong”, “active weak”, “passive moderate”, “passive weak”, and “none”. The association between endorsement and journal characteristic variables was evaluated by logistic regression analysis.

Results

We included 117 journals. The five most endorsed reporting guidelines were CONSORT (Consolidated Standards of Reporting Trials, 58.1%, 68/117), PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses, 54.7%, 64/117), STROBE (STrengthening the Reporting of Observational Studies in Epidemiology, 51.3%, 60/117), STARD (Standards for Reporting of Diagnostic Accuracy, 50.4%, 59/117), and ARRIVE (Animal Research Reporting of In Vivo Experiments, 35.9%, 42/117). The most implemented AI reporting guideline was CLAIM (Checklist for Artificial Intelligence in Medical Imaging, 1.7%, 2/117), while the other nine AI reporting guidelines were not mentioned. Journal Impact Factor quartile and publisher were associated with the endorsement of reporting guidelines in radiological journals.

Conclusions

The endorsement of general reporting guidelines in radiological journals was suboptimal, and the implementation of reporting guidelines for AI applications in medical imaging was extremely low. Their adoption should be strengthened to improve the quality and transparency of radiological study reporting.

Introduction

Complete reporting is essential for translating the results of clinical research into scientifically robust evidence that supports decision-making in daily practice [1]. Reporting guidelines were developed as useful tools to enhance the quality and transparency of clinical research [2, 3], yet suboptimal reporting quality remains a long-standing and widespread issue [4, 5]. It is no wonder that a series of studies has repeatedly stressed the unsatisfactory endorsement of reporting guidelines in clinical journals [6,7,8,9,10,11,12,13,14,15,16,17,18,19], since journals’ endorsement of reporting guidelines may raise stakeholders’ awareness of them and is related to the completeness of reporting in medical journals [20].

The reporting quality of diagnostic accuracy studies and systematic reviews in Radiology, European Radiology, and the Korean Journal of Radiology has improved since the use of the corresponding reporting guidelines became mandatory [21,22,23]. It is reasonable that a change in the editorial policy of a specific journal significantly influences reporting quality. On the other hand, the reporting quality of abstracts of randomized controlled trials in interventional radiology showed no obvious improvement after the corresponding guidelines were updated [24]. One potential reason for this difference is that the level of endorsement of reporting guidelines differed among the 61 interventional radiology journals investigated. It is therefore necessary to investigate the endorsement of reporting guidelines, and subsequently to call on journals with low endorsement to make adherence mandatory. Using reporting guidelines earlier in the research process is considered to have a greater impact on the final manuscript in radiological journals, suggesting a need for enhanced education on their use [25]. Reporting guidelines should be introduced to researchers to educate them about the benefits, used by reviewers during the review process, and endorsed by journals to guarantee adherence.

Artificial intelligence (AI) has seen rapidly increasing medical application, especially in medical imaging [26,27,28,29,30]. The clinical translation of academic AI research should rest on scientific publications detailed enough to allow readers to judge their rigor, quality, and generalizability. Publications of AI research in medical imaging therefore call for specific reporting guidelines that support transparent and organized reporting [31,32,33,34,35]. In addition to general reporting guidelines, the endorsement of these AI reporting guidelines has the potential to improve the reporting quality and transparency of AI research and to reshape current radiological practice. So far, the endorsement of AI reporting guidelines has not been assessed.

Therefore, this study aimed to investigate the endorsement of general reporting guidelines and those for AI applications in medical imaging in radiological journals, and explore associated journal characteristic variables.

Materials and methods

Study design

Our study is a meta-research study, for which no corresponding reporting guideline is currently available [36,37,38,39,40,41]. Ethical approval and written informed consent were not required because no human or animal subjects were included. We did not register the study protocol since there was no appropriate platform; however, we drafted a protocol for this cross-sectional meta-research study (Supplementary Note S1). Sample selection, data extraction, and guideline endorsement assessment were conducted by two independent reviewers (JYZ and YX). The statistical analysis was performed by one reviewer (JYZ) under the supervision of a methodologist (JJL). Any discrepancies were resolved by discussion or by consulting the review group. Our study group consists of reviewers with diverse backgrounds and knowledge, including radiologists and health professionals from multiple disciplines. All reviewers have experience in manuscript drafting and publishing, and some have served as reviewers for radiological and evidence-based medicine journals, as well as editorial board members of radiological journals.

Sample journals

The journals from the Radiology, Nuclear Medicine & Medical Imaging category, Science Citation Index Expanded of the 2022 Journal Citation Reports were identified via Clarivate website [42], and assessed for eligibility. The exclusion criteria were (1) journals not publishing original research; (2) journals published in non-English languages; and (3) journals lacking instructions for authors.

Data extraction

The following bibliometric information was downloaded directly from Clarivate: journal name, 2022 Journal Impact Factor (JIF), and JIF quartile (Q1, Q2, Q3, Q4). The following journal characteristics were extracted: publication region, publishing institution/publisher, publication language, publication frequency, type of access, whether the journal is indexed only in the Radiology, Nuclear Medicine & Medical Imaging category, and whether the journal is owned by an academic society [43, 44]. The official website address of each journal was recorded. The search and data extraction via Clarivate were carried out on 20 July 2023.

Endorsement assessment

The endorsement of fifteen general reporting guidelines [45,46,47,48,49,50,51,52,53,54,55,56,57,58,59] and ten reporting guidelines for AI applications in medical imaging [60,61,62,63,64,65,66,67,68,69] was rated using a 5-level tool [19]. The 15 general reporting guidelines were selected because they are considered the most frequently used for the main study types and are highlighted on the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network website [70]: (1) CONSORT (Consolidated Standards of Reporting Trials) for randomized trials [45], (2) STROBE (STrengthening the Reporting of Observational Studies in Epidemiology) for observational studies [46], (3) PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) for systematic reviews [47], (4) SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) for study protocols [48], (5) PRISMA-P (Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols) for study protocols [49], (6) STARD (Standards for Reporting of Diagnostic Accuracy) for diagnostic/prognostic studies [50], (7) TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) for diagnostic/prognostic studies [51], (8) CARE (CAse REport guidelines) for case reports [52], (9) AGREE (Appraisal of Guidelines, Research, and Evaluation) for clinical practice guidelines [53], (10) RIGHT (Reporting Items for Practice Guidelines in Healthcare) for clinical practice guidelines [54], (11) SRQR (Standards for Reporting of Qualitative Research) for qualitative research [55], (12) COREQ (COnsolidated criteria for REporting Qualitative research) for qualitative research [56], (13) ARRIVE (Animal Research Reporting of In Vivo Experiments) for animal pre-clinical studies [57], (14) SQUIRE (Standards for QUality Improvement Reporting Excellence) for quality improvement studies [58], and (15) CHEERS (Consolidated Health Economic Evaluation Reporting Standards) for economic evaluations [59]. The 10 reporting guidelines for AI applications in medical imaging were identified and chosen according to relevant reviews and expert opinion [31,32,33,34,35]: (1) CONSORT-AI (Consolidated Standards of Reporting Trials involving Artificial Intelligence) [60], (2) SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials involving Artificial Intelligence) [61], (3) FUTURE-AI (Fairness Universality Traceability Usability Robustness Explainability Artificial Intelligence solutions) [62], (4) MI-CLAIM (Minimum Information about Clinical Artificial Intelligence Modeling) [63], (5) MINIMAR (Minimum Information for Medical AI Reporting) [64], (6) CLAIM (CheckList for Artificial Intelligence in Medical imaging) [65], (7) MAIC-10 (Must Artificial Intelligence Criteria-10) [66], (8) RQS (Radiomics Quality Score) [67], (9) IBSI (Image Biomarker Standardization Initiative) [68], and (10) CLEAR (CheckList for EvaluAtion of Radiomics research) [69]. These did not cover all available AI reporting guidelines; we therefore also recorded any additional AI reporting guidelines mentioned on journal websites, but none were identified.

The modified 5-level tool rated endorsement as: (1) “active strong”: the journal requires a completed checklist and/or flow diagram with article submission; (2) “active weak”: the journal encourages authors to follow a specific guideline; (3) “passive moderate”: the journal only requires the abstract to follow a specific guideline; (4) “passive weak”: the journal encourages authors to prepare manuscripts according to the EQUATOR Network website [70] or the International Committee of Medical Journal Editors document [71]; and (5) “none”: the journal does not mention any reporting guideline [19]. The assessment of journals’ endorsement of reporting guidelines was based on their submission documents (instructions for authors, submission guidelines, editorial policies, etc.). The endorsement of reporting guidelines was assessed from 22 July 2023 to 23 July 2023.

Statistical analysis

The statistical analysis was conducted using R version 4.1.3 within RStudio version 1.4.1106 [72,73,74,75]. The endorsement levels “active strong”, “active weak”, “passive moderate”, and “passive weak” were considered a positive outcome, while “none” was treated as a negative outcome. A journal was considered positive if at least one reporting guideline was rated positive; otherwise, the journal was treated as negative. Journal characteristics were compared between the positive and negative groups, and further evaluated by logistic regression analysis to determine whether they were associated with the endorsement of reporting guidelines. The factors associated with the endorsement of general reporting guidelines and of those for AI applications in medical imaging were assessed together, because the endorsement of the AI reporting guidelines was very low. All statistical tests were two-sided, and the alpha level for statistical significance was set at 0.05 unless stated otherwise.
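
For illustration, the following is a minimal R sketch of this analysis pipeline. It uses simulated data, and all column names and values are hypothetical, not the authors' actual analysis code; it only demonstrates the dichotomization of the 5-level ratings and the logistic regression described above.

```r
set.seed(42)

# Simulated journal-level data; column names and values are illustrative only
n <- 117
ratings <- c("active strong", "active weak", "passive moderate",
             "passive weak", "none")
dat <- data.frame(
  jif_quartile = factor(sample(paste0("Q", 1:4), n, replace = TRUE)),
  publisher    = factor(sample(c("Springer", "Elsevier", "LWW",
                                 "Society", "Other"), n, replace = TRUE)),
  consort      = sample(ratings, n, replace = TRUE),
  prisma       = sample(ratings, n, replace = TRUE)
)

# Any rating other than "none" counts as positive endorsement of a guideline;
# a journal is positive if at least one guideline is rated positive
guideline_cols <- c("consort", "prisma")
dat$endorsed <- apply(dat[guideline_cols] != "none", 1, any)

# Multivariable logistic regression of endorsement on journal characteristics
fit <- glm(endorsed ~ jif_quartile + publisher, data = dat, family = binomial)

# Odds ratios with Wald 95% confidence intervals
exp(cbind(OR = coef(fit), confint.default(fit)))
```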

Results

Journal inclusion

There were 135 journals in the Radiology, Nuclear Medicine & Medical Imaging category, Science Citation Index Expanded of the 2022 Journal Citation Reports. We excluded 10 journals that only publish reviews, 7 non-English journals, and 1 journal without an available website. Finally, 117 radiological journals were included (Fig. 1).

Fig. 1 Flowchart for radiological journal inclusion

Journal characteristics

The 2022 JIF of the included journals had a mean ± standard deviation of 3.7 ± 2.8 and a median (range) of 2.9 (0.6 to 19.7) (Table 1). The journals most commonly belonged to JIF Q2 (29.9%, 35/117), were based in North America (47.9%, 56/117), and were not indexed solely in the Radiology, Nuclear Medicine & Medical Imaging category (54.7%, 64/117). Springer was the most common publisher (25.6%, 30/117); the journals most often published ≥ 12 issues/year (40.2%, 47/117), supported hybrid access (70.1%, 82/117), and were owned by academic societies (71.8%, 84/117) (Fig. 2). The characteristics of each journal are presented in Supplementary Tables S1 and S2, and the original data for analysis in the Supplementary Data Sheet.

Table 1 Characteristics of included radiological journals

Fig. 2 Sankey diagram of journal characteristics. Abbreviations: JIF = Journal Impact Factor, LWW = Lippincott Williams & Wilkins

Endorsement of reporting guidelines

The journals were divided into positive (61.5%, 72/117) and negative (38.5%, 45/117) groups (Table 1). The endorsement level of general reporting guidelines was most often “active weak” (12.8% to 32.5%, 15/117 to 38/117) (Table 2). The five most endorsed general reporting guidelines were CONSORT (58.1%, 68/117), PRISMA (54.7%, 64/117), STROBE (51.3%, 60/117), STARD (50.4%, 59/117), and ARRIVE (35.9%, 42/117).

Table 2 Endorsement levels of reporting guidelines in radiological journals

The most implemented reporting guideline for AI applications in medical imaging was CLAIM (1.7%, 2/117), while the other nine AI reporting guidelines were not mentioned in the submission documents of radiological journals. Examples of the five endorsement levels are presented in Supplementary Table S3. The endorsement of reporting guidelines by each journal is presented in Fig. 3 and Supplementary Table S4.

Fig. 3 Endorsement of each reporting guideline across radiological journals. The left part presents the endorsement of the 15 general reporting guidelines; the right part presents the endorsement of the 10 reporting guidelines for AI applications in medical imaging. Abbreviation: Q1 to Q4 = the first to the fourth Journal Impact Factor quartile

Factors associated with the endorsement of reporting guidelines

The journal characteristics did not differ between the positive and negative groups, except for the distribution of publisher (Table 1). Multivariable logistic regression showed that JIF quartile and publisher were associated with the endorsement of reporting guidelines in radiological journals (Table 3). JIF Q2 journals were more likely to endorse reporting guidelines than JIF Q1 journals (odds ratio 7.83, 95% confidence interval 1.70–36.20, P = 0.008). Journals published by academic societies (0.17, 0.04–0.64, P = 0.009), Lippincott Williams & Wilkins (0.13, 0.03–0.68, P = 0.016), and other publishers (0.03, 0.002–0.33, P = 0.004) were less likely to endorse reporting guidelines than those published by Springer.
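
As an illustrative sanity check on how these numbers relate (assuming Wald-type confidence intervals, which is our assumption rather than a detail reported in Table 3), an odds ratio and its confidence limits are linked through the underlying log-odds coefficient:

```r
# The reported OR of 7.83 (95% CI 1.70-36.20) for JIF Q2 vs Q1 corresponds
# to a log-odds coefficient and standard error of roughly:
beta <- log(7.83)                              # ~2.06
se   <- (log(36.20) - log(1.70)) / (2 * 1.96)  # ~0.78, assuming a Wald CI
round(exp(beta + c(-1.96, 0, 1.96) * se), 2)   # ~1.70, 7.83, 36.20
```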

Table 3 Factors associated with the endorsement of reporting guidelines

Discussion

The endorsement of general reporting guidelines in radiological journals was lowest for SPIRIT, PRISMA-P, AGREE, RIGHT, and COREQ (all 32.5%) and highest for CONSORT (58.1%). Only two journals suggested implementing CLAIM (1.7%), while the other nine evaluated reporting guidelines for AI applications in medical imaging were not mentioned. JIF quartile and publisher were associated with the endorsement of reporting guidelines in radiological journals.

The CONSORT statement was one of the earliest reporting guidelines, developed for randomized clinical trials and followed by a series of reporting guidelines covering the main study types. In accordance with previous studies, the CONSORT statement has been the most commonly endorsed guideline across journals in various specialties [6,7,8,9,10,11,12,13,14,15,16,17,18,19]. The methodology of randomized clinical trials differs little among journals from different specialties, so the CONSORT statement meets the requirements of most journals. Likewise, the reporting guidelines for systematic reviews, observational studies, and animal studies also showed relatively high endorsement, because their methodologies vary little as well. Radiological journals further recommend the STARD statement, since diagnostic accuracy testing plays an important role in radiology [21, 22]. However, the TRIPOD statement has not been widely accepted by radiological journals, possibly because multivariable prediction model studies are conducted less often than diagnostic accuracy studies. The reporting guidelines for other study types were less endorsed by radiological journals, possibly because protocols and case reports are not acceptable study types for some journals, and guidelines, qualitative research, quality improvement studies, and economic evaluations are not common study types in radiology.

In addition to general reporting guidelines, the reporting guidelines for AI applications in medical imaging were evaluated in our study. Unfortunately, the endorsement of the investigated AI reporting guidelines was extremely low in radiological journals. Only European Radiology and the Journal of the American College of Radiology recommended CLAIM for AI studies, although CLAIM was published in Radiology: Artificial Intelligence [65]. MAIC-10 and CLEAR are two guidelines recently published in Insights into Imaging for AI studies and radiomics studies [66, 69], and the IBSI statement for radiomics was introduced in Radiology [68]. However, they have not been widely implemented by radiological journals, not even by the journals that published them. In contrast to the low endorsement in radiological journals, these reporting guidelines have often been applied in systematic reviews [76,77,78,79,80,81,82,83,84,85], which found that adherence to RQS, IBSI, and CLAIM was suboptimal in radiomics and AI studies in medical imaging. This is unsurprising, since most radiological journals did not endorse these reporting guidelines. Reporting transparency has improved after the introduction of reporting guidelines [20,21,22,23], so making the reporting guidelines for AI applications in medical imaging mandatory can be expected to improve their awareness and application, in order to achieve transparent AI and radiomics study reporting. The AI and radiomics community should understand the importance of proper self-reporting and encourage researchers, journals, editors, and reviewers to take action to ensure the proper usage of checklists [82,83,84,85]. We plan to investigate the influence of the endorsement of the reporting guidelines for AI applications in medical imaging on the quality of study reporting.

We found that JIF quartile and publisher were associated with the endorsement of reporting guidelines in radiological journals. Journals with higher JIF are generally more likely to endorse reporting guidelines than those with lower JIF [6, 15, 17, 18], which is reasonable given that they are considered to be of higher quality. However, our study showed that JIF Q2 journals were more likely to endorse reporting guidelines than JIF Q1 journals; the underlying reason is not yet clear. Journals published by Springer showed higher endorsement of reporting guidelines among both surgical and radiological journals [17]. We infer that this higher endorsement benefits from Springer’s unified editorial policy, which recommends specific reporting guidelines and the EQUATOR Network website. Surgical journals from the United Kingdom and Europe were more likely to endorse reporting guidelines than those from North America [17], but we did not find an influence of publication region among radiological journals.

The requirement of adherence to reporting guidelines may improve reporting quality [20,21,22,23], and agreement among journals on the endorsement of reporting guidelines could improve the quality of research publishing [86]. Rather than prioritizing additional studies on the poor quality of health research reporting, interventions are needed to improve reporting [87]. However, only a limited number of tools have been proposed for this purpose, and fewer have been evaluated [88]. The only one that showed a statistically significant effect on reporting quality was the CONSORT-based web tool COBWEB [89], which supports adherence at the manuscript writing stage. The earlier reporting guidelines are used, the greater their impact on the final manuscript and the higher their perceived value [25]; it has therefore been repeatedly suggested to enhance education on the use of these guidelines. It seems most pivotal to support journals in hard-wiring adherence to reporting guidelines into their editorial policies. Making the reporting guidelines mandatory, and asking reviewers to use them during the review process, may be an effective approach. Reporting quality improved when a journal required authors to incorporate section headings reflecting CONSORT items into their manuscripts [90]. Although these interventions have not yet been shown to be effective in radiological journals, authors, reviewers, and editors may actively use reporting guidelines to guide study design, conduct, drafting, reviewing, and revision. Further surveys are needed to identify the obstacles to the endorsement of reporting guidelines in journals. Potential reasons for suboptimal endorsement include insufficient resources for mandating use, the need to reduce barriers to submission and review, unique editorial perspectives, or the use of alternatives to improve study quality [16, 17]. The current study may raise the radiological community’s awareness of the suboptimal endorsement of reporting guidelines; we plan to re-evaluate the endorsement in the future to determine whether the publication of radiological research has been reshaped.

Our study has limitations. First, it only included the radiological journals in the Science Citation Index Expanded. We selected these journals as a representative sample of the high-quality journals in this field, so our results may overestimate journals’ endorsement of reporting guidelines. Second, our study is cross-sectional and presents only the current, suboptimal endorsement of reporting guidelines in radiological journals; an updated study should be conducted in the future. Third, we relied on online documents to assess endorsement, without verifying additional instructions that may appear during the manuscript submission and review process. Although we collected and cross-checked the journal information, the journal websites may have been updated afterwards, and changes in editorial policies may not be promptly reflected online, which may influence our conclusions. Fourth, we only investigated a limited number of general reporting guidelines and guidelines for AI applications in medical imaging. Many reporting guidelines developed for more specific purposes are available on the EQUATOR Network website. Although our study did not cover all available guidelines, we consider that the included reporting guidelines at least represent the most commonly used ones. We did identify several additional AI guidelines that were not included in our study [91,92,93,94,95]; it would be interesting to investigate the endorsement of AI reporting guidelines that are not specifically designed for AI applications in medical imaging. Finally, our study only emphasized the potential of making reporting guidelines for AI applications in medical imaging mandatory to improve awareness and application, in order to achieve high-quality study reporting. Nevertheless, there are currently so many different reporting guidelines in the AI research domain that it is difficult for authors, reviewers, and editors to choose among them [31,32,33,34,35, 60,61,62,63,64,65,66,67,68,69, 91,92,93,94,95]. A next step is to assess how relevant these developed and developing guidelines are to the current state of AI development.

Conclusion

In summary, our study found that the endorsement of general reporting guidelines in radiological journals is suboptimal, and the implementation of artificial intelligence reporting guidelines is extremely low. Radiological journals may consider making general and artificial intelligence reporting guidelines mandatory to improve their awareness and application, in order to achieve high-quality and transparent radiological study reporting.

Availability of data and materials

All data generated or analysed during this study are included in this published article and its supplementary information files.

Abbreviations

AGREE: Appraisal of Guidelines, Research, and Evaluation
AI: Artificial Intelligence
ARRIVE: Animal Research Reporting of In Vivo Experiments
CARE: CAse REport guidelines
CHEERS: Consolidated Health Economic Evaluation Reporting Standards
CLAIM: CheckList for Artificial Intelligence in Medical imaging
CLEAR: CheckList for EvaluAtion of Radiomics research
CONSORT: Consolidated Standards of Reporting Trials
CONSORT-AI: Consolidated Standards of Reporting Trials involving Artificial Intelligence
COREQ: COnsolidated criteria for REporting Qualitative research
EQUATOR: Enhancing the QUAlity and Transparency Of health Research
FUTURE-AI: Fairness Universality Traceability Usability Robustness Explainability Artificial Intelligence solutions
IBSI: Image Biomarker Standardization Initiative
JIF: Journal Impact Factor
MAIC-10: Must Artificial Intelligence Criteria-10
MI-CLAIM: Minimum Information about Clinical Artificial Intelligence Modeling
MINIMAR: Minimum Information for Medical AI Reporting
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PRISMA-P: Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols
RIGHT: Reporting Items for Practice Guidelines in Healthcare
RQS: Radiomics Quality Score
SPIRIT: Standard Protocol Items: Recommendations for Interventional Trials
SPIRIT-AI: Standard Protocol Items: Recommendations for Interventional Trials involving Artificial Intelligence
SQUIRE: Standards for QUality Improvement Reporting Excellence
SRQR: Standards for Reporting of Qualitative Research
STARD: Standards for Reporting of Diagnostic Accuracy
STROBE: STrengthening the Reporting of Observational Studies in Epidemiology
TRIPOD: Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis

References

  1. Groves T. Enhancing the quality and transparency of health research. BMJ. 2008;337(7661): a718.

  2. Simera I, Moher D, Hirst A, Hoey J, Schulz KF, Altman DG. Transparent and accurate reporting increases reliability, utility, and impact of your research: reporting guidelines and the EQUATOR Network. BMC Med. 2010;8:24.

  3. Zeng X, Zhang Y, Kwong JS, Zhang C, Li S, Sun F, et al. The methodological quality assessment tools for preclinical and clinical studies, systematic review and meta-analysis, and clinical practice guideline: a systematic review. J Evid Based Med. 2015;8(1):2–10.

  4. Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gøtzsche PC, et al. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

  5. Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014;383(9913):267–76.

  6. Kunath F, Grobe HR, Rücker G, Engehausen D, Antes G, Wullich B, et al. Do journals publishing in the field of urology endorse reporting guidelines? A survey of author instructions. Urol Int. 2012;88(1):54–9.

  7. Sims MT, Henning NM, Wayant CC, Vassar M. Do emergency medicine journals promote trial registration and adherence to reporting guidelines? A survey of “Instructions for Authors.” Scand J Trauma Resusc Emerg Med. 2016;24(1):137.

  8. Wayant C, Smith C, Sims M, Vassar M. Hematology journals do not sufficiently adhere to reporting guidelines: a systematic review. J Thromb Haemost. 2017;15(4):608–17.

  9. Checketts JX, Sims MT, Detweiler B, Middlemist K, Jones J, Vassar M. An evaluation of reporting guidelines and clinical trial registry requirements among orthopaedic surgery journals. J Bone Joint Surg Am. 2018;100(3):e15.

  10. Sims MT, Checketts JX, Wayant C, Vassar M. Requirements for trial registration and adherence to reporting guidelines in critical care journals: a meta-epidemiological study of journals’ instructions for authors. Int J Evid Based Healthc. 2018;16(1):55–65.

  11. Sims MT, Bowers AM, Fernan JM, Dormire KD, Herrington JM, Vassar M. Trial registration and adherence to reporting guidelines in cardiovascular journals. Heart. 2018;104(9):753–9.

  12. Jorski A, Scott J, Heavener T, Vassar M. Reporting guideline and clinical trial registration requirements in gastroenterology and hepatology journals. Int J Evid Based Healthc. 2018;16(2):119–27.

  13. Cook C, Checketts JX, Atakpo P, Nelson N, Vassar M. How well are reporting guidelines and trial registration used by dermatology journals to limit bias? A meta-epidemiological study. Br J Dermatol. 2018;178(6):1433–4.

  14. Wayant C, Moore G, Hoelscher M, Cook C, Vassar M. Adherence to reporting guidelines and clinical trial registration policies in oncology journals: a cross-sectional review. BMJ Evid Based Med. 2018;23(3):104–10.

  15. Sharp MK, Tokalić R, Gómez G, Wager E, Altman DG, Hren D. A cross-sectional bibliometric study showed suboptimal journal endorsement rates of STROBE and its extensions. J Clin Epidemiol. 2019;107:42–50.

  16. Zuñiga-Hernandez JA, Dorsey-Treviño EG, González-González JG, Brito JP, Montori VM, Rodriguez-Gutierrez R. Endorsement of reporting guidelines and study registration by endocrine and internal medicine journals: meta-epidemiological study. BMJ Open. 2019;9(9):e031259.

  17. Zhou J, Li J, Zhang J, Geng B, Chen Y, Zhou X. Requirements for study registration and adherence to reporting guidelines in surgery journals: a cross-sectional study. World J Surg. 2021;45(4):1031–42.

  18. Zhou J, Li J, Zhang J, Geng B, Chen Y, Zhou X. The relationship between endorsing reporting guidelines or trial registration and the impact factor or total citations in surgical journals. PeerJ. 2022;10: e12837.

  19. Duan Y, Zhao L, Ma Y, Luo J, Chen J, Miao J, et al. A cross-sectional study of the endorsement proportion of reporting guidelines in 1039 Chinese medical journals. BMC Med Res Methodol. 2023;23(1):20.

  20. Stevens A, Shamseer L, Weinstein E, Yazdi F, Turner L, Thielman J, et al. Relation of completeness of reporting of health research to journals’ endorsement of reporting guidelines: systematic review. BMJ. 2014;348: g3804.

  21. Stahl AC, Tietz AS, Dewey M, Kendziora B. Has the quality of reporting improved since it became mandatory to use the Standards for Reporting Diagnostic Accuracy? Insights Imaging. 2023;14(1):85.

  22. Stahl AC, Tietz AS, Kendziora B, Dewey M. Has the STARD statement improved the quality of reporting of diagnostic accuracy studies published in European Radiology? Eur Radiol. 2023;33(1):97–105.

  23. Park HY, Suh CH, Woo S, Kim PH, Kim KW. Quality reporting of systematic review and meta-analysis according to PRISMA 2020 guidelines: results from recently published papers in the Korean Journal of Radiology. Korean J Radiol. 2022;23(3):355–69.

  24. Grégory J, Maino C, Vilgrain V, Ronot M, Boutron I. Completeness of reporting in abstracts of randomized controlled trials assessing interventional radiology for liver disease. J Vasc Interv Radiol. 2023;1576–1583.e7.

  25. Dewey M, Levine D, Bossuyt PM, Kressel HY. Impact and perceived value of journal reporting guidelines among Radiology authors and reviewers. Eur Radiol. 2019;29(8):3986–95.

  26. Mollura DJ, Culp MP, Pollack E. Artificial intelligence in low- and middle-income countries: innovating global health radiology. Radiology. 2020;297(3):513–20.

  27. Daye D, Wiggins WF, Lungren MP, Alkasab T, Kottler N, Allen B, et al. Implementation of clinical artificial intelligence in radiology: who decides and how? Radiology. 2022;305(3):555–63.

  28. Strohm L, Hehakaya C, Ranschaert ER, Boon WPC, Moors EHM. Implementation of artificial intelligence (AI) applications in radiology: hindering and facilitating factors. Eur Radiol. 2020;30(10):5525–32.

  29. Scheek D, Rezazade Mehrizi MH, Ranschaert E. Radiologists in the loop: the roles of radiologists in the development of AI applications. Eur Radiol. 2021;31(10):7960–8.

  30. Yang L, Ene IC, Arabi Belaghi R, Koff D, Stein N, Santaguida PL. Stakeholders’ perspectives on the future of artificial intelligence in radiology: a scoping review. Eur Radiol. 2022;32(3):1477–95.

  31. Shelmerdine SC, Arthurs OJ, Denniston A, Sebire NJ. Review of study reporting guidelines for clinical studies using artificial intelligence in healthcare. BMJ Health Care Inform. 2021;28(1): e100385.

  32. Ibrahim H, Liu X, Denniston AK. Reporting guidelines for artificial intelligence in healthcare research. Clin Exp Ophthalmol. 2021;49(5):470–6.

  33. Meshaka R, Pinto Dos Santos D, Arthurs OJ, Sebire NJ, Shelmerdine SC. Artificial intelligence reporting guidelines: what the pediatric radiologist needs to know. Pediatr Radiol. 2022;52(11):2101–2110.

  34. Zrubka Z, Gulácsi L, Péntek M. Time to start using checklists for reporting artificial intelligence in health care and biomedical research: a rapid review of available tools. 2022 IEEE 26th International Conference on Intelligent Engineering Systems (INES). August 12–15, 2022. Crete, Greece. https://doi.org/10.1109/INES56734.2022.9922639. Accessed 15 Jul 2023.

  35. Klontzas ME, Gatti AA, Tejani AS, Kahn CE Jr. AI reporting guidelines: how to select the best one for your research. Radiol Artif Intell. 2023;5(3): e230055.

  36. Ioannidis JP, Fanelli D, Dunne DD, Goodman SN. Meta-research: Evaluation and improvement of research methods and practices. PLoS Biol. 2015;13(10): e1002264.

  37. Tatsioni A, Ioannidis JPA. Meta-research: bird’s eye views of primary care research. Fam Pract. 2020;37(3):287–9.

  38. Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

  39. Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020;9(7):497–508.

  40. Mbuagbaw L, Lawson DO, Puljak L, Allison DB, Thabane L. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol. 2020;20(1):226.

  41. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Syst Rev. 2023;12(1):96.

  42. Clarivate. Journal Citation Reports. https://jcr.clarivate.com/jcr. Accessed 20 Jul 2023.

  43. Frank RA, McInnes MDF, Levine D, Kressel HY, Jesurum JS, Petrcich W, et al. Are study and journal characteristics reliable indicators of “truth” in imaging research? Radiology. 2018;287(1):215–23.

  44. Baek S, Yoon DY, Lim KJ, Cho YK, Seo YL, Yun EJ. The most downloaded and most cited articles in radiology journals: a comparative bibliometric analysis. Eur Radiol. 2018;28(11):4832–8.

  45. Schulz KF, Altman DG, Moher D; CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. Ann Intern Med. 2010;152(11):726–732.

  46. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP; STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann Intern Med. 2007;147(8):573–577.

  47. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372: n71.

  48. Chan AW, Tetzlaff JM, Altman DG, Laupacis A, Gøtzsche PC, Krleža-Jerić K, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med. 2013;158(3):200–7.

  49. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al; PRISMA-P Group. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1.

  50. Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig L, et al; STARD Group. STARD 2015: An updated list of essential items for reporting diagnostic accuracy studies. Radiology. 2015;277(3):826–832.

  51. Collins GS, Reitsma JB, Altman DG, Moons KG. Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): the TRIPOD statement. Ann Intern Med. 2015;162(1):55–63.

  52. Gagnier JJ, Kienle G, Altman DG, Moher D, Sox H, Riley D; CARE Group. The CARE guidelines: consensus-based clinical case reporting guideline development. BMJ Case Rep 2013;2013:bcr2013201554.

  53. Brouwers MC, Kerkvliet K, Spithoff K; AGREE Next Steps Consortium. The AGREE reporting checklist: a tool to improve reporting of clinical practice guidelines. BMJ. 2016;352:i1152.

  54. Chen Y, Yang K, Marušic A, Qaseem A, Meerpohl JJ, Flottorp S, Akl EA, et al; RIGHT (Reporting Items for Practice Guidelines in Healthcare) Working Group. A reporting tool for practice guidelines in health care: the RIGHT statement. Ann Intern Med. 2017;166(2):128–132.

  55. O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245–51.

  56. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

  57. Percie du Sert N, Hurst V, Ahluwalia A, Alam S, Avey MT, Baker M, et al. The ARRIVE guidelines 2.0: Updated guidelines for reporting animal research. PLoS Biol. 2020;18(7):e3000410.

  58. Ogrinc G, Davies L, Goodman D, Batalden P, Davidoff F, Stevens D. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2016;25(12):986–992.

  59. Husereau D, Drummond M, Augustovski F, de Bekker-Grob E, Briggs AH, Carswell C, et al; CHEERS 2022 ISPOR Good Research Practices Task Force. Consolidated Health Economic Evaluation Reporting Standards 2022 (CHEERS 2022) statement: updated reporting guidance for health economic evaluations. BMJ. 2022;376:e067975.

  60. Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK; SPIRIT-AI and CONSORT-AI Working Group. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Lancet Digit Health. 2020;2(10):e537-e548.

  61. Cruz Rivera S, Liu X, Chan AW, Denniston AK, Calvert MJ; SPIRIT-AI and CONSORT-AI Working Group. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Lancet Digit Health. 2020;2(10):e549-e560.

  62. Lekadir K, Feragen A, Fofanah AJ, Frangi A, Buyx A, Emrlie A, et al. FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare. arXiv:2309.12325v1. https://arxiv.org/abs/2309.12325. Accessed 10 Oct 2023.

  63. Norgeot B, Quer G, Beaulieu-Jones BK, Torkamani A, Dias R, Gianfrancesco M, et al. Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist. Nat Med. 2020;26(9):1320–4.

  64. Hernandez-Boussard T, Bozkurt S, Ioannidis JPA, Shah NH. MINIMAR (MINimum Information for Medical AI Reporting): Developing reporting standards for artificial intelligence in health care. J Am Med Inform Assoc. 2020;27(12):2011–5.

  65. Mongan J, Moy L, Kahn CE Jr. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): a guide for authors and reviewers. Radiol Artif Intell. 2020;2(2): e200029.

  66. Cerdá-Alberich L, Solana J, Mallol P, Ribas G, García-Junco M, Alberich-Bayarri A, et al. MAIC-10 brief quality checklist for publications using artificial intelligence and medical images. Insights Imaging. 2023;14(1):11.

  67. Lambin P, Leijenaar RTH, Deist TM, Peerlings J, de Jong EEC, van Timmeren J, et al. Radiomics: the bridge between medical imaging and personalized medicine. Nat Rev Clin Oncol. 2017;14(12):749–62.

  68. Zwanenburg A, Vallières M, Abdalah MA, Aerts HJWL, Andrearczyk V, Apte A, et al. The image biomarker standardization initiative: standardized quantitative radiomics for high-throughput image-based phenotyping. Radiology. 2020;295(2):328–38.

  69. Kocak B, Baessler B, Bakas S, Cuocolo R, Fedorov A, Maier-Hein L, et al. CheckList for EvaluAtion of Radiomics research (CLEAR): a step-by-step reporting guideline for authors and reviewers endorsed by ESR and EuSoMII. Insights Imaging. 2023;14(1):75.

  70. Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network. https://www.equator-network.org. Accessed 10 Oct 2023.

  71. International Committee of Medical Journal Editors document. https://www.icmje.org. Accessed 10 Oct 2023.

  72. Mangiafico SS. An R companion for the Handbook of Biological Statistics, version 1.3.9, revised 2023. https://rcompanion.org/rcompanion/. Accessed 20 Jul 2023.

  73. Mangiafico SS. Summary and analysis of extension program evaluation in R, version 1.20.05, revised 2023. rcompanion.org/handbook/. Accessed 10 Oct 2023.

  74. The R Project for Statistical Computing. R language version 4.1.3. https://www.r-project.org/. Accessed 20 Jul 2023.

  75. Posit. RStudio version 1.4.1106. https://posit.co. Accessed 10 Oct 2023.

  76. Park JE, Kim D, Kim HS, Park SY, Kim JY, Cho SJ, et al. Quality of science and reporting of radiomics in oncologic studies: room for improvement according to radiomics quality score and TRIPOD statement. Eur Radiol. 2020;30(1):523–36.

  77. Zhong J, Hu Y, Si L, Jia G, Xing Y, Zhang H, et al. A systematic review of radiomics in osteosarcoma: utilizing radiomics quality score as a tool promoting clinical translation. Eur Radiol. 2021;31(3):1526–35.

  78. Zhong J, Hu Y, Zhang G, Xing Y, Ding D, Ge X, et al. An updated systematic review of radiomics in osteosarcoma: utilizing CLAIM to adapt the increasing trend of deep learning application in radiomics. Insights Imaging. 2022;13(1):138.

  79. Zhong J, Hu Y, Ge X, Xing Y, Ding D, Zhang G, et al. A systematic review of radiomics in chondrosarcoma: assessment of study quality and clinical value needs handy tools. Eur Radiol. 2023;33(2):1433–44.

  80. Zhong J, Xing Y, Zhang G, Hu Y, Ding D, Ge X, et al. A systematic review of radiomics in giant cell tumor of bone (GCTB): the potential of analysis on individual radiomics feature for identifying genuine promising imaging biomarkers. J Orthop Surg Res. 2023;18(1):414.

  81. Zhong J, Hu Y, Xing Y, Ge X, Ding D, Zhang H, et al. A systematic review of radiomics in pancreatitis: applying the evidence level rating tool for promoting clinical transferability. Insights Imaging. 2022;13(1):139.

  82. O’Shea RJ, Sharkey AR, Cook GJR, Goh V. Systematic review of research design and reporting of imaging studies applying convolutional neural networks for radiological cancer diagnosis. Eur Radiol. 2021;31(10):7969–83.

  83. Si L, Zhong J, Huo J, Xuan K, Zhuang Z, Hu Y, et al. Deep learning in knee imaging: a systematic review utilizing a Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Eur Radiol. 2022;32(2):1353–61.

  84. Kocak B, Keles A, Akinci D'Antonoli T. Self-reporting with checklists in artificial intelligence research on medical imaging: a systematic review based on citations of CLAIM. Eur Radiol. 2023. https://doi.org/10.1007/s00330-023-10243-9.

  85. Sivanesan U, Wu K, McInnes MDF, Dhindsa K, Salehi F, van der Pol CB. Checklist for artificial intelligence in medical imaging reporting adherence in peer-reviewed and preprint manuscripts with the highest altmetric attention scores: a meta-research study. Can Assoc Radiol J. 2023;74(2):334–42.

  86. Mannocci A, Saulle R, Colamesta V, D’Aguanno S, Giraldi G, Maffongelli E, et al. What is the impact of reporting guidelines on public health journals in Europe? The case of STROBE CONSORT and PRISMA. J Public Health (Oxf). 2015;37(4):737–40.

  87. Dal Santo T, Rice DB, Amiri LSN, Tasleem A, Li K, Boruff JT, et al. Methods and results of studies on reporting guideline adherence are poorly reported: a meta-research study. J Clin Epidemiol. 2023;159:225–34.

  88. Blanco D, Altman D, Moher D, Boutron I, Kirkham JJ, Cobo E. Scoping review on interventions to improve adherence to reporting guidelines in health research. BMJ Open. 2019;9(5): e026589.

  89. Barnes C, Boutron I, Giraudeau B, Porcher R, Altman DG, Ravaud P. Impact of an online writing aid tool for writing a randomized trial report: the COBWEB (Consort-based WEB tool) randomized controlled trial. BMC Med. 2015;13:221.

  90. Koletsi D, Fleming PS, Behrents RG, Lynch CD, Pandis N. The use of tailored subheadings was successful in enhancing compliance with CONSORT in a dental journal. J Dent. 2017;67:66–71.

  91. Sounderajah V, Ashrafian H, Golub RM, Shetty S, De Fauw J, Hooft L, et al; STARD-AI Steering Committee. Developing a reporting guideline for artificial intelligence-centred diagnostic test accuracy studies: the STARD-AI protocol. BMJ Open. 2021;11(6):e047709.

  92. Collins GS, Dhiman P, Andaur Navarro CL, Ma J, Hooft L, et al. Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence. BMJ Open. 2021;11(7): e048008.

  93. Sounderajah V, Ashrafian H, Rose S, Shah NH, Ghassemi M, Golub R, et al. A quality assessment tool for artificial intelligence-centered diagnostic test accuracy studies: QUADAS-AI. Nat Med. 2021;27(10):1663–5.

  94. Olczak J, Pavlopoulos J, Prijs J, Ijpma FFA, Doornberg JN, Lundström C, et al. Presenting artificial intelligence, deep learning, and machine learning studies to clinicians and healthcare stakeholders: an introductory reference with a guideline and a Clinical AI Research (CAIR) checklist proposal. Acta Orthop. 2021;92(5):513–25.

  95. Vasey B, Nagendran M, Campbell B, Clifton DA, Collins GS, Denaxas S, et al; DECIDE-AI expert group. Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. Nat Med. 2022;28(5):924–933.

Acknowledgements

The authors would like to thank Ms. Hongyan Huang for English editing.

Funding

This study received funding from the National Natural Science Foundation of China (82302183, 82271934), the Yangfan Project of the Science and Technology Commission of Shanghai Municipality (22YF1442400), the Research Fund of the Health Commission of Changning District, Shanghai Municipality (2023QN01), the Research Fund of Tongren Hospital, Shanghai Jiao Tong University School of Medicine (TRKYRC-XX202204, TRGG202101, TRYJ2021JC06, TRYXJH18, TRYXJH28), and the Guangci Innovative Technology Launch Plan of Ruijin Hospital, Shanghai Jiao Tong University School of Medicine (2022–13). The funders played no role in the study design, data collection or analysis, decision to publish, or manuscript preparation.

Author information

Authors and Affiliations

Authors

Contributions

JYZ, YX, JJL, GCZ, SQM, HDC, QY, QQC, RJ, YFH, DFD, XG, HZ, and WWY contributed to the study concept and design, and read and approved the final version of the manuscript. JYZ and YX contributed to the literature research, data acquisition, data analysis, and data visualization. JYZ and JJL contributed to the statistical analysis. JYZ prepared the original version of the manuscript. HZ and WWY supervised the whole study. WWY is the guarantor of integrity of the entire study.

Corresponding authors

Correspondence to Huan Zhang or Weiwu Yao.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

Dr. Jingyu Zhong acknowledges his position as a member of the Scientific Editorial Board of European Radiology and BMC Medical Imaging, which were included in the sample of this study; the assessments of these two journals were therefore cross-checked by other authors to avoid bias. All other authors of this manuscript declare no relationships with any companies whose products or services may be related to the subject matter of the article.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1:

Supplementary Note S1. Study protocol. Supplementary Table S1. Bibliometrics information of included and excluded journals. Supplementary Table S2. Characteristics and homepages of included and excluded journals. Supplementary Table S3. Five endorsement level defined with examples in radiological journals. Supplementary Table S4. Endorsement level rating of reporting guidelines of included journals.

Additional file 2. 

Data Sheet.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Zhong, J., Xing, Y., Lu, J. et al. The endorsement of general and artificial intelligence reporting guidelines in radiological journals: a meta-research study. BMC Med Res Methodol 23, 292 (2023). https://doi.org/10.1186/s12874-023-02117-x
