Table 2 RepeAT framework variables where inter-rater reliability could be calculated using Cohen’s kappa

From: Repeat: a framework to assess empirical reproducibility in biomedical research

| RepeAT Framework Variable | Cohen's Kappa | Kappa Bounds (upper–lower) | Variance (Rater 1) | Variance (Rater 2) | Percent Agreement (%) |
|---|---|---|---|---|---|
| Does the publication state the database(s) or source(s) of data? | 0.320 | (0.580–0.060) | 0.095 | 0.250 | 70.6 |
| Does the publication clearly state the process(es) for validating data mined via NLP and/or queried from a database? | 0.440 | (0.860–0.019) | 0.182 | 0.069 | 85.7 |
| Does the author state a clear, documented process for accounting for missing data? | 0.520 | (0.890–0.140) | 0.115 | 0.261 | 83.3 |
| Does the research involve natural language processing or text mining? | 0.870 | (1.100–0.630) | 0.134 | 0.107 | 97.1 |
| Does the author indicate the software used to develop the analysis code? | 0.880 | (1.000–0.710) | 0.236 | 0.243 | 94.1 |
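For context, Cohen's kappa corrects raw percent agreement for the agreement expected by chance given each rater's marginal label frequencies, which is why a variable can show high percent agreement but a modest kappa. The sketch below is not the authors' code, and the two sample rating vectors are hypothetical; it only illustrates how the kappa and percent-agreement columns could be computed for two raters' binary codings.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: fraction of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of the raters' marginal label frequencies,
    # summed over all labels either rater used.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in set(rater1) | set(rater2)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical binary codings (1 = "yes", 0 = "no") for eight publications.
r1 = [1, 1, 0, 1, 0, 1, 1, 0]
r2 = [1, 0, 0, 1, 0, 1, 1, 1]
agree = 100 * sum(a == b for a, b in zip(r1, r2)) / len(r1)
print(f"kappa = {cohens_kappa(r1, r2):.3f}")   # 0.467
print(f"percent agreement = {agree:.1f}")      # 75.0
```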