Homogeneity score test of AC1 statistics and estimation of common AC1 in multiple or stratified inter-rater agreement studies

Background Cohen’s κ coefficient is often used as an index of the agreement of inter-rater determinations. However, κ varies greatly depending on the marginal distribution of the target population and overestimates the probability of agreement occurring by chance. To overcome these limitations, an alternative and more stable agreement coefficient, referred to as Gwet’s AC1, was proposed. When results from multiple agreement studies are to be combined, as in a meta-analysis, or when a stratified analysis is performed with subject covariates that affect agreement, it is of interest to compare several agreement coefficients and to present a common agreement index. A homogeneity test of κ has been developed; however, there are no reports on homogeneity tests for AC1 or on an estimator of the common AC1. In this article, a homogeneity score test for AC1 is therefore derived for the case of two raters with binary outcomes from K independent strata, and its performance is investigated. Estimation of the common AC1 across strata and its confidence intervals are also discussed. Methods Two homogeneity tests are provided: a score test and a goodness-of-fit test. The confidence intervals are derived by the simple asymptotic, Fisher’s Z transformation and profile variance methods. Monte Carlo simulation studies were conducted to examine the validity of the proposed methods. An example using clinical data is also provided. Results Type I error rates of the proposed score test were close to the nominal level in simulations with small and moderate sample sizes. The confidence intervals based on Fisher’s Z transformation and the profile variance method provided coverage close to the nominal level over a wide range of parameter combinations.
Conclusions The method proposed in this study is considered to be useful for summarizing evaluations of consistency performed in multiple or stratified inter-rater agreement studies, for meta-analysis of reports from multiple groups and for stratified analysis.


Background
To evaluate reliability when two raters classify objects as either positive (+) or negative (−), Cohen's κ [1] and the intra-class version of κ, which is identical to Scott's π [2], have often been used. Let p_a be the agreement probability, and p_1 and p_2 the probabilities of classification as (+) by raters 1 and 2, respectively. Then Cohen's κ (κ_Cohen) and Scott's π (κ_Scott) are defined as follows:

κ_Cohen = (p_a − p_e(c)) / (1 − p_e(c)),  κ_Scott = (p_a − p_e(s)) / (1 − p_e(s)),

where p_e(c) = p_1 p_2 + (1 − p_1)(1 − p_2), p_e(s) = p_+^2 + (1 − p_+)^2 and p_+ = (p_1 + p_2)/2. Here p_e(c) and p_e(s) are the probabilities of agreement expected by chance for Cohen's κ and Scott's π, respectively; p_e(c) allows the probabilities of positive classification to differ between the two raters, while p_e(s) assumes that these two probabilities are the same. Landis and Koch provided benchmarks for the strength of agreement as follows: values ≤0 as poor, 0.00 to 0.20 as slight, 0.21 to 0.40 as fair, 0.41 to 0.60 as moderate, 0.61 to 0.80 as substantial and 0.81 to 1.00 as almost perfect agreement [3]. Although the authors acknowledged the arbitrary nature of these benchmarks, they recommended the scale as a useful guideline for practitioners.
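As a concrete illustration, both coefficients can be computed from a 2×2 table of counts; the following is a minimal sketch (the function name and table layout are our own, not from the cited works):

```python
# Sketch: Cohen's kappa and Scott's pi from a 2x2 table of counts.
# n[i][j]: row i = rater 1, column j = rater 2; index 0 = "+", 1 = "-".

def cohen_scott(n):
    total = sum(sum(row) for row in n)
    p_a = (n[0][0] + n[1][1]) / total        # observed agreement p_a
    p1 = (n[0][0] + n[0][1]) / total         # P(+) by rater 1
    p2 = (n[0][0] + n[1][0]) / total         # P(+) by rater 2
    p_plus = (p1 + p2) / 2
    p_ec = p1 * p2 + (1 - p1) * (1 - p2)     # chance agreement, Cohen
    p_es = p_plus ** 2 + (1 - p_plus) ** 2   # chance agreement, Scott
    return (p_a - p_ec) / (1 - p_ec), (p_a - p_es) / (1 - p_es)
```

For example, the table [[40, 5], [10, 45]] gives p_a = 0.85, κ_Cohen = 0.70 and κ_Scott ≈ 0.699.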
Gwet showed that AC1 has better statistical properties (bias and variance) than Cohen's κ, Scott's π and the G-index under a limited set of simulations for two raters with binary outcomes [24]. Shankar and Bangdiwala compared Cohen's κ, Scott's π, the Prevalence-Adjusted Bias-Adjusted Kappa (PABAK) [26], AC1 and the B-statistic [27], which is not a kappa-type chance-corrected measure, in the case of two raters with binary outcomes, and showed that AC1 has better properties than the other kappa-type measures [28]. In addition, AC1 has been utilized in the field of medical research over the past decade [29][30][31][32][33][34][35]. Therefore, in this study we limit our discussion to AC1 in the case of two raters with binary outcomes.
First, a brief review of the concept of Gwet's AC1 is provided. Consider the situation in which two raters independently classify a randomly selected subject as positive (+) or negative (−). Gwet defined two events: G = {the two raters agree} and R = {at least one rater performs a random rating}. The probability of agreement expected by chance is then p_e = P(G ⋂ R) = P(G|R)P(R). A random rating classifies an individual into each category with the same probability 1/2, and it follows that P(G|R) = 2 × (1/2) × (1/2) = 1/2. As for P(R), this probability cannot be obtained from data. Therefore, Gwet proposed approximating it by a normalized measure of randomness ψ, defined as follows:

ψ = 4π_+(1 − π_+),

where π_+ is the probability that a randomly chosen rater classifies a randomly chosen subject into the (+) category. The approximated probability of chance agreement is thus represented by

p*_e = P(G|R)ψ = 2π_+(1 − π_+),

and AC1 is defined as follows:

γ = (p_a − p*_e) / (1 − p*_e),

where p_a is the probability of agreement. Although p_e is approximated by p*_e, Gwet showed that the bias of γ, i.e., the difference between γ and the true inter-rater reliability, is equal to or less than that of Cohen's κ, Scott's π and the G-index under certain assumptions in the case of two raters with binary outcomes. Gwet also provided an estimator γ̂* of γ and its variance for multiple raters and multiple categories based on the randomization approach, which requires subjects to be sampled in such a way that all possible subject samples have exactly the same chance of being selected. However, it is advantageous to employ a model-based approach when, for example, the effect of subject covariates on agreement is of interest. Therefore, in the case of two raters with binary outcomes, Ohyama [36] assumed an underlying probability that a subject is rated as (+), together with marginal homogeneity of the two raters, and constructed the likelihood.
The maximum likelihood estimator of γ, which is shown to be identical to the estimator given by Gwet, was derived. Likelihood-based confidence intervals for AC1, the inclusion of subject covariates, hypothesis testing and sample size determination were also discussed [36].
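In sample terms, the AC1 point estimate for a 2×2 table replaces p_a by the observed agreement proportion and π_+ by the average of the two raters' marginal proportions of (+); a minimal sketch (function name ours):

```python
# Sketch: Gwet's AC1 point estimate from a 2x2 table of counts.
# n[i][j]: row i = rater 1, column j = rater 2; index 0 = "+", 1 = "-".

def gwet_ac1(n):
    total = sum(sum(row) for row in n)
    p_a = (n[0][0] + n[1][1]) / total            # observed agreement
    p1 = (n[0][0] + n[0][1]) / total             # P(+) by rater 1
    p2 = (n[0][0] + n[1][0]) / total             # P(+) by rater 2
    pi_plus = (p1 + p2) / 2                      # estimate of pi_+
    p_e_star = 2 * pi_plus * (1 - pi_plus)       # approximated chance agreement
    return (p_a - p_e_star) / (1 - p_e_star)
```

For the table [[40, 5], [10, 45]], this gives γ̂ ≈ 0.701, slightly above κ_Cohen = 0.70 for the same data.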
In this article, we discuss stratified analyses as another approach to adjusting for the effect of subject covariates on agreement. For example, a clinical assessment of whether a patient has a particular disease symptom may be influenced by the overall severity of the disease; in such a case, we consider stratification based on disease severity. Another example is a multicenter inter-rater agreement study, in which the classifications of subjects are conducted independently in each center. These situations yield several independent agreement statistics, and the main purpose of the analysis is then to test whether the degree of inter-rater agreement can be regarded as homogeneous across strata, such as centers or disease severities.
For κ, Fleiss pioneered χ²-based testing of the homogeneity of inter-class correlations using large-sample variances [37], and further studies by Donner, Eliasziw and Klar [14], Nam [15,38] and Wilding, Consiglio and Shan [39] developed homogeneity tests of κ across covariate levels. However, there are no reports on homogeneity tests for AC1 or on an estimator of the common AC1. Therefore, in this article, we derive a homogeneity score test for AC1 across K independent strata and investigate its performance. Estimation of the common AC1 across strata and its confidence intervals are also discussed. Finally, an example application of our approach to clinical trial data is provided.

Score test
Consider K independent strata involving n_k subjects for k = 1, …, K. In each stratum, two raters independently classify subjects as either positive (+) or negative (−). Let X_kij = 1 if subject i (= 1, …, n_k) in the k-th stratum is classified as (+) by rater j (= 1, 2), and X_kij = 0 otherwise. Suppose that P(X_kij = 1 | i) = u_ki, E(u_ki) = π_k and Var(u_ki) = σ_k^2. The γ of the k-th stratum is then expressed as follows [36]:

γ_k = 1 − 2[π_k(1 − π_k) − σ_k^2] / [1 − 2π_k(1 − π_k)],

which is equal to (p_a,k − p*_e,k)/(1 − p*_e,k), where p_a,k is the probability of agreement and p*_e,k = 2π_k(1 − π_k) is the approximated probability of chance agreement in the k-th stratum. Let the numbers of observed pairs in the three categories of the k-th stratum be x_1k, x_2k and x_3k, and their corresponding probabilities be P_1k(γ_k), P_2k(γ_k) and P_3k(γ_k). The data of the k-th stratum are then given as shown in Table 1.
The log-likelihood function is given by

l(γ, π) = Σ_{k=1}^{K} l_k(γ_k, π_k),

where γ = (γ_1, …, γ_K)′, π = (π_1, …, π_K)′ and l_k(γ_k, π_k) = x_1k log P_1k(γ_k) + x_2k log P_2k(γ_k) + x_3k log P_3k(γ_k). The maximum likelihood estimators of π_k and γ_k are π̂_k = (2x_1k + x_2k)/(2n_k) and γ̂_k = (p̂_a,k − p̂*_e,k)/(1 − p̂*_e,k), with p̂_a,k = (x_1k + x_3k)/n_k and p̂*_e,k = 2π̂_k(1 − π̂_k). The first and second derivatives of the log-likelihood function and the Fisher information matrix are given in the Appendix. The aim of this study is to test the homogeneity of the agreement coefficients among the K strata; thus the null hypothesis is H_0: γ_k = γ_0 (k = 1, 2, …, K). The score test statistic T(γ̂_0, π̂) for this null hypothesis is derived in the Appendix, where B̂_k, Ĉ_k, D̂_k and R̂_k are obtained by substituting the maximum likelihood estimators γ̂_0 and π̂_k under the null hypothesis into B_k, C_k, D_k and R_k. T(γ̂_0, π̂) is asymptotically distributed as χ² with K − 1 degrees of freedom, and the homogeneity hypothesis is rejected at level α when T(γ̂_0, π̂) ≥ χ²_{(1−α),K−1}, where χ²_{(1−α),K−1} is the 100 × (1 − α) percentile of the χ² distribution with K − 1 degrees of freedom.

Goodness-of-fit test
Donner, Eliasziw and Klar proposed a goodness-of-fit approach for testing the homogeneity of kappa statistics in the case of two raters with binary outcomes [40]. This procedure can also be applied to AC1 statistics. Given that the frequencies x_1k, x_2k, x_3k, k = 1, …, K, in Table 1 follow a multinomial distribution conditional on n_k, the estimated probabilities under H_0 are given by P̂_hk(γ̂_0), obtained by replacing π_k by π̂_k and γ_k by γ̂_0 in P_hk(γ_k), h = 1, 2, 3; k = 1, …, K. The goodness-of-fit statistic is then

χ²_G = Σ_{k=1}^{K} Σ_{h=1}^{3} [x_hk − n_k P̂_hk(γ̂_0)]² / [n_k P̂_hk(γ̂_0)].

Under H_0, χ²_G follows an approximate χ² distribution with K − 1 degrees of freedom, and the homogeneity hypothesis is rejected at level α when χ²_G ≥ χ²_{(1−α),K−1}.
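A sketch of this statistic in code, assuming the trinomial parametrization implied by marginal homogeneity (P_1k = π_k − (1 − p_a,k)/2, P_2k = 1 − p_a,k, P_3k = 1 − π_k − (1 − p_a,k)/2, with p_a,k = γ(1 − p*_e,k) + p*_e,k and p*_e,k = 2π_k(1 − π_k)); the common estimate γ̂_0 and the π̂_k are taken as already computed under H_0:

```python
# Hedged sketch of the goodness-of-fit homogeneity statistic.

def cell_probs(gamma, pi):
    """Trinomial probabilities (concordant +, discordant, concordant -)
    under the assumed marginal-homogeneity parametrization."""
    p_e = 2 * pi * (1 - pi)                  # approximated chance agreement
    p_a = gamma * (1 - p_e) + p_e            # probability of agreement
    return [pi - (1 - p_a) / 2, 1 - p_a, 1 - pi - (1 - p_a) / 2]

def gof_statistic(x, gamma0, pi_hat):
    """x[k] = (x_1k, x_2k, x_3k); pi_hat[k] = MLE of pi_k under H0;
    gamma0 = common MLE. Compare the result with the chi-square
    quantile on K - 1 degrees of freedom."""
    chi2 = 0.0
    for xk, pk in zip(x, pi_hat):
        nk = sum(xk)
        expected = [nk * p for p in cell_probs(gamma0, pk)]
        chi2 += sum((o - e) ** 2 / e for o, e in zip(xk, expected))
    return chi2
```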

Estimation of common AC 1
If the assumption of homogeneity is reasonable, the estimate of γ_0 can be used as an appropriate summary measure of reliability. The maximum likelihood estimators γ̂_0 and π̂_k are obtained by maximizing the log-likelihood function l_0(γ_0, π) = Σ_{k=1}^{K} l_k(γ_0, π_k). Since an analytical solution cannot be obtained from this function, numerical iterative calculation is used. The variance Var(γ̂_0) of γ̂_0 is given by Eq. (14) (see Appendix), in which B_k, C_k and D_k are evaluated at γ_k = γ_0. A simple 100 × (1 − α)% confidence interval using the asymptotic normality of γ̂_0 is

γ̂_0 ± Z_{α/2} √Var̂(γ̂_0),

where Z_{α/2} is the upper α/2 quantile of the standard normal distribution and Var̂(γ̂_0) is obtained by substituting γ̂_0 and π̂_k into (14). Hereafter, this method is referred to as the simple asymptotic (SA) method. Since Eq. (14) depends on γ_0, the SA method may not attain the correct coverage rate; the normality of the sampling distribution of γ̂_0 may be improved using Fisher's Z transformation. This method is referred to below as the Fisher's Z transformation (FZ) method (see Appendix).
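Because the joint maximization has no closed form, any numerical optimizer can be used. The following dependency-free sketch profiles each π_k over a grid for every candidate γ_0; it assumes the trinomial cell probabilities implied by marginal homogeneity (P_1k = π_k − (1 − p_a,k)/2, P_2k = 1 − p_a,k, P_3k = 1 − π_k − (1 − p_a,k)/2) and is meant only to illustrate the computation, not to replace a proper iterative algorithm:

```python
import math

def cell_probs(gamma, pi):
    # Assumed trinomial parametrization (concordant +, discordant, concordant -)
    p_e = 2 * pi * (1 - pi)
    p_a = gamma * (1 - p_e) + p_e
    return [pi - (1 - p_a) / 2, 1 - p_a, 1 - pi - (1 - p_a) / 2]

def loglik_k(gamma, pi, xk):
    probs = cell_probs(gamma, pi)
    if min(probs) <= 0:
        return -math.inf                      # outside the admissible range
    return sum(x * math.log(p) for x, p in zip(xk, probs))

def common_ac1_mle(x, g_steps=401, p_steps=199):
    """Crude grid maximization of l0(gamma0, pi) = sum_k l_k(gamma0, pi_k);
    x[k] = (x_1k, x_2k, x_3k). Returns (gamma0_hat, [pi_k hats])."""
    best = (-math.inf, None, None)
    for i in range(g_steps):
        g = -1 + 2 * i / (g_steps - 1)        # candidate gamma0 in [-1, 1]
        total, p_hats = 0.0, []
        for xk in x:
            lk, pk = max((loglik_k(g, (j + 1) / (p_steps + 1), xk),
                          (j + 1) / (p_steps + 1)) for j in range(p_steps))
            total += lk
            p_hats.append(pk)
        if total > best[0]:
            best = (total, g, p_hats)
    return best[1], best[2]
```

For a single stratum the common estimate coincides with the stratum AC1 estimate; e.g. for (x_1, x_2, x_3) = (45, 10, 45) the grid search returns γ̂_0 = 0.8 and π̂ = 0.5.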
As an alternative, we employ the profile variance approach, which has been shown to perform well for the intra-class κ with binary outcome data [41][42][43] and also for AC1 in the case of two raters with binary outcomes [36]. The confidence interval based on the profile variance is obtained by solving the following inequality for γ_0:

(γ̂_0 − γ_0)² ≤ Z²_{α/2} Ṽar(γ_0),

where Ṽar(γ_0) is given by substituting π̂_k for π_k in (15). Hereafter, this method is referred to as the profile variance (PV) method (see Appendix).

Numerical evaluations
We conducted Monte Carlo simulations to investigate the performance of the proposed homogeneity tests and to evaluate the estimate of the common AC1 and its confidence intervals under the following conditions: the number of strata is K = 2 or 3, and random observations are generated from the trinomial distributions with the probabilities of (6), (7) and (8) for given values of γ_k and π_k. Balanced and unbalanced cases were considered for the values of π_k and n_k. The values of γ_k and π_k were set within the theoretical range of Eq. (12). Ten thousand iterations were carried out for each parameter combination.
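The generation step can be sketched as follows, assuming the trinomial cell probabilities implied by marginal homogeneity; a production simulation would use a vectorized multinomial sampler instead of the explicit loop:

```python
import random

def simulate_stratum(n, gamma, pi, rng):
    """Draw (x1, x2, x3) = (concordant +, discordant, concordant -) counts
    for one stratum of n subjects with parameters gamma and pi."""
    p_e = 2 * pi * (1 - pi)
    p_a = gamma * (1 - p_e) + p_e
    probs = [pi - (1 - p_a) / 2, 1 - p_a, 1 - pi - (1 - p_a) / 2]
    counts = [0, 0, 0]
    for _ in range(n):
        u = rng.random()
        if u < probs[0]:
            counts[0] += 1
        elif u < probs[0] + probs[1]:
            counts[1] += 1
        else:
            counts[2] += 1
    return counts
```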
When π_k is close to 0 or 1 and n_k is small, the generated data may include zero cells, in which case B_k, C_k, D_k and R_k cannot be estimated. When zero cells were generated, we therefore adopted the approach of adding 0.5 to the frequency of each rater-pair combination (+,+), (+,−), (−,+), (−,−). This simple method was discussed by Agresti [44] and was adopted in a previous study [39].
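The correction can be sketched as follows (the function name and the 2×2 input layout are illustrative): 0.5 is added to all four rater-pair cells whenever any of them is zero, and the corrected table is then collapsed to the three trinomial categories.

```python
def collapse_with_correction(table):
    """table[i][j]: row i = rater 1, column j = rater 2; index 0 = "+".
    If any cell is zero, add 0.5 to every cell before collapsing to
    (x1, x2, x3) = (concordant +, discordant, concordant -)."""
    if min(min(row) for row in table) == 0:
        table = [[c + 0.5 for c in row] for row in table]
    x1 = table[0][0]
    x2 = table[0][1] + table[1][0]
    x3 = table[1][1]
    return x1, x2, x3
```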

Empirical type I error rate for the homogeneity test
The type I error rates of the homogeneity tests at a significance level of 0.05 were examined. The sample size was set at n_k = n = 20, 50, 80 for balanced settings and (n_1, n_2, n_3) = (20, 50, 80) for unbalanced settings. The error rate obtained by the score test is denoted SCORE and that obtained by the goodness-of-fit test is denoted GOF. Table 2 summarizes the results for K = 2.
Overall, the proposed score test did not show any notable type I error rate inflation, but it was very conservative when the sample size was small and γ_0 was close to 1.
In the case of n = 20 with γ_0 = 0.1, 0.3 or 0.5, the type I error rates of SCORE were maintained at the nominal level of 0.05 regardless of whether π_k was balanced or unbalanced, but for γ_0 = 0.7 or 0.9 the rates were slightly conservative. Especially when γ_0 = 0.9, the rate was markedly conservative, falling below 0.01. In the case of n = 50, the type I error rates were maintained at the nominal level of 0.05 except when γ_0 = 0.9. Finally, in the case of n = 80, the type I error rates were almost maintained at the nominal level. In contrast, the type I error rate of GOF tended to be larger than that of SCORE and in many cases was not maintained at the nominal level.
The results obtained for K = 3 are shown in Table S1 and Table S2 in Additional file 1.
Additional file 2 provides the R simulation code for the empirical type I error rates.

Empirical power of the homogeneity test
The empirical power of the score test was investigated only for the case of K = 2, setting γ_1 = 0.1, 0.3, 0.5 and γ_2 − γ_1 = 0.3, 0.4. The values of π_k and n_k were set as in the type I error simulation. The results are shown in Table 3. The power tended to increase as γ_1 increased for fixed values of π and γ_2 − γ_1.

Table 2 Empirical type I error rates of homogeneity tests for γ_1 = γ_2 = γ_0 based on 10,000 simulations (K = 2, balanced sample size; balanced and unbalanced π conditions)

Table 3 Empirical power of homogeneity tests based on 10,000 simulations (K = 2, balanced sample size)

The empirical power of the GOF test was also examined under the same simulation conditions as the score test, and these results are also shown in Table 3. However, the GOF test showed substantial type I error rate inflation (Table 2) and was therefore invalid as a test.
Additional file 2 provides the R simulation code for the empirical power.
Bias and mean square error for common AC1

We evaluated the bias and mean square error (MSE) of the maximum likelihood estimator γ̂_0 of the common AC1. Balanced and unbalanced conditions for π_k and the balanced condition for n_k were considered. The results are shown in Table 4. The bias of γ̂_0 tended to decrease as γ_0 increased, and γ̂_0 was almost unbiased throughout. As expected, the bias and MSE decreased as the sample size increased.
Additional file 3 provides the R simulation code for the bias and mean square error of the common AC1.

Confidence intervals for common AC 1
We conducted a simulation study to evaluate the performance of the three confidence intervals presented in the previous section, examining the coverage rates of the 95% confidence intervals. Balanced and unbalanced conditions for π_k and the balanced condition for n_k were considered. The results are shown in Table 5. The coverage rate of the SA method was generally below 0.95 under many conditions, the exception being a value close to 0.99 in the case of n_1 = n_2 = 20 and γ_0 = 0.9. The FZ and PV methods greatly improved the coverage rates, bringing them close to the nominal level; under most of the conditions examined, the coverage rate of the PV method was closer to the nominal level than that of the FZ method. The coverage rates of each method were also evaluated for K = 3 and for unbalanced n_k conditions, and both the FZ and PV methods achieved coverage rates near 0.95 (results not shown).

Table 4 Bias and mean square error of the maximum likelihood estimator for the common AC1 based on 10,000 simulations (K = 2, balanced sample size; balanced and unbalanced π conditions)

Table 5 Coverage rates of 95% confidence intervals for the common γ by the three proposed methods based on 10,000 simulations (balanced and unbalanced π conditions)

Additional file 4 provides the R simulation code for the confidence intervals for the common AC1.

An example
As an example, we used data from a randomized clinical trial called the Silicon Study, which was conducted to investigate the effectiveness of silicone fluids versus gases in the management of proliferative vitreoretinopathy (PVR) by vitrectomy [45]. The PVR classification, determined at the baseline visit, defines the severity of the disease as a continuum of increasing pathology graded as C3, D1, D2 or D3. The presence or absence of retinal injury in the superior nasal cavity was evaluated clinically by the operating ophthalmic surgeon and photographically by an independent fundus photograph reading center [46].
The data and results are summarized in Table 6. For reference, the results of the homogeneity score test proposed by Nam for the intra-class κ are also provided [15]. The probabilities of agreement in each stratum ranged from 0.800 to 0.880 and did not differ greatly. However, the values of κ ranged from 0.117 to 0.520 and differed substantially, possibly owing to the prevalence effect caused by the small values of π. In contrast, the values of γ ranged from 0.723 to 0.861 and did not differ greatly among strata.
Additional file 5 provides the R code for the clinical data example.
To investigate the sensitivity of the indicators to π_k, we hypothetically considered more balanced and less balanced π_k under fixed p_a and n_k in each stratum. The generated data sets and analysis results are summarized in Table S3 in Additional file 1. κ was sensitive to changes in the value of π, whereas AC1 was much less so, and the common AC1 was not affected as much as the common κ even when the π balance was lost.

Discussion
It is well known that Cohen's κ depends strongly on the marginal distributions, and Gwet proposed alternative and more stable measures of agreement: AC1 for nominal data and its extension AC2 for ordinal data [24,25]. A number of alternative measures have also been proposed, such as Holley and Guilford's G [19], Aickin's α [20], Andres and Marzo's delta [21] and Marasini's s* [22,23]. Gwet [24] and Shankar and Bangdiwala [28] compared several measures and showed that AC1 has better properties than the other kappa-type measures. In addition, AC1 has been utilized in the field of medical research over the past decade [29][30][31][32][33][34][35]. However, statistical inference procedures for AC1 have not been discussed sufficiently. Therefore, Ohyama expressed AC1 using population parameters to develop a likelihood-based inference procedure and constructed confidence intervals for AC1 based on profile variances and likelihood ratios; the inclusion of subject covariates, hypothesis testing and sample size estimation were also presented [36]. In the present study, the case of stratified data was discussed as an extension of Ohyama [36] for two raters with binary outcomes. Tests for the homogeneity of AC1 across K independent strata were derived, and inference for the common AC1 was discussed.
In the numerical evaluation of type I error, both tests were conservative when the sample size was small and γ_0 was 0.9, but this conservativeness was relaxed when the sample size was as large as 80. In the other simulation settings, the score test performed well, while the GOF test sometimes could not achieve the nominal level. We therefore recommend the score test for testing the homogeneity of AC1 among K strata. Note that, when zero cells are observed, the homogeneity score test statistic cannot be calculated. In such cases in our simulation study, we simply added 0.5 to the data, which had no serious effect on the performance of the proposed score test in our simulation settings.
If the homogeneity assumption is reasonable, it may be desirable to provide an estimate of the common AC1 as a summary measure of reliability. In the present study, we proposed an estimator of the common AC1, constructed its confidence intervals by the SA, FZ and PV methods, and evaluated the performance of each numerically. The bias and MSE tended to decrease as the sample size increased and were nearly 0 when n = 80. The PV method provided coverage close to the nominal level in most situations, while the SA method tended to undercover and the FZ method tended to overcover in some situations. We therefore recommend the PV method for calculating confidence intervals. As the PVR example shows, AC1 in each stratum is less affected by the prevalence or marginal probability than κ is, which suggests that the proposed homogeneity test and the estimation of the common AC1 are likewise more stable than their κ-based counterparts.
There were some limitations in this study. First, as described above, the performance of the proposed score test was very conservative when γ 0 = 0.9 and sample size was small. An exact approach might be an alternative method in such cases.
Next, in this study, the cases were limited to two raters with binary outcomes in each stratum. However, in the evaluation of medical data, it is often the case that multiple raters classify subjects into nominal or ordered categories. Our proposed method may be extended to the case of multiple raters with binary outcomes using the likelihood function for multiple raters. For two raters with nominal outcomes, Agresti [47] proposed a quasi-symmetry model with kappa as a parameter, and this technique may be extended to AC1 in the case of stratified data.
Finally, continuous covariates need to be categorized adequately to apply the proposed approach. The regression model proposed by Ohyama [36] can be used to assess the effect of continuous covariates on AC 1 , but it is limited to the case of two raters with binary data. Nelson and Edwards [48] and Nelson, Mitani and Edwards [49] proposed a method for constructing a measure of agreement using generalized linear mixed-effect models by introducing continuous latent variables representing the subject's true disease status and for flexibly incorporating rater and subject covariates. These approaches might be applicable to AC 1 and AC 2 .

Conclusion
The method proposed in this study is considered to be useful for summarizing evaluations of consistency performed in multiple or stratified inter-rater agreement studies. In addition, the proposed method can be applied not only to medical or epidemiological research but also to assessment of the degree of consistency of characteristics, such as biometrics, psychological measurements, and data in the behavioral sciences.
First and second derivatives of the log-likelihood function

The first and second derivatives of l(γ, π), γ = (γ_1, …, γ_K)′, π = (π_1, …, π_K)′, with respect to γ_k and π_k are obtained directly from l_k(γ_k, π_k). Since E(x_hk) = n_k P_hk(γ_k) (h = 1, 2, 3), the Fisher information matrix follows by taking expectations of the negative second derivatives.

Score test statistic for the null hypothesis

Define the score function U as the vector of first derivatives of the log-likelihood. The score statistic for testing the null hypothesis H_0: γ_1 = ⋯ = γ_K = γ_0 is asymptotically distributed as χ² with K − 1 degrees of freedom. Expressing the upper left K × K block of I(γ̂, π̂)^{−1} and substituting it into the quadratic form of the score function yields the score test statistic of Eq. (11).

Confidence interval for γ 0 based on Fisher's Z transformation
Fisher's Z transformation of γ̂_0 is defined by

z̃ = (1/2) log[(1 + γ̂_0)/(1 − γ̂_0)].

Using the delta method, the asymptotic variance of z̃ is represented by

Var(z̃) = Var(γ̂_0)/(1 − γ_0²)²,

where Var(γ̂_0) is given by Eq. (14). A confidence interval for z = (1/2) log[(1 + γ_0)/(1 − γ_0)] is then obtained as

z̃ ± Z_{α/2} √Var̂(z̃),

where Var̂(z̃) is defined by substituting γ̂_0 and π̂_k into Var(z̃). The confidence limits for γ_0 based on the FZ method are obtained by back-transforming the limits for z through γ_0 = [exp(2z) − 1]/[exp(2z) + 1].

Derivation of Eq. (14)

Taking the second-order partial derivatives of the log-likelihood function l_0(γ_0, π) and taking expectations, we obtain the information matrix with blocks I_γγ, I_γπ = (I_γ1, …, I_γK) and I_ππ = diag(I_kk). When P_1k ≠ 0, P_2k ≠ 0 and P_3k ≠ 0 for all k, I_ππ is a non-singular matrix, and the element corresponding to I_γγ in the inverse of I, which is the variance of γ̂_0, can then be obtained. Here p_a,k = γ_0(1 − p*_e,k) + p*_e,k is the probability of agreement in the k-th stratum and p*_e,k = 2π_k(1 − π_k) is the probability of chance agreement in the k-th stratum; using the variance formula for the single-stratum case given by Ohyama [36], I_γγ can be reduced to the rightmost expression in Eq. (14).
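Numerically, the FZ interval amounts to transforming, adding the normal margin on the z scale and back-transforming; a sketch, assuming the variance estimate Var̂(γ̂_0) from Eq. (14) has already been computed:

```python
import math

def fz_interval(gamma0_hat, var_hat, z_alpha=1.959963984540054):
    """Back-transformed Fisher's Z interval for the common AC1.
    By the delta method, Var(z) = Var(gamma0_hat) / (1 - gamma0_hat**2)**2."""
    z = math.atanh(gamma0_hat)               # = 0.5*log((1+g)/(1-g))
    se_z = math.sqrt(var_hat) / (1 - gamma0_hat ** 2)
    lower = math.tanh(z - z_alpha * se_z)    # tanh inverts the transformation
    upper = math.tanh(z + z_alpha * se_z)
    return lower, upper
```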

Profile variance approach
The profile variance of a statistic is the estimated variance in which the nuisance parameters are replaced by their estimates but the parameter corresponding to the statistic itself is left unsubstituted [41].
In this study, π̂_k is substituted for π_k in Eq. (15), and the profile variance Ṽar(γ_0) is then obtained from (14). Since γ̂_0 is asymptotically normally distributed with mean γ_0 and variance (14), the confidence limits are obtained as the two admissible roots of Eq. (17).
Since Eq. (17) is a cubic equation in γ_0 and is complicated to solve analytically, we calculated the confidence limits numerically; the program is given in the additional file. Other examples of the profile variance approach for constructing confidence intervals can be found in the literature: Bickel and Doksum [50] reported a profile-variance confidence interval for the one-sample binomial proportion, and Rothman [51] and Afifi, Elashoff and Lee [52] described profile-variance-type confidence intervals for survival probability.
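The numerical root search can be sketched with plain bisection; the profile variance is passed in as a function, since its concrete form comes from Eqs. (14) and (15), and the sketch assumes both limits lie inside (−1, 1):

```python
def pv_interval(gamma0_hat, profile_var, z_alpha=1.959963984540054):
    """Find the two admissible roots of (g - gamma0_hat)^2 = z^2 * Var~(g)
    by bisection on each side of the estimate. profile_var(g) is the
    caller-supplied profile variance evaluated at g."""
    def f(g):
        return (g - gamma0_hat) ** 2 - z_alpha ** 2 * profile_var(g)

    def root(a, b):
        # f(a) and f(b) are assumed to have opposite signs
        for _ in range(200):
            m = (a + b) / 2
            if f(a) * f(m) <= 0:
                b = m
            else:
                a = m
        return (a + b) / 2

    return root(-1 + 1e-9, gamma0_hat), root(1 - 1e-9, gamma0_hat)
```

With a constant profile variance the roots reduce to the SA limits γ̂_0 ± Z_{α/2}√Var, which gives a quick sanity check of the implementation.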