Homogeneity score test of AC1 statistics and estimation of common AC1 in multiple or stratified inter-rater agreement studies

Abstract

Background

Cohen’s κ coefficient is often used as an index of inter-rater agreement. However, κ varies greatly depending on the marginal distribution of the target population and overestimates the probability of agreement occurring by chance. To overcome these limitations, an alternative and more stable agreement coefficient, referred to as Gwet’s AC1, was proposed. When combining results from multiple agreement studies, as in a meta-analysis, or when performing stratified analysis with subject covariates that affect agreement, it is of interest to compare several agreement coefficients and to present a common agreement index. A homogeneity test has been developed for κ; however, there are no reports on homogeneity tests for AC1 or on an estimator of common AC1. In this article, a homogeneity score test for AC1 is therefore derived for the case of two raters with binary outcomes from K independent strata, and its performance is investigated. Estimation of the common AC1 across strata and its confidence intervals are also discussed.

Methods

Two homogeneity tests are provided: a score test and a goodness-of-fit test. Confidence intervals are derived by the simple asymptotic, Fisher’s Z transformation and profile variance methods. Monte Carlo simulation studies were conducted to examine the validity of the proposed methods. An example using clinical data is also provided.

Results

Type I error rates of the proposed score test were close to the nominal level in simulations with small and moderate sample sizes. The confidence intervals based on Fisher’s Z transformation and the profile variance method provided coverage levels close to nominal over a wide range of parameter combinations.

Conclusions

The method proposed in this study is considered to be useful for summarizing evaluations of consistency performed in multiple or stratified inter-rater agreement studies, for meta-analysis of reports from multiple groups and for stratified analysis.

Background

To evaluate reliability when two raters classify objects as either positive (+) or negative (−), Cohen’s κ [1] and the intra-class version of κ, which is identical to Scott’s π [2], have often been used. Let pa be the agreement probability, and let p1 and p2 be the probabilities of classification as (+) by raters 1 and 2, respectively. Cohen’s κ (κCohen) and Scott’s π (κScott) are then defined as follows:

$$ {\kappa}_{Cohen}=\frac{p_a-{p}_{e(c)}}{1-{p}_{e(c)}},\kern0.5em {\kappa}_{Scott}=\frac{p_a-{p}_{e(s)}}{1-{p}_{e(s)}}, $$

where \( {p}_{e(c)}={p}_1{p}_2+\left(1-{p}_1\right)\left(1-{p}_2\right),{p}_{e(s)}={p}_{+}^2+{\left(1-{p}_{+}\right)}^2 \) and p+ = (p1 + p2)/2. Here pe(c) and pe(s) are the probabilities of agreement expected by chance for Cohen’s κ and Scott’s π, respectively. The pe(c) assumes that the probabilities of positive classification differ between the two raters, while the pe(s) assumes that these two probabilities are the same. Landis and Koch provided benchmarks for the strength of agreement as follows: values ≤0 as poor, 0.00 to 0.20 as slight, 0.21 to 0.40 as fair, 0.41 to 0.60 as moderate, 0.61 to 0.80 as substantial and 0.81 to 1.00 as almost perfect agreement [3]. Although the authors acknowledged the arbitrary nature of these benchmarks, they recommended the scale as a useful guideline for practitioners.
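For concreteness, both coefficients can be computed from the 2 × 2 table of joint classifications. The following Python sketch is our own illustration (the paper itself supplies R code in the additional files); the function name and cell labels are assumptions, not notation from the article.

```python
def cohen_scott(a, b, c, d):
    """Cohen's kappa and Scott's pi from a 2x2 table of joint ratings:
    a = both (+), b = rater 1 (+)/rater 2 (-), c = the reverse, d = both (-)."""
    n = a + b + c + d
    pa = (a + d) / n                        # observed agreement probability
    p1, p2 = (a + b) / n, (a + c) / n       # marginal (+) rate of each rater
    pe_c = p1 * p2 + (1 - p1) * (1 - p2)    # chance agreement, Cohen
    p_plus = (p1 + p2) / 2
    pe_s = p_plus ** 2 + (1 - p_plus) ** 2  # chance agreement, Scott
    return (pa - pe_c) / (1 - pe_c), (pa - pe_s) / (1 - pe_s)
```

Because pe(c) never exceeds pe(s), Cohen’s κ is always at least as large as Scott’s π for the same table.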

Many extensions have been made to Cohen’s κ including those for agreement in the cases of ordinal data [4], multiple raters [5,6,7,8,9], comparisons of correlated κ ’s [10,11,12,13] and stratified data [14, 15]. However, as Feinstein and Cicchetti showed, Cohen’s κ depends strongly on the marginal distributions and therefore behaves paradoxically [16]. This behavior can be explained by the bias effect and the prevalence effect, on which various discussions have been undertaken [16,17,18]. A number of alternative measures of agreements have also been proposed, such as Holley and Guilford’s G [19], Aickin’s α [20], Andres and Marzo’s delta [21], Marasini’s s* [22, 23] and Gwet’s AC1 [24] and AC2 [25].

Gwet showed that AC1 has better statistical properties (bias and variance) than Cohen’s κ, Scott’s π and the G-index under a limited set of simulations for two raters with binary outcomes [24]. Shankar and Bangdiwala compared Cohen’s κ, Scott’s π, the Prevalence-Adjusted Bias-Adjusted Kappa (PABAK) [26], AC1 and the B-statistic [27], which is not a kappa-type chance-corrected measure, in the case of two raters with binary outcomes, and showed that AC1 has better properties than the other kappa-type measures [28]. In addition, AC1 has been utilized in the field of medical research over the past decade [29,30,31,32,33,34,35]. Therefore, in this study we have limited our discussion to AC1 in the case of two raters with binary outcomes.

First, a brief review of the concept of Gwet’s AC1 is provided. Consider the situation in which two raters independently classify a randomly selected subject as positive (+) or negative (−). Gwet defined two events: G = {the two raters agree} and R = {at least one rater performs random rating}. The probability of agreement expected by chance is then pe = P(G ∩ R) = P(G| R)P(R). A random rating would classify an individual into each category with the same probability \( \frac{1}{2} \), so that \( P\left(G|R\right)=2\times \left(\frac{1}{2}\right)\times \left(\frac{1}{2}\right)=\frac{1}{2} \). The probability P(R), however, cannot be obtained from data. Therefore, Gwet proposed approximating it with a normalized measure of randomness Ψ, defined as follows:

$$ P(R)\simeq \Psi =\frac{\pi_{+}\left(1-{\pi}_{+}\right)}{\frac{1}{2}\left(1-\frac{1}{2}\right)}=4{\pi}_{+}\left(1-{\pi}_{+}\right), $$
(1)

where π+ is the probability that a randomly chosen rater classifies a randomly chosen subject into the + category. Thus, the approximated probability of chance agreement is represented by

$$ {p}_e^{\ast }=P\left(G|R\right)\Psi =2{\pi}_{+}\left(1-{\pi}_{+}\right). $$
(2)

AC1 is thus defined as follows:

$$ \gamma =\frac{p_a-{p}_e^{\ast }}{1-{p}_e^{\ast }}, $$
(3)

where pa is the probability of agreement. Although pe is approximated by \( {p}_e^{\ast } \), Gwet showed that the bias of γ, the difference between γ and the true inter-rater reliability, is equal to or less than that of Cohen’s κ, Scott’s π and the G-index under certain assumptions in the case of two raters with binary outcomes. Gwet also provided an estimator \( {\hat{\gamma}}^{\ast } \) of γ and its variance for multiple raters and multiple categories based on the randomization approach, which requires the subjects to be selected in such a way that all possible subject samples have exactly the same chance of being selected. However, it is advantageous to employ a model-based approach when, for example, the evaluation of the effect of subject covariates on agreement is of interest. Therefore, in the case of two raters with binary outcomes, Ohyama [36] assumed an underlying probability that a subject is rated as (+), with marginal homogeneity between the two raters, and then constructed the likelihood. The maximum likelihood estimator of γ, which is shown to be identical to the estimator given by Gwet, was derived. Likelihood-based confidence intervals for AC1, inclusion of subject covariates, hypothesis testing and sample size determination were also discussed [36].
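Definitions (2) and (3) translate directly into a sample estimate of AC1 from a 2 × 2 table. The sketch below is our own Python illustration (the article’s own code is in R); the function name and argument labels are assumptions.

```python
def ac1_hat(a, b, c, d):
    """Sample AC1 for two raters with binary outcomes, from the 2x2 table:
    a = both (+), b and c = discordant pairs, d = both (-)."""
    n = a + b + c + d
    pa = (a + d) / n                       # observed agreement
    pi_plus = (2 * a + b + c) / (2 * n)    # average (+) classification rate
    pe_star = 2 * pi_plus * (1 - pi_plus)  # approximate chance agreement (2)
    return (pa - pe_star) / (1 - pe_star)  # definition (3)
```

Writing x1 = a, x2 = b + c and x3 = d, this agrees algebraically with the closed-form maximum likelihood estimator given later in the Methods section.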

In this article, we discuss stratified analyses as another approach to adjusting for the effect of subject covariates on agreement. For example, a clinical assessment of whether a patient has a particular disease symptom may be influenced by the overall severity of the disease. In such a case, we consider stratification based on the severity of the disease. Another example is a multicenter inter-rater agreement study, in which the classifications of subjects are conducted independently in each center. These situations yield several independent agreement statistics. The main purpose of the analysis would then be to test whether the degree of inter-rater agreement can be regarded as homogeneous across strata, such as centers or severities of the disease.

For κ, Fleiss pioneered a χ2-based test of homogeneity using large-sample variances [37], and subsequent studies by Donner, Eliasziw and Klar [14], Nam [15, 38] and Wilding, Consiglio and Shan [39] developed homogeneity tests of κ across covariate levels. However, there are no reports on homogeneity tests for AC1 or on an estimator of common AC1. Therefore, in this article, we derive a homogeneity score test for AC1 from K independent strata and investigate its performance. Estimation of the common AC1 across strata and its confidence intervals are also discussed. Finally, an example application of our approach to clinical trial data is provided.

Methods

Homogeneity tests

Score test

Consider K independent strata involving nk subjects for k = 1, …, K. In each stratum, two raters independently classify subjects as either positive (+) or negative (−). Let Xkij= 1 if subject i(=1, …, nk) in the k-th stratum is classified as “+” by rater j(=1, 2) and Xkij = 0 otherwise. Suppose that P(Xkij = 1| i) = uki, \( E\left({u}_{ki}\right)={\pi}_k\ \mathrm{and}\kern0.5em Var\left({u}_{ki}\right)={\sigma}_k^2 \). The γ of the k-th stratum is then expressed as follows [36]:

$$ {\gamma}_k=\frac{1+2\left[{\sigma_k}^2-2{\pi}_k\left(1-{\pi}_k\right)\right]}{1-2{\pi}_k\left(1-{\pi}_k\right)}. $$
(4)

Let the number of observed pairs in the three categories of the k-th stratum be x1k, x2k and x3k and their corresponding probabilities be P1k(γk), P2k(γk) and P3k(γk). The data of the k-th stratum are then given as shown in Table 1.

Table 1 Data layout

The log-likelihood function is given by

$$ l\left(\boldsymbol{\gamma}, \boldsymbol{\pi} \right)=\sum \limits_{k=1}^K{l}_k\left({\gamma}_k,{\pi}_k\right), $$
(5)

where γ = (γ1, …, γK)', π = (π1, …, πK)', lk(γk, πk) = x1k log P1k(γk) + x2k log P2k(γk) + x3k log P3k(γk),

$$ {P}_{1k}\left({\gamma}_k\right)={\pi}_k\left(2-{\pi}_k\right)-\frac{1}{2}+\frac{\gamma_k}{2}{A}_k, $$
(6)
$$ {P}_{2k}\left({\gamma}_k\right)={A}_k\left(1-{\gamma}_k\right), $$
(7)
$$ {P}_{3k}\left({\gamma}_k\right)=\left(1-{\pi}_k\right)\left(1+{\pi}_k\right)-\frac{1}{2}+\frac{\gamma_k}{2}{A}_k, $$
(8)

and Ak = 1 − 2πk(1 − πk).

The maximum likelihood estimators of γk and πk are then given by

$$ {\hat{\gamma}}_k=1-\frac{2{n}_k{x}_{2k}}{n_k^2+{\left({x}_{1k}-{x}_{3k}\right)}^2} $$
(9)

and

$$ {\hat{\pi}}_k=\frac{2{x}_{1k}+{x}_{2k}}{2{n}_k}, $$
(10)

respectively.
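The closed-form estimators (9) and (10) are immediate to compute. The following Python sketch is our own illustration (not the authors’ R code); the function name is an assumption.

```python
def stratum_mle(x1, x2, x3):
    """Closed-form maximum likelihood estimates (9) and (10) for one
    stratum, given the counts x1 = (+,+), x2 = discordant, x3 = (-,-)."""
    n = x1 + x2 + x3
    gamma_hat = 1 - 2 * n * x2 / (n ** 2 + (x1 - x3) ** 2)  # equation (9)
    pi_hat = (2 * x1 + x2) / (2 * n)                        # equation (10)
    return gamma_hat, pi_hat
```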

The first and second derivatives of the log-likelihood function and the Fisher information matrix are given in the Appendix. The aim of this study is to test the homogeneity of the agreement coefficients among K strata, and thus the null hypothesis to test is represented by H0 : γk = γ0 (k = 1, 2, ..., K). The score test statistic for the null hypothesis is derived as follows (see Appendix):

$$ T\left({\overset{\sim }{\gamma}}_0,\overset{\sim }{\boldsymbol{\pi}}\right)=\sum \limits_{k=1}^K\frac{{{\overset{\sim }{R}}_k}^2{\overset{\sim }{D}}_k}{n_k\left({\overset{\sim }{B}}_k{\overset{\sim }{D}}_k-{{\overset{\sim }{C}}_k}^2\right)}, $$
(11)

where \( {\overset{\sim }{B}}_k,{\overset{\sim }{C}}_k,{\overset{\sim }{D}}_k\ \mathrm{and}\ {\overset{\sim }{R}}_k \) are obtained by substituting the maximum likelihood estimators \( {\overset{\sim }{\gamma}}_0 \) and \( {\overset{\sim }{\pi}}_k \) under the null hypothesis into

$$ {B}_k=\frac{1}{P_{1k}}+\frac{4}{P_{2k}}+\frac{1}{P_{3k}}, $$
$$ {C}_k=\frac{1}{P_{1k}}-\frac{1}{P_{3k}}+\left(1-{\gamma}_k\right)\left(1-2{\pi}_k\right){B}_k, $$
$$ {D}_k=\frac{1}{P_{1k}}+\frac{1}{P_{3k}}+\left(1-{\gamma}_k\right)\left(1-2{\pi}_k\right)\left(\frac{1}{P_{1k}}-\frac{1}{P_{3k}}+{C}_k\right), $$
$$ {R}_k=\frac{x_{1k}}{P_{1k}}-\frac{2{x}_{2k}}{P_{2k}}+\frac{x_{3k}}{P_{3k}}. $$

\( T\left({\overset{\sim }{\gamma}}_0,\overset{\sim }{\boldsymbol{\pi}}\right) \) is asymptotically distributed as χ2 with K − 1 degrees of freedom. The homogeneity hypothesis is rejected at level α when \( T\left({\overset{\sim }{\gamma}}_0,\overset{\sim }{\boldsymbol{\pi}}\right)\ge {\chi}_{\left(1-\alpha \right),K-1}^2 \), where \( {\chi}_{\left(1-\alpha \right),K-1}^2 \) is the 100 × (1 − α) percentile of the χ2 distribution with K − 1 degrees of freedom.

Note that, since 0 ≤ P1k(γk), P2k(γk), P3k(γk) ≤ 1 and P1k(γk) + P2k(γk) + P3k(γk) = 1, substituting (6), (7) and (8) into these equations, the admissible range of γk with respect to πk is obtained as follows [36]:

$$ \frac{2-\left(1-|1-2{\pi}_k|\right)\left(3+|1-2{\pi}_k|\right)}{2-\left(1-|1-2{\pi}_k|\right)\left(1+|1-2{\pi}_k|\right)}\le {\gamma}_k\le 1. $$
(12)

When obtaining the maximum likelihood estimators \( {\overset{\sim }{\gamma}}_0 \) and \( {\overset{\sim }{\pi}}_k \) under the null hypothesis by numerical calculation, initial values need to be set to satisfy this condition.
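As an illustration of the whole procedure, the constrained maximum likelihood fit and the score statistic (11) can be sketched as follows. This is our own Python/SciPy implementation, not the authors’ R code; the optimizer choice (Nelder-Mead), the starting values and all identifiers are assumptions. Starting from the unconstrained estimates (10) keeps the initial point inside the admissible region (12).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def cell_probs(gamma, pi):
    # Trinomial cell probabilities (6)-(8)
    A = 1 - 2 * pi * (1 - pi)
    p1 = pi * (2 - pi) - 0.5 + gamma * A / 2
    p2 = A * (1 - gamma)
    p3 = (1 - pi) * (1 + pi) - 0.5 + gamma * A / 2
    return p1, p2, p3

def neg_loglik(params, data):
    # params = [gamma0, pi_1, ..., pi_K]; data = [(x1k, x2k, x3k), ...]
    g0, pis = params[0], params[1:]
    ll = 0.0
    for (x1, x2, x3), pi in zip(data, pis):
        p = cell_probs(g0, pi)
        if min(p) <= 0:       # outside the admissible range (12)
            return np.inf
        ll += x1 * np.log(p[0]) + x2 * np.log(p[1]) + x3 * np.log(p[2])
    return -ll

def homogeneity_score_test(data):
    K = len(data)
    # Start from the unconstrained pi MLEs (10) and a mid-range gamma
    pis0 = [(2 * x1 + x2) / (2 * (x1 + x2 + x3)) for x1, x2, x3 in data]
    res = minimize(neg_loglik, np.r_[0.5, pis0], args=(data,),
                   method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 5000})
    g0, pis = res.x[0], res.x[1:]
    T = 0.0
    for (x1, x2, x3), pi in zip(data, pis):
        n = x1 + x2 + x3
        p1, p2, p3 = cell_probs(g0, pi)
        B = 1 / p1 + 4 / p2 + 1 / p3
        C = 1 / p1 - 1 / p3 + (1 - g0) * (1 - 2 * pi) * B
        D = 1 / p1 + 1 / p3 + (1 - g0) * (1 - 2 * pi) * (1 / p1 - 1 / p3 + C)
        R = x1 / p1 - 2 * x2 / p2 + x3 / p3
        T += R ** 2 * D / (n * (B * D - C ** 2))
    return T, chi2.sf(T, K - 1)   # statistic (11) and p-value, K-1 df
```

With two identical strata the common-γ0 model fits the cells exactly, so T is essentially zero and the p-value is near one.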

Goodness-of-fit test

Donner, Eliasziw and Klar proposed a goodness-of-fit approach for testing homogeneity of kappa statistics in the case of two raters with binary outcomes [40]. This procedure can also be applied to AC1 statistics. Given that the frequencies x1k, x2k, x3k, k = 1, …, K in Table 1 follow a multinomial distribution conditional on nk, estimated probabilities under H0 are given by \( {\hat{P}}_{hk}\left({\overset{\sim }{\gamma}}_0\right) \), which is obtained by replacing πk by \( {\hat{\pi}}_k \) and γk by \( {\overset{\sim }{\gamma}}_0 \) in Phk(γk); h = 1, 2, 3; k = 1, …, K. Then the goodness-of-fit statistic is derived as follows:

$$ {\chi}_G^2=\sum \limits_{k=1}^K\sum \limits_{h=1}^3\frac{{\left({x}_{hk}-{n}_k{\hat{P}}_{hk}\left({\tilde{\gamma}}_0\right)\right)}^2}{n_k{\hat{P}}_{hk}\left({\tilde{\gamma}}_0\right)}, $$
(13)

Under H0, \( {\chi}_G^2 \) follows an approximate χ2 distribution with K − 1 degrees of freedom. The homogeneity hypothesis is rejected at level α when \( {\chi}_G^2\ge {\chi}_{\left(1-\alpha \right),K-1}^2 \), where \( {\chi}_{\left(1-\alpha \right),K-1}^2 \) is the 100 × (1 − α) percentile of the χ2 distribution with K − 1 degrees of freedom.
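Given the common-AC1 estimate under H0, the statistic (13) is a plain Pearson sum over the K strata. The sketch below is our own Python illustration (names are assumptions); it takes γ̃0 as an input rather than computing it, and replaces πk by its per-stratum estimate (10) as described above.

```python
def gof_statistic(data, gamma0):
    """Goodness-of-fit statistic (13) for data = [(x1k, x2k, x3k), ...],
    where gamma0 is the common-AC1 MLE under H0."""
    stat = 0.0
    for x1, x2, x3 in data:
        n = x1 + x2 + x3
        pi = (2 * x1 + x2) / (2 * n)          # per-stratum estimate (10)
        A = 1 - 2 * pi * (1 - pi)
        expected = (pi * (2 - pi) - 0.5 + gamma0 * A / 2,       # (6)
                    A * (1 - gamma0),                           # (7)
                    (1 - pi) * (1 + pi) - 0.5 + gamma0 * A / 2) # (8)
        stat += sum((o - n * e) ** 2 / (n * e)
                    for o, e in zip((x1, x2, x3), expected))
    return stat  # compare to chi-square with K-1 degrees of freedom
```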

Estimation of common AC1

If the assumption of homogeneity is reasonable, the estimate of γ0 can be used as an appropriate summary measure of reliability. The maximum likelihood estimators \( {\overset{\sim }{\gamma}}_0 \) and \( {\overset{\sim }{\pi}}_k \) are obtained by maximizing the log-likelihood function \( {l}_0\left({\gamma}_0,\boldsymbol{\pi} \right)=\sum \limits_{k=1}^K{l}_k\left({\gamma}_0,{\pi}_k\right) \). Since no analytical solution can be obtained from this function, numerical iterative calculation is used. The variance \( Var\left({\overset{\sim }{\gamma}}_0\right) \) of \( {\overset{\sim }{\gamma}}_0 \) can be expressed as follows (see Appendix):

$$ Var\left({\tilde{\gamma}}_0\right)=4{\left[\sum \limits_{k=1}^K{n}_k{A_k}^2\left({B}_k^{(0)}-\frac{{C_k^{(0)}}^2}{D_k^{(0)}}\right)\right]}^{-1}={\left[\sum \limits_{k=1}^K\frac{1}{Va{r}_k\left({\tilde{\gamma}}_0\right)}\right]}^{-1}, $$
(14)

where \( {B}_k^{(0)},{C}_k^{(0)},{D}_k^{(0)} \) are values using γk = γ0 in Bk, Ck, Dk respectively, and

$$ Va{r}_k\left({\tilde{\gamma}}_0\right)=\frac{1}{n_k{A}_k^2}\left[{A}_k\left(1-{\gamma}_0\right)-\left({A}_k^2-4{A}_k+2\right){\left(1-{\gamma}_0\right)}^2-{A}_k\left(2{A}_k-1\right){\left(1-{\gamma}_0\right)}^3\right]. $$
(15)

A simple 100 × (1 − α) % confidence interval using the asymptotic normality of \( {\overset{\sim }{\gamma}}_0 \) can be expressed as follows:

$$ {\tilde{\gamma}}_0\pm {Z}_{\raisebox{1ex}{$\alpha $}\!\left/ \!\raisebox{-1ex}{$2$}\right.}\sqrt{\hat{Var}\left({\tilde{\gamma}}_0\right)}, $$
(16)

where Zα/2 is the upper α/2 quantile of the standard normal distribution and \( \hat{Var}\left({\overset{\sim }{\gamma}}_0\right) \) is obtained by substituting \( {\overset{\sim }{\gamma}}_0 \) and \( {\overset{\sim }{\pi}}_k \) into (14). Hereafter, this method is referred to as the simple asymptotic (SA) method. Since Eq. (14) depends on γ0, the SA method may not attain the correct coverage rate; the normality of the sampling distribution of \( {\overset{\sim }{\gamma}}_0 \) may be improved using Fisher’s Z transformation. This method is referred to below as the Fisher’s Z transformation (FZ) method (see Appendix).
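Given the fitted γ̃0 and π̃k, the pooled variance (14) and the SA interval (16) are straightforward. This is our own Python sketch (function names and the hard-coded normal quantile are assumptions, not notation from the article).

```python
import math

def var_k(gamma0, pi, n):
    # Per-stratum variance term (15)
    A = 1 - 2 * pi * (1 - pi)
    g = 1 - gamma0
    return (A * g - (A ** 2 - 4 * A + 2) * g ** 2
            - A * (2 * A - 1) * g ** 3) / (n * A ** 2)

def sa_interval(gamma0, pis, ns, z=1.959963984540054):
    """Simple asymptotic 95% interval (16): pool the reciprocal
    per-stratum variances as in (14)."""
    pooled = 1 / sum(1 / var_k(gamma0, pi, n) for pi, n in zip(pis, ns))
    se = math.sqrt(pooled)
    return gamma0 - z * se, gamma0 + z * se
```

Pooling reciprocals means each stratum contributes information in proportion to the precision of its own estimate, exactly as the rightmost form of (14) states.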

As an alternative method, we employ the profile variance approach, which has been shown to perform well in the case of the intra-class κ for binary outcome data [41,42,43]. This approach also performs well for AC1 in the case of two raters with binary outcomes [36]. The confidence interval based on the profile variance can be obtained by solving the following inequality for γ0:

$$ \frac{{\left({\overset{\sim }{\gamma}}_0-{\gamma}_0\right)}^2}{\overset{\sim }{Var}\left({\overset{\sim }{\gamma}}_0\right)}\le {Z}_{\alpha /2}^2, $$
(17)

where \( \overset{\sim }{Var}\left({\overset{\sim }{\gamma}}_0\right) \) is given by substituting \( {\overset{\sim }{\pi}}_k \) into πk in (15). Hereafter, this method is referred to as the profile variance (PV) method (see Appendix).
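Because the variance (15) depends on the unknown γ0, the PV endpoints are the two values of γ0 at which inequality (17) turns into an equality, which can be found numerically. The sketch below is our own Python illustration using plain bisection; the search bounds and all identifiers are assumptions, and a production version should restrict the search to the admissible range (12).

```python
def pv_interval(gamma_tilde, pis, ns, z=1.959963984540054):
    """Profile variance 95% interval: solve (17) for gamma0 by bisection,
    re-evaluating the pooled variance (14)-(15) at each trial gamma0."""
    def pooled_var(g0):
        def vk(pi, n):
            # Per-stratum variance term (15), evaluated at the trial gamma0
            A = 1 - 2 * pi * (1 - pi)
            g = 1 - g0
            return (A * g - (A ** 2 - 4 * A + 2) * g ** 2
                    - A * (2 * A - 1) * g ** 3) / (n * A ** 2)
        return 1 / sum(1 / vk(pi, n) for pi, n in zip(pis, ns))

    def f(g0):  # negative inside the interval, positive outside
        return (gamma_tilde - g0) ** 2 - z ** 2 * pooled_var(g0)

    def root(lo, hi):  # plain bisection; f changes sign on [lo, hi]
        for _ in range(200):
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    return root(-0.999, gamma_tilde), root(gamma_tilde, 1 - 1e-9)
```

Relative to the SA interval, the PV endpoints are asymmetric around γ̃0 because the variance shrinks as γ0 approaches 1.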

Numerical evaluations

We conducted Monte Carlo simulations to investigate the performance of the proposed homogeneity tests and to evaluate the estimator of common AC1 and its confidence intervals under the following conditions: the number of strata was K = 2 or 3, and random observations were generated from trinomial distributions according to the probabilities (6), (7) and (8) for given values of γk and πk. Balanced and unbalanced cases were considered for the values of πk and nk. The values of γk and πk were set within the admissible range of Eq. (12). Ten thousand iterations were carried out for each parameter combination.

When πk is close to 0 or 1 and nk is small, the generated data may include zero cells, in which case Bk, Ck, Dk and Rk cannot be estimated. Thus, when zero cells were generated, we adopted the approach of adding 0.5 to the frequency of each of the four rating combinations (+,+), (+,−), (−,+) and (−,−). This simple method was discussed by Agresti [44] and was adopted in a previous study [39].
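The correction and the collapse to Table 1 counts can be written in a few lines. This is our own Python illustration of the step described above; the function name is an assumption.

```python
def to_collapsed(n_pp, n_pm, n_mp, n_mm):
    """Collapse the 2x2 rating table into (x1, x2, x3) as in Table 1,
    adding 0.5 to every cell first when any cell is zero (the simple
    correction used in the simulations)."""
    cells = [n_pp, n_pm, n_mp, n_mm]
    if 0 in cells:
        cells = [c + 0.5 for c in cells]
    n_pp, n_pm, n_mp, n_mm = cells
    return n_pp, n_pm + n_mp, n_mm  # x1 = (+,+), x2 = discordant, x3 = (-,-)
```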

Results

Empirical type I error rate for the homogeneity test

The type I error rates of the homogeneity tests with a significance level of 0.05 were examined. The sample size was set at nk = n = 20, 50, 80 for balanced settings and (n1, n2, n3) = (20, 50, 80) for unbalanced settings. The error rate obtained by the score test is expressed as SCORE and the error rate obtained by the goodness-of-fit test is expressed as GOF. Table 2 summarizes the results for K = 2.

Table 2 Empirical type I error rates of homogeneity tests for γ1 = γ2 = γ0 based on 10,000 simulations (K = 2 balanced sample size)

Overall, the proposed score test did not show any significant type I error rate inflation, but it was very conservative when sample size was small and γ0 was close to 1.

In the case of n = 20, when γ0 = 0.1, 0.3 or 0.5, the type I error rates of SCORE were maintained at the nominal level of 0.05 regardless of whether πk was balanced or unbalanced, but when γ0 = 0.7 or 0.9, the type I error rates were slightly conservative. In particular, when γ0 = 0.9, the rate was markedly conservative, falling below 0.01. In the case of n = 50, the type I error rates were maintained at the nominal level of 0.05 except when γ0 = 0.9. Finally, in the case of n = 80, the type I error rates were almost maintained at the nominal level. In contrast, the type I error rate of GOF tended to be larger than that of SCORE, and in many cases it was not maintained at the nominal level.

The results obtained for K = 3 are shown in Table S1 and Table S2 in Additional file 1.

Additional file 2 provides R code for the type I error simulations.

Empirical power of the homogeneity test

The empirical power of the score test was investigated only for the case of K = 2, setting γ1 = 0.1, 0.3, 0.5 and γ2 − γ1 = 0.3, 0.4. The values of πk and nk were set as in the type I error simulation. The results are shown in Table 3. The power tended to increase with γ1 for fixed values of π and γ2 − γ1.

Table 3 Empirical power of homogeneity tests based on 10,000 simulations (K = 2 balanced sample size)

The empirical power of the GOF test was also examined under the same simulation conditions as the score test, and the results are also shown in Table 3. However, since the GOF test showed substantial type I error rate inflation (Table 2), it is not valid as a test.

Additional file 2 provides R code for the power simulations.

Bias and mean square error for common AC1

We evaluated the bias and mean square error (MSE) of the maximum likelihood estimator of the common AC1, \( {\overset{\sim }{\gamma}}_0 \). Balanced and unbalanced conditions for πk and a balanced condition for nk were considered. The results are shown in Table 4. The bias of \( {\overset{\sim }{\gamma}}_0 \) tended to decrease as γ0 increased, and \( {\overset{\sim }{\gamma}}_0 \) was almost unbiased overall. As expected, the bias and MSE tended to decrease as the sample size increased.

Table 4 Bias and mean square error of the maximum likelihood estimator for the common AC1 based on 10,000 simulations (K = 2 balanced sample size)

Additional file 3 provides R code for the bias and mean square error simulations for the common AC1.

Confidence intervals for common AC1

We conducted a simulation study to evaluate the performance of the three confidence intervals presented in the previous section, examining the coverage rates of the 95% confidence interval. Balanced and unbalanced conditions for πk and a balanced condition for nk were considered. The results are shown in Table 5. The coverage rate of the SA method was generally lower than 0.95 under many conditions, with the exception of a value close to 0.99 in the case of n1 = n2 = 20 and γ0 = 0.9. The FZ and PV methods brought the coverage rates much closer to the nominal level, with the PV method closer to nominal than the FZ method in most of the conditions examined. The coverage rates of each method were also evaluated for K = 3 and for unbalanced nk conditions; both the FZ method and the PV method achieved coverage rates near 0.95 (results not shown).

Table 5 Coverage rates of common γ 95% confidence intervals of the three proposed methods based on 10,000 simulations

Additional file 4 provides R code for the confidence interval simulations for the common AC1.

An example

As an example, we used data from a randomized clinical trial called the Silicone Study, which was conducted to investigate the effectiveness of silicone fluids versus gases in the management of proliferative vitreoretinopathy (PVR) by vitrectomy [45]. The PVR classification, determined at the baseline visit, defines the severity of the disease as a continuum of increasing pathology graded as C3, D1, D2 or D3. The presence or absence of retinal breaks in the superior nasal quadrant was evaluated clinically by the operating ophthalmic surgeon and photographically by an independent fundus photograph reading center [46].

The data and results are summarized in Table 6. For reference, the results of the homogeneity score test proposed by Nam for the intra-class κ are also provided [15]. The probabilities of agreement in each stratum ranged from 0.800 to 0.880 and differed little. However, the values of κ ranged widely, from 0.117 to 0.520, possibly owing to the prevalence effect caused by the small values of π. In contrast, the values of γ ranged from 0.723 to 0.861 and did not differ greatly among strata.

Table 6 Agreement between ophthalmologist and reading center classifying superior nasal retinal breaks stratified by PVR grade

The proposed homogeneity score statistic \( T\left({\overset{\sim }{\gamma}}_0,\overset{\sim }{\boldsymbol{\pi}}\right) \) was 2.060 (p-value = 0.560), so the homogeneity hypothesis was not rejected. The estimate of the common AC1 was 0.808, and its 95% confidence intervals were 0.743–0.873 (SA method), 0.732–0.864 (FZ method) and 0.730–0.862 (PV method). For comparison, the score statistic for testing the homogeneity of the κ’s [15] was 2.700 (p-value = 0.440), and the common κ was 0.352.

Additional file 5 provides R code for the clinical data example.

To investigate the sensitivity of the indices to πk, we hypothetically considered more balanced and less balanced πk under fixed pa and nk in each stratum. The generated data sets and analysis results are summarized in Table S3 in Additional file 1. κ was sensitive to changes in the value of π, whereas AC1 was much less so; likewise, the common AC1 was not affected as much as the common κ when the balance of π was lost.

Discussion

It is well known that Cohen’s κ depends strongly on the marginal distributions, and Gwet proposed alternative and more stable measures of agreement: AC1 for nominal data and its extension AC2 for ordinal data [24, 25]. A number of alternative measures have also been proposed, such as Holley and Guilford’s G [19], Aickin’s α [20], Andres and Marzo’s delta [21] and Marasini’s s* [22, 23]. Gwet [24] and Shankar and Bangdiwala [28] compared several measures and showed that AC1 has better properties than the other kappa-type measures. In addition, AC1 has been utilized in the field of medical research over the past decade [29,30,31,32,33,34,35]. However, statistical inference procedures for AC1 have not been discussed sufficiently. Therefore, Ohyama expressed AC1 using population parameters to develop a likelihood-based inference procedure and constructed confidence intervals for AC1 based on profile variances and likelihood ratios; inclusion of subject covariates, hypothesis testing and sample size estimation were also presented [36]. In the present study, the case of stratified data was discussed as an extension of Ohyama [36] for two raters with binary outcomes. Tests were derived for the homogeneity of AC1 across K independent strata, and inference for the common AC1 was discussed.

In the numerical evaluation of type I error, both tests were conservative when the sample size was small and γ0 was 0.9, but this conservativeness was relaxed when the sample size was as large as 80. In the other simulation settings, the score test performed well, while the GOF test sometimes could not achieve the nominal level. Therefore, we recommend the score test for testing the homogeneity of AC1 among K strata. Note that, when zero cells are observed, the homogeneity score test statistic cannot be calculated. In such cases in our simulation study, we simply added 0.5 to each cell, which had no serious effect on the performance of the proposed score test in our simulation settings.

If the homogeneity assumption is reasonable, it may be desirable to provide an estimate of the common AC1 as a summary measure of reliability. In the present study, we proposed an estimator of the common AC1 and constructed its confidence intervals based on the SA, FZ and PV methods, evaluating the performance of each numerically. The bias and MSE tended to decrease as the sample size increased and were nearly 0 when n = 80. The PV method provides coverage close to the nominal level in most situations, while the SA method tends to undercover and the FZ method tends to overcover in some situations. Therefore, we recommend the PV method for calculating confidence intervals.

As in the PVR example, AC1 in each stratum is less affected by the prevalence, or marginal probability, than κ is. This suggests that the proposed homogeneity test and the estimation of the common AC1 are also essentially more stable than their counterparts for κ.

There were some limitations to this study. First, as described above, the proposed score test was very conservative when γ0 = 0.9 and the sample size was small. An exact approach might be an alternative in such cases.

Next, this study was limited to two raters with binary outcomes in each stratum. However, in the evaluation of medical data, it is often the case that multiple raters classify subjects into nominal or ordered categories. Our proposed method may be extended to the case of multiple raters with binary outcomes using the likelihood function for multiple raters. For two raters with nominal outcomes, Agresti [47] proposed a quasi-symmetry model with kappa as a parameter, and this technique may be extended to AC1 in the case of stratified data.

Finally, continuous covariates need to be categorized adequately to apply the proposed approach. The regression model proposed by Ohyama [36] can be used to assess the effect of continuous covariates on AC1, but it is limited to the case of two raters with binary data. Nelson and Edwards [48] and Nelson, Mitani and Edwards [49] proposed a method for constructing a measure of agreement using generalized linear mixed-effect models by introducing continuous latent variables representing the subject’s true disease status and for flexibly incorporating rater and subject covariates. These approaches might be applicable to AC1 and AC2.

Conclusion

The method proposed in this study is considered to be useful for summarizing evaluations of consistency performed in multiple or stratified inter-rater agreement studies. In addition, the proposed method can be applied not only to medical or epidemiological research but also to assessment of the degree of consistency of characteristics, such as biometrics, psychological measurements, and data in the behavioral sciences.

Availability of data and materials

The program code is shared in the additional files. The clinical data referred to are from Barlow et al. [45].

Abbreviations

FZ:

Fisher’s Z transformation

GOF:

goodness-of-fit

MLE:

maximum likelihood estimator

MSE:

mean squared error

PV:

profile variance

PVR:

proliferative vitreoretinopathy

SA:

simple asymptotic

References

  1. Cohen J. Coefficient of agreement for nominal scales. Educ Psychol Meas. 1960;20:37–40. https://doi.org/10.1177/001316446002000104.

  2. Scott WA. Reliability of content analysis; the case of nominal scale coding. Public Opin Q. 1955;19:321–5. https://doi.org/10.1086/266577.

  3. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–74. https://doi.org/10.2307/2529310.

  4. Cohen J. Weighted kappa: nominal scale agreement provision for scaled disagreement or partial credit. Psychol Bull. 1968;70(4):213–20. https://doi.org/10.1037/h0026256.

  5. Fleiss JL. Measuring nominal scale agreement among many raters. Psychol Bull. 1971;76(5):378–82. https://doi.org/10.1037/h0031619.

  6. Landis JR, Koch GG. An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics. 1977;33(2):363–74. https://doi.org/10.2307/2529786.

  7. Kraemer HC. Extension of the kappa coefficient. Biometrics. 1980;36(2):207–16. https://doi.org/10.2307/2529972.

  8. Davies M, Fleiss JL. Measuring agreement for multinomial data. Biometrics. 1982;38(4):1047–51. https://doi.org/10.2307/2529886.

  9. Berry KJ, Mielke PW. A generalization of Cohen's kappa agreement measure to interval measurement and multiple raters. Educ Psychol Meas. 1988;48(4):921–33. https://doi.org/10.1177/0013164488484007.

  10. Oden NL. Estimating kappa from binocular data. Stat Med. 1991;10(8):1303–11. https://doi.org/10.1002/sim.4780100813.

  11. McKenzie DP, Mackinnon AJ, Péladeau N, Onghena P, Bruce PC, Clarke DM, et al. Comparing correlated kappas by resampling: is one level of agreement significantly different from another? J Psychiatr Res. 1996;30(6):483–92. https://doi.org/10.1016/S0022-3956(96)00033-7.

  12. Barnhart HX, Williamson JM. Weighted least-squares approach for comparing correlated kappa. Biometrics. 2002;58(4):1012–9. https://doi.org/10.1111/j.0006-341X.2002.01012.x.

  13. Gwet KL. Testing the difference of correlated agreement coefficients for statistical significance. Educ Psychol Meas. 2016;76(4):609–37. https://doi.org/10.1177/0013164415596420.

  14. Donner A, Eliasziw M, Klar N. Testing the homogeneity of kappa statistics. Biometrics. 1996;52(1):176–83. https://doi.org/10.2307/2533154.

  15. Nam JM. Homogeneity score test for the intraclass version of the kappa statistics and sample-size determination in multiple or stratified studies. Biometrics. 2003;59(4):1027–35. https://doi.org/10.1111/j.0006-341X.2003.00118.x.

  16. Feinstein AR, Cicchetti DV. High agreement but low kappa: I. the problems of two paradoxes. J Clin Epidemiol. 1990;43(6):543–9. https://doi.org/10.1016/0895-4356(90)90158-L.

  17. Thompson WD, Walter SD. A reappraisal of the kappa coefficient. J Clin Epidemiol. 1988;41(10):949–58. https://doi.org/10.1016/0895-4356(88)90031-5.

  18. Vach W. The dependence of Cohen's kappa on the prevalence does not matter. J Clin Epidemiol. 2005;58(7):655–61. https://doi.org/10.1016/j.jclinepi.2004.02.021.

  19. Holley JW, Guilford JP. A note on the G index of agreement. Educ Psychol Meas. 1964;24(4):749–53. https://doi.org/10.1177/001316446402400402.

  20. Aickin M. Maximum likelihood estimation of agreement in the constant predictive probability model, and its relation to Cohen’s kappa. Biometrics. 1990;46(2):293–302. https://doi.org/10.2307/2531434.

  21. Andrés AM, Marzo PF. Delta: a new measure of agreement between two raters. Br J Math Stat Psychol. 2004;57(Pt 1):1–19. https://doi.org/10.1348/000711004849268.

  22. Marasini D, Quatto P, Ripamonti E. The ordinal inter-rater agreement for the evaluation of University courses. Stat Appl. 2014;XII(1):5–16. https://doi.org/10.1400/229464.

  23. Marasini D, Quatto P, Ripamonti E. Assessing the inter-rater agreement through weighted indexes. Stat Methods Med Res. 2016;25(6):2611–33. https://doi.org/10.1177/0962280214529560.

  24. Gwet KL. Computing inter-rater reliability and its variance in the presence of high agreement. Br J Math Stat Psychol. 2008;61(Pt 1):29–48. https://doi.org/10.1348/000711006X126600.

  25. Gwet KL. Handbook of inter-rater reliability: the definitive guide to measuring the extent of agreement among raters. 4th ed. Gaithersburg: Advanced Analytics, LLC; 2014.

  26. Byrt T, Bishop J, Carlin JB. Bias, prevalence and kappa. J Clin Epidemiol. 1993;46(5):423–9. https://doi.org/10.1016/0895-4356(93)90018-V.

  27. Bangdiwala SI. A graphical test for observer agreement, vol. 1. Amsterdam: Proc 45th Int Stats Institute Meeting; 1985. p. 307–8.

  28. Shankar V, Bangdiwala SI. Observer agreement paradoxes in 2 × 2 tables: comparison of agreement measures. BMC Med Res Methodol. 2014;14:100. https://doi.org/10.1186/1471-2288-14-100.

  29. Alencar LM, Zangwill LM, Weinreb RN, Bowd C, Vizzeri G, Sample PA, et al. Agreement for detecting glaucoma progression with the GDx guided progression analysis, automated perimetry, and optic disc photography. Ophthalmology. 2010;117(3):462–70. https://doi.org/10.1016/j.ophtha.2009.08.012.

  30. Marks D, Comans T, Thomas M, Ng SK, O'Leary S, Conaghan PG, et al. Agreement between a physiotherapist and an orthopaedic surgeon regarding management and prescription of corticosteroid injection for patients with shoulder pain. Man Ther. 2016;26:216–22. https://doi.org/10.1016/j.math.2016.10.001.

  31. Pollock M, Fernandes RM, Hartling L. Evaluation of AMSTAR to assess the methodological quality of systematic reviews in overviews of reviews of healthcare interventions. BMC Med Res Methodol. 2017;17(1):48. https://doi.org/10.1186/s12874-017-0325-5.

  32. Veldhoen S, Weng AM, Knapp J, Kunz AS, Stäb D, Wirth C, et al. Self-gated non-contrast-enhanced functional lung MR imaging for quantitative ventilation assessment in patients with cystic fibrosis. Radiology. 2017;283(1):242–51. https://doi.org/10.1148/radiol.2016160355.

  33. Zee J, Hodgin JB, Mariani LH, Gaut JP, Palmer MB, Bagnasco SM, et al. Reproducibility and feasibility of strategies for morphologic assessment of renal biopsies using the Nephrotic Syndrome Study Network digital pathology scoring system. Arch Pathol Lab Med. 2018;142(5):613–25. https://doi.org/10.5858/arpa.2017-0181-OA.

  34. Hansen D, Hansen E, Retegan C, Morphet J, Beiles CB. Validation of data submitted by the treating surgeon in the Victorian audit of surgical mortality. ANZ J Surg. 2019;89(1–2):16–9. https://doi.org/10.1111/ans.14910.

  35. Wennberg S, Karlsen LA, Stalfors J, Bratt M, Bugten V. Providing quality data in health care - almost perfect inter-rater agreement in the Norwegian tonsil surgery register. BMC Med Res Methodol. 2019;19(1):6. https://doi.org/10.1186/s12874-018-0651-2.

  36. Ohyama T. Statistical inference of agreement coefficient between two raters with binary outcomes. Commun Stat Theory Methods. 2019. https://doi.org/10.1080/03610926.2019.1576894.

  37. Fleiss JL. Statistical methods for rates and proportions. 2nd ed. Hoboken: Wiley; 1981.

  38. Nam JM. Testing the intraclass version of kappa coefficient of agreement with binary scale and sample size determination. Biom J. 2002;44:558–70. https://doi.org/10.1002/1521-4036(200207)44:5<558::AID-BIMJ558>3.0.CO;2-5.

  39. Wilding GE, Consiglio JD, Shan G. Exact approaches for testing hypotheses based on the intra-class kappa coefficient. Stat Med. 2014;33(17):2998–3012. https://doi.org/10.1002/sim.6135.

  40. Donner A, Eliasziw M. A goodness-of-fit approach to inference procedures for the kappa statistic: confidence interval construction, significance-testing and sample size estimation. Stat Med. 1992;11(11):1511–9. https://doi.org/10.1002/sim.4780130809.

  41. Lee JJ, Tu ZN. A better confidence interval for kappa on measuring agreement between two raters with binary outcomes. J Comput Graph Stat. 1994;3:301–21. https://doi.org/10.2307/1390914.

  42. Donner A, Zou G. Interval estimation for a difference between intraclass kappa statistics. Biometrics. 2002;58(1):209–15. https://doi.org/10.1111/j.0006-341X.2002.00209.x.

  43. Zou G, Donner A. Confidence interval estimation of the intraclass correlation coefficient for binary outcome data. Biometrics. 2004;60(3):807–11. https://doi.org/10.1111/j.0006-341X.2004.00232.x.

  44. Agresti A. Categorical data analysis. 2nd ed. Hoboken: Wiley; 2002. https://doi.org/10.1002/0471249688.

  45. Barlow W, Lai MY, Azen SP. A comparison of methods for calculating a stratified kappa. Stat Med. 1991;10(9):1465–72. https://doi.org/10.1002/sim.4780100913.

  46. Silicone Study Group. Proliferative vitreoretinopathy. The Silicone Study Group. Am J Ophthalmol. 1985;99(5):593–5. https://doi.org/10.1016/S0002-9394(14)77967-X.

  47. Agresti A. An agreement model with kappa as parameter. Stat Prob Lett. 1989;7(4):271–3. https://doi.org/10.1016/0167-7152(89)90104-1.

  48. Nelson KP, Edwards D. Measures of agreement between many raters for ordinal classifications. Stat Med. 2015;34(23):3116–32. https://doi.org/10.1002/sim.6546.

  49. Nelson KP, Mitani AA, Edwards D. Assessing the influence of rater and subject characteristics on measures of agreement for ordinal ratings. Stat Med. 2017;36(20):3181–99. https://doi.org/10.1002/sim.7323.

  50. Bickel PJ, Doksum KA. Mathematical statistics: basic ideas and selected topics. 1st ed. San Francisco: Holden-Day; 1977.

  51. Rothman KJ. Estimation of confidence limits for the cumulative probability of survival in life table analysis. J Chronic Dis. 1978;31(8):557–60. https://doi.org/10.1016/0021-9681(78)90043-7.

  52. Afifi AA, Elashoff RM, Lee JJ. Simultaneous non-parametric confidence intervals for survival probabilities from censored data. Stat Med. 1986;5:653–62. https://doi.org/10.1002/sim.4780050612.

Acknowledgements

We thank Professor T. Kakuma and Dr. T. Yanagawa for providing useful advice on the model construction and evaluation method. We also thank reviewers and editors for constructive and useful advice for improving this article.

Funding

No grant support or other funding was received.

Author information

CH and TO designed the concept of this research. CH conducted the simulation, analyzed a clinical example and drafted the manuscript. TO supervised this study and critically reviewed the manuscript. Both authors have read and approved the manuscript.

Correspondence to Chikara Honda.

Ethics declarations

Ethics approval and consent to participate

Not applicable, because the study involved the development of statistical methods. The retrospective clinical data used in the example were originally published in Barlow et al. [45].

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

First and second derivatives of the log-likelihood function

The first and second derivatives of l(γ, π), where γ = (γ1, ∙ ∙ ∙, γK)′ and π = (π1, ∙ ∙ ∙, πK)′, with respect to γk and πk are obtained as follows:

$$ {\displaystyle \begin{array}{l}\frac{\partial l}{\partial {\gamma}_k}=\frac{1}{2}{A}_k{R}_k,\\ {}\frac{\partial l}{\partial {\pi}_k}=\frac{x_{1k}}{P_{1k}}-\frac{x_{3k}}{P_{3k}}+\left(1-{\gamma}_k\right)\left(1-2{\pi}_k\right){R}_k,\\ {}\frac{\partial^2l}{\partial {\gamma}_k^2}=-\frac{1}{4}{A}_k^2{S}_k,\\ {}\frac{\partial^2l}{\partial {\gamma}_k\partial {\pi}_k}=-\frac{A_k}{2}\left[\frac{x_{1k}}{P_{1k}^2}-\frac{x_{3k}}{P_{3k}^2}+\left(1-{\gamma}_k\right)\left(1-2{\pi}_k\right){S}_k\right]-\left(1-2{\pi}_k\right){R}_k,\\ {}\frac{\partial^2l}{\partial {\pi}_k^2}=-\frac{x_{1k}}{P_{1k}^2}-\frac{x_{3k}}{P_{3k}^2}-2\left(\frac{x_{1k}}{P_{1k}^2}-\frac{x_{3k}}{P_{3k}^2}\right)\left(1-{\gamma}_k\right)\left(1-2{\pi}_k\right)-2\left(1-{\gamma}_k\right){R}_k-{\left(1-{\gamma}_k\right)}^2{\left(1-2{\pi}_k\right)}^2{S}_k,\\ {}\frac{\partial^2l}{\partial {\gamma}_k\partial {\gamma}_{k\hbox{'}}}=\frac{\partial^2l}{\partial {\pi}_k\partial {\pi}_{k\hbox{'}}}=\frac{\partial^2l}{\partial {\gamma}_k\partial {\pi}_{k\hbox{'}}}=0\kern0.50em \left(k\ne {k}^{\prime}\right),\end{array}} $$

where

$$ {\displaystyle \begin{array}{l}{A}_k=1-2{\pi}_k\left(1-{\pi}_k\right),\\ {}{R}_k=\frac{x_{1k}}{P_{1k}}-\frac{2{x}_{2k}}{P_{2k}}+\frac{x_{3k}}{P_{3k}},\\ {}{S}_k=\frac{x_{1k}}{P_{1k}^2}+\frac{4{x}_{2k}}{P_{2k}^2}+\frac{x_{3k}}{P_{3k}^2}.\end{array}} $$
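These quantities are straightforward to evaluate numerically. The following minimal Python sketch (our own illustration; the paper's programs are in the additional files) computes the per-stratum score contributions. It uses the trinomial cell probabilities P1k = πk − (1 − γk)Ak/2, P2k = (1 − γk)Ak and P3k = 1 − πk − (1 − γk)Ak/2, which we recovered by integrating the score functions above; treat them as our reconstruction of the model rather than a quoted formula.

```python
def score_functions(x1, x2, x3, gamma, pi):
    """Per-stratum score contributions (dl/dgamma_k, dl/dpi_k)."""
    A = 1 - 2 * pi * (1 - pi)              # A_k = 1 - 2*pi_k*(1 - pi_k)
    P2 = (1 - gamma) * A                   # discordance probability (reconstruction)
    P1, P3 = pi - P2 / 2, 1 - pi - P2 / 2  # both-positive / both-negative cells
    R = x1 / P1 - 2 * x2 / P2 + x3 / P3    # R_k
    dl_dgamma = A * R / 2
    dl_dpi = x1 / P1 - x3 / P3 + (1 - gamma) * (1 - 2 * pi) * R
    return dl_dgamma, dl_dpi
```

At counts equal to their expectations, xhk = nkPhk, both contributions vanish, which gives a quick sanity check on the algebra.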

Since E(xhk) = nkPhk(γk) (h = 1, 2, 3),

$$ {\displaystyle \begin{array}{l}E\left(-\frac{\partial^2l}{\partial {\gamma}_k^2}\right)=\frac{n_k}{4}{A}_k^2{B}_k,\\ {}E\left(-\frac{\partial^2l}{\partial {\gamma}_k\partial {\pi}_k}\right)=\frac{n_k}{2}{A}_k{C}_k,\\ {}E\left(-\frac{\partial^2l}{\partial {\pi}_k^2}\right)={n}_k{D}_k,\end{array}} $$

where

$$ {\displaystyle \begin{array}{l}{B}_k=\frac{1}{P_{1k}}+\frac{4}{P_{2k}}+\frac{1}{P_{3k}},\\ {}{C}_k=\frac{1}{P_{1k}}-\frac{1}{P_{3k}}+\left(1-{\gamma}_k\right)\left(1-2{\pi}_k\right){B}_k,\\ {}{D}_k=\frac{1}{P_{1k}}+\frac{1}{P_{3k}}+\left(1-{\gamma}_k\right)\left(1-2{\pi}_k\right)\left(\frac{1}{P_{1k}}-\frac{1}{P_{3k}}+{C}_k\right).\end{array}} $$

Thus, the Fisher information matrix is given as follows:

$$ I\left(\boldsymbol{\gamma}, \boldsymbol{\pi} \right)=\frac{1}{4}\left[\begin{array}{cc}\operatorname{diag}\left({n}_k{A}_k^2{B}_k\right)& \operatorname{diag}\left(2{n}_k{A}_k{C}_k\right)\\ {}\operatorname{diag}\left(2{n}_k{A}_k{C}_k\right)& \operatorname{diag}\left(4{n}_k{D}_k\right)\end{array}\right]. $$
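As a sketch of how this matrix can be assembled in practice, the following Python function (an illustration of ours, using NumPy; the function name is not from the paper) builds the 2K × 2K information matrix from per-stratum quantities, with the cell probabilities P1k, P2k, P3k passed in directly.

```python
import numpy as np

def fisher_information(n, gamma, pi, P1, P2, P3):
    """Assemble the 2K x 2K Fisher information matrix I(gamma, pi).

    All arguments are length-K sequences: per-stratum sample sizes n,
    AC1 parameters gamma, marginal probabilities pi, and the trinomial
    cell probabilities P1, P2, P3 (determined by the model from gamma
    and pi, but supplied directly here)."""
    n, gamma, pi, P1, P2, P3 = map(np.asarray, (n, gamma, pi, P1, P2, P3))
    A = 1 - 2 * pi * (1 - pi)
    B = 1 / P1 + 4 / P2 + 1 / P3
    C = 1 / P1 - 1 / P3 + (1 - gamma) * (1 - 2 * pi) * B
    D = 1 / P1 + 1 / P3 + (1 - gamma) * (1 - 2 * pi) * (1 / P1 - 1 / P3 + C)
    K = len(n)
    info = np.zeros((2 * K, 2 * K))
    info[:K, :K] = np.diag(n * A ** 2 * B) / 4   # gamma-gamma block
    info[:K, K:] = np.diag(n * A * C) / 2        # gamma-pi block
    info[K:, :K] = info[:K, K:]                  # symmetric counterpart
    info[K:, K:] = np.diag(n * D)                # pi-pi block
    return info
```

All four blocks are diagonal because strata are independent, so the matrix is sparse; only the 2 × 2 sub-blocks linking γk and πk within a stratum are non-zero.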

Score test statistic for the null hypothesis

Define the score function U as

$$ U\left(\boldsymbol{\gamma}, \boldsymbol{\pi} \right)={\left(\partial l/\partial {\gamma}_1,\dots, \partial l/\partial {\gamma}_K,\partial l/\partial {\pi}_1,\dots, \partial l/\partial {\pi}_K\right)}^{\prime } $$

The score statistic for testing the null hypothesis H0 : γ1 = ⋯ = γK = γ0 is asymptotically distributed as a χ2 variable with K − 1 degrees of freedom and is expressed as

$$ T\left({\overset{\sim }{\gamma}}_0,\overset{\sim }{\boldsymbol{\pi}}\right)=U{\left({\overset{\sim }{\gamma}}_0,\overset{\sim }{\boldsymbol{\pi}}\right)}^{\prime }I{\left({\overset{\sim }{\gamma}}_0,\overset{\sim }{\boldsymbol{\pi}}\right)}^{-1}U\left({\overset{\sim }{\gamma}}_0,\overset{\sim }{\boldsymbol{\pi}}\right), $$

where \( {\overset{\sim }{\gamma}}_0 \) and \( {\overset{\sim }{\pi}}_k \) are the maximum likelihood estimators under H0, and the score function vector is then expressed as \( U\left({\overset{\sim }{\gamma}}_0,\overset{\sim }{\boldsymbol{\pi}}\right)=\frac{1}{2}{\left({\overset{\sim }{A}}_1{\overset{\sim }{R}}_1,\dots, {\overset{\sim }{A}}_K{\overset{\sim }{R}}_K,0,\dots, 0\right)}^{\prime }. \)

The upper-left K × K block of \( I{\left({\overset{\sim }{\gamma}}_0,\overset{\sim }{\boldsymbol{\pi}}\right)}^{-1} \) is expressed as follows:

$$ {\displaystyle \begin{array}{l}{I}^{\gamma \gamma}\left({\overset{\sim }{\gamma}}_0,\overset{\sim }{\boldsymbol{\pi}}\right)=4{\left(\operatorname{diag}\left({n}_k{\overset{\sim }{A}}_k^2{\overset{\sim }{B}}_k\right)-\operatorname{diag}\left(2{n}_k{\overset{\sim }{A}}_k{\overset{\sim }{C}}_k\right){\operatorname{diag}}^{-1}\left(4{n}_k{\overset{\sim }{D}}_k\right)\operatorname{diag}\left(2{n}_k{\overset{\sim }{A}}_k{\overset{\sim }{C}}_k\right)\right)}^{-1}\\ {}=4\operatorname{diag}\left[\frac{{\overset{\sim }{D}}_k}{n_k{\overset{\sim }{A}}_k^2\left({\overset{\sim }{B}}_k{\overset{\sim }{D}}_k-{\overset{\sim }{C}}_k^2\right)}\right],\end{array}} $$

so that the score test statistic in Eq. (11) is obtained.
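Since the π-components of U vanish at the constrained MLE, the quadratic form collapses to a sum of per-stratum terms, T = Σk R̃k² D̃k / [nk(B̃k D̃k − C̃k²)]. The following Python sketch (ours, not the authors') evaluates this sum; the constrained MLEs are assumed to be computed elsewhere, and the cell probabilities are our reconstruction obtained by integrating the score functions.

```python
def cells(gamma, pi):
    """Trinomial cell probabilities (our reconstruction from the score
    functions: P1 = pi - (1-gamma)*A/2, P2 = (1-gamma)*A)."""
    A = 1 - 2 * pi * (1 - pi)
    P2 = (1 - gamma) * A
    return pi - P2 / 2, P2, 1 - pi - P2 / 2

def score_statistic(data, gamma0, pis):
    """Homogeneity score statistic T (chi-square with K-1 df under H0).

    data   : per-stratum counts (x1, x2, x3)
    gamma0 : common-AC1 MLE under H0 (computed elsewhere)
    pis    : per-stratum marginal-probability MLEs under H0"""
    T = 0.0
    for (x1, x2, x3), pi in zip(data, pis):
        n = x1 + x2 + x3
        P1, P2, P3 = cells(gamma0, pi)
        B = 1 / P1 + 4 / P2 + 1 / P3
        C = 1 / P1 - 1 / P3 + (1 - gamma0) * (1 - 2 * pi) * B
        D = 1 / P1 + 1 / P3 + (1 - gamma0) * (1 - 2 * pi) * (1 / P1 - 1 / P3 + C)
        R = x1 / P1 - 2 * x2 / P2 + x3 / P3
        # (A*R/2)^2 times the k-th diagonal entry 4D/(n A^2 (BD - C^2))
        T += R ** 2 * D / (n * (B * D - C ** 2))
    return T
```

When every stratum's counts exactly match their fitted expectations, each R̃k is zero and T vanishes, as expected under perfect homogeneity.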

Confidence interval for γ0 based on Fisher’s Z transformation

Fisher’s Z transformation of \( {\overset{\sim }{\gamma}}_0 \) is defined by

$$ \overset{\sim }{z}=\frac{1}{2}\log \frac{1+{\overset{\sim }{\gamma}}_0}{1-{\overset{\sim }{\gamma}}_0}. $$

Using the delta method, the asymptotic variance of \( \overset{\sim }{z} \) is represented by

$$ Var\left(\overset{\sim }{z}\right)={\left(\frac{1}{1-{\gamma}_0^2}\right)}^2 Var\left({\overset{\sim }{\gamma}}_0\right), $$

where \( Var\left({\overset{\sim }{\gamma}}_0\right) \) is given by Eq. (14). Then a confidence interval for z = 0.5 log[(1 + γ0)/(1 − γ0)] is obtained by

$$ {C}_{\pm }=\overset{\sim }{z}\pm {Z}_{\alpha /2}\sqrt{\hat{Var}\left(\overset{\sim }{z}\right)}, $$

where \( \hat{Var}\left(\overset{\sim }{z}\right) \) is obtained by substituting \( {\overset{\sim }{\gamma}}_0 \) and \( {\overset{\sim }{\pi}}_k \) into \( Var\left(\overset{\sim }{z}\right) \). Thus, the confidence interval for γ0 based on the FZ method is obtained as follows:

$$ \left(\frac{\exp \left(2{C}_{-}\right)-1}{\exp \left(2{C}_{-}\right)+1},\frac{\exp \left(2{C}_{+}\right)-1}{\exp \left(2{C}_{+}\right)+1}\right). $$
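The FZ interval can be computed as in the Python sketch below (our illustration; the default z is the upper 2.5% standard normal point, and the cell probabilities are our reconstruction from the score functions).

```python
import math

def cells(gamma, pi):
    # trinomial cell probabilities (our reconstruction; see the appendix text)
    A = 1 - 2 * pi * (1 - pi)
    P2 = (1 - gamma) * A
    return pi - P2 / 2, P2, 1 - pi - P2 / 2

def var_common_ac1(ns, gamma0, pis):
    """Asymptotic variance of the common-AC1 MLE (cf. Eq. (14))."""
    total = 0.0
    for n, pi in zip(ns, pis):
        P1, P2, P3 = cells(gamma0, pi)
        A = 1 - 2 * pi * (1 - pi)
        B = 1 / P1 + 4 / P2 + 1 / P3
        C = 1 / P1 - 1 / P3 + (1 - gamma0) * (1 - 2 * pi) * B
        D = 1 / P1 + 1 / P3 + (1 - gamma0) * (1 - 2 * pi) * (1 / P1 - 1 / P3 + C)
        total += n * A ** 2 * (B - C ** 2 / D)
    return 4.0 / total

def fz_interval(ns, gamma0, pis, z=1.959964):
    """Fisher-Z confidence interval for the common AC1."""
    var_z = var_common_ac1(ns, gamma0, pis) / (1 - gamma0 ** 2) ** 2
    half = z * math.sqrt(var_z)
    zt = math.atanh(gamma0)        # Fisher's Z transform of gamma0
    return math.tanh(zt - half), math.tanh(zt + half)
```

Because tanh maps back into (−1, 1), the FZ limits always respect the parameter range, which is the usual motivation for transforming before constructing the interval.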

Derivation of Eq. (14)

Taking the second-order partial derivatives of the log-likelihood function l0(γ0, π) and then their expectations, we obtain

$$ {\displaystyle \begin{array}{l}{I}_{\gamma \gamma}=E\left(-\frac{\partial^2{l}_0}{\partial {\gamma_0}^2}\right)=\frac{1}{4}\sum \limits_{k=1}^K{n}_k{A}_k^2{B}_k^{(0)},\\ {}{I}_{kk}=E\left(-\frac{\partial^2{l}_0}{\partial {\pi}_k^2}\right)={n}_k{D}_k^{(0)},\\ {}{I}_{\gamma k}=E\left(-\frac{\partial^2{l}_0}{\partial {\gamma}_0\partial {\pi}_k}\right)=\frac{1}{2}{n}_k{A}_k{C}_k^{(0)},\end{array}} $$

where \( {B}_k^{(0)},{C}_k^{(0)} \) and \( {D}_k^{(0)} \) denote Bk, Ck and Dk evaluated at γk = γ0, respectively. Let

$$ I=\left(\begin{array}{cc}{I}_{\gamma \gamma}& {I}_{\gamma \pi}\\ {}{I}_{\gamma \pi}^{\prime }& {I}_{\pi \pi}\end{array}\right), $$

where Iγπ = (Iγ1, …, IγK) and Iππ = diag(Ikk). When P1k ≠ 0, P2k ≠ 0 and P3k ≠ 0 for all k, Iππ is a non-singular matrix, and the element of the inverse of I corresponding to Iγγ, which is the asymptotic variance of \( {\overset{\sim }{\gamma}}_0 \), is given by

$$ {I}^{\gamma \gamma}={\left({I}_{\gamma \gamma}-{I}_{\gamma \pi}{I}_{\pi \pi}^{-1}{I}_{\gamma \pi}^{\prime}\right)}^{-1}={\left({I}_{\gamma \gamma}-\sum \limits_{k=1}^K{I}_{\gamma k}^2{I}_{kk}^{-1}\right)}^{-1}=4{\left[\sum \limits_{k=1}^K{n}_k{A_k}^2\left({B}_k^{(0)}-\frac{{C_k^{(0)}}^2}{D_k^{(0)}}\right)\right]}^{-1}. $$

Since

$$ \frac{4{D}_k^{(0)}}{B_k^{(0)}{D}_k^{(0)}-{C}_k^{(0)2}}=\left(1-{p}_{a,k}\right){p}_{a,k}+{\left(1-{\gamma}_0\right)}^2\left(1-2{p}_{e,k}^{\ast}\right)\left(1+{p}_{a,k}\right), $$

where \( {p}_{a,k}={\gamma}_0\left(1-{p}_{e,k}^{\ast}\right)+{p}_{e,k}^{\ast } \) is the probability of agreement in the k-th stratum and \( {p}_{e,k}^{\ast }=2{\pi}_k\left(1-{\pi}_k\right) \) is the probability of chance agreement in the k-th stratum. Using this identity together with the variance formula for the single-stratum case given by Ohyama [36], Iγγ reduces to the rightmost expression in Eq. (14).

Profile variance approach

The profile variance of a statistic is obtained from its estimated variance by substituting estimates for the nuisance parameters only, leaving the parameter corresponding to the statistic itself unsubstituted [41].

In this study, \( {\overset{\sim }{\pi}}_k \) is substituted for πk in Eq. (15), and the profile variance \( \overset{\sim }{Var}\left({\overset{\sim }{\gamma}}_0\right) \) is then obtained from Eq. (14). Since \( {\overset{\sim }{\gamma}}_0 \) is asymptotically normally distributed with mean γ0 and variance given by Eq. (14), we have

$$ \frac{{\left({\overset{\sim }{\gamma}}_0-{\gamma}_0\right)}^2}{\overset{\sim }{Var}\left({\overset{\sim }{\gamma}}_0\right)}\to {\chi}_1^2. $$

Thus we can obtain the confidence limits as the two admissible roots of Eq. (17).

Since Eq. (17) is a cubic equation in γ0 that is cumbersome to solve analytically, we computed the confidence limits numerically; the program is given in an additional file. Other examples of the profile variance approach to confidence interval construction can be found in the literature: Bickel and Doksum [50] report a profile-variance confidence interval for the one-sample binomial proportion, and Rothman [51] and Afifi, Elashoff and Lee [52] describe profile-variance-type confidence intervals for survival probabilities.
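As an illustration of such a numerical solution, the Python sketch below (ours; the function names and the grid-plus-bisection scheme are not from the paper) scans the admissible range of γ0 for sign changes of f(γ0) = (γ̃0 − γ0)² − χ²1,α · Var(γ0) and refines each root by bisection. The cell probabilities are our reconstruction from the score functions in the appendix.

```python
def cells(gamma, pi):
    # trinomial cell probabilities (our reconstruction; see the appendix text)
    A = 1 - 2 * pi * (1 - pi)
    P2 = (1 - gamma) * A
    return pi - P2 / 2, P2, 1 - pi - P2 / 2

def profile_variance(ns, gamma, pis):
    """Variance (14) with gamma left free and the pi_k fixed at their MLEs."""
    total = 0.0
    for n, pi in zip(ns, pis):
        P1, P2, P3 = cells(gamma, pi)
        A = 1 - 2 * pi * (1 - pi)
        B = 1 / P1 + 4 / P2 + 1 / P3
        C = 1 / P1 - 1 / P3 + (1 - gamma) * (1 - 2 * pi) * B
        D = 1 / P1 + 1 / P3 + (1 - gamma) * (1 - 2 * pi) * (1 / P1 - 1 / P3 + C)
        total += n * A ** 2 * (B - C ** 2 / D)
    return 4.0 / total

def pv_interval(ns, gamma_hat, pis, chi2=3.841459, grid=2000):
    """Confidence limits as roots of (gamma_hat - g)^2 = chi2 * Var(g)."""
    def f(g):
        return (gamma_hat - g) ** 2 - chi2 * profile_variance(ns, g, pis)
    # restrict the search to gammas keeping every cell probability positive
    lo = max(max(1 - 2 * min(pi, 1 - pi) / (1 - 2 * pi * (1 - pi))
                 for pi in pis), -1)
    lo, hi = lo + 1e-4, 1 - 1e-4
    gs = [lo + (hi - lo) * i / grid for i in range(grid + 1)]
    roots = []
    for a, b in zip(gs, gs[1:]):
        if f(a) * f(b) <= 0:                 # sign change: refine by bisection
            for _ in range(60):
                m = (a + b) / 2
                a, b = (a, m) if f(a) * f(m) <= 0 else (m, b)
            roots.append((a + b) / 2)
    return min(roots), max(roots)
```

The default chi2 is the 95% point of the χ² distribution with one degree of freedom; the grid scan guards against missing a root of the cubic, and each detected bracket is then narrowed by ordinary bisection.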

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Honda, C., Ohyama, T. Homogeneity score test of AC1 statistics and estimation of common AC1 in multiple or stratified inter-rater agreement studies. BMC Med Res Methodol 20, 20 (2020). https://doi.org/10.1186/s12874-019-0887-5

Keywords

  • Common AC1
  • Consistency evaluation
  • Gwet’s AC1
  • Homogeneity test
  • Inter-rater agreement
  • Stratified study