BMC Medical Research Methodology

Background: The within-subject coefficient of variation and intra-class correlation coefficient are commonly used to assess the reliability or reproducibility of interval-scale measurements. Comparison of reproducibility or reliability of measurement devices or methods on the same set of subjects comes down to comparison of dependent reliability or reproducibility parameters.


Background
An extensive literature has been developed on procedures for testing the equality of two or more independent coefficients of variation as measures of reproducibility [3][4][5]. This work shows that likelihood-based methods such as the likelihood ratio (LR) test, the score test, and tests based on the generalized inference approach developed by Weerahandi [6] provide efficient procedures for comparing coefficients of variation (CVs) in univariate normal populations or from independent samples. However, there are situations where CVs from related samples should be compared. A typical situation is when two instruments are used to measure the same set of subjects, and each subject is repeatedly measured by the same instrument. We explain in the Methods section why the within-subject coefficient of variation (WSCV) is a more appropriate measure of reproducibility than the CV. Many authors use the terms reliability and reproducibility interchangeably [7][8][9]; however, we believe that they are conceptually different. Reliability is the degree of closeness of repeated observations on the same subject under the same experimental conditions, so the instrument is always the same. The intra-class correlation coefficient (ICC) is commonly used as a measure of reliability. It is calculated as the ratio of the between-subjects variance to the total variance. Therefore, the larger the heterogeneity among the subjects (with equal or lower random error), the easier it is to differentiate among them; in other words, the ICC measures how distinguishable the subjects are. Reproducibility, on the other hand, is the degree of closeness of repeated observations made on the same subject either by the same instrument or by different instruments. There is a wide debate among statisticians and psychometricians on the choice of appropriate measures of reliability and reproducibility; we refer the interested reader to [10,11].
The main focus of our paper is on the reproducibility parameter.
An important application from molecular biology research in which correlated/dependent reproducibility coefficients are compared arises when microarray technologies are compared in terms of the reproducibility of gene expression measurements. DNA microarrays are powerful technologies that make it possible to study genome-wide gene expression and are extensively used in biological research. As the technology evolved rapidly, a number of different platforms became available, which makes it challenging for researchers to know which technology is best suited for their needs. Various studies have directly compared the performance of one platform with another in terms of cross-platform comparability and agreement of gene expression results. However, the results of these studies are conflicting: some demonstrate concordance, others discordance between technologies [12][13][14][15][16][17]. Thus, one needs to take into consideration the accuracy and reproducibility of different types of microarrays when allocating laboratory resources for future experiments. The key factors for selecting an appropriate platform are (1) intra-assay reproducibility, and (2) the degree of cross-platform agreement [18]. Concordance among microarray platforms would allow researchers to directly compare their measurements and perform meta-analyses.
Most microarray reliability or reproducibility and cross-platform studies use Pearson's correlation as an index of reproducibility or agreement. However, it has long been recognized that procedures such as the paired t-test and Pearson's correlation are not appropriate tools for measuring agreement between measuring devices [19,20]. Rather, indices such as the intra-class correlation coefficient [21] and the within-subject coefficient of variation should be used as measures of reproducibility. It has also been demonstrated that the within-subject coefficient of variation is very useful in assessing instrument reproducibility [8,22].
The main focus of this paper is to develop several procedures for testing the equality of two dependent within-subject coefficients of variation computed from the same sample of subjects, which, to the best of our knowledge, has not been dealt with in the statistical literature, and to evaluate the statistical properties of these methods via extensive Monte Carlo simulation. We propose two approaches: one is likelihood based (LRT, Wald, and score tests), and the other is a regression based approach coined the PM test. After evaluating the statistical properties (power and empirical level of significance) of these tests using Monte Carlo simulation, the methodology is illustrated on data from two biomedical studies.

Likelihood based methodology
Suppose that we are interested in comparing the reproducibility of two instruments. Let x_ijl be the jth measurement of the ith subject by the lth instrument, j = 1,2,...,m_l, i = 1,2,...,n, and l = 1,2. To evaluate the WSCV we consider the one-way random effects model

x_ijl = mu_l + b_i + e_ijl,   (1)

where mu_l is the mean value of measurements made by the lth instrument, b_i are independent random subject effects N(0, sigma_b^2), and e_ijl are independent N(0, sigma_l^2) random errors.
Many authors have used the intra-class correlation coefficient (ICC), rho_l = sigma_b^2/(sigma_b^2 + sigma_l^2), as a measure of reproducibility/reliability [18,23]. Quan and Shih [8] argued that rho_l is study-population based since it involves the between-subject variation: the more heterogeneity in the population, the larger rho_l.
Alternatively, they proposed the within-subject coefficient of variation (WSCV), theta_l = sigma_l/mu_l, as a measure of reproducibility. It determines the degree of closeness of repeated measurements taken on the same subject either by the same instrument or on different occasions under the same conditions. It is clear that the smaller the WSCV, the better the reproducibility. We distinguish the WSCV from the coefficient of variation, CV_l = (sigma_b^2 + sigma_l^2)^(1/2)/mu_l, since CV_l involves sigma_b^2 in the numerator and, similar to rho_l, is population based. Therefore, more heterogeneity in the population would result in a large value of CV_l. For that reason we focus our work on the WSCV rather than the CV. We also note that there is an inverse relationship between the ICC (rho_l) and the corresponding within-subject variance sigma_l^2. Clearly, larger values of the ICC (higher reliability) are associated with smaller WSCV (better reproducibility). The focus of this paper is on aspects of statistical inference on the difference between two correlated WSCVs. The inferential procedure depends on the multivariate normality of the measurements and is mainly likelihood based. The following set-up facilitates the construction of the likelihood function.
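To make the distinction between the three indices concrete, they can all be estimated from an n-subjects by m-replicates array via the usual one-way random-effects ANOVA moment estimators. The following sketch (in Python rather than the authors' MATLAB; the function and variable names are ours) computes the ICC, the CV, and the WSCV from replicated data:

```python
import numpy as np

def reproducibility_indices(x):
    """One-way random-effects ANOVA estimates for an (n subjects x m replicates) array.

    Returns (icc, cv, wscv):
      icc  = sigma_b^2 / (sigma_b^2 + sigma_e^2)   reliability (population based)
      cv   = sqrt(sigma_b^2 + sigma_e^2) / mu      coefficient of variation (population based)
      wscv = sigma_e / mu                          within-subject CV (reproducibility)
    """
    n, m = x.shape
    grand_mean = x.mean()
    subj_means = x.mean(axis=1)
    msb = m * ((subj_means - grand_mean) ** 2).sum() / (n - 1)    # between-subject mean square
    msw = ((x - subj_means[:, None]) ** 2).sum() / (n * (m - 1))  # within-subject mean square
    sigma_e2 = msw
    sigma_b2 = max((msb - msw) / m, 0.0)          # truncate at zero if negative
    icc = sigma_b2 / (sigma_b2 + sigma_e2)
    cv = (sigma_b2 + sigma_e2) ** 0.5 / grand_mean
    wscv = sigma_e2 ** 0.5 / grand_mean
    return icc, cv, wscv

# demo: mu = 100, sigma_b = 10, sigma_e = 2  =>  ICC ~ 100/104, WSCV ~ 0.02
rng = np.random.default_rng(0)
x = 100.0 + rng.normal(0.0, 10.0, (2000, 1)) + rng.normal(0.0, 2.0, (2000, 3))
icc, cv, wscv = reproducibility_indices(x)
```

The demo illustrates the point made above: with large between-subject heterogeneity the ICC is high and the CV inflated, while the WSCV reflects only the measurement error relative to the mean.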
Let x_i = (x_i11, ..., x_im11, x_i12, ..., x_im22)' denote the measurements on the ith subject, i = 1,2,...,n, where x_i11, ..., x_im11 are the m_1 measurements obtained by the first method (platform), and x_i12, ..., x_im22 are the m_2 measurements obtained by the second method (platform). We assume that x_i follows a multivariate normal distribution whose covariance matrix Sigma has intra-platform correlations rho_1 and rho_2 and a common inter-platform correlation rho_12 (2). For the lth method, the WSCV, which will be denoted by theta_l in the remainder of the paper, is defined as theta_l = sigma_l/mu_l. Our primary aim is to develop and evaluate methods for testing H_0: theta_1 = theta_2, taking into account the dependence induced by a positive value of rho_12. We restrict our evaluation to reproducibility studies having m_1 = m_2 = m.
The summary statistics given in (3) are the platform means and the within- and between-platform sums of squares and cross-products. The maximum likelihood estimates (MLEs) of mu_l and sigma_l^2 are, respectively, the sample mean of all nm measurements made by platform l and the corresponding within-subject variance estimate, l = 1, 2. Clearly, the within-subject variance estimate exists only for values of m > 1; therefore we shall assume that m > 1 throughout this paper. Following [24], we obtain estimates of rho_1 and rho_2 by computing Pearson's product-moment correlation over all possible pairs of measurements that can be constructed within platforms 1 and 2 respectively, with the estimate of rho_12 similarly obtained by computing this correlation over the pairs formed across the two platforms.

The WT of H_0: theta_1 = theta_2 requires the evaluation of the variances of the estimators of theta_l, l = 1, 2, and their covariance. To obtain these values we use elements of Fisher's information matrix, along with the delta method [26,27]. On writing psi = (psi_1', psi_2')', with psi_1 = (mu_1, mu_2)' and psi_2 = (sigma_1^2, sigma_2^2, rho_1, rho_2, rho_12)', Fisher's information matrix I = -E(d^2 l/d psi d psi') is block diagonal. This is based on a result from [26] (page 239) indicating that I_12 = I_21' = -E(d^2 l/d psi_1 d psi_2') = 0. Therefore, from the asymptotic theory of maximum likelihood estimation, the estimators of the mean parameters and of the covariance parameters are asymptotically independent. The elements of I_22 are given in the Appendix.
Inverting Fisher's information matrix, the elements of the inverse of I_22 give the asymptotic variance-covariance matrix of the maximum likelihood estimators of the covariance parameters. Applying the delta method [27], we can then obtain, to the first order of approximation, the variances and covariances of functions of these estimators. The maximum likelihood estimator of theta_l is the ratio of the estimated within-subject standard deviation to the estimated mean.
Again, by application of the delta method, we can show to the first order of approximation that the variance of the estimator of theta_l has the form derived by Quan and Shih [8].
Using the delta method once more, we obtain an approximation to the covariance of the two WSCV estimators. From [28] we apply the large sample theory of maximum likelihood to establish that

Z = (theta_1_hat - theta_2_hat)/SE(theta_1_hat - theta_2_hat)

is approximately distributed under H_0 as a standard normal deviate. The denominator of Z is the standard error of the difference of the two estimators. Since this standard error contains unknown parameters, its maximum likelihood estimate is obtained by substituting the estimates of theta_l, rho_l, and rho_12 for the corresponding parameters. Moreover, we may construct an approximate (1-alpha)100% confidence interval for (theta_1 - theta_2) given as

(theta_1_hat - theta_2_hat) +/- z_(1-alpha/2) SE(theta_1_hat - theta_2_hat),

where z_(1-alpha/2) is the (1-alpha/2) cut-off point of the standard normal distribution.
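As a sketch of how the Wald statistic and the interval are assembled once the point estimates and the standard error of their difference are in hand (the standard error itself coming from the Fisher information and delta method expressions above, and treated as given here), the following Python fragment (names ours) computes Z, a two-sided p-value, and the confidence interval:

```python
from statistics import NormalDist

def wald_test(theta1_hat, theta2_hat, se_diff, alpha=0.05):
    """Wald test of H0: theta1 = theta2 and (1-alpha)100% CI for theta1 - theta2.

    se_diff is the estimated standard error of the difference of the two WSCV
    estimators, assumed computed elsewhere via the delta method.
    """
    nd = NormalDist()
    diff = theta1_hat - theta2_hat
    z = diff / se_diff
    p = 2.0 * (1.0 - nd.cdf(abs(z)))               # two-sided normal p-value
    half = nd.inv_cdf(1.0 - alpha / 2.0) * se_diff  # z_(1-alpha/2) * SE
    return z, p, (diff - half, diff + half)

# illustrative values: theta1_hat = 0.05, theta2_hat = 0.06, SE of difference = 0.002
z, p, ci = wald_test(0.05, 0.06, 0.002)
```

With these illustrative inputs the interval excludes zero, so the second WSCV would be declared significantly larger (worse reproducibility) at the 5% level.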

Score test
One of the advantages of likelihood based inference is that, in addition to the WT and the LRT, Rao's score test can also be readily developed. The motivation is that it can sometimes be easier to maximize the likelihood function under the null hypothesis than under the alternative. A standard procedure for performing the score test of H_0: theta_1 = theta_2 is to set theta_2 = theta_1 + Delta, so that the null hypothesis is equivalent to H_0: Delta = 0, where Delta is unrestricted. Replacing mu_l by sigma_l/theta_l eliminates mu_l from the log-likelihood function L.
The score test has been applied in many situations and has proven to be locally powerful. Unfortunately, the inversion of A_(1.2) is quite complicated and we cannot obtain a simple expression for the score statistic that can be easily used. Moreover, we have found through extensive simulations that while the score test holds its level of significance, it is less powerful than the LRT and WT across all parameter configurations. We therefore restrict our subsequent discussion of power to the LRT and WT.

Regression test
For the ith subject, let d_i and s_i denote, respectively, the difference and the sum of the means of the replicate measurements obtained by the two platforms. It can be shown that the variance of s_i is strictly positive; the proof is straightforward and is therefore omitted. Direct application of multivariate normal theory then shows that the conditional expectation E(d_i | s_i) given in (11) is linear in s_i, say E(d_i | s_i) = alpha + beta s_i, and does not depend on the parameter rho_12 [32].
From (11.a) and (11.b), it is clear that alpha = beta = 0 if and only if mu_1 = mu_2 and sigma_1 = sigma_2 simultaneously, a joint hypothesis that implies the equality of the two correlated coefficients of variation. Testing this hypothesis therefore amounts to testing the significance of the regression equation (11).
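A sketch of the resulting regression test, under our assumption that d_i and s_i are the difference and sum of the per-subject replicate means (Python, names ours): regress d_i on s_i and form the F statistic for the joint hypothesis alpha = beta = 0, to be referred to an F(2, n-2) reference distribution.

```python
import numpy as np

def pm_statistic(x1, x2):
    """F statistic for the joint hypothesis alpha = beta = 0 in the regression
    of d_i on s_i, where x1 and x2 are (n subjects x m replicates) arrays of
    measurements of the same n subjects by the two platforms."""
    d = x1.mean(axis=1) - x2.mean(axis=1)   # per-subject difference of means
    s = x1.mean(axis=1) + x2.mean(axis=1)   # per-subject sum of means
    n = d.size
    X = np.column_stack([np.ones(n), s])    # intercept (alpha) and slope (beta)
    coef, *_ = np.linalg.lstsq(X, d, rcond=None)
    rss1 = float(((d - X @ coef) ** 2).sum())   # residuals of the fitted line
    rss0 = float((d ** 2).sum())                # residuals under alpha = beta = 0
    return ((rss0 - rss1) / 2.0) / (rss1 / (n - 2))

# demo: platform means differ (100 vs 105), so the F statistic should be large
rng = np.random.default_rng(2)
b = rng.normal(0.0, 10.0, (200, 1))             # shared subject effects
x1 = 100.0 + b + rng.normal(0.0, 2.0, (200, 3))
x2 = 105.0 + b + rng.normal(0.0, 2.0, (200, 3))
F = pm_statistic(x1, x2)
```

The statistic is compared with the upper alpha quantile of F(2, n-2); rejection indicates that the means or the within-subject variances of the two platforms differ.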

Simulation
The theoretical properties of the test procedures discussed thus far are largely intractable in finite samples. We therefore undertook a Monte Carlo study to determine the levels of significance and powers of these tests over a wide range of parameter values. For this study we generated observations from a multivariate normal distribution with the covariance structure defined in (2). Simulations were performed using programs written in MATLAB (The MathWorks, Inc., Natick, MA).
The parameters of the simulation included the total number of subjects (n), the number of replications (m 1 = m 2 = m), and various values of (θ 1 , θ 2 , ρ 1 , ρ 2 , ρ 12 ). For each of 2000 independent runs of an algorithm constructed to generate observations from multivariate normal distribution, we estimated the true level of significance and power of the LRT, Wald, Score and PM tests using a nominal level of significance 5% (two sided) for various combinations of parameters.
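The data-generation step can be sketched as follows, under our reading of the covariance structure in (2): exchangeable correlation rho_l within each platform and a constant cross-platform correlation rho_12 (Python rather than the authors' MATLAB; names ours).

```python
import numpy as np

def make_sigma(m, s1, s2, rho1, rho2, rho12):
    """2m x 2m covariance matrix: exchangeable blocks within each platform and a
    constant cross-platform correlation rho12 (our reading of structure (2))."""
    I, J = np.eye(m), np.ones((m, m))
    a11 = s1 ** 2 * ((1.0 - rho1) * I + rho1 * J)
    a22 = s2 ** 2 * ((1.0 - rho2) * I + rho2 * J)
    a12 = rho12 * s1 * s2 * J
    return np.block([[a11, a12], [a12, a22]])

# one simulated dataset of n subjects, m replicates per platform
rng = np.random.default_rng(3)
n, m = 5000, 3
v1 = v2 = 1.0                      # total variances (diagonal of Sigma)
rho1 = rho2 = 0.6                  # within-platform (intra-class) correlations
rho12 = 0.4                        # cross-platform correlation
theta1 = theta2 = 0.05             # target WSCVs
# within-subject SD is sqrt(v_l * (1 - rho_l)), so choose mu_l to hit theta_l
mu1 = np.sqrt(v1 * (1.0 - rho1)) / theta1
mu2 = np.sqrt(v2 * (1.0 - rho2)) / theta2
mu = np.r_[np.full(m, mu1), np.full(m, mu2)]
sigma = make_sigma(m, np.sqrt(v1), np.sqrt(v2), rho1, rho2, rho12)
x = rng.multivariate_normal(mu, sigma, size=n)   # n x 2m data matrix
```

For a level study theta_1 = theta_2 as above; for a power study the two targets are separated while the correlation parameters are held fixed.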
Tables 1 and 2 report the empirical significance levels based on 2000 simulated datasets for the four procedures (WT, score, LRT and PM) for sample sizes of n = 50 and n = 100, respectively. All procedures provide satisfactory significance levels at all parameter values examined. The empirical significance levels for smaller sample sizes (n = 10, 20, and 30) were also estimated; all test procedures provided empirical levels very close to the 5% nominal level (data not shown).

Tables 3 and 4 display empirical powers based on 2000 simulated datasets for the WT and LRT at sample sizes n = 30 and 50, respectively. As alluded to earlier, the score test is excluded from Tables 3 and 4 because its simulated empirical power values were unacceptably low (as we show in Table 5). For all parameter values, the WT and LRT provide almost identical power (Tables 3 and 4); although the LRT shows greater power than the WT at some parameter combinations, the difference is usually less than three percentage points. We also conducted simulations to estimate the powers of the test statistics for smaller sample sizes (n = 10 and 20) (data not shown). For some parameter combinations the Wald test and LRT provided acceptable power, especially when the distance between theta_1 and theta_2 is large, and showed greater power than both the score and PM tests. The power of the score test was generally very low.
For selected parameter values, power levels of the PM, Wald, and score tests for n = 50 subjects are given in Table 5. As already mentioned, the power of the score test is generally low, and the power of the Wald test is quite sensitive to the distance between theta_1 and theta_2. We note that the equality of the means and variances implies the equality of the WSCVs, but the reverse is not true. This strong assumption might explain the relatively poor performance of the PM test, particularly when the means are not well separated.
To assess the effect of non-normality on the properties of the proposed test statistics, we generated data from a log-normal distribution and evaluated the performance of the four procedures on 2000 simulated datasets. The empirical levels of the regression based PM test were quite close to the 5% nominal level, but its power was poor. The likelihood based procedures (Wald, LRT and score), however, did not preserve their nominal levels for the majority of the parameter combinations (data not shown).

Gene expression data
We illustrate the proposed methodologies by analyzing data from two biomedical studies. In the first data set we apply the methodology to gene expression measurements of identical RNA preparations on two commercially available microarray platforms, namely Affymetrix (25-mer) and Amersham (30-mer) [14]. The RNA was collected from pancreatic PANC-1 cells grown in a serum-rich medium ("control") and 24 h following the removal of the serum ("treatment"). Three biological replicates (B1, B2, and B3) and three technical replicates (T1, T2, and T3) of the first biological replicate (B1) were produced on each platform; therefore, for each condition (control and treatment) five hybridizations were conducted. The dataset consists of 2009 genes identified as common across the platforms after comparing their GenBank IDs, and it was normalized according to the manufacturers' standard software and normalization procedures. More details concerning this dataset can be found in the original article [14].
The results presented in this section were not restricted to the group of differentially expressed genes, and we used the "control" part of the data for both technical and biological replicates. The normalized intensity values were averaged for genes with multiple probes for a given gene ID. Hence, we have a sample size of n = 2009 genes measured three times (m = 3) by each of the two platforms (or instruments). We used the within-gene coefficient of variation as the measure of reproducibility of a specific platform.
The results of the data analyses are summarized in Table 6, which gives parameter estimates for both platforms, the estimated WSCV under the null hypothesis, and a confidence interval for the difference between the two WSCVs. We note that the correlation estimates remain the same under both hypotheses, and that the intra-class correlations (rho) are quite high: using the benchmarks provided in [33], both platforms produce substantially reproducible gene expression levels. Clearly, this is due to the large heterogeneity among the genes in the data set. Application of the LRT, Wald, and PM tests for the equality of two dependent WSCVs shows that Amersham has a significantly lower WSCV (P < 0.001), i.e. better reproducibility, for both the technical and biological replicates.

Analysis of computer aided tomographic scan measurements
Here we demonstrate the statistical methodologies of this paper on a much smaller data set than the microarray gene expression example. The data are from a study using computer-aided tomographic scans (CAT scans) of the heads of 50 psychiatric patients [20,34]. The measurements are the size of the brain ventricle relative to that of the patient's skull, given by the ventricle-brain ratio VBR = (ventricle size/brain size) × 100. For a given scan, VBR was determined from measurements of the perimeter of the patient's ventricle together with the perimeter of the inner surface of the skull. These measurements were taken either (i) from an automated pixel count (PIX) based on the images displayed on a television screen, or (ii) with a hand-held planimeter (PLAN) on a projection of the X-ray image. Table 7 summarizes the results. Clearly, all tests show that PIX has a significantly lower WSCV than PLAN (p < 0.001), that is, better reproducibility.

Discussion
A comparison between the reproducibility of two measuring instruments using the same set of subjects leads naturally to a comparison of two dependent indices. In this paper, several procedures are developed for testing the equality of two dependent within-subject coefficients of variation computed from the same sample of subjects. We proposed two approaches: one is likelihood based (LRT, Wald, and score tests), while the other is a regression based approach, an extension of the tests of Pitman [1] and Morgan [2]. We assessed the powers and the empirical levels of significance of these procedures via Monte Carlo simulation and compared the power of the regression based test with that of the likelihood based tests.
We illustrated the proposed methodologies with analyses of data from two biomedical studies. The majority of microarray reproducibility and cross-platform agreement studies use Pearson's correlation as an index of reproducibility and agreement, which is not an appropriate measure of reproducibility. Because of the large heterogeneity among the genes in the data set, the intra-class correlation coefficient would also not be an appropriate index of reliability, as highly heterogeneous populations artificially produce a high reliability index. Therefore, the WSCV should be used as an index of reproducibility. In addition, the methodology presented in this paper overcomes the difficulty noted by Tan et al. [14], in which the authors state that "Dependence between the datasets would confound any inferences we could make about the differences in correlations. ... determination whether differences in correlation were statistically significant could not be made". We have used the within-gene coefficient of variation as a measure of reproducibility of a specific platform; a comparison across platforms therefore leads naturally to a comparison of two dependent within-subject coefficients of variation.
Two issues need to be discussed in this section. The first is related to the nature of the data to be analyzed while the other is related to situations when the assumed underlying model generating the data deviates from the normal distribution.
First, a frequently occurring question in the planning of biomedical investigations is whether to measure the response or trait of interest on a continuous scale or to dichotomize it. The authors of [36], and more recently Shoukri and Donner [37], cautioned against dichotomizing traits measured on continuous scales; they demonstrated that the loss of efficiency in estimation of the reliability coefficient can be severe. The conclusion is that for naturally dichotomous traits (e.g. affected vs. not affected) one can use kappa to assess test-retest reliability, while for continuous traits the methods presented in this paper are more appropriate.
Second, it should be noted that the inference procedures discussed in this paper (except the PM test) are likelihood based, and their statistical properties may not hold in small samples, where the sampling distribution of a test statistic is unknown. Alternatively, one may use bootstrap methods to estimate the sampling distributions of the test statistics. When the data are hierarchical in nature, with variance-covariance matrix Sigma as shown in (2), one may use a model-based approach to generate bootstrap samples [38]: subjects are sampled with replacement and the coefficients of variation are estimated from each bootstrap sample, yielding their empirical sampling distributions. There is already a rich class of bootstrap methods for clustered data in the literature, but detailed theoretical results on the properties of these methods are lacking [39]. Gaining insight into bootstrapping clustered data for all these methods and drawing comparisons with our proposed likelihood based approach warrants serious investigation and is beyond the scope of this paper.
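The subject-level resampling idea can be sketched as follows (Python; the wscv helper and the percentile interval are our illustrative choices, not the authors' implementation). Resampling whole subjects, with the same indices applied to both platforms, preserves both the within-subject replication and the cross-platform dependence:

```python
import numpy as np

def wscv(x):
    """Within-subject coefficient of variation from an n-by-m array of replicates."""
    n, m = x.shape
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (m - 1))
    return np.sqrt(msw) / x.mean()

def bootstrap_ci(x1, x2, B=500, alpha=0.05, seed=0):
    """Percentile bootstrap CI for theta1 - theta2, resampling subjects with
    replacement (the same subjects from both platforms, preserving dependence)."""
    rng = np.random.default_rng(seed)
    n = x1.shape[0]
    diffs = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)     # resample whole subjects
        diffs[b] = wscv(x1[idx]) - wscv(x2[idx])
    return tuple(np.quantile(diffs, [alpha / 2.0, 1.0 - alpha / 2.0]))

# demo: platform 2 has twice the within-subject SD, so theta1 - theta2 < 0
rng = np.random.default_rng(4)
b_eff = rng.normal(0.0, 10.0, (300, 1))          # shared subject effects
x1 = 100.0 + b_eff + rng.normal(0.0, 2.0, (300, 3))
x2 = 100.0 + b_eff + rng.normal(0.0, 4.0, (300, 3))
lo, hi = bootstrap_ci(x1, x2, B=300)
```

An interval excluding zero supports a difference in reproducibility without relying on the normal-theory standard errors.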

Conclusion
Comparison of the reproducibility or reliability of measurement devices or methods on the same set of subjects comes down to a comparison of dependent reliability or reproducibility parameters. Testing the equality of two dependent WSCVs has not previously been dealt with in the statistical literature. The presented methodology overcomes the difficulty, noted by data analysts, that dependence, when ignored, confounds inference on measures of reliability or reproducibility. It should also be emphasized that when comparing reliability indices across platforms the ICC is not an appropriate measure, because its magnitude depends on the degree of heterogeneity among the subjects (here, the genes). We therefore recommend the WSCV in similar settings.
The LRT and WT procedures presented in Section 2 may also be extended in a straightforward manner to compare more than two platforms (methods, labs, or measurement devices). A further advantage of the LRT in this context is that it may easily be extended to deal with the case of an unequal number of replicates for each platform.
The codes developed (in MATLAB) can be used to do power calculations for planning a reproducibility study when comparing two methods (or devices), and can be obtained on request from the authors.

Pre-publication history
The pre-publication history for this paper can be accessed here: