Multivariate random effects meta-analysis of diagnostic tests with multiple thresholds

Background
Bivariate random effects meta-analysis of diagnostic tests is becoming a well established approach when studies present one two-by-two table or one pair of sensitivity and specificity. When studies present multiple thresholds for test positivity, meta-analysts usually reduce the data to a single two-by-two table, or take one threshold value at a time and apply the well developed meta-analytic approaches. However, this does not fully exploit the data.

Methods
In this paper we generalize the bivariate random effects approach to the situation where test results are presented with k thresholds for test positivity, resulting in a 2 by (k+1) table per study. The model can be fitted with standard likelihood procedures in statistical packages such as SAS (Proc NLMIXED). We follow a multivariate random effects approach; i.e., we assume that each study estimates a study specific ROC curve that can be viewed as randomly sampled from the population of all ROC curves of such studies. In contrast to the bivariate case, where nothing can be said about the shape of the study specific ROC curves without additional untestable assumptions, the multivariate model can be used to describe the study specific ROC curves. The models are easily extended with study level covariates.

Results
The method is illustrated using published meta-analysis data. The SAS NLMIXED syntax is given in the appendix.

Conclusion
We conclude that the multivariate random effects meta-analysis approach is an appropriate and convenient framework for meta-analysing studies with multiple thresholds, without losing any information by dichotomizing the test results.


Background
Meta-analysis of diagnostic accuracy studies depends on the type of data that is available from the different studies. The most frequently reported measures of diagnostic test accuracy are sensitivity and specificity, or a two-by-two table, i.e. results for a single threshold value. Meta-analytic methods for such data have been developed to summarize sensitivity and specificity separately or jointly, in a fixed or random effects context, for example [1][2][3][4][5][6]. In recent years the bivariate random effects meta-analysis of diagnostic tests has become a well established approach, which can be fitted in many statistical packages [1,2]. The bivariate approach has many advantages over separate random effects meta-analyses of sensitivity and specificity and over the traditional summary receiver operating characteristic (SROC) method of Littenberg and Moses [1,2,4]. In addition, it is flexible: different outcome measures, such as overall sensitivity and/or specificity, the diagnostic odds ratio and SROC curves, can be derived from the estimated parameters.
In this article we consider the situation where the diagnostic results for the test under evaluation are reported in three or more categories; for example, disease severity classified as malignant, suspect or benign. One straightforward approach, often followed in practice, is to dichotomize the test results into two categories and apply the well developed bivariate methods separately for each of the thresholds. When data are presented for many thresholds, a ROC can be calculated per study, and meta-analytic methods have been developed to derive a SROC from them [7][8][9]. Poon [10] discusses a latent normal distribution model for analysing ordinal responses, with applications in meta-analysis. This model can be applied to multiple threshold diagnostic meta-analysis data, but that paper only considered fixed effects modeling. Bipat et al [11] discussed a multivariate random effects approach for meta-analysis of cancer staging. They assumed the results for the different cancer stages to be independent, so SROC curves cannot be derived from their model. Specifically for diagnostic accuracy studies, Dukic et al [12] discussed both ordinal regression and hierarchical approaches based on latent variable modeling.
The above approaches are not direct extensions of the nowadays popular bivariate meta-analysis approach for the one threshold case. The aim of this article is to generalize that approach to the situation where test results are presented using more than one threshold, or in more than two categories. The same number of categories is not necessarily presented in all studies; in this article, however, we focus on the case where the number of categories is equal across studies, and we discuss how our approach can accommodate studies reported with unequal numbers of thresholds. Our approach can be implemented in standard statistical packages. In the methods section we briefly review the bivariate random effects approach and introduce the multivariate approach to meta-analyse studies that report test results with more than one threshold. We illustrate the methods using published meta-analysis data and end with a discussion.

The bivariate random effects (BREM) approach
For the situation where each study presents one pair of sensitivity and specificity with corresponding standard errors, the bivariate meta-analysis approach [13] has become a well established method [1,2,14]. The approach preserves the two-dimensional nature of the original data taking into account the between-studies correlation of sensitivity and specificity. It can be seen as an improvement on the method of Littenberg and Moses [4], which has been the standard method to construct a SROC for more than a decade.
In this section first we will introduce the bivariate random effects model (BREM) in its standard form. Subsequently we will derive another form of the model, which starts from a model for study specific ROCs and has a different parametrization. This formulation of the model is the natural one to generalize to the case where we have two or more pairs of specificity and sensitivity per study. This formulation also sheds more light on the interpretation of SROCs, which is problematic in the case where only one pair of sensitivity and specificity is available.
For study i, denote ξ_i = logit(1 − specificity_i) and η_i = logit(sensitivity_i). Let x_1i be the number of true positives, n_1i the total number of diseased subjects, x_0i the number of false positives and n_0i the total number of non-diseased subjects. Then the observed sensitivity and specificity for a given study i are x_1i/n_1i and (n_0i − x_0i)/n_0i, respectively. Note that the underlying sensitivity and specificity tend to be negatively correlated across studies because of explicit or implicit differences in the thresholds. Therefore ξ_i and η_i will tend to be positively correlated.
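For concreteness, these quantities can be computed from a study's two-by-two table. The following is a minimal Python sketch (our own illustration, not part of the original SAS analysis; the counts are hypothetical):

```python
import math

def logit(p):
    """logit(p) = log(p / (1 - p))."""
    return math.log(p / (1.0 - p))

def study_logits(x1, n1, x0, n0):
    """Return (xi, eta) = (logit(1 - specificity), logit(sensitivity)) for one
    study with x1/n1 true positives among the diseased and x0/n0 false
    positives among the non-diseased."""
    sens = x1 / n1              # observed sensitivity
    spec = (n0 - x0) / n0       # observed specificity
    return logit(1.0 - spec), logit(sens)

# Hypothetical study: 90/100 true positives, 20/200 false positives
xi, eta = study_logits(90, 100, 20, 200)
```

In practice a continuity correction is often applied when a cell count is zero, since the logit is then undefined.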

Between-studies variability
The between-studies model [1] is a bivariate normal distribution for the true logit transformed values:

(ξ_i, η_i)' ~ N((μ_ξ, μ_η)', Σ), with Σ = [σ_ξ², σ_ξη; σ_ξη, σ_η²].   (1)

The bivariate distribution of true logit transformed sensitivities and 1 − specificities can be characterized by different lines. Back-transforming such a line by taking the inverse logit gives a SROC. Since there are several reasonable choices of lines characterizing a bivariate normal distribution, several types of SROC are possible. For example, a straightforward choice would be the regression of η on ξ. However, since the roles of ξ (specificity) and η (sensitivity) are interchangeable, the regression line of ξ on η is an equally reasonable choice. The method of Littenberg and Moses chooses the regression line of D = η − ξ on S = η + ξ. Table 1 gives an overview of the 5 different choices distinguished by Arends et al [2].
The different SROCs can be vastly different in applications, see for instance Arends et al [2] and the data examples. The BREM approach as introduced by Reitsma et al [1] and discussed by Arends et al [2] does not assume anything about study specific curves. The method simply leads to an estimated underlying bivariate distribution of the true sensitivities and specificities as reported by the different studies included in the meta-analysis. This means that the chosen SROC does not necessarily correspond with the true curves of the studies. The true study specific curves might have a substantially different shape, and the SROC cannot be interpreted as a kind of average or overall ROC representative for the ROCs of the different studies. There might even be no study specific curves at all, in case the diagnostic test cannot be thought of as a continuous test. However, this does not mean that the analysis does not make sense in this case, since the existence of study specific ROC curves is not assumed by the method. In the remainder of this section we introduce a new formulation of the BREM, which starts with the study specific ROCs. This will make clear under which extra assumption the BREM describes the distribution of study specific ROCs and the calculated SROC can be considered to be a real overall SROC.
Suppose that in the (ξ, η) space the study specific ROC curves are straight lines with a common slope β:

η_i = α_i + βξ_i,   (2)

where the study specific intercept α_i and threshold value ξ_i follow a bivariate normal distribution with means μ_α and μ_ξ, variances σ_α² and σ_ξ², and covariance σ_αξ.
This model is just the same as (1), only with a different parametrization. However, the number of parameters is one more, which means that one of them is unidentifiable. To make the model identifiable, we need a further assumption on how the ξ i 's in the different studies are selected. For instance we could assume that σ αξ is zero.
This means that the individual investigators, in selecting their ξ_i value, are not led by the level of their line. However, it is perfectly conceivable that an investigator who happens to have a ROC that is relatively low tends to choose a relatively high value of ξ_i if high sensitivity is preferred, or a relatively low value of ξ_i if high specificity is preferred.
If we assume that the correlation between α i and ξ i is zero, it can be seen that β is given by the slope of the regression line of η on ξ. In this case the η on ξ type SROC is the true SROC in the sense that it really can be interpreted as such.
In the (ξ, η) space it is just the average line over the population of studies, in the ROC space it can be interpreted as a kind of median ROC.
Another assumption could be that the correlation between η and α is zero. This means that we assume σ_αξ = −σ_α²/β, and therefore the correlation between α and ξ_i is given by ρ_αξ = −σ_α/(βσ_ξ). In that case β equals the slope of the regression line of ξ on η. More generally, we could assume that some linear combination aξ + bη of ξ and η is uncorrelated with α, for some values of a and b. We have already seen that if a = 1 and b = 0, the η on ξ type SROC is the correct one. If a = 0 and b = 1, then the ξ on η type is the correct one. If we assume a = b = 1, then one can check that β is equal to the slope of the regression of D = η − ξ on S = η + ξ, and the Littenberg & Moses type SROC is the correct one. One can also check that the assumption a = β and b = 1 leads to the Rutter & Gatsonis (R&G) type SROC. The slope of the R&G method is the geometric average of the slopes of the η on ξ and ξ on η regression lines, that is, the R&G line always lies in between these two curves. For a detailed discussion we refer to Arends et al [2].

Generally, the 5 SROCs only coincide in the very degenerate case where the between-studies variances of sensitivity and specificity are equal and the correlation is one. If only the between-studies variances of sensitivity and specificity are equal, then three of the methods (D on S, R&G and Major axis) lead to the same SROC curve. Note that all choices of SROC curve in Table 1, except the R&G, can possibly give an improper SROC curve that runs from (TPR = 1, FPR = 1) to (TPR = 0, FPR = 0), when the corresponding slope estimate is negative. However, the chance that this occurs in practice is very small. Notice that the R&G method always yields a proper SROC, because its slope parameter, being equal to the ratio of the between-studies standard deviations of sensitivity and specificity, is always positive. It is remarkable that the R&G SROC can be calculated from two simple univariate meta-analyses of the sensitivities and specificities separately.
This is an advantage, since no bivariate modeling is needed at all, but it also raises questions about its value, because apparently it does not use the possible correlation between sensitivity and specificity. We conclude that in the situation where we have only one pair of sensitivity and specificity per study, a calculated SROC can only be interpreted as a real overall ROC under an untestable assumption. The choice of assumption is especially sensitive when the differences among the estimated between-studies variances and covariance of sensitivity and specificity are large. This issue seems to have been overlooked in the literature. In a recent letter in Biostatistics, Chu and Guo [15] claim that the SROCs given by Harbord et al [14] and Rutter and Gatsonis [6] "are incorrect and potentially misleading". Chu and Guo assume that the η on ξ type is the "real" SROC, but they overlook that this choice is based on an untestable assumption too. However, the situation changes as soon as more pairs of sensitivity and specificity are available per study.
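The slopes of the five SROC choices in Table 1 can be computed directly from the estimated between-studies (co)variances. The following Python sketch (our own illustration, with hypothetical variance estimates; it assumes a positive covariance) does this in the (ξ, η) space, and shows that with equal variances the D on S, R&G and major axis slopes coincide:

```python
import math

def sroc_slopes(var_xi, var_eta, cov):
    """Slope in (xi, eta) space of each SROC line of Table 1, computed from
    the estimated between-studies variances and covariance (cov > 0)."""
    eta_on_xi = cov / var_xi                      # regression of eta on xi
    xi_on_eta = var_eta / cov                     # regression of xi on eta
    rg = math.sqrt(var_eta / var_xi)              # Rutter & Gatsonis: sd ratio
    # Littenberg & Moses: regress D = eta - xi on S = eta + xi, then map
    # the fitted line D = a + b*S back to a line in the (xi, eta) plane
    b = (var_eta - var_xi) / (var_eta + var_xi + 2.0 * cov)
    d_on_s = (1.0 + b) / (1.0 - b)
    # Major axis: direction of the first eigenvector of the covariance matrix
    lam = 0.5 * (var_xi + var_eta
                 + math.sqrt((var_xi - var_eta) ** 2 + 4.0 * cov ** 2))
    major = (lam - var_xi) / cov
    return {"eta_on_xi": eta_on_xi, "xi_on_eta": xi_on_eta,
            "rg": rg, "d_on_s": d_on_s, "major": major}

# Equal between-studies variances: the three intermediate slopes coincide
s = sroc_slopes(1.0, 1.0, 0.5)
```

With unequal variances the five slopes differ, and the η on ξ and ξ on η slopes are the extremes between which the other three lie.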

Within-study variability
The within-study variability can be modeled using an approximate normal distribution [1,2] or a binomial distribution [2,14,16]. Hamza et al [17,18] compared the binomial and approximate normal within-study models in extensive simulation experiments, and showed that in general the performance of the binomial within-study model is much better. Chu and Cole [16] showed similar results using a selected number of simulations. Therefore, in this paper we restrict ourselves to the binomial within-study model. For the approximate approach we refer to [1,2].
The within-study model is based on the binomial distribution of the numbers of false positive (x_0i) and true positive (x_1i) test results. More specifically, we assume:

x_0i ~ Binomial(n_0i, logit⁻¹(ξ_i)),   (3)
x_1i ~ Binomial(n_1i, logit⁻¹(η_i)),   (4)

where ξ_i and η_i = α_i + βξ_i are the true logit transformed (1 − specificity) and sensitivity from (2). The bivariate model given by (2)-(4) can be fitted using generalized linear mixed model procedures in standard statistical packages, such as the SAS procedure NLMIXED, Stata gllamm or the R/S-Plus program nlme.
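As an illustration of the binomial within-study model, the following Python sketch (ours, for illustration; not the actual NLMIXED implementation) computes the log-likelihood contribution of a single study given its true ξ_i and η_i:

```python
import math

def invlogit(z):
    """Inverse logit."""
    return 1.0 / (1.0 + math.exp(-z))

def log_binom_pmf(x, n, p):
    """Log of the binomial probability mass function."""
    return (math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
            + x * math.log(p) + (n - x) * math.log(1.0 - p))

def within_study_loglik(x0, n0, x1, n1, xi, eta):
    """Binomial within-study log-likelihood of one study, given the true
    xi = logit(1 - specificity) and eta = logit(sensitivity)."""
    return (log_binom_pmf(x0, n0, invlogit(xi))
            + log_binom_pmf(x1, n1, invlogit(eta)))
```

In the full random effects model this contribution is integrated over the between-studies distribution of (ξ_i, η_i), e.g. by adaptive Gaussian quadrature as NLMIXED does.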

Multivariate random effects meta-analysis (MREM)
In this section we consider studies where a single test is administered and the results are reported using J − 1 thresholds or, equivalently, in J ordered categories; see for example references [19,22] and Table 2. Let the number of non-diseased and diseased patients with test result in category j of the i-th study be given by x_0ij and x_1ij, respectively. The total numbers of non-diseased and diseased patients in study i are then n_0i = Σ_j x_0ij and n_1i = Σ_j x_1ij.

The model
Let the true logit transformed 1 − specificity and sensitivity for a given threshold j be denoted by ξ_ij and η_ij respectively, where the ξ_ij's and η_ij's are ordered in the index j. We assume a hierarchical model that is a direct generalization of model (2)-(3). In contrast to the one threshold case (the bivariate approach), when we have more than one threshold (multiple points per study) the SROC curve is identifiable. The between- and within-study models are given as follows.

Between-studies model

1. Model for the relation between ξ_ij and η_ij. Within a study we assume a linear relation with common slope β and study specific intercept α_i:

η_ij = α_i + βξ_ij, with α_i ~ N(μ_α, σ_α²).   (5)
The parameters μ_α (the mean of the α_i's) and β determine the summary ROC curve. Note that β is an asymmetry parameter: if β = 1, the curve is symmetric around the line of equal sensitivity and specificity. We could also allow β to vary across studies and assume a bivariate normal distribution for the pairs (α_i, β_i).
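The role of β as an asymmetry parameter can be checked numerically. The following Python sketch (illustrative, with hypothetical parameter values) maps a line η = α + βξ from logit space back to ROC space and verifies that with β = 1 the point (FPR, TPR) mirrors to (1 − TPR, 1 − FPR) on the same curve:

```python
import math

def invlogit(z):
    """Inverse logit."""
    return 1.0 / (1.0 + math.exp(-z))

def roc_point(alpha, beta, fpr):
    """TPR on the curve eta = alpha + beta * xi, at a given FPR
    (both on the probability scale)."""
    xi = math.log(fpr / (1.0 - fpr))       # xi = logit(FPR)
    return invlogit(alpha + beta * xi)     # TPR = invlogit(eta)

# With beta = 1 the curve is symmetric around the line sens = spec:
# the point (FPR, TPR) mirrors to (1 - TPR, 1 - FPR) on the same curve.
tpr = roc_point(1.5, 1.0, 0.2)
mirrored = roc_point(1.5, 1.0, 1.0 - tpr)   # should equal 1 - 0.2
```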

2. Model for the ξ_ij's.
The ξ_ij's are modeled as

ξ_ij = θ_j + Δ_i + δ_ij.   (6)

Here θ_j is the mean of ξ_ij over studies, and the θ_j's are constrained to keep their rank order. Δ_i represents the study specific systematic deviation of the ξ_ij's from the overall means, and δ_ij represents the random residual deviation of the true unobserved ξ_ij's. The Δ_i's can be assumed to follow some parametric or non-parametric distribution; in this article we assume a normal distribution, Δ_i ~ N(0, σ_Δ²). The δ_ij's are assumed to be independent and to follow a normal distribution N(0, σ_δ²). Furthermore, the δ_ij's are assumed to be independent of the Δ_i and α_i. The covariance between α_i and Δ_i is denoted by σ_αΔ. A negative σ_αΔ, for instance, would mean that in studies with a relatively small α_i the ξ_ij's tend to be chosen relatively high.

The above assumptions (in (5) and (6)) lead to the following marginal between-studies model: the ξ_ij's are jointly normal with means θ_j, variances σ_Δ² + σ_δ², and covariances cov(ξ_ij, ξ_ik) = σ_Δ² for j ≠ k.   (7)

Note that the covariance structure of the ξ_ij's is of compound symmetry type, i.e. the between-studies variances of the ξ_ij's are assumed to be equal, and the covariances between any pair of ξ_ij's are assumed to be the same. We have chosen this structure since it is popular in repeated measures modeling as a simple but often realistic covariance structure, and the parameters have a nice interpretation. However, one can choose any structure. The assumption of compound symmetry might be strong; for example, the covariance (correlation) between consecutive thresholds may be larger than between non-consecutive ones. If so, a richer structure is needed. In general, the between-studies variances and covariances of the ξ_ij's could be allowed to be all different (an unstructured covariance matrix) and, in the spirit of the general guidelines for mixed model building given by Verbeke and Molenberghs [23] (chapter 9), the structure could then be simplified into another covariance structure, such as Toeplitz, auto-regressive or compound symmetry.

[Table 2. FNAC outcome (Malignant, Suspect, Benign, Total) by final diagnosis]
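The marginal compound-symmetry covariance matrix of the ξ_ij's can be constructed as follows. This is a Python sketch of our own (writing θ_j for the threshold-specific mean ξ value; the variance components are hypothetical):

```python
def compound_symmetry(k, var_between, var_resid):
    """k x k covariance matrix with var_between + var_resid on the diagonal
    and var_between everywhere off the diagonal, as implied by the
    decomposition xi_ij = theta_j + Delta_i + delta_ij."""
    return [[var_between + (var_resid if r == c else 0.0)
             for c in range(k)]
            for r in range(k)]

# Hypothetical variance components: sigma_Delta^2 = 0.8, sigma_delta^2 = 0.3
cov = compound_symmetry(3, 0.8, 0.3)
```

Richer structures (Toeplitz, auto-regressive, unstructured) would replace the constant off-diagonal value with threshold-dependent covariances.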

Within-study model
Given the random effects, the observed numbers of subjects in the non-diseased (x_0i1, ..., x_0iJ) and diseased (x_1i1, ..., x_1iJ) groups have independent multinomial distributions with parameters (π_0i1, ..., π_0iJ) and (π_1i1, ..., π_1iJ). Note that ξ_ij is logit(1 − specificity) and η_ij is logit(sensitivity) for threshold j, and the cell probability for diseased (or non-diseased) subjects in category j is the difference of the sensitivities (or 1 − specificities) between categories j and j − 1.
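In other words, the multinomial cell probabilities are successive differences of the cumulative probabilities at the ordered thresholds. A Python sketch of this construction (our own illustration, with hypothetical logit values):

```python
import math

def invlogit(z):
    """Inverse logit."""
    return 1.0 / (1.0 + math.exp(-z))

def cell_probs(logits):
    """Multinomial cell probabilities for J categories from the J - 1
    logit-scale cumulative probabilities at the thresholds: category j gets
    the difference of the cumulative probabilities at thresholds j - 1 and j
    (with 1 and 0 as the outer boundaries).  The logits must be ordered so
    that the implied cumulative probabilities decrease."""
    cum = [1.0] + [invlogit(z) for z in logits] + [0.0]
    return [cum[j] - cum[j + 1] for j in range(len(cum) - 1)]

# Hypothetical group with two thresholds, i.e. three categories
probs = cell_probs([1.2, -0.4])
```

The ordering constraint on the thresholds guarantees that all cell probabilities are positive and sum to one.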
The probability density function (pdf) of the observations of the i-th study, given the π_0ij's and π_1ij's, is the product of the two multinomial pdf's. Inference on the parameters is obtained through the standard likelihood method based on the marginal density of the data, which is calculated by integrating out the random effects B = (α, ξ_1, ..., ξ_{J−1})'. The contribution of the i-th study to the likelihood is then

L_i = ∫ f(x_0i1, ..., x_0iJ | B) f(x_1i1, ..., x_1iJ | B) g(B) dB.

As seen from (7), we assumed a multivariate normal distribution for the random effects. However, the density g(B) can also be assumed to belong to some other parametric family of distributions [24]. In summary, the MREM model allows different relevant measures of diagnostic test accuracy to be calculated:

• The mean logit sensitivity and logit specificity, along with their standard errors, for any known threshold j are estimated by the model. One can then derive sensitivity_j = logit⁻¹(μ_η,j) and specificity_j = 1 − logit⁻¹(μ_ξ,j), where μ_η,j and μ_ξ,j are the estimated mean logit sensitivity and logit(1 − specificity) at threshold j, and the corresponding standard errors can be calculated using the delta method [25]. SAS NLMIXED users can avoid hand calculation by using the 'Estimate' statement (see the SAS syntax for an example).
• The diagnostic odds ratio (DOR) for a given threshold j, which can be derived from sensitivity and specificity, is given by DOR_j = [sensitivity_j/(1 − sensitivity_j)]/[(1 − specificity_j)/specificity_j].

• The estimated parameters from model (5)-(7) are used to derive the overall median SROC curve, given by sensitivity = logit⁻¹(μ_α + β logit(1 − specificity)), where μ_α is the mean of the study specific intercepts α_i. Besides, study specific ROC curves can be generated from the empirical Bayes estimates of the random effects. In SAS NLMIXED, the empirical Bayes estimates are generated automatically and collected from the output file ('out') specified in the 'random' statement (see SAS syntax).
• Uncertainty around the SROC can be characterized by calculating the confidence interval at each point along the curve.
• A prediction band for the true ROC curve of a new study can be calculated by adding and subtracting 1.96 times the estimated standard deviation of α i in the equation above.
• The model accommodates study level covariates, for example corresponding to two different diagnostic tests, which enables testing hypotheses about differences between groups, e.g. between diagnostic tests. A thorough discussion of comparisons between groups or tests using the bivariate model is given in Hamza et al [26]. In the second data example of this paper we show how to test for a significant difference between groups.

The model parameters that are used to calculate the different diagnostic test accuracy measures can be estimated as follows.
1. We fitted the model using Proc NLMIXED of SAS. This procedure does not directly support the multinomial distribution, but it allows a user specified log-likelihood function, which is easily written down for the multinomial distribution. In the appendix the syntax for an example is given. The NLMIXED procedure calculates the likelihood function by numerical integration, using adaptive Gaussian quadrature. The number of quadrature points is specified by the user or chosen automatically by SAS. The larger that number, the better the approximation, but at the cost of more computation time. For a detailed discussion of the different quadrature options, optimization methods and convergence criteria we refer to the SAS manual [27]. In practice, when a final model is reached, one increases the number of quadrature points until the parameter estimates no longer change.
2. NLMIXED allows user specified likelihoods, but many other programs do not. Usually the binomial distribution is supported; therefore we also mention another way to fit the model in generalized linear mixed model programs. The trick is to write the multinomial pdf as a sequence of conditional univariate pdf's, i.e. the pdf of (x_Di1, ..., x_DiJ), where D is the disease status (0 or 1), is expressed as

f(x_Di1) f(x_Di2 | x_Di1) ... f(x_Di,J−1 | x_Di1, ..., x_Di,J−2).

These conditional distributions are all binomial [28]: given the preceding cells, x_Dij is binomial with denominator n_Di − (x_Di1 + ... + x_Di,j−1) and success probability π_Dij/(1 − π_Di1 − ... − π_Di,j−1), where π_0ij and π_1ij are calculated as in (8 & 9), with j = 1, ..., J − 1.
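This factorization can be verified numerically. The following Python sketch (our own check, with arbitrary cell counts and probabilities) confirms that the product of conditional binomial pmf's reproduces the multinomial pmf:

```python
import math

def log_binom(x, n, p):
    """Log binomial pmf."""
    return (math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
            + x * math.log(p) + (n - x) * math.log(1.0 - p))

def log_multinomial(xs, ps):
    """Log multinomial pmf of cell counts xs with cell probabilities ps."""
    out = math.lgamma(sum(xs) + 1)
    for x, p in zip(xs, ps):
        out += x * math.log(p) - math.lgamma(x + 1)
    return out

def log_multinomial_as_binomials(xs, ps):
    """Same pmf written as a sequence of conditional binomials:
    x_j | earlier cells ~ Binomial(n - sum(earlier x), p_j / (1 - sum(earlier p)))."""
    n, psum, out = sum(xs), 0.0, 0.0
    for x, p in zip(xs[:-1], ps[:-1]):   # the last cell is determined
        out += log_binom(x, n, p / (1.0 - psum))
        n -= x
        psum += p
    return out

# Arbitrary example: 10 subjects over three categories
xs, ps = [3, 2, 5], [0.2, 0.3, 0.5]
```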

Results
To illustrate the methods discussed in this article, we apply them to two published meta-analysis data-sets. One is relatively large (29 studies) with three test result categories (2 thresholds). The second data set is small (10 studies) with five test result categories (4 thresholds). Here our objective is to fit the models discussed in the methods section, and to derive the SROC curves.

Example 1: Fine-needle aspiration cytologic examination
Giard and Hermans [19] present 29 studies evaluating the accuracy of fine-needle aspiration cytologic examination (FNAC) of the breast to assess the presence of breast cancer (see additional file 1). FNAC provides a non-operative way of obtaining cells to establish the nature of a breast lump and therefore plays a pivotal role in the preoperative diagnostic process [19,20]. The FNAC results were classified in the following four cytologic categories: definitely malignant, suspect for malignancy, benign, and unsatisfactory specimen for diagnosis (acellular aspiration). Following the authors, we merged the last group with the benign group, which resulted in a two-by-three table (Table 2).
The authors [19] determined the sensitivity and specificity of FNAC for each study by reducing the two-by-three table to a two-by-two table.

[Figure 1. SROC curves from the five choices of the BREM approach (red = R&G, blue = D-on-S, green = Major axis, cyan = η-on-ξ and gray = ξ-on-η) and the MREM approach (black) for the FNAC data set]

Comparing the estimated SROC curves from the two approaches, one can see that the BREM approach underestimates or overestimates the SROC curve, depending on the choice of the type of SROC. This can be seen clearly from the AUCs of the SROC curves in Table 3. Of course, if we had chosen another cut point for the BREM, we would have ended up with a different SROC estimate for each of the choices. By the delta method it is possible to construct a confidence band for the SROC curve, as explained above. We did not do so here, to avoid a too busy picture.
In contrast to the BREM, the MREM can provide estimates of the study specific ROCs. The program that we used, NLMIXED from SAS, gives the empirical Bayes estimates of the study specific random intercept α_i, which enables drawing study specific ROC curves. We give the study specific ROCs from the MREM approach in figure 2. Indeed, the BREM can also provide study specific curves, but only if an untestable assumption on the correlation between α_i and ξ_i is made. Note that the study specific curves in figure 2 are "parallel". This is because we assumed a common fixed slope parameter β across studies. Allowing β to be random might give study specific curves that cross. In principle one can do that; then three extra parameters have to be estimated: the variance of β and its covariances with α and Δ. There are enough degrees of freedom and the model can still be fitted in NLMIXED. However, the more covariance parameters, the larger the chance of non-convergence problems. When we tried to take β random in this example, we were not able to get the program to converge, probably due to correlations tending to -1 or 1, a phenomenon already well known in the much simpler BREM [21].

Example 2: CAGE in screening for alcoholism
The CAGE questionnaire is a combination of four questions (resulting in a score from 0 to 4) that can be used to screen patients for alcoholism or alcohol dependence. Aertgeerts et al [22] performed a meta-analysis of all published studies to evaluate the diagnostic value of the CAGE questionnaire. In total they presented 10 studies published between January 1974 and December 2001, of which 5 were carried out in primary care populations and 5 in non-primary care populations. In this data example we also include a study level covariate indicating whether or not the patients are from primary care: if a study is carried out in a primary care population then z_1 = 1, otherwise z_1 = 0.
Besides, for the MREM approach the slope parameter is allowed to be random, and it is tested whether its between-studies variance is significantly different from zero. In most cases a CAGE score of ≥ 2 is considered to indicate an alcohol problem; for the illustration of the BREM method we therefore use the threshold of ≥ 2 as test positive, with the mean structure in (1) or (3) extended with the covariate z_1. The two-by-five tables from the CAGE meta-analysis were also analyzed using the MREM approach. Here the between-studies model in (5) can be rewritten as η_ij = α_i + β_i ξ_ij + γz_1 to adjust for the covariate z_1. Note that the slope parameter is random and assumed to be independent of the other random effects. As shown in figure 3 and by the AUCs in Table 4, the bivariate approach seems to overestimate the SROC curve, for any of the 5 choices of the type of SROC. Again, this would possibly change if we chose another cut-off point for positivity on the screening test for alcoholism.

Discussion
The summary ROC curve has been introduced as a way to assess the diagnostic accuracy of a diagnostic test in a meta-analysis [4,5,7,29,30]. For the most frequent situation, when one point per study is presented, the medical (and statistical) literature seems to have overlooked the problems inherent to SROCs based on studies with only one point. Although recent developments in the area have shown that the bivariate random effects meta-analysis approach has important advantages over the standard SROC approach of Littenberg and Moses [1,2,4], the problem of identifiability, and therefore of interpretability, of the resulting SROC remains. When studies present more than one point per study, commonly the test results are reduced to two categories and meta-analysed using a well established approach such as the BREM, which is suboptimal. In our data examples we illustrated this by considering a single cut-off value and applying the BREM approach. The results from the two data examples showed that the differences between the estimated SROC curves based on the BREM approach can be large, as in the first example, or relatively small, as in the second example. The sizes of the differences depend on the values of the three covariance parameters. The η on ξ and ξ on η curves are always the most extreme, in the sense that the other three lie between them. Therefore a necessary and sufficient condition for the 5 different curves to be equal is that the correlation is one, which is not very probable in practical situations. Equality of the variances of ξ_i and η_i is a sufficient condition for equality of the SROCs from the three intermediate approaches (D on S, Rutter and Gatsonis, and major axis). In the first example the variances differ by a factor of 4 and the correlation is relatively small (0.22). In the second example the variances are almost equal and the correlation is relatively large (0.82). This explains why the differences are large in the first and small in the second example.
In this article we generalized the BREM approach for one threshold to the situation where more than one point per study is available and the number of thresholds is equal across studies. In our opinion, the MREM approach is relatively easy to understand and has several advantages. First, the data of the full 2 by J table are used, without losing any information by dichotomizing the test results. Second, different outcome measures can be derived from the fitted model, such as SROC curves and overall sensitivity and/or specificity for any choice of threshold. Third, in contrast to the BREM approach, the summary ROC and the study specific curves are identifiable. Fourth, the model is symmetric in the ξ_ij's and η_ij's: interchanging their roles leads to the same model. Fifth, it is straightforward to include study level covariates; they can be added directly to the intercept and slope of the SROC, and also to the threshold values. Sixth, the MREM can be fitted in standard statistical packages without extra programming. In equation (7), we specified compound symmetry for the covariance structure of the ξ_ij's, but one can also choose another, possibly richer, structure and simplify it using the likelihood ratio test. More covariance parameters, however, carry a greater risk of non-convergence problems.
For the within-study model we used the multinomial distribution instead of the summary statistic approach usually followed in meta-analysis. This avoids problems with small numbers, and zero cells are allowed.
We used NLMIXED from SAS to fit our models and noticed that convergence of the program is sensitive to starting values. It turned out that good starting values are obtained by fitting the BREMs according to all possible cut-off values.
Related work was done by Dukic and Gatsonis [12]. They used ordinal regression and a hierarchical approach based on latent variable modeling, and fitted their model by Bayesian methods. To our knowledge their approach is rarely used in practice, probably due to the inherent complexity of the model and fitting methods. The difference between the Dukic and Gatsonis model and the MREM lies mainly in the modeling of the ξ_ij's. They treated them all as fixed parameters, leading to as many ξ parameters as there are data points, while we modeled them using the standard multivariate meta-analysis model [13]. The motivation for this is to reduce the number of parameters and to correct for the measurement errors in the estimated ξ_ij's. In our opinion, Dukic's method does not correct for measurement error in the ξ_ij's and leads to an inconsistent estimate of the summary ROC curve, for reasons set out in Van Houwelingen and Senn [31]. Another difference is that Dukic's method assumes independence between the choice of the specificities, as represented by the ξ_ij parameters, and the level of the study specific ROC curve, as determined by α_i; a not necessarily realistic assumption.

Model Extension and Limitation
In this paper we focused on the situation where test results are presented with equal numbers of thresholds, but the method is more general. In practice, test results may be presented with different numbers of thresholds. Often the different numbers of categories arise although all studies in principle use the same categorization, because in different studies different categories are lumped together. Then our model can still be applied without any modification, since our approach allows missing data points. Many other situations can be covered by allowing the parameters of the model for the ξ_ij's (6) to differ between groups of studies. For instance, suppose the goal of the meta-analysis is to compare two tests A and B, one with 3 and the other with 5 categories, each study reporting either A or B. Then the ξ_ij model for A can be specified to be completely different from that of B. Such a model can still be estimated using, for instance, Proc NLMIXED. The total number of different thresholds across all studies is the limiting factor in our approach: if it is too large, the number of parameters might be too large to estimate and the likelihood method may not work properly.