The McNemar test for binary matched-pairs data: mid-p and asymptotic are better than exact conditional

Abstract

Background: Statistical methods that use the mid-p approach are useful tools for the analysis of categorical data, particularly for small and moderate sample sizes. Mid-p tests strike a balance between overly conservative exact methods and asymptotic methods that frequently violate the nominal significance level. Here, we examine a mid-p version of the McNemar exact conditional test for the analysis of paired binomial proportions.

Methods: We compare the type I error rates and power of the mid-p test with those of the asymptotic McNemar test (with and without continuity correction), the McNemar exact conditional test, and an exact unconditional test using complete enumeration. We show how the mid-p test can be calculated using eight standard software packages, including Excel.

Results: The mid-p test performs well compared with the asymptotic, asymptotic with continuity correction, and exact conditional tests, and almost as well as the vastly more complex exact unconditional test. Even though the mid-p test does not guarantee preservation of the significance level, it did not violate the nominal level in any of the 9595 scenarios considered in this article. It was almost as powerful as the asymptotic test. The exact conditional test and the asymptotic test with continuity correction did not perform well in any of the considered scenarios.

Conclusions: The easy-to-calculate mid-p test is an excellent alternative to the complex exact unconditional test. Both can be recommended for use in any situation. We also recommend the asymptotic test if small but frequent violations of the nominal level are acceptable.


Background
Matched-pairs data arise from study designs such as matched and crossover clinical trials, matched cohort studies, and matched case-control studies. The statistical analysis of matched-pairs studies must make allowance for the dependency in the data introduced by the matching. A simple and frequently used test for binary matched-pairs data is the McNemar test. Several versions of this test exist, including the asymptotic and exact (conditional) tests. The traditional advice is to use the asymptotic test in large samples and the exact test in small samples. The argument for using the exact test is that the asymptotic test may violate the nominal significance level for small sample sizes because the required asymptotics do not hold. One disadvantage of the exact test is conservatism: it produces unnecessarily large p-values and has poor power.

*Correspondence: morten.fagerland@medisin.uio.no
1 Unit of Biostatistics and Epidemiology, Oslo University Hospital, Oslo, Norway
Full list of author information is available at the end of the article
Consider the data in Table 1, which gives the results from a study by Bentur et al. [1]. Airway hyper-responsiveness (AHR) status, an indication of pulmonary complications, was measured in 21 children before and after stem cell transplantation (SCT). The incidence of AHR increased from two (9.5%) children before SCT to eight (38%) children after SCT. The asymptotic test gives p = 0.034, and the exact test gives p = 0.070. The two p-values are considerably different, which often happens when the sample size is small. The next example shows that this may also be the case for large sample sizes.

Table 1 Airway hyper-responsiveness (AHR) status before and after stem cell transplantation (SCT) in 21 children [1]

                          After SCT
  Before SCT       AHR    No AHR    Sum
  AHR                1         1      2
  No AHR             7        12     19
  Sum                8        13     21

In another study of SCT, 161 myeloma patients received consolidation therapy three months after SCT [2]. Complete response (CR) was measured before and after consolidation (Table 2). An increase in CR following consolidation was observed: 65 (40%) patients had CR before consolidation compared with 75 (47%) patients after consolidation. The asymptotic test gives p = 0.033, and the exact test gives p = 0.053.
The choice between an asymptotic method and a conservative exact method-which can be summarized as a trade-off between power and preservation of the significance level-is well known from other situations involving proportions [3]. For the independent 2 × 2 table, a good compromise can be reached using the mid-p approach [4]. The Fisher mid-p test, which is a modification of Fisher's exact test, combines excellent power with rare and minor violations of the significance level [5]. The modification required to transform an exact p-value to a mid-p-value is simple: the mid-p-value equals the exact pvalue minus half the point probability of the observed test statistic.
The purpose of this article is to investigate whether a mid-p version of the McNemar exact conditional test can offer a similar improvement for the comparison of matched pairs as has been observed with independent proportions. A supplementary materials document (Additional file 1) shows how the mid-p test can be calculated using several standard software packages, including Excel, SAS, SPSS, and Stata.

Notation
Let N denote the observed number of matched pairs of binomial events A and B, where the possible outcomes are referred to as success (1) or failure (2), and let the observed pairs be cross-classified as in Table 3. Each n_kl for k, l = 1, 2 corresponds to the number of event pairs (Y_i1, Y_i2) with outcomes Y_i1 = k and Y_i2 = l. Let p_kl denote the joint probability that Y_i1 = k and Y_i2 = l, which we assume independent of i. Following the notation in Agresti [6, pp. 418-420], this is a marginal or a population-averaged model. We denote the probabilities of success for events A and B, or equivalently, the marginal probabilities that Y_i1 = 1 and Y_i2 = 1, by p_1+ and p_+1, respectively. The null hypothesis of interest is marginal homogeneity:

   H0: p_1+ = p_+1.

It might, however, be more realistic to assume that p_kl also depends on the subject i. As noted by Agresti [6, pp. 418-420], this is a subject-specific model. Further, it is a conditional model, since we are interested in the association within the pair, conditional on the subject. Data from N matched pairs are then presented in N 2 × 2 tables, one for each pair. Collapsing over the pairs results in Table 3. Conditional independence between Y_1 and Y_2 is tested by the Mantel-Haenszel statistic [6, p. 417], but that test statistic is algebraically equal to the squared McNemar test statistic. In the following, we will not specify whether we test for marginal homogeneity or conditional independence.

The asymptotic McNemar test
The asymptotic McNemar test conditions on the number of discordant pairs (n_12 + n_21). Conditionally, n_12 is binomially distributed with parameters n = n_12 + n_21 and p = 1/2 under the null hypothesis. The asymptotic McNemar test statistic [7], which is the score statistic for testing marginal homogeneity, is

   z = (n_12 - n_21) / √(n_12 + n_21),    (1)

and its asymptotic distribution is the standard normal distribution. The equivalent McNemar test statistic χ² = z² = (n_12 - n_21)² / (n_12 + n_21) is approximately chi-squared distributed with one degree of freedom under the null hypothesis. The asymptotic McNemar test is undefined when n_12 = n_21 = 0.
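As a concrete illustration, the statistic and its chi-squared p-value can be computed with the Python standard library alone. This is a minimal sketch (the function names are our own, not from any package); the chi-squared (1 df) tail probability is obtained from the standard normal CDF via erf:

```python
from math import erf, sqrt

def chi2_sf_1df(x):
    """Survival function of the chi-squared distribution with 1 df,
    via the standard normal CDF: P(chi2 > x) = 2*(1 - Phi(sqrt(x)))."""
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(sqrt(x) / sqrt(2.0))))

def mcnemar_asymptotic(n12, n21):
    """Two-sided p-value of the asymptotic McNemar test."""
    if n12 + n21 == 0:
        raise ValueError("the test is undefined when n12 = n21 = 0")
    chi2 = (n12 - n21) ** 2 / (n12 + n21)
    return chi2_sf_1df(chi2)
```

For the data in Table 1 (n_12 = 1, n_21 = 7), this reproduces the p-value of 0.034 quoted in the Background section.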

The asymptotic McNemar test with continuity correction
Edwards [8] proposed the following continuity corrected version of the asymptotic McNemar test:

   z_CC = (|n_12 - n_21| - 1) / √(n_12 + n_21).    (2)

The asymptotic McNemar test with continuity correction (CC) approximates the exact conditional test. Hence, it combines the disadvantage of an asymptotic test (significance level violations) with the disadvantage of an exact conditional test (excessive conservatism), and we do not expect it to perform well. We include it in our evaluations because it features in influential textbooks such as Altman [9] and Fleiss et al. [10]. The asymptotic McNemar test with continuity correction is undefined when n_12 = n_21 = 0.
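The corrected statistic is a one-line change to the sketch above. In this sketch (our own code), the corrected difference is floored at zero, an implementation choice on our part to keep the statistic well defined when |n_12 - n_21| ≤ 1:

```python
from math import erf, sqrt

def mcnemar_cc(n12, n21):
    """Two-sided p-value of the asymptotic McNemar test with Edwards's
    continuity correction: chi2_cc = (|n12 - n21| - 1)^2 / (n12 + n21)."""
    if n12 + n21 == 0:
        raise ValueError("the test is undefined when n12 = n21 = 0")
    # floor at zero so the correction cannot overshoot past the origin
    z = max(abs(n12 - n21) - 1, 0) / sqrt(n12 + n21)
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))
```

For the Table 1 data (n_12 = 1, n_21 = 7) this gives p = 0.077, noticeably larger than the uncorrected p = 0.034, in line with the conservatism discussed above.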

The McNemar exact conditional test
The test statistic in (1) measures the strength of the evidence against the null hypothesis. If we, as in the derivation of the asymptotic test, condition on the number of discordant pairs (n_12 + n_21), we can use the simple test statistic n_12 to derive an exact conditional test. The conditional probability under H0 of observing any outcome x_12, given n = n_12 + n_21 discordant pairs, is the binomial point probability

   f(x_12 | n) = (n choose x_12) (1/2)^n.    (3)

The McNemar exact conditional one-sided p-value is obtained as a sum of such probabilities:

   one-sided p-value = Σ_{x=0}^{min(n_12, n_21)} f(x | n),    (4)

and the two-sided p-value equals twice the one-sided p-value. If n_12 = (n_12 + n_21)/2, the p-value equals 1.0. The exact conditional test is guaranteed to have type I error rates not exceeding the nominal level.
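The binomial tail sum is easy to evaluate exactly with integer arithmetic. A minimal Python sketch (function name is our own):

```python
from math import comb

def mcnemar_exact_cond(n12, n21):
    """Two-sided exact conditional McNemar p-value: twice the binomial
    tail probability P(X <= min(n12, n21)) with X ~ Bin(n12 + n21, 1/2)."""
    if n12 == n21:
        return 1.0  # includes the boundary case described in the text
    n = n12 + n21
    tail = sum(comb(n, x) for x in range(min(n12, n21) + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)
```

For the Table 1 data (n_12 = 1, n_21 = 7), the tail is (1 + 8)/2^8 = 9/256, and doubling gives the p-value of 0.070 quoted in the Background section.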

The McNemar mid-p test
A mid-p-value is obtained by first subtracting half the point probability of the observed n_12 from the exact one-sided p-value, and then doubling it to obtain the two-sided mid-p-value [4]. Hence, the McNemar mid-p-value equals

   mid-p-value = 2 · [ Σ_{x=0}^{min(n_12, n_21)} f(x | n) - (1/2) f(n_12 | n) ],    (5)

where f is the probability function in (3). If n_12 = n_21, substitute (5) with

   mid-p-value = 1 - f(n_12 | n).    (6)

The type I error rates of the mid-p test, as opposed to those of exact tests, are not bounded by the nominal level; however, in a wide range of designs and models, both mid-p tests and confidence intervals violate the nominal level rarely and with low degrees of infringement [11-13]. Because mid-p tests are based on exact distributions, they are sometimes called quasi-exact [14].
Additional file 1 provides details on how to calculate the McNemar mid-p test with several standard software packages.
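For readers without those packages, the mid-p-value is equally simple to compute directly. This Python sketch (function name is our own) implements (5) with the tie case (6):

```python
from math import comb

def mcnemar_midp(n12, n21):
    """McNemar mid-p-value: the exact one-sided p-value minus half the
    point probability of the observed n12, doubled, as in (5); when
    n12 = n21, the mid-p-value is 1 - f(n12 | n), as in (6)."""
    n = n12 + n21
    point = comb(n, n12) / 2.0 ** n  # f(n12 | n), the binomial point probability
    if n12 == n21:
        return 1.0 - point
    tail = sum(comb(n, x) for x in range(min(n12, n21) + 1)) / 2.0 ** n
    return 2.0 * (tail - 0.5 * point)
```

For the Table 1 data (n_12 = 1, n_21 = 7), the exact one-sided p-value is 9/256 and the point probability is 8/256, so the mid-p-value is 2 · (9/256 - 4/256) = 10/256 ≈ 0.039, sitting between the asymptotic (0.034) and exact conditional (0.070) p-values reported earlier.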

An exact unconditional test
The tests in the previous sections do not use the concordant pairs of observations (n_11 and n_22) in their calculations. The unconditional approach is to consider all possible tables with N pairs and thereby use information from all observed pairs, including the concordant ones. The exact unconditional test attributed to Suissa and Shuster [15] uses the McNemar test statistic (1). Let z_obs be the observed value of the statistic, and let

   z(x) = (x_12 - x_21) / √(x_12 + x_21),    (7)

where x = (x_11, x_12, x_21, x_22) denotes a possible outcome with N pairs, and let n = x_12 + x_21. If, for a one-sided test, z_obs ≥ 0, the potential outcomes that provide at least as much evidence against the null hypothesis as the observed outcome, namely those with z(x) ≥ z_obs, are the pairs (x_12, n) in the region

   x_12 ≥ h(n),  where h(n) = 0.5 · (z_obs √n + n).    (8)

Under the null hypothesis, the triplets (x_12, x_21, N - n) are trinomially distributed with parameters N and (p/2, p/2, 1 - p), and the attained significance level is

   P(p) = Σ_{x: z(x) ≥ z_obs} [ N! / (x_12! x_21! (N - n)!) ] (p/2)^{x_12} (p/2)^{x_21} (1 - p)^{N - n},    (9)

where p is the probability of a discordant pair (a nuisance parameter). We eliminate the nuisance parameter by maximizing P(p) over the range of p. After simplifying (9), we get the following expression for the exact unconditional one-sided p-value [15]:

   one-sided p-value = sup_{0 ≤ p ≤ 1} Σ_{n=k}^{N} (N choose n) p^n (1 - p)^{N - n} [1 - F_n(i_n)],    (10)

where k = int(z_obs² + 1), F_n is the cumulative binomial distribution function with parameters (n, 1/2), i_n = int{h(n)}, and int is the integer function. Suissa and Shuster [15] outline a numerical algorithm to find the supremum in (10). If z_obs < 0, the one-sided p-value is found by reversing the inequality in (8). The two-sided p-value equals twice the one-sided p-value.
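The construction above can be sketched in Python by direct enumeration. This is our own illustration, not the Suissa-Shuster algorithm: the supremum over the nuisance parameter is approximated by a fine grid search rather than their numerical procedure, so it is a sketch under that assumption, not a definitive implementation:

```python
from math import comb, sqrt

def mcnemar_exact_uncond(n11, n12, n21, n22, grid=2000):
    """Two-sided exact unconditional p-value (Suissa-Shuster type),
    maximizing over a grid for the nuisance parameter p."""
    N = n11 + n12 + n21 + n22
    n_obs = n12 + n21
    z_obs = (n12 - n21) / sqrt(n_obs) if n_obs > 0 else 0.0

    # cond[n]: conditional probability, given n discordant pairs, that a
    # table is at least as extreme as the observed one (X12 ~ Bin(n, 1/2))
    cond = [0.0] * (N + 1)
    cond[0] = 1.0 if z_obs == 0.0 else 0.0
    for n in range(1, N + 1):
        for x12 in range(n + 1):
            z = (2 * x12 - n) / sqrt(n)
            if (z_obs >= 0 and z >= z_obs) or (z_obs < 0 and z <= z_obs):
                cond[n] += comb(n, x12) / 2.0 ** n

    def attained(p):
        # P(p): probability of the extreme region when the number of
        # discordant pairs is Bin(N, p) and each is split Bin(n, 1/2)
        return sum(comb(N, n) * p ** n * (1.0 - p) ** (N - n) * cond[n]
                   for n in range(N + 1))

    one_sided = max(attained(i / grid) for i in range(grid + 1))
    return min(1.0, 2.0 * one_sided)
```

Applied to the Table 1 data (n_11 = 1, n_12 = 1, n_21 = 7, n_22 = 12), the result lies between the asymptotic and exact conditional p-values, as the evaluation in this article leads one to expect.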

Evaluation of the tests
To compare the performances of the five tests, we carried out an evaluation study of type I error rates and power. We used complete enumeration (rather than stochastic simulations) and a large set of scenarios. Each scenario is characterized by fixed values of N (the number of matched pairs), p_1+ and p_+1 (the probabilities of success for each event), and θ = p_11 p_22 / (p_12 p_21). θ can be interpreted as the ratio of the odds of Y_2 = 1 given Y_1 = 1 to the odds of Y_2 = 1 given Y_1 = 2. We use θ as a convenient way to re-parameterize {p_11, p_12, p_21, p_22} into {p_1+, p_+1, θ}, which includes the parameters of interest, namely the two marginal success probabilities.
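The re-parameterization can be inverted: given p_1+, p_+1, and θ, the cell probability p_11 solves a quadratic, after which the other cells follow from the margins. The sketch below (our own code; the article does not publish its evaluation program) illustrates this inversion and the complete-enumeration idea for one of the tests, using the exact conditional test because its type I error rate is guaranteed not to exceed the nominal level:

```python
from math import comb, sqrt

def joint_probs(p1, p2, theta):
    """Recover {p11, p12, p21, p22} from the marginals p1+ = p1, p+1 = p2
    and the odds ratio theta = p11*p22 / (p12*p21); p11 solves
    (theta-1)*p11^2 - [(theta-1)*(p1+p2) + 1]*p11 + theta*p1*p2 = 0."""
    if abs(theta - 1.0) < 1e-12:
        p11 = p1 * p2  # theta = 1: the two events are independent
    else:
        a = theta - 1.0
        b = -((theta - 1.0) * (p1 + p2) + 1.0)
        c = theta * p1 * p2
        p11 = (-b - sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # valid root
    return p11, p1 - p11, p2 - p11, 1.0 - p1 - p2 + p11

def mcnemar_exact_cond(n12, n21):
    """Exact conditional McNemar p-value (as in the previous sections)."""
    if n12 == n21:
        return 1.0
    n = n12 + n21
    tail = sum(comb(n, x) for x in range(min(n12, n21) + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)

def exact_type1_error(N, p1, theta, alpha=0.05):
    """Exact type I error rate of the exact conditional test under
    H0: p1+ = p+1, by complete enumeration of all multinomial tables."""
    p11, p12, p21, p22 = joint_probs(p1, p1, theta)
    reject = 0.0
    for n11 in range(N + 1):
        for n12 in range(N - n11 + 1):
            for n21 in range(N - n11 - n12 + 1):
                n22 = N - n11 - n12 - n21
                prob = (comb(N, n11) * comb(N - n11, n12)
                        * comb(N - n11 - n12, n21)
                        * p11 ** n11 * p12 ** n12 * p21 ** n21 * p22 ** n22)
                if mcnemar_exact_cond(n12, n21) <= alpha:
                    reject += prob
    return reject
```

Because the rejection probability is summed over every possible table, the result is the exact type I error rate for that scenario, with no simulation noise.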

Type I error rates
The between-test differences in type I error rates were largely consistent across the considered scenarios. Figure 1 illustrates these differences. The type I error rates of the McNemar exact conditional test are low and barely above 3%, even for as many as 100 matched pairs. The asymptotic McNemar test with CC performs similarly to the exact conditional test but is even more conservative. The asymptotic McNemar test (without CC) has type I error rates close to the nominal level for most combinations of parameters. It violates the level quite often, although not by much. The exact unconditional and the McNemar mid-p tests perform similarly. For most combinations of parameters, the type I error rates of the two tests are identical. For some situations with small proportions, however, the exact unconditional test has type I error rates closer to the nominal level than does the mid-p test (Figure 1, upper right and lower left panels). On the other hand, the mid-p test sometimes has type I error rates closer to the nominal level than does the exact unconditional test.

Table 4 presents summary statistics of the calculations of type I error rates. The mean and maximum type I error rate are shown for each test over all scenarios and for subregions based on the number of matched pairs. We also show the proportion of scenarios in which the nominal significance level is violated and the proportion of scenarios in which the type I error rate is below 3%. The asymptotic McNemar test violates the nominal significance level in 29% of the total number of considered scenarios. We note that this proportion is only 3.7% for small sample sizes (10 ≤ N ≤ 30) and as much as 52% for large sample sizes (65 ≤ N ≤ 100). A mitigating feature is that, as indicated in Figure 1, the infringement on the nominal significance level is small: the maximum type I error rate of the asymptotic McNemar test is 5.37%.
If we are concerned with aligning the mean (instead of the maximum) type I error rate with the nominal level, the results in Table 4 suggest that the asymptotic McNemar test is the superior test, both overall and in each of the subregions based on sample size.
As expected, the two exact tests do not violate the nominal significance level in any of the considered scenarios. Interestingly, neither does the McNemar mid-p test.
Finally, one important comment to the interpretation of Table 4. The values of the parameters p 1+ and p +1 were selected to represent the entire range of possible values and not to be a representative sample of the situations that might be encountered in practice. Scenarios with probabilities close to zero or one are thereby given more weight to the summary statistics in Table 4 than their impact in actual studies. Thus, the mean type I error rates of a typical study are likely closer to the nominal level than indicated in Table 4. The table is, however, a good illustration of the differences in performance between the five tests.
Further details of the results from the evaluation of type I error rates can be found in a supplementary materials document (Additional file 2), which contains box plots of type I error rates from the total evaluation study and various subregions of it.

Power
Figure 2 shows the power of the tests as functions of the number of matched pairs, with the usual yardsticks of 80% and 90% power marked for reference. Only one combination of p_1+, p_+1, and θ is shown; however, the results were qualitatively similar for other settings. The powers of the asymptotic McNemar, the McNemar mid-p, and the exact unconditional tests are quite similar, although the asymptotic test is slightly better than the other two. The powers of the exact conditional test and the asymptotic McNemar test with CC trail those of the other tests considerably.

Table 5 displays the number of matched pairs needed to reach power of 50%, 60%, 70%, 80%, and 90%, averaged over the 15 combinations of θ = 1.0, 2.0, 3.0, 5.0, 10.0 and p_1+ = 0.1, 0.35, 0.60. We show results for three of the values and note that similar results were obtained with 0.1, 0.2, and 0.3. Values of N greater than 100 were estimated by simple linear extrapolation. The increase in sample size from using the exact unconditional or the mid-p test instead of the asymptotic McNemar test is quite small, in the range 0-3. The exact conditional test and the asymptotic McNemar test with CC, on the other hand, need a considerably greater sample size than the other tests. We emphasize that Table 5 is averaged over several combinations of parameters, and the values in it should not be used to plan the sample size of a study. The power of the tests is heavily dependent on the parameter values, even though the between-test differences in power were consistent across the different parameters in this evaluation. Table 5 thus illustrates typical sample size differences between the tests and not the actual sample size needed for a study.
Examples
Table 6 presents the results of applying the five tests to the two examples introduced in the Background section. We have already observed that the asymptotic test and the exact conditional test give quite different results in both examples. The asymptotic test with CC has p-values that are similar to, but slightly higher than, those of the exact conditional test. The mid-p test and the exact unconditional test give results that largely agree with those of the asymptotic test. In both examples, the asymptotic, mid-p, and exact unconditional tests indicate stronger associations between airway hyper-responsiveness status and stem cell transplantation (Bentur et al. [1]) and between consolidation therapy and complete response (Cavo et al. [2]) than do the asymptotic test with CC and the exact conditional test. This difference in results is, perhaps, sufficiently great that different conclusions might be drawn. Because the asymptotic test with CC and the exact conditional test are highly conservative and have poor power, we do not recommend reporting the results of these two tests in any situation.

Discussion
The evaluation study in this article revealed several interesting observations. First, the conservatism of the McNemar exact conditional test can be severe. A large sample size is needed to bring its type I error rates above 3% for a 5% nominal significance level. Quite often, the type I error rates of the exact conditional test were half the nominal level or lower. A similar conservative behavior has been observed for other exact conditional methods, for instance, Fisher's exact test for two independent binomial proportions [5] and the Cornfield exact confidence interval for the independent odds ratio [16]. This conservatism leads to poor power and a need for unnecessarily large sample sizes.

Second, the McNemar mid-p test is a considerable improvement over the exact conditional test on which it is based. It performs almost at the same level as the exact unconditional test. Whereas the exact tests are guaranteed to have type I error rates bounded by the nominal level, no such claim can be made for the mid-p test. Nevertheless, the mid-p test did not violate the nominal level in any of the 9595 scenarios considered in this evaluation. For practical use, the mid-p test is at an advantage vis-à-vis the exact unconditional test. As shown in the supplementary materials, the mid-p test is readily calculated in many commonly used software packages, including the ubiquitous Excel. The exact unconditional test, on the other hand, is computationally complex and only available in StatXact (Cytel Inc.).

Third, the asymptotic McNemar test (without CC) performs surprisingly well, even for quite small sample sizes. It often violates the nominal significance level, but not by much. The largest type I error rate of the asymptotic McNemar test we observed in this study was 5.37% with a 5% nominal level. If that degree of infringement on the nominal level is acceptable, the asymptotic McNemar test is superior to the other tests.
This is notably different from comparing two independent binomial proportions, where the asymptotic chi-squared test can produce substantial violations of the type I error rate in small samples [14].
The asymptotic test with CC performs similarly to, and sometimes even more conservatively than, the exact conditional test, and we do not recommend its use. This was expected and is in line with the unequivocal recommendations against using the asymptotic chi-squared test with Yates's CC for the analysis of the independent 2 × 2 table [5,13,17].
We have only evaluated tests based on the McNemar statistic. It is also possible to construct tests using the likelihood ratio statistic; however, Lloyd [18] found no practical difference between the two statistics. We prefer the much simpler, and widely used, McNemar statistic.