Imputation of a true endpoint from a surrogate: application to a cluster randomized controlled trial with partial information on the true endpoint
- Richard M Nixon^{1},
- Stephen W Duffy^{2} and
- Guy RK Fender^{3}
https://doi.org/10.1186/1471-2288-3-17
© Nixon et al; licensee BioMed Central Ltd. 2003
Received: 10 April 2003
Accepted: 24 September 2003
Published: 24 September 2003
Abstract
Background
The Anglia Menorrhagia Education Study (AMES) is a randomized controlled trial testing the effectiveness of an education package applied to general practices. Binary data are available from two sources: general practitioner reported referrals to hospital, and referrals to hospital determined by independent audit of the general practices. The former may be regarded as a surrogate for the latter, which is regarded as the true endpoint. Data are only available for the true endpoint on a subset of the practices, but there are surrogate data for almost all of the audited practices and for most of the remaining practices.
Methods
The aim of this paper was to estimate the treatment effect using data from every practice in the study. Where the true endpoint was not available, it was estimated by three approaches: a regression method, multiple imputation, and a full likelihood model.
Results
Including the surrogate data in the analysis yielded an estimate of the treatment effect which was more precise than an estimate gained from using the true endpoint data alone.
Conclusions
The full likelihood method provides a new imputation tool at the disposal of trials with surrogate data.
Background
The Anglia Menorrhagia Education Study (AMES) [1, 2] is a randomized controlled trial which tested the effectiveness of an "academic detailing" education package [3] in primary care and hospital gynaecology units to improve the management of women with menorrhagia (excessive menstrual bleeding). Here we are concerned with the first phase of this trial and only consider data from primary care.
Reported and audited outcome data

| | Pre-intervention, Intervention | Pre-intervention, Control | Post-intervention, Intervention | Post-intervention, Control |
| --- | --- | --- | --- | --- |
| **Audited** | | | | |
| Patients seen | 307 | 209 | 418 | 237 |
| Referrals | 56 | 39 | 80 | 63 |
| Number of practices | 27 | 25 | 27 | 25 |
| **Reported** | | | | |
| Patients seen | NA | NA | 381 | 215 |
| Referrals | NA | NA | 93 | 92 |
| Number of practices | NA | NA | 40 | 36 |
In analysis, one might simply exclude those practices which do not have audited data. On the other hand, it is reasonable to suppose that some information, albeit less reliable, is contained in the reported data. Surrogate endpoints have been used in a variety of studies, notably in trials of cancer screening [4]. A criterion frequently used to assess the usefulness of a surrogate variable is the Prentice criterion [5], which stipulates that the effect of the treatment on the true endpoint is entirely attributable to its effect on the surrogate. Begg and Leung [6] point out that even if this holds, the absolute magnitude of the effect on the true endpoint may differ from the magnitude of the effect on the surrogate. Begg and Leung also contend that it is more important that the surrogate be strongly correlated with the true endpoint. It is therefore desirable to estimate the effect of the surrogate on the true endpoint, even if only surrogate information is available.

In our case, we have true audited endpoint data on 52 of the 100 study practices. Of these, 50 (96%) also have surrogate reported endpoint data. Of the remaining 48 practices, 26 (54%) provided surrogate data only and 22 (46%) provided no data at all. From this information, it is possible in principle to estimate the relationship between the surrogate and the true outcome, and therefore the trial results for all 78 practices providing surrogate or true endpoint data or both. In this paper we use the surrogate variable to strengthen inference about the true endpoint; in this role the surrogate variable is known as an auxiliary variable [7].
General practice characteristics

| Practice characteristic | All audited patients: Intervention | All audited patients: Control | All patients with reported data: Intervention | All patients with reported data: Control |
| --- | --- | --- | --- | --- |
| Mean list size | 6974 | 5167 | 6965 | 5314 |
| Fund-holding | 7/27 (26%) | 4/25 (16%) | 11/40 (28%) | 6/36 (17%) |
| Has branch surgeries | 15/27 (56%) | 16/25 (64%) | 17/40 (43%) | 24/36 (67%) |
| Rural | 10/27 (37%) | 7/25 (28%) | 16/36 (44%) | 9/36 (25%) |
| Has drug dispensing facilities^{1} | 0.34 | 0.42 | 0.32 | 0.44 |
| Male partners^{1} | 0.63 | 0.77 | 0.67 | 0.76 |
| Has trainees | 15/27 (56%) | 9/25 (36%) | 15/40 (38%) | 9/36 (25%) |
| Partners on obstetric list^{1} | 0.92 | 1.00 | 0.89 | 0.99 |
The aim of this paper is to estimate the treatment effect using data from every practice in the study. Where the true endpoint is not available, it is estimated via a surrogate by three approaches: a regression method, multiple imputation, and a full likelihood model.
Methods
Since our endpoint was referral of individual patients, but the unit of randomization was general practice, all models assessing the treatment effect incorporated a random effects component for practice, to take account of this cluster randomization [9]. Logistic regression is used in all the analyses. If r, the number of referrals, is 0, or if r = n, the number of patients seen, this causes problems with some of the methods used. When this occurs, r is replaced by r + 0.5 and n is replaced by n + 1 [10]. For consistency, all analyses are performed on the same data set, regardless of whether this change is necessary for a particular analysis.
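The continuity correction described above can be sketched as a small helper (a minimal illustration in Python, not the software used in the study; the function name is ours):

```python
def continuity_correct(r, n):
    """Guard against r = 0 or r = n, which make the empirical logit
    log(r / (n - r)) undefined: following the correction used in the
    paper, replace r with r + 0.5 and n with n + 1."""
    if r == 0 or r == n:
        return r + 0.5, n + 1
    return r, n
```

Applied to every practice's counts before analysis, this leaves unaffected practices unchanged, so all methods run on the same corrected data set.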
Regression models
For the 26 practices which were not audited, but for which we had reported data, our aim was to predict what audit data (pre- and post-intervention numbers of patients seen and referred) would have been collected from these practices had they been audited. We used the post-intervention reported data to predict both the pre- and post-intervention audited data, and also included practice characteristics in this prediction. Log linear regression models were fitted in which the audited data are a function of the reported data and the practice characteristics. The estimated parameters from these models were then used to estimate the missing audited values. Finally, the overall effect of intervention was estimated by a random effects logistic regression model, in which an extra random effect was included to allow extra variance for the observations that were estimated rather than observed.
Firstly, we fitted log linear regression models of audit data on reported data and practice characteristics. Randomization group status was included in the practice characteristics vector in the pre-intervention regression, as reported data, the crucial independent variable, is only observed after the intervention, and the relationship between reported behaviour after intervention and true behaviour before intervention may be modified by the effect of the intervention (e.g. a reduction in referral rates after intervention in the groups receiving the intervention). On the other hand, in the post-intervention regression, the reported and audited data are observed post-intervention, so the effect of intervention is already included. The following models are fitted for the 50 practices that were audited and which returned at least one reported data form:
n ^{ b }and n ^{ a }denote the number of women presenting with menorrhagia before and after intervention from the audited data respectively; n ^{ r }denotes the corresponding number from the post-intervention reported data. r ^{ b }, r ^{ a }and r ^{ r }are the corresponding number of referrals from the pre- and post-intervention audited and post-intervention reported data respectively. p _{2} is the vector of the eight practice characteristics given in table 2, and p _{1} this same vector, but also including the randomization group of the practice.
Where n _{ ijkl }and r _{ ijkl }are the number of women presenting with menorrhagia, and the number of women referred in practice i, from intervention group j (0 = control, 1 = intervention), in study period k (0 = pre-, 1 = post-intervention). This comes from observed audited data where available (l = 0), and fitted audited data where it is missing (l = 1). π_{ ijkl }denotes the true underlying probability of being referred. R, T and E are dummy variables: R _{00} = R _{01} = R _{10} = 0 (control group and intervention group pre-intervention), R _{11} = 1 (intervention group post-intervention), T _{0} = 0 (pre intervention), T _{1}= 1 (post intervention) and E _{0} = 0 (observed), E _{1} = 1 (estimated). β is used to denote the log odds ratio of being referred in an intervention practice post intervention compared to a control practice or intervention practice pre intervention. In this model we allow a variation in trend for each practice γ_{ i }, around an average trend for all practices μ_{1}. There is a common intercept for each practice, within trial arm, at the point (T _{ k }- 0.5). δ_{ i }is a random effect that is "switched off" for the practices that have observed audited values and "switched on" for the practices that use estimated audited values. In this way extra variability is allowed in the model for the practices that have estimated audit information.
Multiple imputation
Methodology
Multiple imputation using auxiliary variables can be used to strengthen inference about the true endpoint [11]. In our case we use an approximate Bayesian bootstrap method. Each practice's audited data is regarded as a single data point. The basic method is to take bootstrap re-samples of the known audited data and, from these, take smaller bootstrap samples to simulate the missing data. Underlying this is the theory that we are sampling from a scaled multinomial distribution as an approximation to a Dirichlet posterior distribution [12]. Thus the data have to be expressible as a realisation of a discrete categorical variable, which in this case holds, albeit with a large number of possible realisations.
More formally, suppose we have a vector of discrete data Y, that contains observed values Y _{ obs }and missing values Y _{ mis }. Y can take values d _{1}...d _{ K }with probabilities θ = (θ_{1},..., θ_{ K }) respectively. Rubin [13] defines a Bayesian bootstrap implementation. A Dirichlet posterior distribution is defined for θ, from a non-informative prior. One realisation, θ* = (θ*_{1},..., θ*_{ K }), of θ is drawn from its posterior. Finally the components of Y _{ mis }are independently drawn from d _{1}...d _{ K }, such that P (drawing d _{ k }) = θ*_{ k }, k = 1...K. This gives one imputation of the complete data Y. The process is repeated M times to get M multiple imputations.
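One Bayesian bootstrap imputation can be sketched in a few lines, using the standard construction of a uniform Dirichlet draw from normalised Gamma(1) variates (illustrative Python; all names are ours):

```python
import random

def bayesian_bootstrap_impute(y_obs, n_mis, rnd=random):
    """One Bayesian bootstrap imputation (Rubin, 1981): draw one
    realisation theta* of the value probabilities from a uniform
    Dirichlet posterior over the observed values, then draw the
    n_mis missing components with probabilities theta*."""
    # A Dirichlet(1,...,1) draw is a vector of Gamma(1,1) draws,
    # normalised to sum to one.
    g = [rnd.gammavariate(1.0, 1.0) for _ in y_obs]
    total = sum(g)
    theta_star = [x / total for x in g]
    return rnd.choices(y_obs, weights=theta_star, k=n_mis)
```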
The Bayesian bootstrap imputation is complicated to implement as it requires sampling from a Dirichlet distribution, followed by taking a weighted sample from the possible values the components Y can take. There is a simple approximation to the Bayesian bootstrap method, also defined by Rubin [13] which is easier to compute in practice.
Suppose Y _{ obs }is of length n _{0} and Y _{ mis }is of length n _{1}. The approximate Bayesian bootstrap imputation is as follows:

• Draw n _{0} components, with replacement, from Y _{ obs }. Call this vector Y*_{ obs }.

• Draw n _{1} components, with replacement, from Y*_{ obs }. This sample is the imputed Y _{ mis }.
In this way the approximate Bayesian bootstrap method draws θ from a scaled multinomial distribution rather than from a Dirichlet posterior as in the Bayesian bootstrap case.
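The two resampling steps reduce to a short routine (a sketch, assuming each element of y_obs is one practice's data; the names are ours):

```python
import random

def abb_impute(y_obs, n_mis, rnd=random):
    """One approximate Bayesian bootstrap imputation (Rubin, 1987):
    first resample len(y_obs) components with replacement from the
    observed data, then draw the n_mis missing components with
    replacement from that resample."""
    y_obs_star = rnd.choices(y_obs, k=len(y_obs))
    return rnd.choices(y_obs_star, k=n_mis)
```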
Suppose we wish to estimate β from the data. A point estimate of β and its variance are calculated from each of the M imputed data sets, and we call these β̂_{ m }and W _{ m }, m = 1...M. Rubin [14] gives the following rule for combining these estimates into a single estimate. The combined point estimate is the average of the M point estimates from the imputed data:

β̄ = M ^{-1}Σ_{ m }β̂_{ m }(4)

with within-imputation variance W, the average of the M estimated variances, and between-imputation variance B, the sample variance of the M point estimates:

W = M ^{-1}Σ_{ m }W _{ m }, B = (M - 1)^{-1}Σ_{ m }(β̂_{ m }- β̄)^{2} (5)
The total variance is defined as:
T = W + (1 + M ^{-1})B (6)
Inferences about β can be gained from the approximation:

(β - β̄)T ^{-1/2}~ t _{ν}(7)

where the degrees of freedom of the t distribution are given by:

ν = (M - 1)[1 + W/{(1 + M ^{-1})B}]^{2}(8)
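Rubin's combining rules reduce to a short routine (illustrative Python under the notation above, not the authors' code):

```python
from statistics import mean, variance

def combine_rubin(estimates, variances):
    """Combine M completed-data estimates and variances by Rubin's
    rules: pooled estimate, total variance T = W + (1 + 1/M)B, and
    the degrees of freedom of the reference t distribution."""
    M = len(estimates)
    qbar = mean(estimates)      # combined point estimate
    W = mean(variances)         # within-imputation variance
    B = variance(estimates)     # between-imputation variance (ddof = 1)
    T = W + (1 + 1 / M) * B
    df = (M - 1) * (1 + W / ((1 + 1 / M) * B)) ** 2
    return qbar, T, df
```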
Application to the AMES data set
Let a be the fraction of missing information for a scalar estimator. Rubin [14] calculates that the relative efficiency (on the variance scale) of a point estimate based on M imputations compared to one based on an infinite number of imputations is approximately:

(1 + a/M)^{-1}(9)
In this case a = 26/78 = 1/3 (52 practices have audited data and we impute for the 26 that have this data missing) so if we set M = 5 the s.e. of the estimate will be √ (1 + 1/15) = 1.033 times as large as the estimate with M → ∞.
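The quoted inflation factor can be checked directly (a one-line illustration; the function name is ours):

```python
def se_inflation(a, M):
    """Standard-error inflation of an M-imputation estimate relative
    to one based on infinitely many imputations: sqrt(1 + a/M),
    where a is the fraction of missing information."""
    return (1 + a / M) ** 0.5

# a = 26/78 = 1/3 of practices missing audited data, M = 5 imputations
print(round(se_inflation(1 / 3, 5), 3))  # prints 1.033
```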
We are only interested in imputing the missing audited data, as ultimately this is considered the most accurate, and the pre-intervention data can be used in the modelling. The missing audit data always comes in groups of four for each practice: the number of women presenting with menorrhagia and the number of referrals both pre and post intervention. The theory outlined above is for imputation of missing data in a vector. We identify the audited data for each practice (i.e. row of the data) with an element of a vector Y. That is to say, each element of Y contains the audited data for one practice. In this way data is always imputed per practice and not individually for each field.
The approximate Bayesian bootstrap imputation was then performed on the data. A random sample of 52 rows was taken with replacement from the 52 rows of complete data. From this a random sample of 26 rows was taken with replacement. This, along with the original 52 complete rows forms an imputed data set. This process was independently repeated five times, and these five data sets are each analysed.
As with the analysis of the audit data before, we wish to get an estimate of the odds of being referred in the intervention group compared to the control group. We fit the model:
Where the variable definitions are the same as those used in equation 2.
This imputation assumes that the missing audit data is missing completely at random (MCAR) [8], as all the missing data comes from the same distribution and pays no attention to the reported data when imputing the missing data. As the number of patients reported to have been seen and referred to hospital may be informative for the audited values, it is desirable that the imputation process includes the reported data in the estimation of the missing audited data. The missing audit data is then assumed to be missing at random. To do this, the data set was stratified by reporting behaviour. Six strata were defined by the total number of patients reported to have been seen (either ≤ 6 or ≥ 7), and the proportion of patients reported to have been referred ([0,0.15), [0.15,0.4), [0.4,1.0]). These categories were chosen as the median number of patients reported to have been seen was 6.5, and the 33^{ rd }and 66^{ th }percentiles of the proportion of patients reported to have been referred were 0.15 and 0.4. Within each stratum the missing data were then imputed from the observed data.
52 practices have observed audited data. In the stratified imputation only 50 of these can be used to sample from, as two of these practices have no reported data on which to stratify.
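The stratified variant simply runs the approximate Bayesian bootstrap within each stratum, so imputed values resemble practices with similar reporting behaviour (a sketch assuming the strata labels are precomputed; all names are ours):

```python
import random

def stratified_abb(strata_obs, strata_n_mis, rnd=random):
    """Approximate Bayesian bootstrap within strata: each stratum's
    missing rows are imputed only from that stratum's observed rows.
    strata_obs maps stratum label -> list of observed values;
    strata_n_mis maps stratum label -> number of missing rows."""
    imputed = {}
    for label, y_obs in strata_obs.items():
        y_star = rnd.choices(y_obs, k=len(y_obs))      # first resample
        imputed[label] = rnd.choices(y_star, k=strata_n_mis.get(label, 0))
    return imputed
```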
Full likelihood model
The previous two methods used a two-stage procedure where firstly missing data was imputed and then the treatment effect estimated. Here a method is proposed that performs both these stages simultaneously. Consider the following model:
In MCMC sampling, at every iteration an estimate of every parameter is obtained. This means that missing data imputation and the randomized trial comparison were performed simultaneously and not in a two stage process.
Model fitting
The regression models given in equation 1, which were used to generate missing values, were fitted using S-Plus [17]. The random effects model given in equation 2, used to estimate the odds of referral in an intervention practice compared to a control, was fitted using BUGS [18]. The approximate Bayesian bootstrap imputed data sets were generated using S-Plus; the separate log odds ratios for each imputed data set, calculated from equation 11, were fitted using BUGS; and the overall multiply imputed estimate was generated by code written in S-Plus. The full likelihood model was fitted using BUGS.
Where BUGS was used to estimate parameters, prior distributions that are locally almost uniform were chosen, with variances at least two orders of magnitude larger than the posterior variances of the corresponding nodes. These priors are considered to be non-informative. The model fitting for the full likelihood model was achieved with the BUGS code in the Appendix. Convergence was assessed by the methods of Geweke [19], Raftery & Lewis [20], and Heidelberger & Welch [21], using the BOA package [22].
Results
Odds of being referred in an education practice compared to a control: comparison of the various modelling strategies used.

| Method | Point estimate (OR) | CI | s.e. (log OR) |
| --- | --- | --- | --- |
| Audited data only | 0.73 | (0.47, 1.08) | 0.212 |
| Regression | 0.68 | (0.42, 1.01) | 0.218 |
| Unstratified imputation | 0.74 | (0.45, 1.02) | 0.203 |
| Stratified imputation | 0.75 | (0.45, 1.05) | 0.212 |
| Full likelihood | 0.68 | (0.44, 0.91) | 0.188 |
The correlation between the audited and reported data for the number of referrals r, the total number of patients seen n and the proportion of referrals r/n, for all the post-intervention data where both are available.
| r | n | r/n |
| --- | --- | --- |
| 0.17 | 0.36 | 0.30 |
Discussion
These results show reasonable agreement with regard to the point estimate. The educational package reduced the proportion of women who are referred to hospital by around 30%. Some of this benefit may be artificial, due to increased diagnostic activity in the intervention group.
These results differ from those previously reported [2, 9] because, when the analysis was performed using audited data alone, women who had post-coital bleeding, pelvic mass, or bleeding between periods were excluded. As these exclusion criteria could not be applied to the surrogate reported data, they were not applied in the analysis reported here.
In all the modelling strategies used we have attempted to impute missing data from surrogate data and assess the effect of intervention. Each strategy has used different methods for imputing data, and for adjusting the variance of the outcome measure to allow for the fact that this data is estimated rather than observed.
The regression models attempt to account for the extra variation caused by using imputed values by adding a random effect to the regression model which estimates the outcome measure. However, a weakness of this approach is that the estimated values fit the model artificially well.

The fitted values will all lie on hyper-planes defined by the estimated parameters of model 1, whereas the observed values, where available, will lie around these planes but never exactly on them. The regression models used to estimate the missing values therefore give exactly the values one would expect and do not allow for random variation in the realised values. Extra variation is allowed in the model for these values by inclusion of an additional random effect. However, there is an element of a "self-fulfilling prophecy", in that the regression model 2 that estimates the outcome measure is based on data that fit the model better at the estimated points.
A further problem with this method is that it is possible in general for the fitted number of referrals r to be larger than the fitted number of patients seen n. This did not happen in this case because of the values taken by the estimated parameters of model 1. An alternative modelling strategy to protect against this would be to estimate the number of patients seen n and the probability of a positive outcome π from the surrogates, and then estimate the number of positive outcomes as nπ.
The imputation models do not have these problems, as the missing audited data is imputed from the observed audited data. The stratified method is to be preferred as this generates data which is more likely to have occurred for the practice that the missing data is being imputed for. The results from the imputation methods give estimates of the effect of intervention and s.e. of the log odds ratio in between the other two methods. It should be noted that the formulas used to estimate this standard error have been shown to be inconsistent in certain settings [23]. The stratification goes some way towards imputing missing values which are appropriate for the practice, but it is still quite a blunt tool. An alternative method, described by Schafer [12], could be considered: multiple imputation of multivariate categorical data under log linear models. This method is based on the EM algorithm, where the likelihood function used for imputing the missing values can include a number of covariates. In this case the reported data, along with the practice characteristics, could be used in the imputation process. An elegant application of the EM algorithm in estimation of missing data is given by Longford et al [24].
The full likelihood model has the desirable property of performing the imputation and the randomized trial comparison simultaneously. Despite not making use of pre-intervention information, this model achieves the lowest standard error of the log odds ratio of all the models considered. The validity of the point estimate is unlikely to be impaired by the absence of pre-intervention data as the audited pre-intervention probabilities of referral were similar in the intervention and control groups. This model could be improved in principle by including pre-intervention data and practice characteristics. This was tried and imposed too heavy a burden on the estimation algorithm.
The standard error of the log odds ratio obtained from a random effects logistic regression on the audited data alone was 0.212. This was improved upon by the methods here which estimate the missing audited data, with the exception of the regression method, which was conservative, probably due to too much extra variation being added by the random effect for imputed values. These improvements are due to the added information from the auxiliary reported variable. The choice of parametric assumptions used in the generation of missing values would also influence this gain in precision.
The reported data is strongly related to the audited data. The relationship of the logit reported probability with the logit audited probability is
logit (P ^{ r }) = 5.44 + 1.09 logit (P ^{ a }) (13)
The 95% credible interval of the estimate of 1.09 is (0.17,2.02), indicating a significant (p = 0.02) dependency of surrogate on true endpoint. Thus, while this surrogate is unlikely to satisfy Prentice's criteria [5], it does satisfy Begg and Leung's [6].
Conclusion
Using reported data as a surrogate for audited data in the full likelihood model gives a point estimate that is accurate, and improves the precision of the estimate over that yielded using audited data alone. Regression type approaches and the Bayesian bootstrap imputation technique have already been used in other studies. The full likelihood approach provides an additional possible strategy in the case where only partial information is available on the true endpoint.
Appendix: BUGS code for full likelihood model
model{
	for(i in 1 : N){
		ref.a[i] ~ dbin(p.a[i], tot.a[i])
		tot.a[i] ~ dpois(phi.a)
		logit(p.a[i]) <- alpha1 + beta1 * treat[i] + gamma.a[i]
		gamma.a[i] ~ dnorm(0, tau.a)
		ref.rep[i] ~ dbin(p.rep[i], tot.rep[i])
		tot.rep[i] ~ dpois(phi.rep)
		lp.a[i] <- logit(p.a[i])
		logit(p.rep[i]) <- alpha2 + beta2 * (lp.a[i] - lp.a.bar)
	}
	lp.a.bar <- mean(lp.a[])
	tau.a <- 1/(s.a*s.a)
	s.a <- exp(ls.a)
	#PRIORS
	phi.a ~ dnorm(0,1.0E-6) I(0,)
	phi.rep ~ dnorm(0,1.0E-6) I(0,)
	ls.a ~ dunif(-6,6)
	alpha1 ~ dnorm(0.0,1.0E-6)
	beta1 ~ dnorm(0.0,1.0E-6)
	alpha2 ~ dnorm(0.0,1.0E-6)
	beta2 ~ dnorm(0.0,1.0E-6)
	#EXTRA VARIABLES
	exp.beta1 <- exp(beta1)	# odds ratio of referral, intervention vs control
}
Odds of being referred in an education practice compared to a control: results from the individual multiple imputations.

| Method | Imputation | Point estimate of OR | s.e. |
| --- | --- | --- | --- |
| Unstratified | 1 | 0.70 | 0.116 |
| | 2 | 0.73 | 0.115 |
| | 3 | 0.67 | 0.108 |
| | 4 | 0.85 | 0.123 |
| | 5 | 0.72 | 0.116 |
| Stratified | 1 | 0.84 | 0.136 |
| | 2 | 0.67 | 0.108 |
| | 3 | 0.71 | 0.114 |
| | 4 | 0.72 | 0.117 |
| | 5 | 0.82 | 0.137 |
References
- Fender GRK, Prentice A, Gorst T, Nixon RM, Duffy SW, Day NE, Smith SK: The Anglia Menorrhagia Education Study: a randomised controlled trial of an educational package on the management of menorrhagia in primary care. British Medical Journal. 1999, 318: 1246-1250.
- Fender GRK, Prentice A, Nixon RM, Gorst T, Duffy SW, Day NE, Smith SK: Management of menorrhagia: an audit of practices involved in the Anglia Menorrhagia Education Study (AMES). British Medical Journal. 2001, 322: 523-524. 10.1136/bmj.322.7285.523.
- Soumerai SB, Avorn J: Principles of educational outreach (academic detailing) to improve clinical decision-making. Journal of the American Medical Association. 1990, 263: 549-556. 10.1001/jama.263.4.549.
- Day NE, Duffy SW: Trial design based on surrogate endpoints – application to comparison of different screening frequencies. Journal of the Royal Statistical Society A. 1996, 159: 40-60.
- Prentice RL: Surrogate endpoints in clinical trials: definition and operational criteria. Statistics in Medicine. 1989, 8: 431-440.
- Begg CB, Leung DHY: On the use of surrogate end points in randomized trials. Journal of the Royal Statistical Society A. 2000, 163: 15-24. 10.1111/1467-985X.00153.
- Fleming TR, Prentice RL, Pepe MS, Glidden D: Surrogate and auxiliary endpoints in clinical trials, with potential applications in cancer and AIDS research. Statistics in Medicine. 1994, 13: 955-968.
- Little RJA, Rubin DB: Statistical Analysis with Missing Data. 1987, New York: Wiley.
- Nixon RM, Duffy SW, Fender GRK, Day NE, Prevost TC: Randomisation at the level of general practice: use of pre-intervention data and random effects models. Statistics in Medicine. 2001, 20: 1727-1738. 10.1002/sim.792.
- Cox DR, Snell EJ: Analysis of Binary Data. 1989, London: Chapman and Hall.
- Faucett CL, Schenker N, Taylor JMG: Survival analysis using auxiliary variables via multiple imputation, with application to AIDS clinical trial data. Biometrics. 2002, 58: 37-47.
- Schafer JL: Analysis of Incomplete Multivariate Data. Monographs on Statistics and Applied Probability. 1997, London: Chapman & Hall.
- Rubin DB: The Bayesian bootstrap. Annals of Statistics. 1981, 9: 130-134.
- Rubin DB: Multiple Imputation for Nonresponse in Surveys. 1987, New York: John Wiley & Sons.
- Schafer JL: Multiple imputation: a primer. Statistical Methods in Medical Research. 1999, 8: 3-15. 10.1191/096228099671525676.
- Gelfand AE, Smith AFM: Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association. 1990, 85: 398-409.
- Data Analysis Products Division: S-Plus User's Guide. 1997, MathSoft.
- Gilks WR, Thomas A, Spiegelhalter DJ: A language and program for complex Bayesian modelling. The Statistician. 1994, 43: 169-177.
- Geweke J: Evaluating the accuracy of sampling-based approaches to calculating posterior moments. In Bayesian Statistics 4. Edited by: Bernardo JM, Berger JO, Dawid AP, Smith AFM. 1992, Oxford University Press.
- Raftery AE, Lewis S: How many iterations in the Gibbs sampler? In Bayesian Statistics 4. Edited by: Bernardo JM, Berger JO, Dawid AP, Smith AFM. 1992, Oxford University Press, 763-774.
- Heidelberger P, Welch PD: Simulation run length control in the presence of an initial transient. Operations Research. 1983, 31: 1109-1144.
- Smith B: Bayesian Output Analysis (BOA) program. [http://www.public-health.uiowa.edu/boa]
- Robins JM, Wang N: Inference for imputation estimators. Biometrika. 2000, 87: 113-124.
- Longford NT, Ely M, Hardy R, Wadsworth MEJ: Handling missing data in diaries of alcohol consumption. Journal of the Royal Statistical Society: Series A. 2000, 163: 381-402. 10.1111/1467-985X.00174.
Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/3/17/prepub
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original URL.