Joint modelling rationale for chained equations

               Rachael A Hughes (corresponding author), Ian R White, Shaun R Seaman, James R Carpenter, Kate Tilling and Jonathan AC Sterne
               BMC Medical Research Methodology 2014, 14:28

              DOI: 10.1186/1471-2288-14-28

              Received: 12 September 2013

              Accepted: 13 February 2014

              Published: 21 February 2014

              Abstract

              Background

              Chained equations imputation is widely used in medical research. It uses a set of conditional models, so is more flexible than joint modelling imputation for the imputation of different types of variables (e.g. binary, ordinal or unordered categorical). However, chained equations imputation does not correspond to drawing from a joint distribution when the conditional models are incompatible. Concurrently with our work, other authors have shown the equivalence of the two imputation methods in finite samples.

              Methods

               Taking a different approach, we prove, in finite samples, sufficient conditions for chained equations and joint modelling to yield imputations from the same predictive distribution. Further, we apply this proof in four specific cases and conduct a simulation study which explores the consequences when the conditional models are compatible but the remaining sufficient conditions are not satisfied.

              Results

               We provide an additional “non-informative margins” condition which, together with compatibility, is sufficient. We show that the non-informative margins condition is not satisfied, despite compatible conditional models, in a situation as simple as two continuous variables and one binary variable. Our simulation study demonstrates that, as a consequence of this violation, order effects can occur; that is, systematic differences depending upon the ordering of the variables in the chained equations algorithm. However, the order effects appear to be small, especially when associations between variables are weak.

              Conclusions

              Since chained equations is typically used in medical research for datasets with different types of variables, researchers must be aware that order effects are likely to be ubiquitous, but our results suggest they may be small enough to be negligible.

              Keywords

               Chained equations imputation; Gibbs sampling; Joint modelling imputation; Multiple imputation; Multivariate missing data

              Background

               Multiple imputation [1] has become a popular approach for the analysis of incomplete data, with several mainstream statistical packages now incorporating multiple imputation tools. It involves making several draws of the missing data from their posterior predictive distribution given the observed data and an imputation model. For multivariate, non-monotone missing data there are two main approaches for constructing an imputation model: joint modelling and chained equations. Joint modelling imputation requires the specification of a parametric joint model for the complete data: current implementations impute under the multivariate normal model, the log linear model and the general location model [2]. However for datasets containing different types of variables the current classes of joint models [3–5] may not be appropriate for the joint distribution of the data. The alternative method, chained equations imputation [4, 6], is more flexible as it specifies a separate imputation model, typically a univariate regression model, for each incomplete variable and updates the missing data for each variable in turn.

               Chained equations imputation has been proposed under several different names including: fully conditional specification, stochastic relaxation, variable-by-variable imputation, regression switching, sequential regressions, ordered pseudo-Gibbs sampler, partially incompatible MCMC and iterated univariate imputation [3]. In addition to handling variables of varying types, the chained equations approach has other flexible features, such as incorporating restrictions and logical and consistency bounds (for example, to handle imputation of gender-specific variables or to impute only questions that were not intentionally skipped in a questionnaire [4]). van Buuren and Groothuis-Oudshoorn [7] discuss the wide range of medical fields that have used chained equations imputation (e.g. addiction [8], epidemiology [9], infectious diseases [10], genetics [11], cancer [12], obesity and physical activity [13]), and a brief review of available software implementing chained equations imputation is given by [14]. Given the popularity of chained equations among users of varying degrees of expertise, there is now guidance on its practical use (e.g. [14] and [15]).

               Despite its widespread use, a known theoretical weakness of the chained equations method is that the implicit joint distribution underlying the separate models may not always exist: that is, the conditional models may be incompatible [4, 5, 16–18]. In such situations, the results after chained equations imputation may systematically differ according to the order in which the missing variables are updated in the chained equations algorithm. We shall refer to this phenomenon as an “order effect”.

               Previous authors [3–5] have stated that chained equations imputation under a set of normal linear regression models, with all other variables as covariates and no interactions, is equivalent to a Gibbs sampler that draws from a multivariate normal distribution. van Buuren [3] also states for a dataset of three partially observed binary variables that chained equations under a set of logistic regression models, with all other variables included as main effects only, is equivalent to joint modelling imputation under a log linear model with the three-way factor term set to zero. However, none of these papers provides a proof beyond stating that the set of conditional models is compatible [16], that is, that they are all derived from the specified joint distribution.

               Independently and concurrently with our work, Liu et al. [19] have given sufficient conditions (which include compatibility of the conditionals) under which, as the sample size tends to infinity, the stationary distribution of the Markov chain generated by the chained equations algorithm (assuming that this stationary distribution exists and that the chain converges to it) converges to the posterior predictive distribution of the missing data implied by a joint Bayesian model. That is, under these sufficient conditions, the total variation distance between the chained equations stationary distribution and the posterior predictive distribution tends to zero as the sample size tends to infinity. As a corollary, Liu et al. show the equivalence of the two imputation methods in finite samples under a condition we have independently identified and named the “non-informative margins” condition.

              Our work is complementary to that of Liu et al. Firstly, we have taken a different approach to prove the equivalence of the two imputation methods in finite samples. Additionally, in specific examples, we prove whether the non-informative margins condition is satisfied or not, and in a simulation study we demonstrate the consequences when the conditional models are compatible but do not satisfy the non-informative margins condition.

              In this paper, we provide a “non-informative margins” condition that, together with compatibility of the conditionals (and assuming that the Markov chain generated by the chained equations converges to a stationary distribution), guarantees that the imputed values obtained using chained equations (at convergence) are drawn from the posterior predictive distribution of the missing data implied by a Bayesian joint model. We give examples of chained equations algorithms that satisfy the non-informative margins condition when the joint model is the multivariate normal model and the saturated multinomial model, and examples where this condition is not satisfied when the joint model is an unsaturated multinomial model and the general location model. A simulation study considers a simple chained equations algorithm in which the conditional models are compatible but do not satisfy the non-informative margins condition, and shows that it is not equivalent to any joint model procedure.

              Methods

              Notation

               Suppose K random variables X = (X_1,…,X_K) are intended to be observed on N subjects. We use subscripts i and j to index subjects and variables respectively (i = 1,…,N; j = 1,…,K). Let x = (x_ij) denote an (N × K) matrix whose (i,j) element is x_ij. Column j of matrix x is denoted by x_j. It is assumed that the rows of matrix x are independent and identically distributed draws from a probability distribution with probability distribution function p(X ∣ θ), where θ is an unknown parameter.

               In practice some subjects have missing observations on up to K − 1 variables and we write x_j = (x_j^obs, x_j^mis) for any j, x = (x^obs, x^mis) and p(x ∣ θ) = p(x^obs, x^mis ∣ θ), with superscripts obs and mis denoting the observed and missing data respectively. In keeping with the assumptions of joint modelling imputation and chained equations imputation, the missing data mechanism is assumed to be ignorable for Bayesian inference [20] p. 120, so that inferences about θ can be based on the marginal observed data posterior p(θ ∣ x^obs).

              Joint modelling imputation

               Joint modelling imputation requires the specification of a parametric joint model p(x^obs, x^mis ∣ θ) for the complete data and a prior distribution p(θ) for the parameter θ. Imputations are independent draws from the posterior predictive distribution of the missing data given the observed data, p(x^mis ∣ x^obs) [2] p. 105, which under the ignorability assumption is

               p(x^mis ∣ x^obs) = ∫ p(x^mis ∣ x^obs, θ) p(θ ∣ x^obs) dθ.

               Therefore, to draw from this posterior predictive distribution, first draw θ* ∼ p(θ ∣ x^obs) and then draw x^mis ∼ p(x^mis ∣ x^obs, θ*) [2] p. 105. When it is difficult to draw from the observed data posterior p(θ ∣ x^obs), Markov chain Monte Carlo methods can be used. For example, the data augmentation algorithm of Tanner and Wong [21] draws missing values from the posterior predictive distribution x^mis* ∼ p(x^mis ∣ x^obs, θ*) and then draws θ from the complete data posterior θ* ∼ p(θ ∣ x^obs, x^mis*), where * denotes the most recently drawn value of θ or x^mis. Upon convergence this produces a draw from the joint posterior distribution p(θ, x^mis ∣ x^obs).
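               As an illustration only, a minimal sketch of the data augmentation scheme is given below for a toy model (a univariate normal with known variance and a flat prior; this toy model and all names in the code are our own choices, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy complete-data model: x_i ~ N(theta, 1), flat prior on theta,
# with some values missing completely at random.
x = rng.normal(2.0, 1.0, size=50)
mis = rng.random(50) < 0.3
x_obs = x[~mis]

theta = x_obs.mean()                          # starting value
for t in range(1000):
    # I-step: draw x_mis from p(x_mis | x_obs, theta)
    x_mis = rng.normal(theta, 1.0, size=mis.sum())
    # P-step: draw theta from the complete-data posterior p(theta | x_obs, x_mis),
    # which is N(mean of the completed data, 1/N) under the flat prior
    x_full = np.concatenate([x_obs, x_mis])
    theta = rng.normal(x_full.mean(), 1.0 / np.sqrt(len(x_full)))

# After burn-in, (theta, x_mis) is approximately a draw from p(theta, x_mis | x_obs).
```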

              Chained equations imputation

               For every incomplete variable the chained equations algorithm requires an imputation model, typically a univariate regression model, and an accompanying prior distribution for the model’s parameter. Let X_-j = (X_1,…,X_{j−1},X_{j+1},…,X_K)^T denote the vector of random variables excluding variable X_j, and let x_-j denote the submatrix of x corresponding to the variables X_-j. We write p(x_j ∣ x_-j, ψ_j) for the probability distribution function of the imputation model for variable X_j and p(ψ_j) for the prior distribution of the unknown parameter ψ_j.

               Chained equations draws the imputations using an iterative algorithm, typically with 10 to 20 iterations [15]. To start off, the missing values of each incomplete variable are replaced by its mean or a random sample of its observed values. Suppose, without loss of generality, that variables X_1,…,X_R (R ≤ K) are incomplete and variables X_{R+1},…,X_K are fully observed. Given the imputations x_1^(t−1),…,x_R^(t−1) from the previous iteration, iteration t of the chained equations algorithm consists of the following draws [18]:

               ψ_1^(t) ∼ p(ψ_1 ∣ x_1^obs, x_2^(t−1), …, x_R^(t−1), x_{R+1}, …, x_K)
               x_1^mis(t) ∼ p(x_1^mis ∣ x_1^obs, x_2^(t−1), …, x_R^(t−1), x_{R+1}, …, x_K, ψ_1^(t))
               ⋮
               ψ_R^(t) ∼ p(ψ_R ∣ x_R^obs, x_1^(t), …, x_{R−1}^(t), x_{R+1}, …, x_K)
               x_R^mis(t) ∼ p(x_R^mis ∣ x_R^obs, x_1^(t), …, x_{R−1}^(t), x_{R+1}, …, x_K, ψ_R^(t))

               During each iteration the following two steps are applied to each incomplete variable X_j in turn: ψ_j^(t) is drawn from the posterior distribution proportional to p(x_j^obs ∣ x_-j, ψ_j) p(ψ_j), and the missing values x_j^mis(t) are drawn from the posterior predictive distribution p(x_j^mis ∣ x_j^obs, x_-j, ψ_j^(t)), where x_-j contains the most recent (imputed or observed) values of the other variables. The imputations from the last iteration form the imputed dataset. The whole iterative algorithm is repeated to obtain further imputed datasets.
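               To make the two steps concrete, the sketch below implements a chained equations loop for continuous variables only, imputing each incomplete variable from a Bayesian normal linear regression on all the others under the improper prior p(β, σ²) ∝ 1/σ². The prior, the function names and the overall structure are our own illustrative choices; this is a sketch, not a full implementation such as mice.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_norm_reg(y, X):
    """Draw (beta, sigma2) from the posterior of a normal linear regression of y on X
    under the improper prior p(beta, sigma2) proportional to 1/sigma2."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    resid = y - X @ beta_hat
    sigma2 = resid @ resid / rng.chisquare(n - p)        # scaled inverse-chi-square draw
    beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
    return beta, sigma2

def chained_equations(data, miss, n_iter=20):
    """data: (N, K) array with arbitrary starting values in the missing cells;
    miss: boolean (N, K) array, True where a value is missing."""
    N, K = data.shape
    x = data.copy()
    for t in range(n_iter):
        for j in range(K):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            X = np.column_stack([np.ones(N), np.delete(x, j, axis=1)])
            # draw psi_j given the rows with x_j observed (first step above) ...
            beta, sigma2 = draw_norm_reg(x[obs, j], X[obs])
            # ... then draw the missing x_j from its predictive distribution (second step)
            mu = X[miss[:, j]] @ beta
            x[miss[:, j], j] = rng.normal(mu, np.sqrt(sigma2))
    return x
```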

              Equivalence of joint modelling and chained equations imputation

              We investigated, in finite samples, sufficient conditions under which a chained equations algorithm with compatible conditional models imputes missing data from the predictive distribution of the missing data implied by the joint model and its accompanying prior. We provide examples of chained equations algorithms (with compatible conditional models) where our identified condition is satisfied and examples where it is not satisfied.

              Simulation study

              We conducted a simulation study to explore the consequences for chained equations imputation when the conditional models were compatible with the same joint model but the non-informative margins condition of Proposition 1 was not satisfied. In particular, we looked for evidence of “order effects”, where the distribution from which the final imputed values of the variables were drawn differed according to the order in which the variables were updated in the chained equations sampler. If the chained equations algorithm imputes all variables from the predictive distribution of the missing data implied by a specific joint model, then order effects cannot occur [22]. Thus, the existence of order effects implies that the chained equations algorithm is not equivalent to imputing from any joint model.

              The simulation study was based on a general location model, discussed in the Theoretical results section below, with one incomplete binary variable Y and two continuous variables W 1 and W 2, where W 1 was also incompletely observed. We compared joint modelling imputation under the general location model, considered as a gold standard, with the chained equations algorithm that imputes the binary variable Y under a logistic regression model and the continuous variable W 1 under a normal linear regression model.

               We generated 500 datasets, each with a sample size of 100. For each dataset, the rows were independent, identically distributed realizations of the general location model Y ∼ Bernoulli(3/10), W_1 ∣ Y ∼ N(10 + βY, 9) and W_2 ∣ W_1, Y ∼ N(9 + 8/9 + (1/9)W_1 + βY, 8 + 8/9). The data model was a simplified version of data that can occur in the medical literature [23]. The simulation study was repeated when β, the regression coefficient for covariate Y, was set to 1 and 3. The analysis of interest was the normal linear regression of W_2 on W_1 and Y. To ensure that any observed order effects could only be due to the failure of the non-informative margins condition we considered the simplest setting, that of data missing completely at random [20] p. 16, and set the values of Y and W_1 to be missing for the first 50 individuals in the dataset. Below we describe the joint modelling imputation procedure and the chained equations algorithm that were separately applied to the same 500 datasets.
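               A minimal sketch of how one such dataset can be generated (parameter values as stated above; the function and variable names are our own):

```python
import numpy as np

def simulate_dataset(beta, n=100, n_missing=50, rng=None):
    """Generate one dataset from the general location model described above,
    with Y and W1 set missing (MCAR) for the first n_missing rows."""
    rng = rng if rng is not None else np.random.default_rng()
    y = rng.binomial(1, 3 / 10, size=n)
    w1 = rng.normal(10 + beta * y, 3)                       # Var(W1 | Y) = 9
    w2 = rng.normal(9 + 8 / 9 + (1 / 9) * w1 + beta * y,
                    np.sqrt(8 + 8 / 9))                     # Var(W2 | W1, Y) = 8 + 8/9
    y_mis = y.astype(float)
    w1_mis = w1.copy()
    y_mis[:n_missing] = np.nan
    w1_mis[:n_missing] = np.nan
    return y_mis, w1_mis, w2

y, w1, w2 = simulate_dataset(beta=1, rng=np.random.default_rng(2024))
```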

               We used the data augmentation algorithm (as described under the heading “Joint modelling imputation”) to perform joint modelling imputation under the general location model and the joint prior given in the general location example (see example 4 of the Results), setting hyperparameters τ = ν = 1/2 and κ = 3/2. The number of imputed datasets generated, the burn-in period and the number of iterations between imputed datasets were all set to 100. The analysis model was applied to each imputed dataset separately and the mean of the multiple estimates of β, the coefficient for Y, was calculated.

               In the (standard) chained equations algorithm, a logistic regression model for Y given W_1 and W_2 was first fitted to those rows of the dataset in which Y was observed. Let ψ̂_Y denote the maximum likelihood estimate of the parameters of this model and V̂ denote its associated estimated variance-covariance matrix. A draw ψ_Y* was then made from the multivariate normal approximation N(ψ̂_Y, V̂) and used to impute the missing Y values. The continuous variable W_1 was imputed using the linear regression model W_1 ∣ Y, W_2 ∼ N(λ + ξY + ϕW_2, ω) and prior distribution p(λ, ξ, ϕ, ω) ∝ ω^(−3/2).
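               The two update steps can be sketched as follows. We use statsmodels for the logistic fit purely for convenience; all function and variable names are our own, and y and w1 are assumed to hold their current (imputed) values wherever they appear as covariates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

def impute_y(y, w1, w2):
    """One update of the binary variable Y: fit a logistic regression of Y on W1 and W2
    to the rows with Y observed, draw psi_Y from its approximate multivariate normal
    posterior, then draw the missing Y values from the implied Bernoulli probabilities."""
    obs = ~np.isnan(y)
    X = sm.add_constant(np.column_stack([w1, w2]))
    fit = sm.Logit(y[obs], X[obs]).fit(disp=0)
    psi = rng.multivariate_normal(np.asarray(fit.params), np.asarray(fit.cov_params()))
    p_mis = 1.0 / (1.0 + np.exp(-(X[~obs] @ psi)))
    y = y.copy()
    y[~obs] = rng.binomial(1, p_mis)
    return y

def impute_w1(w1, y, w2):
    """One update of W1: Bayesian normal linear regression of W1 on Y and W2
    under the prior p(lambda, xi, phi, omega) proportional to omega**(-3/2)."""
    obs = ~np.isnan(w1)
    X = sm.add_constant(np.column_stack([y, w2]))
    Xo, w1o = X[obs], w1[obs]
    XtX_inv = np.linalg.inv(Xo.T @ Xo)
    coef_hat = XtX_inv @ Xo.T @ w1o
    rss = np.sum((w1o - Xo @ coef_hat) ** 2)
    omega = rss / rng.chisquare(obs.sum() - 2)       # df = n - p + 2a - 2 with p = 3, a = 3/2
    coef = rng.multivariate_normal(coef_hat, omega * XtX_inv)
    w1 = w1.copy()
    w1[~obs] = rng.normal(X[~obs] @ coef, np.sqrt(omega))
    return w1
```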

               To start off the chained equations algorithm the missing values of Y and W_1 were replaced with a random sample of their observed values. We augmented the chained equations algorithm so that, within each iteration, we fitted the analysis model immediately after updating the binary variable Y and again immediately after updating the continuous variable W_1. The simulation study focused on systematic differences between the two resulting estimates of β. Given the imputations (y^(t−1), w_1^(t−1)) from the previous iteration, iteration t of the augmented chained equations algorithm consisted of the following steps:
               1. Generate y^(t) by imputing values for the missing binary observations, conditioning on w_1^(t−1) and w_2.

               2. Linearly regress w_2 on w_1^(t−1) and y^(t) and store the estimate of the coefficient of Y, denoted by β̂_Y^(t).

               3. Generate w_1^(t) by imputing values for the missing continuous observations, conditioning on y^(t) and w_2.

               4. Linearly regress w_2 on w_1^(t) and y^(t) and store the estimate of the coefficient of Y, denoted by β̂_W1^(t).

               The chained equations algorithm was implemented with 10010 iterations. The first 10 iterations were regarded as burn-in and the estimates from these iterations discarded. The remaining 10000 estimates of β̂_Y^(t) were averaged, and likewise for β̂_W1^(t). We denote these means by β̄_Y and β̄_W1 and their difference by δ̂ = β̄_Y − β̄_W1. The quantity δ̂ can be interpreted as an estimate of the order effect for imputation in one dataset. We estimated the (Monte Carlo) standard error of δ̂ using the batch-means method, a method for computing standard errors for correlated output [24] p. 124, and calculated a 95% confidence interval from this.
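               The batch-means standard error of δ̂ can be computed along the following lines, applied to the sequence of within-iteration differences; the number of batches is our own illustrative choice.

```python
import numpy as np

def batch_means_se(draws, n_batches=100):
    """Monte Carlo standard error of the mean of a correlated sequence, estimated by
    splitting the sequence into equal-sized batches and using the spread of the batch means."""
    draws = np.asarray(draws)
    batch_size = len(draws) // n_batches
    batch_means = draws[:n_batches * batch_size].reshape(n_batches, batch_size).mean(axis=1)
    return batch_means.std(ddof=1) / np.sqrt(n_batches)

# Example: se = batch_means_se(beta_y_draws - beta_w1_draws), giving an approximate
# 95% confidence interval of delta_hat +/- 1.96 * se.
```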

               Linear discriminant analysis is an alternative way to estimate a logistic regression [25, 26]. A modified chained equations algorithm, using linear discriminant analysis on all individuals with observed Y, has been proposed as an alternative way to impute the binary variable Y [27]. Because the linear discriminant likelihood is the joint distribution of Y, W_1 and W_2, this model has the advantage of recovering some information about ψ_Y in the W margin. We repeated the simulation study using this modified chained equations algorithm.
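               A sketch of the discriminant-analysis update of Y is given below, shown in its improper-imputation form with plug-in maximum likelihood estimates for brevity (the study itself used Bayesian draws); w1 is assumed to hold its current imputed values, and the names are our own.

```python
import numpy as np

rng = np.random.default_rng(11)

def impute_y_lda(y, w1, w2):
    """Impute missing Y by linear discriminant analysis: estimate the class proportion,
    class means and pooled covariance of (W1, W2) from the rows with Y observed, then
    draw the missing Y from the implied posterior class probabilities."""
    W = np.column_stack([w1, w2])
    obs = ~np.isnan(y)
    yo, Wo = y[obs], W[obs]
    gamma = yo.mean()
    mu0 = Wo[yo == 0].mean(axis=0)
    mu1 = Wo[yo == 1].mean(axis=0)
    resid = np.where(yo[:, None] == 1, Wo - mu1, Wo - mu0)
    Sigma = resid.T @ resid / (obs.sum() - 2)               # pooled covariance estimate
    Sinv = np.linalg.inv(Sigma)

    def log_kernel(Wm, mu):                                 # log normal density up to a constant
        d = Wm - mu
        return -0.5 * np.sum((d @ Sinv) * d, axis=1)

    Wm = W[~obs]
    log_p1 = np.log(gamma) + log_kernel(Wm, mu1)
    log_p0 = np.log(1 - gamma) + log_kernel(Wm, mu0)
    p1 = 1.0 / (1.0 + np.exp(log_p0 - log_p1))
    y = y.copy()
    y[~obs] = rng.binomial(1, p1)
    return y
```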

               We assessed the sensitivity of our results by repeating the simulation study using different specifications. For joint modelling imputation we increased the number of imputed datasets generated, the burn-in period and the number of iterations between imputed datasets to 250. For the standard and modified chained equations procedures we (1) increased the burn-in period of the chained equations sampler to 1000 iterations and (2) sampled every 50th iteration, thereby reducing serial correlation (with a burn-in period of 10 iterations). To check that our results were not dependent upon our choice of prior distributions we repeated the simulation study with improper imputation procedures; that is, using maximum likelihood estimates of ψ_j instead of Bayesian draws of ψ_j from its posterior distribution. Lastly, we also repeated the simulation study with a sample size of 1000 observations.

              Results

              Theoretical results

              Equivalence of joint modelling and chained equations imputation

              In this section, we give our key result Proposition 1, which shows that, in finite samples, compatibility of the conditionals and our proposed non-informative margins condition are sufficient for chained equations and joint modelling to yield imputations from the same predictive distribution.

              Consider a joint model p (x ∣ θ) and prior p (θ). From here onwards we shall use p (.∣.) to refer to any probability distribution derived from this joint model. In particular, for each j = 1,…,R, p (x j ∣ x -j ,θ) is the conditional distribution of x j given x -j and θ implied by the joint model, and p(x -j ∣ θ) is the conditional distribution of x -j given θ. The distribution of x given θ factorizes as
               p(x ∣ θ) = p(x_j ∣ x_-j, θ) p(x_-j ∣ θ).

               Let ψ_j and ψ_-j be functions of θ such that p(x_j ∣ x_-j, θ) = p(x_j ∣ x_-j, ψ_j) and p(x_-j ∣ θ) = p(x_-j ∣ ψ_-j). That is, the distribution of x_j given x_-j and the distribution of x_-j depend on θ only through the functions ψ_j and ψ_-j, respectively, of θ. Let p(ψ_j, ψ_-j) denote the joint prior for ψ_j and ψ_-j implied by p(θ), and let p(ψ_j) and p(ψ_-j) denote the corresponding marginal priors.

              The chained equations algorithm applies the following two steps for each x j in turn:

               Step CE1 Draw ψ_j* from the distribution proportional to p(x_j^obs ∣ x_-j, ψ_j) p(ψ_j).

               Step CE2 Draw x_j^mis* from p(x_j^mis ∣ x_j^obs, x_-j, ψ_j*).

               The choice of the parameterizations ψ_j and ψ_-j does not affect the output of step CE2, but a parsimonious choice will help to make the condition of Proposition 1 hold.

               Proposition 1. Upon convergence, the chained equations algorithm defined by CE1 and CE2 and the joint model p(x ∣ θ) and prior p(θ) draw from the same predictive distribution of x^mis if, for each incomplete variable x_j, p(ψ_j, ψ_-j) = p(ψ_j) p(ψ_-j), i.e. if the joint prior distribution for ψ_j and ψ_-j factorizes into independent priors for ψ_j and ψ_-j.

               Proof. Using the condition of Proposition 1,

               p(ψ_j, ψ_-j ∣ x_j^obs, x_-j) ∝ p(x_j^obs ∣ x_-j, ψ_j) p(x_-j ∣ ψ_-j) p(ψ_j) p(ψ_-j).

               Now, integrating out ψ_-j,

               p(ψ_j ∣ x_j^obs, x_-j) ∝ p(x_j^obs ∣ x_-j, ψ_j) p(ψ_j).

               Therefore, step CE1 yields a draw from p(ψ_j ∣ x_j^obs, x_-j).

               Next, x_j^mis and ψ_-j are conditionally independent given x_j^obs, x_-j and ψ_j, so p(x_j^mis ∣ x_j^obs, x_-j, ψ_j) = p(x_j^mis ∣ x_j^obs, x_-j, ψ_j, ψ_-j), and step CE2 yields a draw from p(x_j^mis ∣ x_j^obs, x_-j, ψ_j*).

               Hence steps CE1 and CE2 together yield a draw from p(ψ_j, x_j^mis ∣ x_j^obs, x_-j), and in particular they draw x_j^mis from p(x_j^mis ∣ x_j^obs, x_-j). The latter is a full-conditional distribution corresponding to the joint density p(x) implied by the joint model. Once x_j^mis has been sampled, ψ_j*, the sampled value of ψ_j, is not used again in the chained equations algorithm. So, the application of steps CE1 and CE2 to each j in turn and then iterating is a Gibbs sampler which, at convergence, yields a draw from p(x^mis ∣ x^obs), the predictive distribution implied by the joint model and its accompanying prior.

               Comment 1. The condition of Proposition 1 does not hold if the conditional and marginal parameters ψ_j and ψ_-j are not distinct (i.e. if their joint parameter space is not the product of their separate parameter spaces), and in particular if the combined dimension of ψ_j and ψ_-j is greater than that of θ. Distinctness of parameter spaces is a property of the model p(x ∣ θ) and not of the prior p(θ). It will be used in the examples of the unsaturated multinomial distribution and the general location model to identify joint models where the condition of Proposition 1 does not hold.

               Comment 2. Heuristically, the condition of Proposition 1 says that there is no information about ψ_j in the marginal distribution of x_-j, so we call it the “non-informative margins” condition. When such information does exist, it is used by the joint modelling sampler but not by the chained equations algorithm, so they may draw from different distributions. Our simulation study will show that this occurs in an example by demonstrating order effects.

               Comment 3. Because the non-informative margins condition has only been shown to be sufficient, the chained equations algorithm defined by CE1 and CE2 and the joint model p(x ∣ θ) and prior p(θ) could potentially still draw from the same predictive distribution of x^mis even if this condition is not satisfied.

               Comment 4. This proof holds for improper prior distributions provided the posterior distributions are proper.

               Comment 5. When the non-informative margins condition holds true the chained equations algorithm is a Gibbs sampler, and so order effects cannot occur [22].

              Example 1: Multivariate normal

               Consider the multivariate normal joint model X ∼ N(μ, Σ) with parameter θ = (μ, Σ) and prior distribution p(θ) ∝ ∣Σ∣^(−κ) for a hyperparameter κ. We show that the corresponding chained equations algorithm, which imputes under a set of normal linear regression models, satisfies the non-informative margins condition of Proposition 1 (and hence draws from the same joint model as joint modelling imputation).

               For each j we partition the mean vector μ as μ = (μ_j, μ_-j) and the covariance matrix Σ as

               Σ = ( σ_j     σ_j,-j )
                   ( σ_-j,j  Σ_-j   ),

               such that X_j ∼ N(μ_j, σ_j) and X_-j ∼ N(μ_-j, Σ_-j). The conditional distribution of X_j given X_-j is the normal linear regression model X_j ∣ X_-j ∼ N(α_j + β_j^T X_-j, ω_j), where β_j = Σ_-j^(−1) σ_-j,j, α_j = μ_j − β_j^T μ_-j and ω_j = σ_j − σ_j,-j Σ_-j^(−1) σ_-j,j [2] p. 157. Using our notation for chained equations imputation (see under the subsection “Chained equations imputation” of the Methods section), ψ_j = (α_j, β_j, ω_j) and ψ_-j = (μ_-j, Σ_-j).
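               For concreteness, the conditional-regression parameters implied by these partition formulas can be computed as follows (a small helper we add purely for illustration; it is not part of the original paper):

```python
import numpy as np

def conditional_regression(mu, Sigma, j):
    """Compute (alpha_j, beta_j, omega_j), the intercept, coefficient vector and residual
    variance of the regression of X_j on X_{-j}, from mu (1-D array) and Sigma (2-D array)."""
    idx = [k for k in range(len(mu)) if k != j]
    s_j = Sigma[j, j]
    s_j_rest = Sigma[j, idx]                      # sigma_{j,-j}
    S_rest = Sigma[np.ix_(idx, idx)]              # Sigma_{-j}
    beta = np.linalg.solve(S_rest, s_j_rest)      # Sigma_{-j}^{-1} sigma_{-j,j}
    alpha = mu[j] - beta @ mu[idx]
    omega = s_j - s_j_rest @ beta
    return alpha, beta, omega
```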

               The joint prior for ψ_j and ψ_-j derived from p(θ) is proportional to ∣Σ∣^(−κ) multiplied by the Jacobian of the transformation from θ to (ψ_j, ψ_-j), which depends only on Σ_-j [2] p. 158-159. So, using a standard result from matrix algebra for the determinant of a partitioned matrix,

               ∣Σ∣ = ∣Σ_-j∣ (σ_j − σ_j,-j Σ_-j^(−1) σ_-j,j) = ∣Σ_-j∣ ω_j,

               and hence p(ψ_j, ψ_-j) factorizes into a function of ψ_j alone (through ω_j) multiplied by a function of ψ_-j alone (through Σ_-j).

              Therefore, the non-informative margins condition of Proposition 1 is satisfied.

              Example 2: Saturated multinomial distribution

               We now show, for joint modelling imputation under a saturated multinomial model with a Dirichlet prior for θ, that the corresponding chained equations algorithm satisfies the non-informative margins condition of Proposition 1.

               Consider K categorical variables X = (X_1,…,X_K), where each X_j takes possible values 1,…,I_j (j = 1,…,K). The variables X define a K-way contingency table. Let c = (c_1,…,c_K) denote a generic cell of the contingency table, θ_c denote the cell probability pr(X = c) and 𝒞 denote the set of all cells of the contingency table. The joint distribution of X is a multinomial distribution with parameter θ = (θ_c : c ∈ 𝒞) and index equal to 1.

               Summing the table counts over variable X_j produces a collapsed contingency table defined by the variables X_-j. Let c_-j = (c_1,…,c_{j−1},c_{j+1},…,c_K) denote a generic cell of the collapsed table, θ_{c_-j} denote the cell probability pr(X_-j = c_-j) and 𝒞_-j denote the set of all cells of the collapsed table. The marginal distribution of X_-j is multinomial with parameter ψ_-j = (θ_{c_-j} : c_-j ∈ 𝒞_-j), where θ_{c_-j} = Σ_{c_j=1}^{I_j} θ_{(c_j, c_-j)}. The conditional distribution of X_j given X_-j = c_-j is the multinomial distribution with parameters (θ_{(1, c_-j)}/θ_{c_-j}, …, θ_{(I_j, c_-j)}/θ_{c_-j}). So, the full set of parameters for the conditional distribution of X_j given X_-j is ψ_j = {θ_{(c_j, c_-j)}/θ_{c_-j} : c_j = 1,…,I_j; c_-j ∈ 𝒞_-j}.

               If the prior distribution for θ is Dirichlet with hyperparameter α = {α_c : c ∈ 𝒞}, then the implied prior distributions for ψ_-j and ψ_j are independent: the prior for ψ_-j is Dirichlet with hyperparameter α_-j = {α_{c_-j} : c_-j ∈ 𝒞_-j}, where α_{c_-j} = Σ_{c_j=1}^{I_j} α_{(c_j, c_-j)}, and the prior distribution for ψ_j is the product of the set of independent Dirichlet distributions with hyperparameters {α_{(c_j, c_-j)} : c_j = 1,…,I_j}, one for each c_-j ∈ 𝒞_-j, each of which is a subset of α [2] p. 256. Since the prior for (ψ_j, ψ_-j) can be factored into independent distributions for ψ_j and ψ_-j, the non-informative margins condition of Proposition 1 is satisfied.
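               The factorization used here is the standard aggregation property of the Dirichlet distribution, restated (by us, for clarity) in the notation above: if θ ∼ Dirichlet({α_c : c ∈ 𝒞}) then

               ψ_-j = (θ_{c_-j} : c_-j ∈ 𝒞_-j) ∼ Dirichlet({α_{c_-j} : c_-j ∈ 𝒞_-j})

               and, independently for each c_-j ∈ 𝒞_-j,

               (θ_{(1, c_-j)}/θ_{c_-j}, …, θ_{(I_j, c_-j)}/θ_{c_-j}) ∼ Dirichlet(α_{(1, c_-j)}, …, α_{(I_j, c_-j)}),

               with all of these random vectors mutually independent.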

              Example 3: Unsaturated multinomial distribution

               When the joint model is an unsaturated multinomial model, we give an example where the conditional and marginal parameters ψ_j and ψ_-j are not distinct (see comment 1 of Proposition 1). Consequently the non-informative margins condition of Proposition 1 is not satisfied.

               Consider K categorical variables X = (X_1,…,X_K) as described in the saturated multinomial example. Assume that all cell probabilities are positive, θ_c > 0 for all c ∈ 𝒞, to ensure that every multinomial distribution considered can be written as a log linear model and that all possible conditional distributions exist [28] p. 202. Let the joint model be the hierarchical log linear model that contains all two-way factors between the K variables and no higher order factors. We shall refer to this as the all two-way factor hierarchical model. Under this model, for any j the conditional distribution of X_j given X_-j follows a multinomial logistic regression model (or a logistic regression model when X_j is binary) in which the variables X_-j enter as main effects only (i.e. no interaction terms).

               In generating class notation [29] the all two-way factor hierarchical model is written as [12]…[1K]…[(K−1)K]. Any hierarchical log linear model for X can be represented by an undirected graph in which the graph vertices are the variables X_1,…,X_K and two vertices X_g and X_h are connected by an edge if and only if the log linear model contains two-factor or higher order terms involving variables X_g and X_h. Any model containing all two-factor terms is therefore represented by the complete graph. Asmussen and Edwards [29] state that two vertices in a graph are adjacent if there is an edge between them. For any subset a of the vertices, Asmussen and Edwards define the boundary of a to be the set of vertices that are not in a but that are adjacent to one or more vertices in a. So, for the complete graph, the boundary of X_j is X_-j. Theorem 2.3 of [29] states that a hierarchical log linear model is collapsible onto X_-j if and only if the boundary of X_j is contained in a generator of the hierarchical log linear model. Further, Theorem 4.1 of [29] states that if a hierarchical log linear model is not collapsible onto X_-j, then the parameters ψ_j and ψ_-j are not distinct. For any j, X_-j, the boundary of X_j in the complete graph, contains all of the remaining K − 1 variables. When K ≥ 4, X_-j is not contained in any of the generators [12]…[1K]…[(K−1)K] of the all two-way factor hierarchical model, and hence the log linear model is not collapsible onto X_-j. Therefore, the parameters ψ_j and ψ_-j are not distinct, and so the non-informative margins condition of Proposition 1 is not satisfied when K ≥ 4.

              Example 4: General location model

              We give an example of a chained equations algorithm derived from joint modelling under a general location model that does not satisfy the non-informative margins condition of Proposition 1. Our simulation study is based on this example.

               Suppose that the data on each individual consist of one incomplete binary variable Y and K − 1 continuous variables W = (W_1,…,W_{K−1})^T, where one or more of the continuous variables are also incomplete. Let the joint distribution of X = (Y, W)^T be the general location model Y ∼ Bernoulli(γ) and W ∣ Y ∼ N(μ_0 + μ_1 Y, Σ), for unknown parameters θ = (γ, μ_0, μ_1, Σ) [30]. Let the joint prior for θ be p(θ) ∝ γ^(τ−1) (1 − γ)^(ν−1) ∣Σ∣^(−κ) with hyperparameters τ, ν > 0 and κ.

              From the multivariate normal example above it is straightforward to show that the non-informative margins condition of Proposition 1 holds for imputing any W j .

               The conditional distribution of Y given W, p(Y ∣ W, ψ_Y), is the logistic regression model with covariates W [25]. The marginal distribution of W, p(W ∣ ψ_-Y), can be written as a mixture of normal distributions

               p(W ∣ ψ_-Y) = (1 − γ) N(W; μ_0, Σ) + γ N(W; μ_0 + μ_1, Σ).

               This cannot be parameterized more parsimoniously than ψ_-Y = (γ, μ_0, μ_1, Σ) = θ. As ψ_Y is a function of θ, it is determined by ψ_-Y.
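               For instance, writing out the conditional probability implied by the mixture above (a standard calculation that we add here only for illustration) gives

               logit pr(Y = 1 ∣ W) = log(γ/(1 − γ)) − μ_1^T Σ^(−1)(μ_0 + μ_1/2) + (Σ^(−1) μ_1)^T W,

               so the intercept and coefficient vector of the logistic regression are explicit functions of θ = (γ, μ_0, μ_1, Σ).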

               Consequently, given ψ_-Y the parameters ψ_Y of the logistic regression model are fully determined, and so ψ_-Y and ψ_Y are not distinct. Therefore, as discussed in comment 1 above, the non-informative margins condition of Proposition 1 does not hold.

               Using the same argument, the non-informative margins condition of Proposition 1 does not hold for a chained equations algorithm derived from joint modelling under the restricted general location model, with cell probabilities restricted by the all two-way factor hierarchical model (discussed in the unsaturated multinomial example) and cell means restricted to be a linear function of the categorical variables.

              Simulation study results

              This section reports the results of the simulation study, where the chained equations conditional models were compatible with the same joint model but the non-informative margins condition of Proposition 1 was not satisfied.

               Figures 1a and 1b show, for the first 30 of the 500 datasets, the value of δ̂ (the estimate of the order effect in one dataset) along with its 95% confidence interval, for the (standard) chained equations procedure (i.e. binary variable Y imputed under the logistic regression model). In a number of the datasets the 95% confidence intervals did not cross zero. Thus, there was clear evidence of order effects, with the magnitude of such effects varying between datasets. Such statistically significant evidence of an order effect occurred in 164 and 386 of the 500 datasets for β = 1 and β = 3 respectively. The range of absolute values of δ̂ was larger for β = 3 than for β = 1. These results confirm that the chained equations procedure was not equivalent to any joint model procedure.
               [Figure 1 image omitted]
               Figure 1
               Four forest plots of the posterior mean differences δ̂. Each panel is a forest plot of δ̂ for the first 30 datasets, with 95% confidence intervals calculated using the Monte Carlo standard error. Panels (a) and (b) correspond to the binary variable Y imputed under the logistic regression model, with β = 1 and β = 3 respectively. Panels (c) and (d) correspond to Y imputed under the linear discriminant model, with β = 1 and β = 3 respectively.

               The averages over the 500 datasets of ∣δ̂∣ (the magnitude of the order effect), β̄_Y and β̄_W1 are shown in Table 1. Consider the results for the chained equations algorithm (labelled LR). In keeping with Figure 1, the average magnitude of the order effect was larger for β = 3 than for β = 1. The means of β̄_Y and β̄_W1 did not differ systematically, consistent with the direction of the order effect being arbitrary and dataset dependent. Estimates β̄_Y and β̄_W1 appeared to be equally good estimates of β.
               Table 1
               Over 500 datasets, average of the complete case estimates, joint modelling imputation estimates and values of β̄_Y, β̄_W1 and ∣δ̂∣ for the chained equations algorithm and the modified chained equations algorithm, with confidence intervals [mean ± 1·96 × (standard deviation ÷ √500)]

               β | β̂ (complete case)† | β̂ (joint modelling)‡ | Method | β̄_Y | β̄_W1 | Mean ∣δ̂∣
               1 | 1·01 [0·93, 1·09] | 0·97 [0·89, 1·05] | LR∇  | 0·98 [0·90, 1·06] | 0·98 [0·90, 1·06] | 0·0040 [0·0034, 0·0046]
                 |                   |                   | LDA¶ | 0·97 [0·89, 1·05] | 0·97 [0·89, 1·05] | 0·0022 [0·0020, 0·0024]
               3 | 3·00 [2·91, 3·09] | 2·90 [2·82, 2·98] | LR∇  | 2·93 [2·85, 3·02] | 2·93 [2·85, 3·02] | 0·0166 [0·0145, 0·0188]
                 |                   |                   | LDA¶ | 2·89 [2·81, 2·98] | 2·89 [2·81, 2·97] | 0·0067 [0·0062, 0·0073]

               † complete case analysis estimate of β from the observed data only; ‡ estimate of β from joint modelling imputation; ∇ logistic regression; ¶ linear discriminant analysis.

               The forest plots of δ̂ with 95% confidence intervals corresponding to the modified chained equations algorithm (i.e. Y imputed under the linear discriminant model) are shown in Figures 1c and 1d. Order effects were smaller than in Figures 1a and 1b, but were still present because the modified algorithm did not use information about W_1 and W_2 when Y was missing. Out of the 500 datasets, the number that showed statistically significant evidence of an order effect was 113 for β = 1 and 330 for β = 3.

               From Table 1, the complete case estimates of β were unbiased. When β = 3 the linear discriminant analysis values β̄_Y and β̄_W1, and the joint modelling imputation estimate of β, were slightly biased towards the null. This bias was due to the prior (which was the same for imputation under the linear discriminant analysis model and joint modelling imputation under the general location model) and it disappeared when the sample size was increased to 1000 observations and when the simulation study was conducted using improper imputation, i.e. using maximum likelihood estimates instead of Bayesian draws of parameters. For joint modelling imputation and the standard and modified chained equations procedures, changing their specifications (e.g. larger number of iterations, longer burn-in period) gave the same pattern of results as above (results not shown but available on request from the authors).

              In preliminary simulation studies, when the non-informative margins condition was satisfied the results were consistent with a zero order effect (results not shown but available on request from the authors).

              Discussion

               We have defined a non-informative margins condition which, together with compatibility of the conditional models, we have proved is sufficient for a chained equations algorithm to impute missing data from the predictive distribution of the missing data implied by the joint model and its prior distribution. Also, we have shown that compatibility of the conditional models is not alone a sufficient condition. In a scenario where the conditional models were compatible but the non-informative margins condition failed, our simulation study showed that the distribution from which the final imputed values of the variables were drawn differed, in a dataset-dependent manner, according to the order in which the variables were updated in the chained equations sampler.

              In work that is complementary to the finite-sample results presented in this paper, Liu et al. [19] identified sufficient conditions for chained equations imputation and imputation under a fully Bayesian model to be asymptotically equivalent; that is, for the supremum of the difference between the two imputation distributions to converge to zero as the sample size tends to infinity. This implies that when the non-informative margins condition is not satisfied but the conditional models are compatible with the same joint model, the order effects identified in our simulation study will disappear as the sample size tends to infinity.

              In our simulation study the average magnitude of the order effects was small and did not induce bias. Given that chained equations imputation is a widely used approach to imputation, these results are somewhat reassuring. However, the scope of these simulations was limited and it remains possible that chained equations imputation could lead to more substantial bias in different situations; for example, when there are many partially observed variables.

              When the non-informative margins condition does not hold we expect some loss of efficiency in general (because some information is discarded). However, in our simulation study we did not detect any sizable loss of efficiency. The issue of variance estimation for chained equations imputation is beyond the scope of this paper.

               The advantage of chained equations imputation is that we do not need to specify the joint distribution of the variables. In cases where it is not known that there is a joint distribution, several methods for checking compatibility have been proposed (e.g. [16, 31–35]). In practice, these methods are either limited to discrete distributions or are difficult to apply for multivariate distributions of more than 2 or 3 dimensions. This means that it may not be possible to check that the conditionals are compatible with the same joint model or that our non-informative margins condition holds true. van Buuren and other authors [3, 18], in the examples they considered, concluded that chained equations is a robust approach even when the set of conditionals are not compatible with the same joint model. The findings of our simulation study support this body of work. Other studies [3, 4, 36, 37] have compared chained equations and joint modelling, when missingness is multivariate, nonmonotone and ignorable, in settings which reflect real data (e.g. mixture of different types of variables, non-linear relationships and interactions between variables, semi-continuous variables). None of these studies has reported substantial differences in the performances of joint modelling imputation and chained equations imputation. Nonetheless many authors emphasize the need for further understanding of the theoretical underpinnings of the chained equations approach and the establishment of the robustness of the chained equations method (e.g. [3, 7, 14, 38]).

              Conclusions

               In finite samples, compatibility of the conditionals and our non-informative margins condition are sufficient for chained equations and joint modelling to yield imputations from the same predictive distribution. Furthermore, our simulation study demonstrated that, even in a simple setting, a chained equations procedure that does not satisfy the non-informative margins condition is not necessarily equivalent to a joint model procedure, even when its conditional models are compatible.

              When conditionals are incompatible or the non-informative margins condition is not satisfied, the distribution from which the imputed values are drawn can differ according to the order in which the variables are updated in the chained equations sampler, thereby introducing order effects.

              Given the widespread use of chained equations imputation in medical research for datasets with different types of variables, researchers must be aware that order effects are likely to be ubiquitous. As noted by van Buuren [3], further work is needed to verify the robustness of chained equations to incompatibility of the conditional models in more general and realistic settings. Equally, future work could evaluate the robustness of chained equations imputation when the sample size is small-to-moderate, the conditionals are compatible and the non-informative margins condition is not satisfied.

              Declarations

              Acknowledgements

              This work was supported by the Medical Research Council [Grant G0900724, Unit Programme number U105260558].

              Authors’ Affiliations

(1) School of Social and Community Medicine, University of Bristol
(2) MRC Biostatistics Unit, Institute of Public Health
(3) London School of Hygiene and Tropical Medicine
(4) MRC Clinical Trials Unit

              References

1. Rubin DB: Multiple Imputation for Nonresponse in Surveys. New York: John Wiley & Sons, Inc; 1987.
2. Schafer JL: Analysis of Incomplete Multivariate Data. London: Chapman & Hall; 1997.
3. van Buuren S: Multiple imputation of discrete and continuous data by fully conditional specification. Stat Methods Med Res 2007, 16:219–242.
4. Raghunathan TE, Lepkowski JM, van Hoewyk J, Solenberger P: A multivariate technique for multiply imputing missing values using a sequence of regression models. Surv Methodol 2001, 27:85–95.
5. Gelman A, Raghunathan TE: [Conditionally specified distributions: an introduction]: comment. Stat Sci 2001, 16:268–269.
6. van Buuren S, Boshuizen HC, Knook DL: Multiple imputation of missing blood pressure covariates in survival analysis. Stat Med 1999, 18:681–694.
7. van Buuren S, Groothuis-Oudshoorn K: mice: Multivariate Imputation by Chained Equations in R. J Stat Softw 2011, 45:1–67.
8. Morgenstern M, Wiborg G, Isensee B, Hanewinkel R: School-based alcohol education: results of a cluster-randomized controlled trial. Addiction 2009, 104:402–412.
9. Mueller B, Cummings P, Rivara F, Brooks M, Terasaki R: Injuries of the head, face, and neck in relation to ski helmet use. Epidemiology 2008, 19:270–276.
10. Nash D, Katyal M, Brinkhof M, Keiser O, May M, Hughes R, Dabis F, Wood R, Sprinz E, Schechter M, Egger M: Long-term immunologic response to antiretroviral therapy in low-income countries: a collaborative analysis of prospective studies. AIDS 2008, 22:2291–2302.
11. Souverein O, Zwinderman A, Tanck T: Multiple imputation of missing genotype data for unrelated individuals. Ann Hum Genet 2006, 70:372–381.
12. Huo D, Adebamowo C, Ogundiran T, Akang E, Campbell O, Adenipekun A, Cummings S, Fackenthal J, Ademuyiwa F, Ahsan H, Olopade O: Parity and breastfeeding are protective against breast cancer in Nigerian women. Br J Cancer 2008, 98:992–996.
13. Wiles N, Jones G, Haase A, Lawlor D, Macfarlane G, Lewis G: Physical activity and emotional problems amongst adolescents. Soc Psychiatry Psychiatr Epidemiol 2008, 43:765–772.
14. Azur M, Stuart E, Frangakis C, Leaf P: Multiple imputation by chained equations: what is it and how does it work? Int J Methods Psychiatr Res 2011, 20:40–49.
15. White IR, Royston P, Wood A: Multiple imputation using chained equations: issues and guidance for practice. Stat Med 2011, 30:377–399.
16. Arnold BC, Press SJ: Compatible conditional distributions. J Am Statist Assoc 1989, 84:152–156.
17. Heckerman D, Chickering DM, Meek C, Rounthwaite R, Kadie C: Dependency networks for inference, collaborative filtering, and data visualization. J Mach Learn Res 2000, 1:49–75.
18. van Buuren S, Brand JPL, Groothuis-Oudshoorn CGM, Rubin DB: Fully conditional specification in multivariate imputation. J Stat Comput Simulat 2006, 76:1049–1064.
19. Liu J, Gelman A, Hill J, Su Y, Kropko J: On the stationary distribution of iterative imputations. Biometrika 2013. doi:10.1093/biomet/ast044
20. Little RJA, Rubin DB: Statistical Analysis with Missing Data. New York: John Wiley & Sons, Inc; 2002.
21. Tanner MA, Wong WH: The calculation of posterior distributions by data augmentation. J Am Statist Assoc 1987, 82:528–540.
22. Arnold B, Castillo E, Sarabia J: Conditionally specified distributions: an introduction. Stat Sci 2001, 16:249–265.
23. Kirkwood B, Sterne J: Essential Medical Statistics. Hoboken, New Jersey, US: Wiley-Blackwell; 2003.
24. Albert J: Bayesian Computation with R. Dordrecht, Heidelberg, London, New York: Springer; 2009.
25. Efron B: The efficiency of logistic regression compared to normal discriminant analysis. J Am Statist Assoc 1975, 70:892–898.
26. Cox D, Snell E: Analysis of Binary Data. 2nd edition. London, UK: Chapman and Hall; 1989.
27. van Buuren S, Oudshoorn C: Multivariate Imputation by Chained Equations (MICE V1.0 User's Manual). 2000. http://www.stefvanbuuren.nl/publications/MICE%20V1.0%20Manual%20TNO00038%202000.pdf
28. Whittaker J: Graphical Models in Applied Multivariate Statistics. New York: John Wiley & Sons, Inc; 1990.
29. Asmussen S, Edwards D: Collapsibility and response variables in contingency tables. Biometrika 1983, 70:567–578.
30. Olkin I, Tate RF: Multivariate correlation models with mixed discrete and continuous variables. Ann Math Stat 1961, 32:448–465.
31. Arnold B, Castillo E, Sarabia J: Compatibility of partial or complete conditional probability specifications. J Stat Plann Inference 2004, 123:133–159.
32. Ip E, Wang Y: Canonical representation of conditionally specified multivariate discrete distributions. J Multivariate Anal 2009, 100:1282–1290.
33. Tian G, Tan M, Ng K, Tang M: A unified method for checking compatibility and uniqueness for discrete conditional distributions. Commun Stat: Theory Methods 2009, 38:115–129.
34. Chen H: Compatibility of conditionally specified models. Stat Probability Lett 2010, 80:670–677.
35. Kuo K, Wang Y: A simple algorithm for checking compatibility among discrete distributions. Comput Stat Data Anal 2011, 55:2457–2462.
36. Horton N, Kleinman K: Much ado about nothing: a comparison of missing data methods and software to fit incomplete data regression models. Am Stat 2007, 61:79–90.
37. Yu LM, Burton A, Rivero-Arias O: Evaluation of software for multiple imputation of semi-continuous data. Stat Methods Med Res 2007, 16:243–258.
38. Kenward M, Carpenter J: Multiple imputation: current perspectives. Stat Methods Med Res 2007, 16:199–218.
Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/14/28/prepub

              Copyright

              © Hughes et al.; licensee BioMed Central Ltd. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.