A framework for power analysis using a structural equation modelling procedure
© Miles 2003
Received: 28 May 2003
Accepted: 11 December 2003
Published: 11 December 2003
This paper demonstrates how structural equation modelling (SEM) can be used as a tool to aid in carrying out power analyses. For many complex multivariate designs that are increasingly being employed, power analyses can be difficult to carry out, because the software available lacks sufficient flexibility.
Satorra and Saris developed a method for estimating the power of the likelihood ratio test for structural equation models. Whilst the Satorra and Saris approach is familiar to researchers who use structural equation modelling, it is less well known amongst other researchers. Many other multivariate statistical tests can be expressed as structural equation models, and therefore the Satorra and Saris approach to power analysis can be applied to them.
The covariance matrix, along with a vector of means, relating to the alternative hypothesis is generated. This represents the hypothesised population effects. A model (representing the null hypothesis) is then tested in a structural equation model, using the population parameters as input. An analysis based on the chi-square of this model can provide estimates of the sample size required for different levels of power to reject the null hypothesis.
The SEM based power analysis approach may prove useful for researchers designing research in the health and medical spheres.
Structural equation modelling (SEM) was developed from work in econometrics (simultaneous equation models; see, for example, Wansbeek and Meijer) and latent variable models from factor analysis [3, 4]. Structural equation modelling is an enormously flexible technique – it is possible to use a structural equation modelling approach to carry out direct equivalents of many analyses, including (but not limited to) ANOVA, correlation, ANCOVA, multiple regression, multivariate analysis of variance, and multivariate regression. This flexibility is exploited in the approach set out in this article.
A necessarily very brief introduction to the logic of structural equation modelling is presented here – for a more thorough introduction to the basics of structural equation modelling the reader is directed towards one of the many good introductory texts (Steiger has recently reviewed several such texts). For more details on the statistical and mathematical aspects of structural equation modelling, the reader is directed toward texts by Bollen (a second edition of this text is in press), Wansbeek and Meijer, and Jöreskog.
The data to be analysed in a structural equation model comprise the observed covariance matrix S, and may include the vector of means M. If k represents the number of variables in the dataset to be analysed, the number of non-redundant elements (p) is given by:
p = k (k + 1)/2 + k
This formula gives the number of non-redundant elements in the covariance matrix (S) and vector of means (M).
However, many models exclude the mean vector, in which case the number of non-redundant elements in the covariance matrix is given by:
p = k (k + 1)/2
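This counting rule is easy to check mechanically. A small Python sketch (the function name is illustrative):

```python
def non_redundant_elements(k: int, include_means: bool = False) -> int:
    """Number of non-redundant elements supplied by k observed variables:
    k*(k+1)/2 variances and covariances, plus k means if modelled."""
    p = k * (k + 1) // 2
    if include_means:
        p += k
    return p

# Five observed variables, covariances only:
print(non_redundant_elements(5))        # 15
# The same five variables when the mean vector is also modelled:
print(non_redundant_elements(5, True))  # 20
```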
The model is a set of r parameters, fixed to certain values (usually 0 or 1), constrained to be equal to one another, or allowed to be free. The estimated parameters of the model are used to calculate an implied covariance matrix, Σ, and an implied vector of means. An iterative search is carried out which attempts to minimise the discrepancy function (F), a measure of the difference between S and Σ. The maximum likelihood discrepancy function (for covariances only) is given by:
F = log|Σ| + tr(SΣ⁻¹) − log|S| − k
The discrepancy function, multiplied by N - 1, follows a χ2 distribution, with degrees of freedom (df) equal to p - r. The value of the discrepancy function multiplied by N - 1 is usually referred to as the χ2 test of the model.
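To make the formula concrete, the discrepancy can be evaluated for a small numerical example. The sketch below (pure Python, using the closed-form 2 × 2 inverse; all names are illustrative) computes F for a population correlation of 0.3 tested against an implied matrix that fixes the covariance to zero:

```python
import math

def ml_discrepancy_2x2(S, Sigma):
    """ML discrepancy F = log|Sigma| + tr(S Sigma^-1) - log|S| - k
    for 2x2 matrices (so k = 2), using the closed-form 2x2 inverse."""
    def det(M):
        return M[0][0] * M[1][1] - M[0][1] * M[1][0]
    d = det(Sigma)
    inv = [[ Sigma[1][1] / d, -Sigma[0][1] / d],
           [-Sigma[1][0] / d,  Sigma[0][0] / d]]
    # tr(S * inv), written out elementwise
    trace = (S[0][0] * inv[0][0] + S[0][1] * inv[1][0] +
             S[1][0] * inv[0][1] + S[1][1] * inv[1][1])
    return math.log(d) + trace - math.log(det(S)) - 2

# Population correlation 0.3; null model fixes the covariance to 0:
S = [[1.0, 0.3], [0.3, 1.0]]
Sigma = [[1.0, 0.0], [0.0, 1.0]]
F = ml_discrepancy_2x2(S, Sigma)
print(round(F, 4))  # 0.0943, i.e. -ln(1 - 0.3**2)
```

Multiplying this F by N − 1 gives the χ2 statistic (here, with 1 df, since one covariance is restricted).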
In addition to the χ2 test, standard errors can be calculated, and hence t-values (although these values are referred to as t-scores, they are more properly described as asymptotic z-scores) and probability values; these can be used to test the statistical significance of the difference between a parameter value and any hypothesised population value (most commonly zero).
When p > r, the model is referred to as over-identified; in this case it will not, in general, be possible to find values for the r parameters that ensure that S = Σ. However, where r = p it is possible (given that the correct parameters are estimated) to find values for the parameters such that S = Σ. This type of model, where r = p, is described as just identified; the df of the model will equal zero, and the value of χ2 will also equal zero.
It is possible to use the standard errors of the parameter estimates to test the statistical significance of the values of these parameters. In the next section, we shall see how it is also possible to use the χ2 test to evaluate hypotheses regarding the value of these parameters in a model. This is most easily described using path diagrams as a tool to represent the parameters in a structural equation model.
The commonest representation of a structural equation model is in a path diagram. In a path diagram, a box represents a variable, a straight, single-headed arrow represents a regression path, and a curved arrow represents a correlation or covariance (in addition, an ellipse represents a latent, or unobserved, variable; the methods described in this paper do not use latent variables, however the interested reader is directed towards a recent chapter by Bollen). Different conventions exist for path diagrams – throughout this paper the RAM specification will be used (Neale, Boker, Xie and Maes give a full description of this approach). In the RAM specification, a double-headed curved arrow that curves back to its starting point represents the variance of the variable (the covariance of a variable with itself being equal to its variance). In the case of an endogenous (dependent) variable, this represents the residual, or unexplained, variance of the variable.
A multiple regression model is shown in Figure 3. Here there are 4 predictor variables (x1 to x4) which are used to predict an outcome variable (y). There are 5 variables in the model, and therefore the data comprise 15 elements (5 variances and 10 covariances), and p = 15. The model comprises 4 variances of the predictor variables, 1 unexplained variance in the outcome variable, 4 regression weights, and 6 correlations amongst the predictor variables. Therefore r = p = 15, and the model is just identified. Two different kinds of parameters are usually tested for statistical significance in this model. First, the unexplained variance in the outcome variable is tested against 1 (if standardised) to determine whether the predictions that can be made from the predictor variables are likely to be better than chance. This is the equivalent of the ANOVA test of R2 in multiple regression. Second, the values of the individual regression estimates can be tested for statistical significance.
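The parameter count in the paragraph above can be written out explicitly (a sketch of the arithmetic only):

```python
# Parameter count r for the just-identified regression model with
# 4 predictors and 1 outcome, as described in the text.
k_predictors = 4
r = (
    k_predictors                               # variances of the predictors
    + 1                                        # residual variance of the outcome
    + k_predictors                             # regression weights
    + k_predictors * (k_predictors - 1) // 2   # correlations among predictors
)
print(r)  # 15, equal to p, so the model is just identified
```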
The logic extends to the case of a multivariate ANOVA or regression, which has multiple outcome variables. The multivariate approach allows each of the individual paths to be estimated and tested for statistical significance; it also allows groups of paths to be tested simultaneously, using the χ2 difference test – the multivariate F test in MANOVA is equivalent to the simultaneous test of all of the paths from the predictor variables to the outcome variables. The multivariate test can prove more powerful than separate univariate analyses.
When a model is restricted – that is, not all paths are free to be estimated – it becomes over-identified. The difference between the data implied by the model and the observed data can be tested for statistical significance using the χ2 test. Each restriction in the model adds 1 df, and this can be used to interpret the difference between the implied and the observed data.
In a multivariate regression, a set of outcomes is regressed on a set of predictors. As well as including a test of each parameter, a multivariate test is also carried out, testing the effect of each predictor on the set of outcome variables. Again using data from Wolfradt et al., I carried out a multivariate regression (using the SPSS GLM procedure). The three predictor variables were mother's warmth, mother's rules, and mother's demands. The two outcomes were active coping and passive coping.
Four models were estimated. In the first, all parameters were free to be estimated. In the second, the two paths from warmth were restricted to zero; in the third, the two paths from rules were restricted to zero; and in the final model, the two paths from demands were restricted to zero. The first model has 0 df, and hence the implied and observed covariance matrices are always equal, and the χ2 is equal to zero. Each of the other models has 2 df, because two restrictions were added to the model.
Table: Results of the multivariate F test (df = 2, 263) and χ2 difference test (df = 2), for the multivariate regression.
In addition to relationships between variables, it is also possible to incorporate means (or, in the case of endogenous variables, intercepts) into structural equation models. In the RAM specification, a mean is represented by a triangle. The path diagram shown in Figure 4 effectively says "estimate the mean of the variable x". Where there are no restrictions, the estimated mean of x in the model will be the mean of x in the data. However, it is possible to place restrictions on the model and test them, again with a χ2 test. A simple example would be to restrict the mean to a particular value – this would be the equivalent of a one sample t-test. A further example, shown in Figure 5, is a paired samples t-test. Here, the means are represented by the parameter a – both paths are given the same label, meaning that the two means are constrained to be equal. Again, this restriction can be tested with a χ2 test.
Table: Means and covariances of warmth, demands and rules (variances are shown on the diagonal).
Table: Comparison of results from the GLM test carried out using SPSS and the equivalent test in the SEM framework, using Mx. Hypothesis tested – F (df):
1. μ1 = μ2 = μ3 (rules = demands = warmth): 16.530 (2, 18)
2. μ1 = μ2 (rules = demands): 34.5 (1, 19)
3. μ1 = μ3 (rules = warmth): 19.29 (1, 19)
4. μ2 = μ3 (demands = warmth): 3.32 (1, 19)
The SEM analyses considered so far have involved only single groups; however, it is possible to carry out analyses across groups, where the parameters in two (or more) groups can be constrained to be equal. This multiple group approach can be used to analyse data from a mixed design, with a repeated measures factor and an independent groups factor. The model is shown in Figure 7 (again using the data from Wolfradt et al.). There are data from two groups, males and females. Each group has measures taken on two variables (x1 and x2). The parameter labelled b in the males represents the intercept of the two variables x1 and x2, which in this case is the mean of x2; the parameter labelled a is the slope parameter, or the difference between the means of x1 and x2. There are three separate hypotheses to test:
1) Main effect of sex.
2) Main effect of type (x1 vs x2).
3) Interaction effect of type and sex.
Table: Results of the multivariate F test (df = 1, 266) and χ2 difference test (df = 1), for the mixed design. The χ2 for each effect is given by a difference between models:
Sex: Model 2 − Model 1
Type (rules vs demands): Model 3 − Model 2
Sex × Type: Model 3 − Model 0
Model 1: b = d, a = c = 0. This model has three restrictions, and hence 3 df.
Model 2: a = c = 0. This model has two df. By removing the b = d restriction, the means of the two measures are allowed to vary across the groups.
Model 3: a = c. This model has one df. By removing the a = c = 0 restriction, the means of rules and demands are allowed to vary. However, because the a = c restriction is still in place, the variation is forced to be equal across groups. The χ2 difference test between this model and model 2 provides the probability associated with the null hypothesis that there is no effect of rules vs demands (type).
Model 0: All parameters free. This model has zero df, and hence χ2 will equal zero. Removing the a = c restriction allows the difference between x1 and x2 to vary across gender. The χ2 difference test between this model and model 3 therefore tests the null hypothesis of no interaction effect (this difference will be equal to the χ2 and df of model 3).
Table: χ2, df and p for models 0 to 3 (differences between these models are used to test hypotheses of main effects and interactions):
Model 1 (b = d, a = c = 0)
Model 2 (a = c = 0)
Model 3 (a = c)
Model 0 (no restrictions)
The power of a statistical test is the probability that the test will find a statistically significant effect in a sample of size N, at a pre-specified level of alpha, given that an effect of a particular size exists in the population. Power of statistical tests is considered increasingly important in medical and social sciences, and most funding bodies insist that power analysis is used to determine the appropriate number of participants to use. It is increasingly recognised that power is not just a statistical or methodological issue, but an ethical issue. In medical trials, patients give their consent to take part in studies which they hope will help others in the future – if the study is underpowered, the probability of finding an effect may be minimal. The CONSORT statement (CONsolidated Standards Of Reporting Trials), a checklist adopted by a large number of medical journals (see http://www.consort-statement.org), states that published research should give a description of the method used to determine the sample size. Whilst the basis for power calculations is relatively simple, the mathematics behind them is complex, as they require calculation of areas under the curve for non-central distributions.
[After using statistics for any amount of time, we become familiar with central distributions – distributions such as the t, the F or the χ2. However, these are the distributions of the statistics when the null hypothesis is true. To calculate the distribution when the null hypothesis is false, we must know the non-centrality parameter, which shifts the expected value of the distribution; we can then examine the probability of finding a result which would be considered significant at our pre-specified level of alpha.]
Whilst it is possible in some statistical packages to calculate values for non-central distributions, it is not straightforward (although it is possible) to use these for power calculations.
There are a range of resources available for power analysis, including commercial books containing tables [16–19], commercial software (e.g. SamplePower, nQuery), freeware software (e.g. GPower), and web pages which implement the routines. However, software for power analysis has some problems coping with the range of complex designs that are possible in research. Including covariates in a study can increase the power to detect a difference, but can also increase the complexity of the power analysis. In a multiple regression analysis, calculation of the power to detect a statistically significant value for R2 is relatively straightforward, using tables or books. However, the power to detect significant regression weights for the individual predictors is more difficult, and incorporating interactions into power analysis is also not straightforward.
An alternative way to approach power is to use a structural equation modelling framework. Satorra and Saris  proposed a procedure for estimating power for structural equation models, which is as follows:
First, a model is set up which matches the expected effect sizes in the study, and from this the expected means and covariances are calculated; these are treated as population data. Second, a model is set up in which the parameters of interest are restricted to zero (or to the values expected under the null hypothesis). This model is then estimated, and the χ2 value of the discrepancy function is calculated. This can be used to calculate the non-centrality parameter, which is then used to estimate the probability of detecting a significant effect. It should be noted that the power estimates from the SEM approach are asymptotically equivalent to those from the GLM approach employed in OLS modelling; at smaller sample sizes, larger discrepancies will occur between the two methods.
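The final step of this procedure – converting the non-centrality parameter into a power estimate – can be sketched in Python using only the standard library. The non-central χ2 tail is computed as a Poisson-weighted mixture of central χ2 tails, a standard identity; all function names are illustrative:

```python
import math

def chi2_sf(x, df):
    """Upper tail P(X > x) of a central chi-square with df degrees of
    freedom, via the series for the regularized lower incomplete gamma."""
    if x <= 0:
        return 1.0
    a, s = df / 2.0, x / 2.0
    term = math.exp(a * math.log(s) - s - math.lgamma(a + 1.0))
    total, n = term, 0
    while term > 1e-16 * total:
        n += 1
        term *= s / (a + n)
        total += term
    return max(0.0, 1.0 - total)

def ncx2_sf(x, df, ncp):
    """Upper tail of a non-central chi-square: a Poisson(ncp/2)-weighted
    mixture of central chi-square tails with df, df+2, df+4, ..."""
    w = math.exp(-ncp / 2.0)
    total, j = 0.0, 0
    while j < 10000:
        total += w * chi2_sf(x, df + 2 * j)
        j += 1
        w *= (ncp / 2.0) / j
        if w < 1e-16 and j > ncp:
            break
    return total

def critical_value(df, alpha=0.05):
    """Central chi-square critical value, found by bisection on chi2_sf."""
    lo, hi = 0.0, 1000.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if chi2_sf(mid, df) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def power_from_ncp(ncp, df, alpha=0.05):
    """Probability of rejecting the null at level alpha, given that the
    test statistic follows a non-central chi-square with this ncp."""
    return ncx2_sf(critical_value(df, alpha), df, ncp)

# With df = 1, a non-centrality parameter of about 7.85 gives
# roughly 80% power:
print(round(power_from_ncp(7.85, 1), 3))
```

Since the non-centrality parameter is (N − 1) multiplied by the population discrepancy F, the sample size required for a given level of power can be found by increasing N until power_from_ncp((N - 1) * F, df) reaches the target.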
Three examples of power analysis are presented. Example 1 shows how to use SEM to power a study to detect a correlation; the second shows how to power a study that uses a multivariate ANOVA / regression; the third examines a repeated measures design.
Table: Power to detect a population correlation r = 0.3, by three programs, including Mx (SEM approach).
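For the correlation example, the population discrepancy has a closed form: fixing the covariance between two standardised variables to zero gives F = −ln(1 − r²). A short sketch, assuming a target non-centrality of approximately 7.85 for 80% power at df = 1 and α = 0.05 (a standard tabled value):

```python
import math

def n_for_correlation(r, target_ncp=7.85):
    """Approximate sample size for 80% power to detect a population
    correlation r, via the SEM discrepancy for fixing the covariance to 0.

    F = -ln(1 - r^2) is the population value of the ML discrepancy
    function; the non-centrality parameter is (N - 1) * F, so we solve
    (N - 1) * F = target_ncp for N and round up.
    """
    F = -math.log(1.0 - r ** 2)
    return math.ceil(1.0 + target_ncp / F)

print(n_for_correlation(0.3))  # 85, in line with standard power tables
```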
The second example to be examined is the case of a multivariate ANOVA. It is well known that a multivariate design can be more powerful than a univariate design, though calculating how much more powerful can be difficult. 
The simple multivariate design is shown in Figure 8. Here the effects of a single independent variable on three dependent variables are assessed. It is necessary to calculate the population covariance matrix for this example. The covariance matrix of the dependent variables is found by multiplying the vector of regression weights by its transpose, and adding the residual variances and covariances of the dependent variables.
The correlation between each pair of dependent variables is therefore given by r(yi, yj) = βiβj + ψij, where βi and βj are the standardised regression weights and ψij is the residual covariance.
It is usually more straightforward to enter the values as fixed parameters into the SEM program, and estimate the population covariance matrix in this way.
This analysis can proceed in one of two ways – three univariate analyses, or one multivariate analysis. Calculation of power for the univariate analyses by conventional methods (power analysis table or program) is uncomplicated; however, calculation of power for the multivariate approach is less so.
Power can be estimated for two different types of effect: first, the power to detect each of the univariate effects; second, the power to detect the multivariate effect of x on the three dependent variables simultaneously. For the purposes of this example, the standardised population parameter estimates were as follows: each of the standardised univariate effects was equal to 0.5, and the three residual correlations between the dependent variables were set to -0.05.
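The implied population covariance matrix for this example can be constructed directly from these values. A sketch in Python, treating the −0.05 values as residual covariances (an assumption; on this standardised scale the residual covariances and correlations are close):

```python
# Standardised population parameters from the text (illustrative sketch).
beta = [0.5, 0.5, 0.5]   # paths from x to y1, y2, y3
res_cov = -0.05          # residual covariance between each pair of DVs

k = len(beta)
# Residual (co)variances: the diagonal is 1 - beta^2, so that each DV
# has unit variance; the off-diagonals are the residual covariances.
psi = [[res_cov if i != j else 1.0 - beta[i] ** 2 for j in range(k)]
       for i in range(k)]

# Implied covariance matrix of the DVs: beta * beta' + psi
# (x is standardised, so var(x) = 1).
sigma_y = [[beta[i] * beta[j] + psi[i][j] for j in range(k)]
           for i in range(k)]

for row in sigma_y:
    print([round(v, 3) for v in row])  # unit variances, 0.2 covariances
```

Each implied correlation is 0.5 × 0.5 − 0.05 = 0.20, illustrating the rule r(yi, yj) = βiβj + ψij given in the text.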
Standardised population covariance matrix for example 2
Table: Power for univariate and multivariate tests, and sample size required for 80% power:
Test that x → y1 = 0 (univariate, df = 1)
Multivariate test (df = 3)
Table: Relationship between the correlation between DVs and the sample size required for 80% power, in multivariate ANOVA.
Repeated measures analysis presents a number of additional challenges to the researcher, in terms of both methodological issues and statistical issues. In a repeated measures analysis, the researcher must examine both the difference between the means of the variables, and also the covariance/correlation between the variables. In Example 3, I examine the effect of differences in the magnitude of the correlation between variables in a repeated measures ANOVA, comparing the means of three variables.
The three variables x1, x2 and x3 have population means of 0.8, 1.0 and 1.2 respectively, and variances of 1.0. The correlations between them were fixed to be equal within each model, and were set to 0.0, 0.2, 0.5, 0.8 or -0.2. A simple analysis was carried out to investigate the sample size required to attain 80% power to detect a statistically significant difference, at p < 0.05, using an Mx script [see additional file 4].
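The pattern this example explores can be anticipated analytically: under compound symmetry with the stated means and unit variances, the non-centrality contributed by each observation for the df = 2 test of equal means works out to 0.08/(1 − ρ). A rough sketch, assuming a target non-centrality of about 9.63 for 80% power at df = 2, α = 0.05 (a standard tabled value); the exact Mx results will differ slightly:

```python
import math

def n_repeated_measures(rho, target_ncp=9.63):
    """Approximate N for 80% power to detect unequal means (0.8, 1.0, 1.2)
    across three repeated measures with unit variances and common
    correlation rho (df = 2 test of equal means).

    The non-centrality contributed by each observation is
    delta' Sigma_d^{-1} delta = 0.08 / (1 - rho), where delta holds the
    successive mean differences (0.2, 0.2) and Sigma_d is their
    covariance matrix, (1 - rho) * [[2, -1], [-1, 2]].
    """
    per_obs = 0.08 / (1.0 - rho)
    return math.ceil(target_ncp / per_obs)

# Required N falls sharply as the correlation between measures rises:
for rho in (-0.2, 0.0, 0.2, 0.5, 0.8):
    print(rho, n_repeated_measures(rho))
```

The design choice here matters: positively correlated repeated measures shrink the variance of the mean differences, so the same mean separation yields a larger non-centrality per participant.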
Table: Variation in power for the repeated measures design, given different levels of correlation between the measurements.
This paper has presented an approach to power analysis, developed by Satorra and Saris for structural equation models, that can be adapted to a very wide range of designs. The approach has three related applications.
First, in carrying out power analyses for studies, the power to detect one effect frequently depends on other quantities in the design. For example, the power to detect a difference in a repeated measures design is dependent upon the correlation between the variables. It may be possible to give power estimates based on a 'best guess', and on upper and lower limits, for these quantities.
Second, for some types of study, adequate power analysis is very complex using other approaches. For example, the power to detect a significant difference between two partial correlations is difficult to calculate by conventional means.
Third, and finally, the approach can be used when planning which instruments to use in research. Many applied areas of research in health have multiple potential outcome measures; for example, consider the range of instruments available for the assessment of quality of life. Many of these measures will have been used together in previous studies, and therefore the correlations between them may be known, or may be able to be estimated. The effect of these correlations on the power of the study can be investigated using this approach, which may affect the choice of measure.
For those unfamiliar with the package, and perhaps unfamiliar with SEM, the learning curve for Mx can be steep. The path diagram tool within Mx is extremely useful – the model is drawn, and restrictions can be added. The program will then use the diagram to produce the Mx syntax, which can then be edited. This approach leads to faster and less error-prone syntax. The author is happy to be contacted by email to attempt to assist with particular problems that readers may encounter. A document is available which describes how SPSS can be used to calculate the power, given the χ2 of the model [see additional file 5].
Finally, for readers who may be interested in further exploration of these issues, it should be noted that an alternative approach to power estimation in SEM, based on model fit indices, has been presented by MacCallum, Browne and Sugawara.
Thanks to Diane Miles and Thom Baguley, for their comments on earlier drafts of this paper, and to Keith Widaman and Frühling Rijsdijk who reviewed this paper, pointing out a number of areas where clarifications and improvements could be made.
This article is published under license to BioMed Central Ltd. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original URL.