
Sample size calculations for model validation in linear regression analysis

Abstract

Background

Linear regression analysis is a widely used statistical technique in practical applications. For planning and appraising validation studies of simple linear regression, an approximate sample size formula has been proposed for the joint test of intercept and slope coefficients.

Methods

The purpose of this article is to reveal the potential drawback of the existing approximation and to provide an alternative and exact solution of power and sample size calculations for model validation in linear regression analysis.

Results

A fetal weight example is included to illustrate the underlying discrepancy between the exact and approximate methods. Moreover, extensive numerical assessments were conducted to examine the relative performance of the two distinct procedures.

Conclusions

The results show that the exact approach has a distinct advantage over the current method in terms of greater accuracy and robustness.


Background

Regression analysis is among the most commonly applied statistical methods across scientific fields. Its extensive utility has prompted continuing investigation of interpretations, extensions, and computing algorithms for the development and formulation of empirical models. General guidelines and fundamental principles on regression analysis have been well documented in the standard texts of Cohen et al. [1], Kutner et al. [2], and Montgomery, Peck, and Vining [3], among others. Among the methodological issues and statistical implications of regression analysis, model adequacy and validity represent two vital aspects for justifying the usefulness of the underlying regression model. In the process of model selection, residual analysis and diagnostic checking are employed to identify influential observations, leverage, outliers, multicollinearity, and other lack-of-fit problems. In contrast, model validation refers to the plausibility and generalizability of the regression function in terms of the stability and suitability of the regression coefficients.

In particular, it is emphasized in Kutner et al. ([2], Section 9.6), Montgomery, Peck, and Vining ([3], Section 11.2), and Snee [4] that there are three approaches to assessing the validity of regression models: (1) comparison of model predictions and coefficients with physical theory, prior experience, theoretical models, and other simulation results; (2) collection of new data to check model predictions; and (3) data splitting, in which a portion of the available data is reserved to obtain an independent measure of the model prediction accuracy. Essentially, the fundamental utilities of model selection and model validation should be properly recognized and distinguished, because a refined model that fits the data does not necessarily guarantee prediction accuracy. Further details and related issues can be found in the important texts of Kutner et al. [2] and Montgomery, Peck, and Vining [3] and the references therein.

The present article focuses on the validation process of linear regression analysis for comparison with postulated or acclaimed models. In linear regression, interest often centers on the existence and magnitude of the slope coefficients. However, the quality of estimation and prediction in associating the response variable with the predictor variables is determined by the closely intertwined intercept and slope coefficients. It is of practical importance to conduct a joint test of intercept and slope coefficients in order to verify compatibility with established or theoretical formulations. For example, Maddahi et al. [5] compared left ventricular myocardial weights of dogs obtained by nuclear magnetic resonance imaging with actual measurements for different methods using simple linear regression analysis. The results were tested, both individually and simultaneously, to determine whether the intercept differed from zero and the slope differed from unity. Also, Rose and McCallum [6] proposed a simple regression formula for estimating the logarithm of fetal weight from the sum of the ultrasound measurements of biparietal diameter, mean abdominal diameter, and femur length. Note that birth weights differ among ethnic groups, cohort characteristics, and time periods. Thus, it is of considerable interest for related research to validate or compare the magnitudes of the intercept and slope coefficients in their formulation.

The importance and implications of statistical power analysis in research studies are well addressed in Cohen [7], Kraemer and Blasey [8], Murphy, Myors, and Wolach [9], and Ryan [10], among others. In the context of multiple regression and correlation, the distinct notions of fixed and random regression settings were emphasized and explicated in power and sample size calculations by Gatsonis and Sampson [11], Mendoza and Stafford [12], Sampson [13], and Shieh [14,15,16]. On the other hand, Kelley [17], Krishnamoorthy and Xia [18], and Shieh [19] discussed sample size determinations for constructing precise confidence intervals of strength of association. It is noteworthy that analysis of covariance (ANCOVA) models involving both categorical and continuous predictors incur different hypothesis testing procedures. Accordingly, they require unique power procedures, as discussed in Shieh [20] and Tang [21], among others.

For the purposes of planning research designs and validating model formulations, a sample size procedure was presented in Colosimo et al. [22]. The presented formula has a computationally appealing expression and maintained reasonable accuracy in their simulation study. However, the method involves a convenient substitution of the fixed mean parameter for the random predictor variables. Their illustrations were not detailed enough to address the extent and impact of this simplification on sample size computations. Consequently, the adequacy of the sample size procedure described in Colosimo et al. [22] requires further clarification, and no research to date has examined its properties under different situations.

The statistical inferences for the regression coefficients are based on the conditional distribution given the continuous predictors. However, unlike the fixed factor configurations and treatment levels in analysis of variance (ANOVA) and other experimental designs, the continuous measurements of the predictor variables in regression studies are typically available only after the data have been collected. For advance planning of research designs, the distribution and power functions of the test procedure need to be appraised over possible values of the predictors. Thus, it is important to recognize the stochastic nature of the predictor variables. The fundamental differences between fixed and random models have been explicated in Binkley and Abbot [23], Cramer and Appelbaum [24], Sampson [13], and Shaffer [25]. Despite the complexity associated with the unconditional properties of the test procedure, the inferential procedures are the same under both fixed and random formulations. Hence, the usual rejection rule and critical value remain unchanged. The distinction between the two modeling approaches becomes critical for power analysis and sample size planning.

The joint test of intercept and slope coefficients in linear regression is more involved than the individual tests of the intercept or slope parameters. A general linear hypothesis setting is required to perform the simultaneous test of both intercept and slope coefficients, as shown in Rencher and Schaalje ([26], Section 8.4.2). However, it is essential to emphasize that they did not address the corresponding power and sample size issues. In view of the limited results in the current literature, this article aims to present power and sample size procedures for the joint test of intercept and slope coefficients with specific recognition of the stochastic features of the predictor variables. First, the exact power function and sample size procedure for detecting intercept and slope differences in simple linear regression are derived under a random modeling framework, assuming the predictor variables have independent and identical normal distributions. The technical presentation is then extended to the general context of multiple linear regression. A numerical example of model validation is then employed to demonstrate the essential discrepancy between the exact and approximate methods. Finally, the accuracy and robustness of the contending methods are appraised through simulation studies under a wide range of model configurations with normal and non-normal predictors.

Methods

Simple linear regression

Consider the simple linear regression model for associating the response variable Y with the predictor variable X:

$$ Y_i = \beta_I + X_i \beta_S + \varepsilon_i, $$
(1)

where \( Y_i \) is the observed value of the response variable \( Y \); \( X_i \) is the recorded value of the continuous predictor \( X \); \( \beta_I \) and \( \beta_S \) are unknown intercept and slope parameters; and \( \varepsilon_i \) are iid \( N(0, \sigma^2) \) random errors for \( i = 1, \ldots, N \). To examine the existence and magnitude of the intercept and slope coefficients \( \{\beta_I, \beta_S\} \), the statistical inferences are based on the least squares estimators \( \hat{\beta}_I \) and \( \hat{\beta}_S \), where \( \hat{\beta}_I = \bar{Y} - \bar{X}\hat{\beta}_S \), \( \hat{\beta}_S = SS_{XY}/SS_X \), \( \bar{Y} = \sum_{i=1}^{N} Y_i/N \), \( \bar{X} = \sum_{i=1}^{N} X_i/N \), \( SS_{XY} = \sum_{i=1}^{N}(X_i - \bar{X})(Y_i - \bar{Y}) \), and \( SS_X = \sum_{i=1}^{N}(X_i - \bar{X})^2 \). It follows from the standard results in Rencher and Schaalje ([26], Section 7.6.3) that the estimators \( \{\hat{\beta}_I, \hat{\beta}_S\} \) have the bivariate normal distribution

$$ \hat{\boldsymbol{\beta}} \sim N_2\left(\boldsymbol{\beta}, \sigma^2 \mathbf{W}_X\right), $$
(2)

where

$$ \hat{\boldsymbol{\beta}} = \left[\begin{array}{c} \hat{\beta}_I \\ \hat{\beta}_S \end{array}\right], \quad \boldsymbol{\beta} = \left[\begin{array}{c} \beta_I \\ \beta_S \end{array}\right], \quad \mathbf{W}_X = \left[\begin{array}{cc} W_{X11} & W_{X12} \\ W_{X21} & W_{X22} \end{array}\right], $$

\( W_{X11} = 1/N + \bar{X}^2/SS_X \), \( W_{X12} = W_{X21} = -\bar{X}/SS_X \), and \( W_{X22} = 1/SS_X \). The subscript \( X \) of \( \mathbf{W}_X \) emphasizes that the elements \( \{W_{X11}, W_{X12}, W_{X21}, W_{X22}\} \) of the variance-covariance matrix are functions of the predictor variables. Also, \( \hat{\sigma}^2 = SSE/\nu \) is the usual unbiased estimator of \( \sigma^2 \), where \( SSE = SS_Y - SS_{XY}^2/SS_X \) is the error sum of squares, \( SS_Y = \sum_{i=1}^{N}(Y_i - \bar{Y})^2 \), and \( \nu = N - 2 \). Note that the least squares estimators \( \hat{\beta}_I \) and \( \hat{\beta}_S \) are independent of \( \hat{\sigma}^2 \).

A joint test of the intercept and slope coefficients can be conducted with the hypothesis

$$ \mathrm{H}_0: \left[\begin{array}{c} \beta_I \\ \beta_S \end{array}\right] = \left[\begin{array}{c} \beta_{I0} \\ \beta_{S0} \end{array}\right] \quad \mathrm{versus} \quad \mathrm{H}_1: \left[\begin{array}{c} \beta_I \\ \beta_S \end{array}\right] \ne \left[\begin{array}{c} \beta_{I0} \\ \beta_{S0} \end{array}\right]. $$
(3)

Following the model assumption in Eq. 1, the likelihood ratio statistic for the joint test of intercept and slope is

$$ F_J = \frac{\left(\hat{\boldsymbol{\beta}}_D^{\mathrm{T}} \mathbf{W}_X^{-1} \hat{\boldsymbol{\beta}}_D\right)/2}{\hat{\sigma}^2}, $$
(4)

where \( \hat{\boldsymbol{\beta}}_D = [\hat{\beta}_{ID}, \hat{\beta}_{SD}]^{\mathrm{T}} \), \( \hat{\beta}_{ID} = \hat{\beta}_I - \beta_{I0} \), and \( \hat{\beta}_{SD} = \hat{\beta}_S - \beta_{S0} \). Under the null hypothesis, it can be shown that

$$ F_J \sim F(2, \nu), $$
(5)

where F(2, ν) is an F distribution with 2 and ν degrees of freedom. Hence, H0 is rejected at the significance level α if

$$ F_J > F_{2, \nu, \alpha}, $$
(6)

where \( F_{2, \nu, \alpha} \) is the upper \( (100 \cdot \alpha) \)th percentile of the \( F(2, \nu) \) distribution. In general, the joint test statistic \( F_J \) has the following nonnull distribution for the given values of \( \bar{X} \) and \( SS_X \):

$$ F_J \mid \left[\bar{X}, SS_X\right] \sim F\left(2, \nu, \Delta_J\right), $$
(7)

where

$$ \Delta_J = \frac{N\left(\beta_{ID} + \bar{X}\beta_{SD}\right)^2 + \beta_{SD}^2\, SS_X}{\sigma^2}. $$
(8)

Hence, the noncentral F distribution \( F(2, \nu, \Delta_J) \) depends on the predictor values \( \{X_i, i = 1, \ldots, N\} \) only through the summary statistics \( \bar{X} \) and \( SS_X \).

The joint test of the intercept and slope coefficients given in Eq. 3 can be viewed as a special case of the general linear hypothesis considered in Rencher and Schaalje ([26], Section 8.4.2). However, two important aspects of this study should be pointed out. First, unlike the current consideration, the associated F test and related statistical properties in Rencher and Schaalje [26] are presented under the standard settings with fixed predictor values. Second, they did not address the power and sample size issues under random modeling formulations. Accordingly, their fundamental results are extended here to accommodate the predictor features in power and sample size calculations for the validation of simple linear regression models.

The statistical inferences about the regression coefficients are based on the conditional distribution given the continuous variables \( \{X_i, i = 1, \ldots, N\} \). Therefore, the resulting analysis is specific to the observed values of the predictors. Before conducting a research study, however, the actual values of the predictors are not available, just as the responses are not. In view of the stochastic nature of the summary statistics \( \bar{X} \) and \( SS_X \), it is essential to recognize and assess the distribution of the test statistic over possible values of the predictors. To demonstrate the impact of the predictor features on power and sample size calculations, the normality setting is commonly employed to provide a convenient basis for analytical derivation and empirical examination of random predictors, as in Gatsonis and Sampson [11], Sampson [13], and Shieh [14]. However, it is important to note that the power and sample size calculations of Gatsonis and Sampson [11], Sampson [13], and Shieh [14, 15] for detecting slope coefficients in multiple regression analysis are not applicable for assessing the differences in intercept and slope coefficients considered here.

Specifically, the continuous predictor variables \( \{X_i, i = 1, \ldots, N\} \) are assumed to have independent and identical normal distributions \( N(\mu_X, \sigma_X^2) \). Under the normal assumption, it can be readily established that \( \bar{X} \sim N(\mu_X, \sigma_X^2/N) \) and \( K = SS_X/\sigma_X^2 \sim \chi^2(\kappa) \), where \( \kappa = N - 1 \). Thus, the noncentrality \( \Delta_J \) in Eq. 8 can be expressed as

$$ \Delta_J = \frac{N\left(a + bZ\right)^2 + dK}{\sigma^2}, $$
(9)

where \( a = \beta_{ID} + \mu_X\beta_{SD} \), \( b = (d/N)^{1/2} \), \( d = \beta_{SD}^2\sigma_X^2 \), and \( Z = (\bar{X} - \mu_X)/(\sigma_X^2/N)^{1/2} \sim N(0, 1) \). As a consequence, the \( F_J \) statistic has the two-stage distribution

$$ F_J \mid \left[K, Z\right] \sim F\left(2, \nu, \Delta_J\right), \quad K \sim \chi^2(\kappa), \ \mathrm{and}\ Z \sim N(0, 1). $$
(10)

Note that the two random variables K and Z are independent. Moreover, the corresponding power function for the simultaneous test can be formulated as

$$ \Psi_J = E_K E_Z\left[P\left\{F\left(2, \nu, \Delta_J\right) > F_{2, \nu, \alpha}\right\}\right], $$
(11)

where the expectations EK and EZ are taken with respect to the distributions of K and Z, respectively.
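To make the two-stage formulation concrete, the following sketch (not the authors' SAS/IML routine; the function name, default settings, and the Monte Carlo approximation of the expectations are illustrative assumptions) evaluates ΨJ in Eq. 11 by averaging the conditional noncentral F tail probability over simulated values of K and Z:

```python
# A minimal sketch of the exact power function Psi_J (Eq. 11), assuming the
# expectations over K ~ chi-square(N - 1) and Z ~ N(0, 1) are approximated by
# Monte Carlo averaging; names and defaults are illustrative.
import numpy as np
from scipy import stats

def exact_power_simple(N, beta_ID, beta_SD, sigma2, mu_X, sigma2_X,
                       alpha=0.05, n_mc=100_000, seed=1):
    rng = np.random.default_rng(seed)
    nu = N - 2                                    # error degrees of freedom
    a = beta_ID + mu_X * beta_SD                  # a = beta_ID + mu_X * beta_SD
    d = beta_SD ** 2 * sigma2_X                   # d = beta_SD^2 * sigma_X^2
    b = np.sqrt(d / N)                            # b = (d / N)^(1/2)
    K = rng.chisquare(N - 1, size=n_mc)           # K = SS_X / sigma_X^2
    Z = rng.standard_normal(n_mc)                 # standardized sample mean of X
    delta_J = (N * (a + b * Z) ** 2 + d * K) / sigma2   # Eq. 9
    crit = stats.f.isf(alpha, 2, nu)              # critical value F_{2, nu, alpha}
    # Average P{F(2, nu, Delta_J) > F_{2, nu, alpha}} over the draws of (K, Z)
    return stats.ncf.sf(crit, 2, nu, delta_J).mean()
```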

Alternatively, Colosimo et al. ([22], Section 3.2) described a simple and naive method to obtain an unconditional distribution of FJ. They substituted the sample values of the predictor variables in the noncentrality ΔJ with the corresponding expected value E[Xi] = μX for i = 1, ..., N. Thus, the distribution of FJ is approximated by a noncentral F distribution:

$$ F_J \sim F\left(2, \nu, \Delta_C\right), $$
(12)

where \( \Delta_C = N a^2/\sigma^2 \). The suggested power function of Colosimo et al. [22] for the joint test of intercept and slope coefficients is

$$ \Psi_C = P\left\{F\left(2, \nu, \Delta_C\right) > F_{2, \nu, \alpha}\right\}. $$
(13)

It is vital to note that the approximate power function ΨC involves only a noncentral F distribution, whereas the normal predictor distributions lead to the exact and more complex power formula ΨJ, which consists of a joint chi-square and normal mixture of noncentral F distributions. Evidently, the power function ΨC is simpler to compute than the exact formula ΨJ. However, because of its approximate nature, ΨC does not incorporate all of the predictor features in the power computation.
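For comparison, the approximate power ΨC of Eq. 13 requires only a single noncentral F tail probability; a corresponding sketch under the same illustrative naming conventions is:

```python
# Approximate power Psi_C (Eq. 13): the predictor values are replaced by their
# mean, so only the noncentrality Delta_C = N * a^2 / sigma^2 enters.
from scipy import stats

def approx_power_colosimo(N, beta_ID, beta_SD, sigma2, mu_X, alpha=0.05):
    nu = N - 2
    a = beta_ID + mu_X * beta_SD
    delta_C = N * a ** 2 / sigma2                 # Eq. 12 noncentrality
    crit = stats.f.isf(alpha, 2, nu)
    return stats.ncf.sf(crit, 2, nu, delta_C)
```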

It follows from large-sample theory that \( bZ = (d/N)^{1/2}Z \) and \( K/N \) converge to 0 and 1, respectively. Hence, the sample-size-adjusted noncentrality quantity \( \Delta_J/N \) approaches \( \Delta_J^{\ast} \) as the sample size N increases to infinity, where

$$ \Delta_J^{\ast} = \frac{\left(\beta_{ID} + \mu_X\beta_{SD}\right)^2 + \beta_{SD}^2\sigma_X^2}{\sigma^2}. $$
(14)

Hence, \( \Delta_J^{\ast} \) provides a convenient measurement of effect size for the joint appraisal of intercept and slope coefficients. It can be immediately seen from the noncentrality term of the approximate power function ΨC that \( \Delta_C^{\ast} = \Delta_C/N = (\beta_{ID} + \mu_X\beta_{SD})^2/\sigma^2 \le \Delta_J^{\ast} \), with equality only when \( \beta_{SD} = 0 \) or \( \sigma_X^2 = 0 \). Consequently, the estimated power ΨC is generally less than ΨJ even for large sample sizes when all other configurations remain constant. It is shown later that, while the computation is more involved for the complex power function ΨJ, the exact approach has a clear advantage over the approximate procedure in accurate power calculations. For advance planning of a research design, the presented power formulas can be employed to calculate the sample size N needed to attain the specified power 1 – β for the chosen significance level α, null values {βI0, βS0}, coefficient parameters {βI, βS}, variance component σ2, and predictor mean and variance {μX, \( \sigma_X^2 \)}. This usually involves an incremental search from a small initial value to find the smallest sample size achieving the desired power.
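The incremental sample size search described above can be sketched as follows, assuming the exact_power_simple function from the earlier sketch; in practice a large number of Monte Carlo draws (or quadrature, as in the authors' supplementary programs) keeps the search stable:

```python
# Smallest N whose exact power Psi_J reaches the target power 1 - beta,
# found by an incremental search from a small starting value.
def sample_size_exact(target_power, beta_ID, beta_SD, sigma2, mu_X, sigma2_X,
                      alpha=0.05, N_start=4, N_max=10_000):
    for N in range(N_start, N_max + 1):
        power = exact_power_simple(N, beta_ID, beta_SD, sigma2, mu_X, sigma2_X,
                                   alpha=alpha)
        if power >= target_power:
            return N, power
    raise ValueError("target power not attained within N_max")
```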

Multiple linear regression

The power and sample size calculations for the general scenario of multiple linear regression with more than one predictor are discussed next. Consider the multiple linear regression model with response variable Yi and p predictor variables (Xi1, ..., Xip) for i = 1, ..., N:

$$ \mathbf{Y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}, $$
(15)

where \( \mathbf{Y} = (Y_1, \ldots, Y_N)^{\mathrm{T}} \) is an N × 1 vector with \( Y_i \) being the observed measurement of the ith subject; \( \mathbf{X} = (\mathbf{1}_N, \mathbf{X}_S) \), where \( \mathbf{1}_N \) is the N × 1 vector of all 1's, \( \mathbf{X}_S = (\mathbf{X}_{S1}, \ldots, \mathbf{X}_{SN})^{\mathrm{T}} \) is an N × p matrix, \( \mathbf{X}_{Si} = (X_{i1}, \ldots, X_{ip})^{\mathrm{T}} \), and \( X_{i1}, \ldots, X_{ip} \) are the observed values of the p predictor variables of the ith subject; \( \boldsymbol{\beta} = (\beta_I, \boldsymbol{\beta}_S^{\mathrm{T}})^{\mathrm{T}} \) is a (p + 1) × 1 vector with \( \boldsymbol{\beta}_S = (\beta_1, \ldots, \beta_p)^{\mathrm{T}} \), where \( \beta_I, \beta_1, \ldots, \beta_p \) are unknown coefficient parameters; and \( \boldsymbol{\varepsilon} = (\varepsilon_1, \ldots, \varepsilon_N)^{\mathrm{T}} \) is an N × 1 vector whose elements \( \varepsilon_i \) are iid \( N(0, \sigma^2) \) random variables.

For the joint test of intercept and slope coefficients in terms of

$$ \mathrm{H}_0: \boldsymbol{\beta} = \boldsymbol{\theta} \quad \mathrm{versus} \quad \mathrm{H}_1: \boldsymbol{\beta} \ne \boldsymbol{\theta}, $$
(16)

it can be shown from Rencher and Schaalje ([26], Section 8.4.2) that the test statistic is

$$ F_{MJ} = \frac{\left\{\left(\hat{\boldsymbol{\beta}} - \boldsymbol{\theta}\right)^{\mathrm{T}}\left(\mathbf{X}^{\mathrm{T}}\mathbf{X}\right)\left(\hat{\boldsymbol{\beta}} - \boldsymbol{\theta}\right)\right\}/(p + 1)}{\hat{\sigma}^2}, $$
(17)

where \( \hat{\sigma}^2 = SSE/\nu \) is the usual unbiased estimator of \( \sigma^2 \), with \( \nu = N - (p + 1) \) in this setting. Under the null hypothesis, \( F_{MJ} \) has an F distribution with p + 1 and ν degrees of freedom:

$$ F_{MJ} \sim F\left(p + 1, \nu\right). $$
(18)

The joint test can be conducted by rejecting H0 at the significance level α if \( F_{MJ} > F_{p+1, \nu, \alpha} \). In general, \( F_{MJ} \) has the nonnull distribution for the given values of \( \mathbf{X}_S \):

$$ F_{MJ} \sim F\left(p + 1, \nu, \Delta_{MJ}\right), $$
(19)

where F(p + 1, ν, ΔMJ) is a noncentral F distribution with p + 1 and ν degrees of freedom and noncentrality parameter ΔMJ with

$$ \Delta_{MJ} = \frac{\left(\boldsymbol{\beta} - \boldsymbol{\theta}\right)^{\mathrm{T}}\left(\mathbf{X}^{\mathrm{T}}\mathbf{X}\right)\left(\boldsymbol{\beta} - \boldsymbol{\theta}\right)}{\sigma^2}. $$
(20)

It is essential to emphasize that the inferences in Rencher and Schaalje [26] are concerned mainly with the slope coefficients βS. As noted in the context of simple linear regression, the fundamental results concerning fixed predictor values are extended here to power and sample size calculations for the validation of linear regression models under random predictor settings.

In view of the random nature of the predictor variables, the continuous predictor variables \( \{\mathbf{X}_{Si}, i = 1, \ldots, N\} \) are assumed to have independent multinormal distributions \( N_p(\boldsymbol{\mu}_X, \boldsymbol{\Sigma}_X) \). With the multinormal assumptions, it can be readily established that \( \bar{\mathbf{X}}_S = \sum_{i=1}^{N}\mathbf{X}_{Si}/N \sim N_p(\boldsymbol{\mu}_X, \boldsymbol{\Sigma}_X/N) \) and \( \mathbf{A} = \sum_{i=1}^{N}(\mathbf{X}_{Si} - \bar{\mathbf{X}}_S)(\mathbf{X}_{Si} - \bar{\mathbf{X}}_S)^{\mathrm{T}} \sim W_p(\kappa, \boldsymbol{\Sigma}_X) \), where \( W_p(\kappa, \boldsymbol{\Sigma}_X) \) is a Wishart distribution with κ degrees of freedom and covariance matrix \( \boldsymbol{\Sigma}_X \), and \( \kappa = N - 1 \). Thus, the noncentrality \( \Delta_{MJ} \) can be rewritten as

$$ \Delta_{MJ} = \frac{N\left(\beta_{ID} + \boldsymbol{\beta}_{SD}^{\mathrm{T}}\bar{\mathbf{X}}_S\right)^2 + \boldsymbol{\beta}_{SD}^{\mathrm{T}}\mathbf{A}\boldsymbol{\beta}_{SD}}{\sigma^2}, $$
(21)

where \( \beta_{ID} = \beta_I - \theta_I \) and \( \boldsymbol{\beta}_{SD} = \boldsymbol{\beta}_S - \boldsymbol{\theta}_S \). Using the prescribed distributions of \( \bar{\mathbf{X}}_S \) and \( \mathbf{A} \), it can be shown that \( \beta_{ID} + \boldsymbol{\beta}_{SD}^{\mathrm{T}}\bar{\mathbf{X}}_S = a + bZ \sim N(a, b^2) \), \( Z \sim N(0, 1) \), and \( K = \boldsymbol{\beta}_{SD}^{\mathrm{T}}\mathbf{A}\boldsymbol{\beta}_{SD}/d \sim \chi^2(\kappa) \), where \( a = \beta_{ID} + \boldsymbol{\beta}_{SD}^{\mathrm{T}}\boldsymbol{\mu}_X \), \( b = (d/N)^{1/2} \), and \( d = \boldsymbol{\beta}_{SD}^{\mathrm{T}}\boldsymbol{\Sigma}_X\boldsymbol{\beta}_{SD} \). Note that the two random variables K and Z are independent. It is conceptually simple and computationally convenient to subsume the stochastic features of \( \bar{\mathbf{X}}_S \) and \( \mathbf{A} \) in terms of Z and K. Accordingly, the noncentrality quantity \( \Delta_{MJ} \) can be formulated as

$$ \Delta_{MJ} = \frac{N\left(a + bZ\right)^2 + dK}{\sigma^2}. $$
(22)

Thus, under the multinormal predictor assumptions, the FMJ statistic has the two-stage distribution

$$ F_{MJ} \mid \left[K, Z\right] \sim F\left(p + 1, \nu, \Delta_{MJ}\right), \quad K \sim \chi^2(\kappa), \ \mathrm{and}\ Z \sim N(0, 1). $$
(23)

The corresponding power function for the simultaneous test can be expressed as

$$ \Psi_{MJ} = E_K E_Z\left[P\left\{F\left(p + 1, \nu, \Delta_{MJ}\right) > F_{p+1, \nu, \alpha}\right\}\right], $$
(24)

where the expectations EK and EZ are taken with respect to the distributions of K and Z, respectively. Evidently, when p = 1, the test statistic \( F_{MJ} \) and power function \( \Psi_{MJ} \) reduce to the simplified formulas of \( F_J \) and \( \Psi_J \) given in Eqs. 4 and 11, respectively.
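A sketch of the multiple regression power function ΨMJ follows directly from Eq. 22; the error degrees of freedom ν = N − (p + 1) and the Monte Carlo evaluation of the expectations are assumptions of this illustration rather than part of the authors' code:

```python
# Exact power Psi_MJ (Eq. 24) for p >= 1 predictors, using the reduction of
# Delta_MJ to N * (a + b * Z)^2 + d * K in Eq. 22; illustrative sketch only.
import numpy as np
from scipy import stats

def exact_power_multiple(N, beta_ID, beta_SD, sigma2, mu_X, Sigma_X,
                         alpha=0.05, n_mc=100_000, seed=1):
    rng = np.random.default_rng(seed)
    beta_SD = np.atleast_1d(np.asarray(beta_SD, dtype=float))
    mu_X = np.atleast_1d(np.asarray(mu_X, dtype=float))
    Sigma_X = np.atleast_2d(np.asarray(Sigma_X, dtype=float))
    p = beta_SD.size
    nu = N - (p + 1)                              # error degrees of freedom (assumed)
    a = beta_ID + beta_SD @ mu_X                  # a = beta_ID + beta_SD' mu_X
    d = beta_SD @ Sigma_X @ beta_SD               # d = beta_SD' Sigma_X beta_SD
    b = np.sqrt(d / N)
    K = rng.chisquare(N - 1, size=n_mc)           # K = beta_SD' A beta_SD / d
    Z = rng.standard_normal(n_mc)
    delta_MJ = (N * (a + b * Z) ** 2 + d * K) / sigma2   # Eq. 22
    crit = stats.f.isf(alpha, p + 1, nu)
    return stats.ncf.sf(crit, p + 1, nu, delta_MJ).mean()
```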

Results

An illustration

To demonstrate the prescribed power and sample size procedures, the simplified formula for estimating fetal weight in Rose and McCallum [6] is used as a benchmark for validation. Although there are several different methods for estimating the fetal weight, it was demonstrated in Anderson et al. [27] that the simple linear regression formula of Rose and McCallum [6] compares favorably with other techniques. Based on the ultrasound examinations conducted in the Stanford University Hospital labor and delivery suite between January 1981 and March 1984, they presented a useful formula for predicting the natural logarithm of birth weight with the sum of head, abdomen, and limb ultrasound measurements as given by the equation: ln(BW) = 4.198 + 0.143·X, where X = biparietal diameter + mean abdominal diameter + femur length (in centimeters). The average birth weight of their study population was 2275 g with a range of 490–5300 g. The detailed comparisons and related discussions of viable equations for estimating fetal weight can be found in Anderson et al. [27] and the references therein.

Conceivably, there are underlying differences in fetal weight between ethnic origins, cohort groups, and time periods. To validate the simple formula for a target population, a detailed scheme is required to determine the necessary sample size so that the conducted study has a decent assurance of detecting the potential discrepancy. For illustration, the intercept and slope coefficients are set as βI = 4.1 and βS = 0.15, respectively. The error component is selected to be σ2 = 0.095. The characteristics of the ultrasound measurements are represented by the mean μX = 24.2 and variance \( \sigma_X^2 \) = 6. Note that these configurations assure that the expected fetal weight of the designated population, E[BW] = E[exp(4.1 + 0.15·X + ε)] = 2275.52, coincides with the average birth weight reported in Rose and McCallum [6]. To test the hypothesis H0: (βI, βS) = (4.198, 0.143) versus H1: (βI, βS) ≠ (4.198, 0.143) with the significance level α = 0.05, numerical computations showed that sample sizes of NE = 173 and 227 are required for the exact approach to attain the target powers of 0.8 and 0.9, respectively. Because the sample sizes need to be integers in practice, the attained power is slightly greater than the nominal power level. In these two cases, the achieved powers for the two sample sizes are ΨJ = 0.8001 and 0.9010, respectively. These results were computed with the supplementary algorithms presented in Additional files 1 and 2. For ease of application, the prescribed configurations are incorporated in the user specification sections of the SAS/IML programs.
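As an illustrative check (assuming the exact_power_simple sketch given in the Methods section), the reported powers can be reproduced, up to Monte Carlo error, from the stated configuration:

```python
# Fetal weight example: coefficient differences from the null values of
# Rose and McCallum [6]; the printed powers should be close to 0.80 and 0.90.
cfg = dict(beta_ID=4.1 - 4.198, beta_SD=0.15 - 0.143, sigma2=0.095,
           mu_X=24.2, sigma2_X=6.0, alpha=0.05)
print(exact_power_simple(173, **cfg))   # approximately 0.80
print(exact_power_simple(227, **cfg))   # approximately 0.90
```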

On the other hand, the matching sample sizes computed with the approximate method of Colosimo et al. [22] are NC = 183 and 239, with attained powers of ΨC = 0.8010 and 0.9002, respectively. Therefore, the simple method of Colosimo et al. [22] requires 183 – 173 = 10 and 239 – 227 = 12 more babies than the exact formula to satisfy the nominal power performance. In fact, the exact power function gives the values ΨJ = 0.8236 and 0.9161 for the sample sizes 183 and 239, respectively. Hence, the resulting power differences between the two magnitudes of sample size are 0.8236 – 0.8001 = 0.0235 and 0.9161 – 0.9010 = 0.0151. To enhance the illustration, the computed sample sizes, estimated powers, and differences for the exact and approximate procedures are summarized in Table 1. The sample size and power calculations show that the approximate power function ΨC tends to underestimate power because of the simplification of the noncentrality parameter in the noncentral F distribution. Correspondingly, the approximate method of Colosimo et al. [22] often overestimates the sample sizes required for validation analysis. It is essential to note that adopting too small a sample size yields a study with insufficient power to demonstrate a model difference, whereas the method of Colosimo et al. [22] may lead to an over-sized study that wastes time, money, and other resources. Moreover, an over-sized validation study rejects the null hypothesis more often than intended and may yield erroneous conclusions. It is of both practical usefulness and theoretical concern to further assess the intrinsic implications of the two distinct procedures in other settings. Detailed empirical studies are described next to evaluate and compare their accuracy under a wide variety of model configurations.

Table 1 Computed sample size, estimated power, and difference for the exact and approximate procedures with {βI, βS} = {4.1, 0.15}, {βI0, βS0} = {4.198, 0.143}, σ2 = 0.095, μX = 24.2, \( {\sigma}_X^2 \) = 6, and Type I error α = 0.05

Numerical comparisons

In view of the potential discrepancy between the exact and approximate procedures, numerical investigations of power and sample size calculations were conducted under a wide range of model configurations in two studies. The first assessment focuses on the situations with normal predictor variables, while the second study concerns the robustness of the two methods under several prominent situations of non-normal predictors.

Normal predictors

For ease of comparison, the model settings in Colosimo et al. [22] are considered and expanded to reveal the distinct behavior of the contending procedures. Specifically, the null and alternative hypotheses are

$$ \mathrm{H}_0: \left[\begin{array}{c} \beta_I \\ \beta_S \end{array}\right] = \left[\begin{array}{c} 0 \\ 1 \end{array}\right] \quad \mathrm{versus} \quad \mathrm{H}_1: \left[\begin{array}{c} \beta_I \\ \beta_S \end{array}\right] \ne \left[\begin{array}{c} 0 \\ 1 \end{array}\right], $$

where {βI, βS} = {d, 1 + d} and {βID, βSD} = {d, d} with d = 0.3, 0.4, and 0.5. Note that these coefficient settings are equivalent to those with {βI, βS} = {βI0 + d, βS0 + d} because they lead to the same differences {βID, βSD} = {d, d} and the resulting power functions remain identical. The error component is fixed as σ2 = 1 and the predictors X are assumed to have normal distributions with mean μX = {0, 0.5, 1} and variance \( {\upsigma}_X^2 \) = {0.5, 1, 2}. Overall these considerations result in a total of 27 different combined settings. These combinations of model configurations were chosen to represent the possible characteristics that are likely to be encountered in actual applications and also to maintain a reasonable range for the magnitudes of sample size without making unrealistic assessments.

Throughout this empirical investigation, the significance level and nominal power are fixed at α = 0.05 and 1 – β = 0.90, respectively. With the prescribed specifications, the required sample sizes are computed for the exact procedure with the power function ΨJ. The computed sample sizes of the nine combined predictor mean and variance patterns are summarized in Table 2, Table S1, and Table S2 for the coefficient differences d = 0.3, 0.4, and 0.5, respectively. As suggested by a referee, Tables S1 and S2 are presented in Additional files 3 and 4, respectively. To evaluate the accuracy of power calculations, the estimated powers of the exact and approximate procedures are also presented. Note that the attained values of the exact approach are marginally larger than the nominal level 0.90. In contrast, the estimated powers of the approximation of Colosimo et al. [22] are all less than 0.90, and the difference is quite substantial in some cases. Then, Monte Carlo simulation studies of 10,000 iterations were performed to compute the simulated power for the designated sample sizes and parameter configurations (a sketch is given after this paragraph). For each replicate, N predictor values were generated from the designated normal distribution N(μX, \( \sigma_X^2 \)). The resulting values of the normal predictors, the intercept and slope coefficients {βI, βS}, and the error variance σ2, in turn, determine the configurations for producing N normal outcomes of the simple linear regression model defined in Eq. 1. Next, the test statistic FJ was computed, and the simulated power was the proportion of the 10,000 replicates whose test statistics FJ exceeded the corresponding critical value F2, ν, 0.05. The adequacy of the two sample size procedures is determined by the error between the estimated power and the simulated power. The simulated power and error are also summarized in Table 2, Table S1, and Table S2 for all twenty-seven design schemes.
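The simulation procedure just described can be sketched as follows; the data generation, the joint F statistic of Eq. 4, and the rejection rule follow the text, while the function name and arguments are illustrative:

```python
# Simulated power of the joint test: generate (X, Y), compute F_J, and record
# the proportion of replicates with F_J > F_{2, nu, alpha}.
import numpy as np
from scipy import stats

def simulated_power(N, beta_I, beta_S, beta_I0, beta_S0, sigma2, mu_X, sigma2_X,
                    alpha=0.05, n_rep=10_000, seed=1):
    rng = np.random.default_rng(seed)
    nu = N - 2
    crit = stats.f.isf(alpha, 2, nu)
    reject = 0
    for _ in range(n_rep):
        X = rng.normal(mu_X, np.sqrt(sigma2_X), size=N)
        Y = beta_I + beta_S * X + rng.normal(0.0, np.sqrt(sigma2), size=N)
        D = np.column_stack([np.ones(N), X])            # design matrix (1, X)
        bhat = np.linalg.lstsq(D, Y, rcond=None)[0]     # least squares estimates
        resid = Y - D @ bhat
        sigma2_hat = resid @ resid / nu                 # SSE / nu
        bD = bhat - np.array([beta_I0, beta_S0])
        FJ = (bD @ (D.T @ D) @ bD) / 2.0 / sigma2_hat   # Eq. 4 with W_X^{-1} = X'X
        reject += FJ > crit
    return reject / n_rep
```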

Table 2 Computed sample size, estimated power, and simulated power for Normal predictors with {βI, βS} = {0.3, 1.3}, {βI0, βS0} = {0, 1}, σ2 = 1, Type I error α = 0.05, and nominal power 1 – β = 0.90

It can be seen from these results that the discrepancy between the estimated power and the simulated power is considerably small for the proposed exact technique under all model configurations considered here. Specifically, the resulting errors of the 27 designs all fall within the small range of −0.0087 to 0.0056. On the other hand, the estimated powers of the approximate method are consistently smaller than the simulated powers. The outcomes show a clear pattern: the absolute error decreases with the coefficient difference d and the predictor mean μX, and increases with the predictor variance \( \sigma_X^2 \), when all other configurations are held constant. Notably, the associated absolute errors can be as large as 0.4456, 0.4295, and 0.4183 when μX = 0 and \( \sigma_X^2 \) = 2 for d = 0.3, 0.4, and 0.5 in Table 2, Table S1, and Table S2, respectively. It should be noted that most of the sample sizes reported in the empirical examination of Colosimo et al. ([22], Table 1) are rather large and impractical. This may explain why the performance of the approximate formula appeared acceptable in their study. In fact, some of their cases with smaller sample sizes also showed the same phenomenon: the simple method underestimates the power level and overestimates the sample size required to achieve the nominal power. Essentially, the simplicity of the approximate formula comes at a considerable price in terms of inaccurate power and sample size calculations.

Non-normal predictors

To address the sensitivity of the two techniques to distributional assumptions, power and sample size calculations were also conducted for regression models with non-normal predictors. For illustration, the model settings in Table 2 with {βID, βSD} = {0.3, 0.3} are modified by assuming the predictors follow one of four distributions: Exponential(1), Gamma(2, 1), Laplace(1), and Uniform(0, 1). For ease of comparison, the designated distributions were linearly transformed to have the mean μX and variance \( \sigma_X^2 \) reported in the previous study (see the sketch below). Hence, the computed sample sizes associated with the exact procedure and the estimated powers of the two methods remain identical for the four non-normal distributions. The simulated powers were obtained with Monte Carlo simulation studies of 10,000 iterations under the selected model configurations and non-normal predictor distributions. Similar to the numerical assessments in the preceding study, the computed sample sizes, simulated powers, estimated powers, and associated errors of the two competing procedures are presented in Tables S3-S6 of Additional files 5, 6, 7, and 8 for the four types of non-normal predictors, respectively.
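For instance, the transformed Exponential(1) predictors can be generated as in the following sketch (hypothetical helper; the other three distributions are rescaled in the same way):

```python
# Draw Exponential(1) predictors and linearly transform them to the target
# mean mu_X and variance sigma_X^2 (Exponential(1) has mean 1 and variance 1).
import numpy as np

def transformed_exponential(N, mu_X, sigma2_X, rng):
    X0 = rng.exponential(1.0, size=N)
    return mu_X + np.sqrt(sigma2_X) * (X0 - 1.0)
```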

Regarding the robustness properties of the two procedures, the results in Tables S3-S6 suggest that the performance of the exact approach is only slightly affected by the non-normal covariate settings. The high skewness and kurtosis of the Exponential distribution apparently have a more prominent impact on the normal-based power function than the other three cases of Gamma, Laplace, and Uniform distributions. Note that the approximate method depends only on the mean values of the predictors and is presumably less sensitive to the variation of predictor distributions. Its accuracy marginally improved in some cases, but it generally maintains almost the same performance as in the normal setting presented in Table 2. In short, the sensitivity and robustness of the suggested exact technique depend on how badly the predictor distributions depart from a normality structure. On the other hand, the performance assessments show that the exact procedure still gives acceptable results even in the situations with non-normal predictors considered here. More importantly, this empirical evidence reveals that the exact approach is more reliable and accurate than the approximate method and can be recommended as a trustworthy technique for power and sample size calculations.

Discussion

In practice, a research study requires adequate statistical power and a sufficient sample size to detect scientifically credible effects. Although multiple linear regression is a well-recognized statistical tool, the corresponding power and sample size problem for model validation has not been adequately examined in the literature. To enhance the usefulness of the joint test of intercept and slope coefficients in linear regression analysis, this article presents theoretical discussions and computational algorithms for power and sample size calculations under the random modeling framework. The stochastic nature of the predictor variables is taken into account by assuming that they are independent and identically normally distributed. In contrast, the existing method of Colosimo et al. [22] adopted a direct replacement of the predictor variables by their mean values. Consequently, the proposed exact approach has the prominent advantage of accommodating the complete distributional features of normal predictors, whereas the simple approximation of Colosimo et al. [22] only includes the mean parameters of the predictor variables.

Conclusions

The presented analytic derivations and empirical results indicate that the approximate formula of Colosimo et al. [22] generally does not give accurate power and sample size calculations. In terms of overall accuracy and robustness, the exact approach clearly outperforms the approximate method as a useful tool in planning validation studies. Although the numerical illustration only involves a single predictor variable, it embodies the underlying principle and critical features of linear regression and can guide similar evaluations within the more general framework of multiple linear regression.

Abbreviations

ANCOVA: Analysis of covariance

ANOVA: Analysis of variance

References

  1. Cohen J, Cohen P, West SG, Aiken LS. Applied multiple regression/correlation analysis for the behavioral sciences. 3rd ed. Mahwah: Erlbaum; 2003.
  2. Kutner MH, Nachtsheim CJ, Neter J, Li W. Applied linear statistical models. 5th ed. New York: McGraw Hill; 2005.
  3. Montgomery DC, Peck EA, Vining GG. Introduction to linear regression analysis. 5th ed. Hoboken: Wiley; 2012.
  4. Snee RD. Validation of regression models: methods and examples. Technometrics. 1977;19:415–28.
  5. Maddahi J, Crues J, Berman DS, et al. Noninvasive quantification of left ventricular myocardial mass by gated proton nuclear magnetic resonance imaging. J Am Coll Cardiol. 1987;10:682–92.
  6. Rose BI, McCallum WD. A simplified method for estimating fetal weight using ultrasound measurements. Obstet Gynecol. 1987;69:671–4.
  7. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale: Erlbaum; 1988.
  8. Kraemer HC, Blasey C. How many subjects?: Statistical power analysis in research. 2nd ed. Los Angeles: Sage; 2015.
  9. Murphy KR, Myors B, Wolach A. Statistical power analysis: a simple and general model for traditional and modern hypothesis tests. 4th ed. New York: Routledge; 2014.
  10. Ryan TP. Sample size determination and power. Hoboken: Wiley; 2013.
  11. Gatsonis C, Sampson AR. Multiple correlation: exact power and sample size calculations. Psychol Bull. 1989;106:516–24.
  12. Mendoza JL, Stafford KL. Confidence interval, power calculation, and sample size estimation for the squared multiple correlation coefficient under the fixed and random regression models: a computer program and useful standard tables. Educ Psychol Meas. 2001;61:650–67.
  13. Sampson AR. A tale of two regressions. J Am Stat Assoc. 1974;69:682–9.
  14. Shieh G. Exact interval estimation, power calculation and sample size determination in normal correlation analysis. Psychometrika. 2006;71:529–40.
  15. Shieh G. A unified approach to power calculation and sample size determination for random regression models. Psychometrika. 2007;72:347–60.
  16. Shieh G. Exact analysis of squared cross-validity coefficient in predictive regression models. Multivar Behav Res. 2009;44:82–105.
  17. Kelley K. Sample size planning for the squared multiple correlation coefficient: accuracy in parameter estimation via narrow confidence intervals. Multivar Behav Res. 2008;43:524–55.
  18. Krishnamoorthy K, Xia Y. Sample size calculation for estimating or testing a nonzero squared multiple correlation coefficient. Multivar Behav Res. 2008;43:382–410.
  19. Shieh G. Sample size requirements for interval estimation of the strength of association effect sizes in multiple regression analysis. Psicothema. 2013;25:402–7.
  20. Shieh G. Power and sample size calculations for contrast analysis in ANCOVA. Multivar Behav Res. 2017;52:1–11.
  21. Tang Y. Exact and approximate power and sample size calculations for analysis of covariance in randomized clinical trials with or without stratification. Stat Biopharm Res. 2018;10:274–86.
  22. Colosimo EA, Cruz FR, Miranda JLO, et al. Sample size calculation for method validation using linear regression. J Stat Comput Simul. 2007;77:505–16.
  23. Binkley JK, Abbot PC. The fixed X assumption in econometrics: can the textbooks be trusted? Am Stat. 1987;41:206–14.
  24. Cramer EM, Appelbaum MI. The validity of polynomial regression in the random regression model. Rev Educ Res. 1978;48:511–5.
  25. Shaffer JP. The Gauss-Markov theorem and random regressors. Am Stat. 1991;45:269–73.
  26. Rencher AC, Schaalje GB. Linear models in statistics. 2nd ed. Hoboken: Wiley; 2007.
  27. Anderson NG, Jolley IJ, Wells JE. Sonographic estimation of fetal weight: comparison of bias, precision and consistency using 12 different formulae. Ultrasound Obstet Gynecol. 2007;30:173–9.


Acknowledgements

The authors would like to thank the editor and two reviewers for their constructive comments that led to an improved article.

Funding

No funding.

Availability of data and materials

The summary statistics are available from the following article: [6].

Author information

Contributions

SLJ conceived of the study, and participated in the development of theory and helped to draft the manuscript. GS carried out the numerical computations, participated in the empirical analysis and drafted the manuscript. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Gwowen Shieh.

Ethics declarations

Authors’ information

SLJ is a professor of Applied Mathematics, Chung Yuan Christian University, Taoyuan, Taiwan 32023. GS is a professor of Management Science, National Chiao Tung University, Hsinchu, Taiwan 30010.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

SAS/IML program for computing the power for the joint test of intercept and slope coefficients. (PDF 60 kb)

Additional file 2:

SAS/IML program for computing the sample size for the joint test of intercept and slope coefficients. (PDF 60 kb)

Additional file 3:

Table S1. Computed sample size, estimated power, and simulated power for Normal predictors with {βI, βS} = {0.4, 1.4}, {βI0, βS0} = {0, 1}, σ2 = 1, Type I error α = 0.05, and nominal power 1 – β = 0.90. (PDF 95 kb)

Additional file 4:

Table S2. Computed sample size, estimated power, and simulated power for Normal predictors with {βI, βS} = {0.5, 1.5}, {βI0, βS0} = {0, 1}, σ2 = 1, Type I error α = 0.05, and nominal power 1 – β = 0.90. (PDF 96 kb)

Additional file 5:

Table S3. Computed sample size, estimated power, and simulated power for transformed Exponential predictors with {βI, βS} = {0.3, 1.3}, {βI0, βS0} = {0, 1}, σ2 = 1, Type I error α = 0.05, and nominal power 1 – β = 0.90. (PDF 95 kb)

Additional file 6:

Table S4. Computed sample size, estimated power, and simulated power for transformed Gamma predictors with {βI, βS} = {0.3, 1.3}, {βI0, βS0} = {0, 1}, σ2 = 1, Type I error α = 0.05, and nominal power 1 – β = 0.90. (PDF 95 kb)

Additional file 7:

Table S5. Computed sample size, estimated power, and simulated power for transformed Laplace predictors with {βI, βS} = {0.3, 1.3}, {βI0, βS0} = {0, 1}, σ2 = 1, Type I error α = 0.05, and nominal power 1 – β = 0.90. (PDF 95 kb)

Additional file 8:

Table S6. Computed sample size, estimated power, and simulated power for transformed Uniform predictors with {βI, βS} = {0.3, 1.3}, {βI0, βS0} = {0, 1}, σ2 = 1, Type I error α = 0.05, and nominal power 1 – β = 0.90. (PDF 96 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Jan, SL., Shieh, G. Sample size calculations for model validation in linear regression analysis. BMC Med Res Methodol 19, 54 (2019). https://doi.org/10.1186/s12874-019-0697-9

