 Debate
 Open Access
 Open Peer Review
Improving measurement invariance assessments: correcting entrenched testing deficiencies
Leslie A. Hayduk
https://doi.org/10.1186/s12874-016-0230-3
© The Author(s). 2016
 Received: 18 August 2015
 Accepted: 19 September 2016
 Published: 6 October 2016
Abstract
Background
Factor analysis historically focused on measurement while path analysis employed observed variables as though they were error-free. When factor and path analysis merged as structural equation modeling, factor analytic notions dominated measurement discussions – including assessments of measurement invariance across groups. The factor analytic tradition fostered disregard of model testing and consequently entrenched this deficiency in measurement invariance assessments.
Discussion
Applying contemporary model testing requirements to the so-called configural model initiating invariance assessments will improve future assessments, but a substantial backlog of deficient assessments remains to be overcome. This article:

- summarizes the issues,
- demonstrates the problem using a recent example,
- illustrates a superior model assessment strategy,
- and documents disciplinary entrenchment of inadequate testing as exemplified by the journal Organizational Research Methods.
Summary
Employing the few methodologically and theoretically best, rather than precariously multiple, indicators of latent variables increases the likelihood of achieving properly causally specified structural equation models capable of displaying measurement invariance. Just as evidence of invalidity trumps reliability, evidence of configural model misspecification trumps invariant estimates of misspecified coefficients.
Keywords
 Invariance
 Factor analysis
 Testing
 Close fit
 Structural equation model
 SEM
Background
Structural equation models meld a measurement “model”, composed of the causal connections between latent and observed variables, to a latent-level “model” composed of the causal connections between the latent-level variables. The measurement and latent model-components tended to be viewed as distinct because the measurement model-segment historically developed from the factor analytic tradition [1–3] while the latent-level model-segment followed the path analytic tradition [4–8]. These model-segments can be appropriately and beneficially combined statistically, but the factor tradition of downplaying and evading model testing conflicts with the path analytic tradition of attentive model testing. Researchers following factor analytic traditions and confronting failing models were inclined to report fit-indices rather than correct the problems detected through model testing [9].
Structural equation model measurement and latent-level specifications should be granted equal scrutiny because the latent level cannot function appropriately without proper measurement, and measurement is not assured unless the supposedly-measured latents function appropriately [10–12]. The question of measurement invariance arises when researchers consider whether an indicator item measures the same thing in different contexts – whether in different laboratories, different countries, different religions, at different times, or with different languages [13]. Comparing variables’ effects or means between groups makes sense when the same variables are being considered in the two groups. It is reasonable to compare attitudes/apples in one group with attitudes/apples in another group, but usually less instructive to compare one group’s attitudes/apples to another group’s desires/oranges. Comparing structural equation models demands attention to both the observed indicator variables and their underlying latent causes because even if the indicators are identical the underlying latent sources might differ between the groups. Asking a group of men how frequently they shave taps into something different than asking women the identical question.
Measurement-invariance assessments frequently begin with a factor-structured model that is progressively constrained by adding between-group equality constraints on the loadings, measurement error variances, and measurement intercepts. Changes in loadings (the causal actions leading from the latents to indicators) are usually granted priority because differences in loadings directly signal differences between the observed indicators and the underlying latents. If an indicator is more strongly responsive to a latent in one group than another, this signals a change in the causal source of the indicator – something different may be being measured in the two groups. Parallel comments apply to measurement error variances, intercepts, and other model coefficients [13]. Consequently, testing the tenability of between-group equality constraints on loadings, and other model coefficients, underpins assessments of measurement consistency or invariance. Assessing measurement invariance by investigating between-group coefficient constraints is reasonable as long as inattention to model testing has not already undermined the very foundation of this approach. Unfortunately, the concern for testing coefficient constraints has not routinely extended to testing the baseline model containing the constrained coefficients [14].
A factor-structured model, called the configural model, often constitutes the initial model in the invariance testing process [13]. The configural model typically results from factor analyses which place each indicator under a specific factor. The clustering of indicators under latent factors is conveyed by requiring zero cross-loadings from each latent to the indicators of the other latent factors, and zero measurement error covariances (corresponding to the presumption of statistically independent errors). The identical clustering of indicators under latent factors in the two groups constitutes the basic configuration that grants the name configural model. The configural model places no between-group constraints on the estimated coefficients, so the loadings, measurement error variances, and other estimates may differ between the groups, though the clustering of indicators under latents (the placement of the loadings) retains the same configuration in both groups.
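The constraint structure of a configural model determines its degrees of freedom, which can be checked with simple bookkeeping. The following sketch is ours, not part of the original analyses (it assumes covariance-only maximum likelihood estimation with one loading per factor fixed for scaling, and the helper name is hypothetical); the 12-indicator, 3-factor configuration mirrors the example analyzed later:

```python
def group_df(n_indicators, n_factors, free_loadings):
    """Degrees of freedom contributed by one group: observed covariance
    moments minus the free parameters of a standard factor model
    (free loadings + error variances + factor variances + factor covariances)."""
    moments = n_indicators * (n_indicators + 1) // 2
    free_params = (free_loadings
                   + n_indicators                        # error variances
                   + n_factors                           # factor variances
                   + n_factors * (n_factors - 1) // 2)   # factor covariances
    return moments - free_params

# A two-group configural model with 12 indicators clustered under 3 factors,
# one loading per factor fixed for scaling, leaving 9 free loadings per group.
per_group = group_df(12, 3, free_loadings=9)
print(per_group)       # 51 degrees of freedom in each group
print(2 * per_group)   # 102 degrees of freedom for the two-group model
```

Because the configural model places no between-group equality constraints on coefficients, each group simply contributes its own degrees of freedom to the two-group total.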
This raises the issue of whether or not the configural model, with its many zero loadings and other coefficients that are constrained to be identical in the two groups, must be carefully tested. The perspective taken here is that the configural model must be carefully tested [14] – which conflicts with the uncomfortably common practice of treating significant inconsistency between the data and the configural model as “acceptable” or “tolerable” by appealing to fit indices such as RMSEA or CFI [13, 15–18]. Defending measurements as invariant on the basis of consistency between groups is untenable if the model’s structure is inconsistent with the world’s causal structure [19, 20]. If the configural model’s structure does not correspond to the world’s causal structure, asking about invariance between groups is asking whether the groups agree in their misrepresentation of the connections between the indicators and the underlying latent variables! Measurement invariance makes sense not merely because the groups agree with one another but because there is between-group agreement as well as consistency between the model and the worldly causal structures that provided the data.
It is important to differentiate between model fit and model properness, because seriously causally misspecified/wrong factor models can fit. Hayduk [11], for example, illustrates that it is sometimes possible for a one-factor model to perfectly fit data generated by three real-world latent factors. It would be unreasonable to claim adequate or invariant measurement of one factor if the real world contains three factors, not one! Given that causally wrong factor models can provide perfect fit, it should be obvious that more extreme causal misspecifications can produce near-fit, or close-fit. The ability of importantly causally-incorrect models to nearly fit makes it unreasonable to employ mere closeness of fit as the benchmark for assessing measurement, or measurement invariance. The appropriate measurement concern is not some indexed amount of ill fit or closeness to fit, but whether or not the data are consistent with the indicated number of underlying latents having the specified connections to the indicators [20]. Evidence of inconsistency between the model’s structure and the world’s structure destroys the very foundation of any claim that the model provides adequate measurement.
Data and procedures
The ISSP Work Indicators^a

| Indicator wording | Designation here, and in Cheung & Lau | ISSP | GB Mean | GB Std. Dev. | US Mean | US Std. Dev. |
|---|---|---|---|---|---|---|
| My job is secure | y_1 | V59 | 2.46 | 1.073 | 2.09 | .977 |
| My income is high | y_2 | V60 | 3.38 | .956 | 3.22 | 1.009 |
| My opportunities for advancement are high | y_3 | V61 | 3.29 | 1.030 | 3.00 | 1.115 |
| My job has flexible working hours | y_4 | V67 | 3.16 | 1.218 | 2.82 | 1.192 |
| My job is interesting | y_5 | V63 | 2.11 | .852 | 2.12 | .967 |
| I can work independently | y_6 | V64 | 2.08 | .838 | 2.09 | .965 |
| In my job I can help other people | y_7 | V65 | 2.28 | .951 | 2.07 | .913 |
| {how often}…are you bored at work? | y_8 | V71 | 2.19 | .937 | 2.24 | .953 |
| {how often}…do you have to do hard physical work? | y_9 | V69 | 3.46 | 1.253 | 3.48 | 1.186 |
| {how often}…do you work in dangerous conditions? | y_10 | V72 | 4.09 | 1.115 | 3.97 | 1.164 |
| {how often}…do you work in unhealthy conditions? | y_11 | V73 | 4.06 | 1.100 | 4.14 | 1.042 |
| {how often}…do you work in physically unpleasant conditions? | y_12 | V74 | 4.16 | 1.067 | 4.10 | 1.022 |
| Sex | x_1 | V85 | 1.44 | .496 | 1.47 | .499 |
| Age | x_2 | V86 | 39.31 | 11.463 | 38.62 | 11.914 |
Analysis and context
C&L say: “As good model fit is a prerequisite for meaningful interpretation of BC bootstrap confidence intervals, it is necessary to ensure that the configural invariance model shows adequate model fit” [17]:172–173. C&L’s focus on Bias-Corrected bootstrapping is moot, and their concern for the adequacy of the configural model is laudable, but C&L’s statement nonetheless remains problematic. The problem centers on “adequate model fit”. C&L continued the effete factor tradition of disregarding evidence of measurement problems when they required that their configural model show only “acceptable model fit” [17]:173 (emphasis added) rather than requiring consistency between the data and their configural model’s causal structure. Switching away from model properness to model fit is fundamentally problematic because it inappropriately pretends measurement could be reported as adequate and invariant even if their configural model’s causal structure was inconsistent with the world’s structure.
Unfortunately, C&L really meant, trusted, and depended on fit, as opposed to respecting evidence of world-model causal inconsistency. C&L report that their configural model provides χ² = 399.6 with 102 degrees of freedom – namely 51 degrees of freedom in each group. C&L did not report the corresponding probability, though anyone knowing that a χ² having many degrees of freedom is nearly normally distributed with mean equal to the degrees of freedom and variance twice the degrees of freedom (so the standard deviation is \( \sqrt{2df} \)) should not need a χ² calculator to determine that C&L’s configural model’s χ² is about 20 standard deviations from the mean, and hence has p < 0.000001.
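The back-of-envelope calculation just described can be sketched in a few lines (the helper below is ours; the normal approximation slightly understates the exact χ² tail probability, but the conclusion is unaffected):

```python
import math

# Normal approximation described in the text: for large df,
# chi-square is roughly Normal(mean = df, sd = sqrt(2*df)).
def chisq_z_and_p(chisq, df):
    """Standardized distance from the mean, and the upper-tail p-value
    under the normal approximation (a hypothetical helper of ours)."""
    z = (chisq - df) / math.sqrt(2 * df)
    p = 0.5 * math.erfc(z / math.sqrt(2))   # upper-tail normal probability
    return z, p

z, p = chisq_z_and_p(399.6, 102)
print(round(z, 1))    # 20.8 -- about 20 standard deviations above the mean
print(p < 0.000001)   # True
```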
That is convincing evidence of inconsistency between the data and C&L’s configural model. This probability informs us that there is essentially no chance that random sampling variations could account for the difference between the available data and C&L’s configural model, even after the configural model is supplemented with optimal loading and measurement error variance estimates in each group. To say this another way – there is essentially no chance that the observed data could have arisen via random sampling if the worldly causal forces were structured as in C&L’s configural model. And yet another way – something about C&L’s configural model’s specification must be changed in order to match or correspond to the world’s causal structure.
The model might be wrong for claiming three latents exist. If there were four or more latents in each country, it would be unreasonable to claim the indicators adequately measured three latents! Similarly, it would be unreasonable to claim adequate invariant measurement of three latents if the available data are inconsistent with three modeled latent factors after constraining the loadings or other estimates to be equal between the GB and USA groups. In fact, all 24 models with between-group constraints reported in C&L’s Table 6 [17]:181 are highly significantly inconsistent with the International Social Survey Program data.
It is strange, but consistent with the problematic factor analytic tradition, that C&L attend to significant increases in χ² upon insertion of additional between-group constraints (where about half the significant χ² changes are on the order of 10 to 20), yet they disregard the huge 399.6 χ² resulting from the constraints comprising their base configural model. They attend to comparatively small χ² changes resulting from estimating coefficients constrained to equality between the groups, but fail to attend to the huge χ² change resulting from the constraints providing the configural model, which also required equality (in number of latents, loading placements, zero cross-loadings, and error covariances) between the groups.
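The χ² difference tests C&L do attend to compare nested models. A minimal sketch of such a test follows (both helpers are ours; the survival function uses the closed form that exists for even degrees of freedom, and the constrained-model numbers are illustrative, not C&L's):

```python
import math

def chi2_sf_even_df(x, df):
    """Exact upper-tail chi-square probability; the survival function has a
    closed form when df is even (df = 2k):
    P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)**i / i!"""
    assert df > 0 and df % 2 == 0
    k = df // 2
    term, total = 1.0, 0.0
    for i in range(k):
        if i > 0:
            term *= (x / 2) / i   # builds (x/2)**i / i! incrementally
        total += term
    return math.exp(-x / 2) * total

def delta_chi2_test(chisq_constrained, df_constrained, chisq_free, df_free):
    """Nested-model chi-square difference test for added equality constraints."""
    d_chi2 = chisq_constrained - chisq_free
    d_df = df_constrained - df_free
    return d_chi2, d_df, chi2_sf_even_df(d_chi2, d_df)

# Illustrative values: between-group constraints that raise chi-square by
# about 20 (the size the text mentions) while adding 6 degrees of freedom.
d, ddf, p = delta_chi2_test(419.6, 108, 399.6, 102)
print(round(d, 1), ddf, round(p, 3))   # 20.0 6 0.003 -- a significant change
```

The asymmetry the text criticizes is that this Δχ² arithmetic is routinely applied to the added constraints while the base model's own χ² of 399.6 on 102 df goes untested.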
One option for investigating what might be wrong with C&L’s configural model would be to check the modification indices for cross-loadings or error covariances in the configural factor model, but this approach is hampered by capitalization on chance, and it would fail to investigate the more difficult possibility that C&L’s configural model might contain the wrong number of latents. We chose the alternative approach of adding new evidence to the discussion by introducing some possible causes of the three latents C&L postulated. Given that men and women, and the young and old, likely have different work experiences and concerns, Sex and Age likely impact at least some of the work-focused indicators. If C&L’s three latents are in fact the appropriate causal foundations of the work indicators, Sex and Age should only impact those indicators indirectly through those three work latents. The covariances between Sex, Age and the work indicators provide new evidence regarding C&L’s postulated configural causal structure. If effects leading from Sex and Age to the three modeled work factors (now more accurately called work latents) are unable to account for the covariances between Sex, Age and the specified work indicators, that constitutes evidence suggesting the postulated work latent-factors are not the appropriate causal foundations of the indicators. This is an instance of single-indicated latents (Sex and Age) being used to more fully assess multiple-indicated latents/factors, namely C&L’s work latent-factors [24, 25].
We began by attempting to replicate the failure of C&L’s configural model with the International Social Survey Program data. The resultant χ² = 448.4 with 102 degrees of freedom and p = .0000 was similar though not identical to C&L’s χ². The difference in χ² values may be partially due to our analysis having three more respondents in each country than was reported by C&L. Cheung and Lau [17] report 645 and 820 cases, while we obtained 648 and 823 cases for Great Britain and the USA respectively. (Cheung & Rensvold [16] also report 823 USA cases in these data, so Cheung and Lau’s slightly reduced Ns seem inexplicable.) The difference in χ² values may also be partially due to our using maximum likelihood in LISREL rather than C&L’s bootstrapping in Mplus. Whatever the reason for the difference in χ² values, both agree that C&L’s configural model is definitely inconsistent with the data. Furthermore, C&L’s configural model’s specification problems seem spread throughout the model rather than being localized – as evidenced by 9 and 11 (of the 24 possible) cross-loadings in the GB and US models, respectively, having modification indices exceeding 4.0.
It is unsurprising that the Sex- and Age-supplemented model fails (χ² = 680.2 with 138 degrees of freedom and p = .0000) because this model’s structure cannot rectify the ill fit previously observed among the work indicators (where χ² was 448.4). (C&L’s three work latents’ loadings, variances and covariances continue to constitute the only explanation for relationships among the work indicators.) Nonetheless, the substantial increase in χ² constitutes important news because this increase originates primarily in the work indicators’ covariances with Sex and Age. The 18 new degrees of freedom in each group come from attempting to explain 24 new covariances (between Sex, Age, and the 12 work indicators) with the six new effects leading from the Sex and Age latents to the three work latent “factors”. The substantial jump in χ² signals that the six latent effects of Sex and Age on the work latents are insufficient to account for the 24 covariances between Sex, Age and the work indicators. The standardized covariance residuals for 10 and 13 of these 24 covariances exceed 2.0 in GB and the US respectively, and hence would be significant if tested individually. If C&L’s configural model’s work latents constituted the correct causal foundations of the work indicators, those three latents would be able to appropriately distribute the causal impacts of Sex and Age to the work indicators. The multiple inconsistencies between the observed Sex and Age covariances and the covariances required to conform to C&L’s configural work-latent/work-indicator specification clearly report that the work latents in C&L’s configural model are inconsistent with the new evidence.
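The degrees-of-freedom bookkeeping in the preceding paragraph can be verified directly (a sketch merely restating the text's counts):

```python
# Restating the text's degrees-of-freedom bookkeeping for the
# Sex- and Age-supplemented model.
new_covariances_per_group = 2 * 12   # Sex and Age each covary with the 12 work indicators
new_effects_per_group = 2 * 3        # Sex and Age each affect the 3 work latents
new_df_per_group = new_covariances_per_group - new_effects_per_group
print(new_df_per_group)              # 18 new degrees of freedom per group

total_df = 102 + 2 * new_df_per_group
print(total_df)                      # 138, matching the supplemented model's df
```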
Another perspective on this is obtained by specifying Sex and Age in an all-η model (so the Sex and Age latents become η_4 and η_5) [26]. This produces the same estimates and the same ill fit of the work indicators’ covariances with Sex and Age as reported for the Fig. 2 model above, but provides additional modification indices corresponding to potential effects leading directly from Sex and Age to each work indicator. Of the 24 potential “loadings” leading from the Sex and Age latents to the work indicators, 10 and 13 of the modification indices exceed 4.0 for GB and the US respectively. (As in the basic C&L configural model, with no Sex or Age, many of the cross-loadings from the work latents to the work indicators continued to have modification indices exceeding 4.0.) Each real direct Sex or Age effect to a specific work indicator would challenge C&L’s presumption that three work latent-factors constitute the causal foundations of the work indicators. Such effects are inconsistent with their configural model’s causal structure because each direct effect to a work indicator makes Sex or Age an additional latent common cause of the work indicators – via that direct effect and the indirect effects of Sex and Age functioning through the work latents. Thus both the covariance inconsistencies reported by the jump in χ², and the modification indices potentially connecting Sex and Age to specific work indicators, report that the work latent “factors” in C&L’s configural model are problematic, and that more, and/or different, latents actually produce the work indicators.
We caution that we are NOT recommending following the modification indices to obtain fit, because problems created by an incorrect number of underlying latent factors cannot be resolved by following the modification indices. Indeed, the weakness of modification-index coefficient suggestions becomes evident if you notice that the modification indices suggesting effects leading directly from Sex or Age to the work indicators do not correspond to the coefficients suggested by the modification indices for the cross-loadings or work-indicator error covariances in the basic C&L model. That is, following the original C&L model’s modification indices would not have addressed the kinds of model misspecifications currently being encountered. Similarly, the current modification indices might improve fit without correcting the underlying problems – which may require changing the number and identity of the underlying latent variables being measured by the available work indicators.
So both the previously available evidence from within the set of work indicators themselves (as reported by C&L), and the new evidence from attempting to connect the work indicators to Sex and Age as exogenous causes, speak against the causal structuring of the work latents in C&L’s configural model. But there remains the possibility that the work indicators surviving C&L’s additional invariance investigations might fare better than the full set of indicators. C&L’s subsequent investigation of invariant intercepts might have coincidentally weeded out some covariance-problematic indicators, leaving only indicators appropriately modeled by C&L’s work latents.
This was investigated by setting up a model similar to Fig. 2, but employing only the work indicators C&L report as additionally displaying intercept, or scalar, invariance. C&L report the relevant indicators: for Work Content as y_5, y_6, and y_8; for Work Environment as y_9, y_10, and y_12; but left Work Context to be indicated by either the pair y_1 y_4 or the pair y_2 y_3 “based on theoretical interpretation and the research question” [17]:178. Using the y_2 y_3 pair for Work Context; y_5, y_6, y_8 for Work Content; and y_9, y_10, y_12 for Work Environment, along with Sex and Age, results in a model that continues to be severely inconsistent with the covariance data (χ² = 287.0 with 54 degrees of freedom and p = .0000) and displays the same general pattern of inconsistencies reported above. Using the y_1 y_4 pair produces similar disconfirmation but with additional evidence that y_1 and y_4 in the GB group are too inconsistent with one another to support any reasonable covariances among the work latents. Collectively, these observations convincingly report that appealing to intercept-consistency cannot dispel, or overcome, the covariance-inconsistencies introduced by overlooked configural model causal misspecification.
We estimated one additional model employing Sex, Age, and two indicators of each work latent: y_2 and y_3 for Work Context, y_5 and y_6 for Work Content, and y_9 and y_10 for Work Environment. This model also fails to match the covariance data (χ² = 132.9 with 12 degrees of freedom and p = .0000) and displays the same scattered pattern of residual covariance ill fit, and numerous substantial modification indices connecting Sex, Age, and the work latents (via cross-loadings) to the work indicators. This instructs us that even pairs of indicators can sometimes detect problematic configural models, and implicitly instructs us that appropriate causal model specifications for these particular data may require some single-indicated work latents [25]. The diagnostics in the paired-indicators model became more focused than the diagnostics for the models having multiple indicators, and highlighted specific theoretical/methodological issues and options. This suggests it may be useful to begin measurement invariance testing with a configural model containing the few best indicators, rather than beginning with multiple indicators.
Summary and discussion
Cheung and Lau [17] are not alone in disrespecting evidence of misspecification of the configural model initiating measurement invariance assessments. Indeed, there is a long and inglorious history of disrespect of configural model testing even among oft-cited foundational papers. Byrne, Shavelson and Muthen, for example, say that “A nonsignificant χ² (or a reasonable fit as indicated by some alternate index) is justification that the baseline models fit the observed data” [15]:457. Notice the problematic focus on fit rather than on whether or not there is evidence the configural model is improperly causally structured. Byrne, Shavelson and Muthen’s configural factor models had χ² values more than a dozen standard deviations from the mean (and hence p values < 0.000001), and even after 5 modification-index-prompted changes both groups remained significantly inconsistent with the data – one with so small a p value that it could only be reported as 0 by two different web χ² calculators. These models’ inconsistency with the data directly contradicts Byrne, Shavelson and Muthen’s claim that their baseline configural model constitutes a “reasonable representation of the data” [15]:460.
A decade later, Vandenberg and Lance [13] said: “Overall model fit refers to evaluating the ability of the a priori model to (at least approximately) reproduce the observed covariance matrix” [13]:43, where the laxity of “at least approximately” is obvious, and where the concern is again inappropriately expressed as fit rather than measurement’s requirement of proper model causal specification. And nearly yet another decade later, Schmitt and Kuljanin [27] reported that a configural model with χ² = 1183.86 and df = 174, whose p is reported as < 0.01 but is actually < 0.000001, “was accepted because considerable prior research confirmed the discriminant and convergent validity of these items” [27]:218 – as if clear evidence of problems in the current model’s specification could be justifiably disregarded because it would conflict with others’ claims! Schmitt and Kuljanin acknowledge that their review of more than 80 recent measurement invariance studies discovered that what authors “accepted as adequate evidence of configural invariance varied considerably across studies” and that what “constituted adequate fit was invariably subjective” [27]:212 – again notice the misguided emphasis on fit, which easily but inappropriately translates into fit-indices rather than concern for testing the causal properness of the model.
In that same year, Meade, Johnson and Braddy [28] provided a statistically sophisticated simulation of configural measurement invariance testing, which unfortunately failed to acknowledge the study’s key limitation, namely that it disregarded the power of tests to differentiate between factor-structured configural models and non-factor worldly models that can be confused with factor structures [11]. By simulating minor and intentionally trivial factor model variations, while disregarding the more challenging issue of detecting incorrect non-factor structures, Meade et al. contributed (possibly unintentionally) to the myth that N-based power only detects trivial problems. Understanding that important model misspecifications can mimic the minor covariance residuals resulting from trivial factor-model misspecifications [11] makes it a glaring statistical mistake to claim that only trivial things become detectable with increasing N. Consequently, downgrading χ² on the basis that it is “highly sensitive to sample size” [28] becomes a backhanded way of slighting the power provided by large samples – power capable of potentially detecting important model misspecifications. The supposed excuse of χ² being highly sensitive to sample size becomes a demonstration of factor-model myopia (seeing only factor-structured alternatives and misspecifications) and not a reasonable scientific response to a world whose causal features are currently unknown, and potentially not factor structured [29]. Fit-index-propelled disregard of evidence of model causal misspecification has undoubtedly led to more than a few optimistic-yet-erroneous measurement invariance reports.
Several observations seem warranted. First, if a researcher intends to investigate the invariance of measurements between groups, the basic model structure – the configural model initiating the invariance assessment – must be consistent with both groups’ data. Evidence of inconsistency between the configural model and either group’s data may be signaling that the model contains incorrect latents. Incorrect latents render all “measurements”, including “measurements” that are consistent between groups, dubious because measurement is meaningless if the modeled latents do not correspond to worldly features [20]. When assessing measurement, evidence of invalidity trumps reliability [30].
Second, configural models initiating invariance testing need not be factor structured. It is reasonable to start with a full structural equation model that includes exogenous variables like Sex and Age. Indeed, it seems preferable to begin with a configural model whose latent structure is consistent with the researchers’ theory, their methodological understanding, and the data. Measurement and measurement-invariance assessments should be integrated with latent-level structural understandings. Latents are known through their indicators – the basic factor claim – but latents are not only known through their indicators. Latents are also known through the latent-level causal structures in which they participate – like the Sex and Age structures [10, 31]. A substantial but avoidable factor bias, and corresponding latent-theory weakness, accompanies routinely initiating measurement invariance testing with factor-structured configural models.
Third, it seems self-destructive to begin invariance testing with multiple indicators of factor latents whenever it is likely to be difficult to obtain reasonably-functioning indicators. Occasionally, obtaining adequate measurement with even pairs of indicators may prove difficult – recall the failure of C&L’s model with only two indicators per work latent. If a factor-structured configural model fails, and retaining the full set of indicators is desired, add latents instead of persisting with problematic factor structuring of the indicators. Again, validity trumps reliability when assessing measurement.
Fourth, researchers are urged to think causally about all their modeled variables. The covariances within each set of indicators, the covariances between diverse sets of indicators, and the covariances between latents, all result from productive/impactful/consequential effects in the real world. Faithful modeling of the worldly causal structures is required to attain adequate and invariant measurements. If the model’s structure is inconsistent with the indicators’ worldly causal milieu, the very notion that the indicators are measures is rendered dubious – even if those indicators function invariantly (consistently incorrectly). Yes, even in the context of invariance, validity concerns trump reliability when assessing measurement. Our configural models must structurally mirror worldly causal impacts if they are to testify to the adequacy and invariance of measures of worldly features [19, 20, 32].
The causal considerations should include the possibility of context-dependent causal impacts. For example, the above models represented Sex as having effects on C&L’s work latent variables, whereas proper measurement might require modeling statistical interactions because the relevant latent causal effects may differ between the sexes. For example, physically demanding work and unhealthy work conditions might be embedded in different causal networks for males than for females, due to differences between typical male and female work environments. If so, the configural model should contain interactions with Sex when assessing between-country invariance. In general, understanding a latent factor as “something common to the items” is likely to be too causally imprecise to support a meticulously causal configural model. For example, causal consideration of C&L’s factor-structured models requires considering whether the same basic latent variable Job Content causes workers to feel both that their job helps other people and that their job is interesting; and considering whether the latent Work Environment causes both doing hard physical work and unhealthy work conditions (like exposure to diseases or dangerous chemicals). Our focus on statistical matters means that we, like C&L, are limited in the depth to which we can investigate the work indicators’ methodology and causal embeddedness, but the important point remains that proper or valid causal specification, however complex, constitutes a mandatory foundational requirement for measurement invariance assessments.
Disciplinary entrenchment of problems
The invariance testing problem illustrated above is more strongly entrenched in, and will be more difficult to dislodge from, some disciplines than others – presumably the disciplines committed to factor analysis. Researchers in such disciplines might consider the archived backstory to the current publication, namely SEMNET exchanges between Gordon Cheung and Les Hayduk in April, 2015 [33]. Those discussions resulted in an earlier version of this article being submitted to Organizational Research Methods. Cheung is a senior scholar who is on the editorial board of Organizational Research Methods, and Cheung and Lau’s [17] article appeared in that journal, so it seemed appropriate for the journal to participate in correcting a problem it helped (possibly unknowingly) propagate. The manuscript was rejected, with no invitation for resubmission or response to the reviews, by the editor (James LeBreton) in agreement with the associate editor’s (Adam Meade’s) recommendation – which pointed out that “Fundamentally, the reviewers do not agree with the philosophy espoused in the manuscript related to the necessity for sole reliance on chi-square as a method for testing”. It is patently silly to disregard the “sole” strongest-available testing on the basis of an alternative philosophy espousing weaker/deficient testing, so we might be inclined to laugh off this comment as reflecting the philosophical folly of junior scholars unfamiliar with the recent literature [10–12]. But the editor later indicated the reviewers “are individuals who are currently serving on the editorial boards of a number of leading journals in psychology, management, and quantitative methods. All three reviewers have served as chief editor or AE at one (or more) of the leading journals in their fields”. The appropriate academic response to this selectively-entrenched testing deficiency is to air the disagreements.
To that end, I have provided Additional file 1, containing the original ORM manuscript, some editorial correspondence, and the anonymous reviews into which I inserted responses presenting the contrasting testing philosophy. (Both editors of Organizational Research Methods, and their publisher, have agreed to the publication of their editorial correspondence and the peer review reports under a Creative Commons license.) I invite you to weigh each spat, render your adjudication, and employ the victorious arguments to improve invariance testing. If the journals in your area do not yet exhibit a “philosophy” of respecting EVIDENCE of model misspecification and invalidity, you will have to either submit to a philosophy of evidence-disrespect, or become an agent of change [34].
Conclusion
Ensure that a theoryappropriate and methodsappropriate causal configural model is consistent with your indicators before moving to any other steps in measurement invariance assessment.
Declarations
Acknowledgements
I thank Dr. James LeBreton and Dr. Adam Meade for permission to publish their editorial correspondence, and Sage Publishing for permission to publish the anonymous ORM reviews.
Funding
None.
Availability of data and materials
The data are publicly available from the International Social Survey Program: ISSP Research Group. International Social Survey Program: Work Orientations I – ISSP 1989. Cologne: GESIS Data Archive; 1991. ZA1840 Data file Version 1.0.0. doi:10.4232/1.1840.
Competing interests
The author declares that he has no competing interests.
Consent for publication
Both editors of Organizational Research Methods, and their publisher, have agreed to the publication of their editorial correspondence and the ORM peer review reports under a Creative Commons license.
Ethics approval and consent to participate
Not applicable.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Authors’ Affiliations
References
 Thurstone LL. Multiple Factor Analysis. Chicago: University of Chicago Press; 1947.Google Scholar
 Harman HH. Modern Factor Analysis. 2nd ed. Chicago: University of Chicago Press; 1967.Google Scholar
 Lawley DN, Maxwell AE. Factor Analysis as a Statistical Method. 2nd ed. London: Butterworth & Co.; 1971.Google Scholar
 Wright S. Correlation and causation. J Agric Res. 1921;20:557–85.Google Scholar
 Wright S. The method of path coefficients. Ann Math Stat. 1934;5:161–215.View ArticleGoogle Scholar
 Blalock HMJ. Causal Inference in Nonexperimental Research. Chapel Hill: University of North Carolina Press; 1964.Google Scholar
 Duncan OD. Introduction to Structural Equation Models. New York: Academic; 1975.Google Scholar
 Heise DR. Causal Analysis. New York: Wiley; 1975.Google Scholar
 Sorbom D. Karl Joreskog and LISREL: A personal story. In: Cudeck R, du Toit S, Sorbom D, editors. Structural Equation Modeling: Present and Future. A Festschrift in Honor of Karl Joreskog. Lincolnwood: Scientific Software International; 2001.Google Scholar
 Hayduk LA, Glaser DN. Jiving the Four-Step, Waltzing Around Factor Analysis, and Other Serious Fun. Struct Equ Model. 2000;7(1):1–35.View ArticleGoogle Scholar
 Hayduk LA. Seeing Perfectly Fitting Factor Models That Are Causally Misspecified: Understanding That Close-Fitting Models Can Be Worse. Educ Psychol Meas. 2014;74(6):905–26.View ArticleGoogle Scholar
 Hayduk LA. Shame for Disrespecting Evidence: The Personal Consequences of Insufficient Respect for Structural Equation Model Testing. BMC Med Res Methodol. 2014;14:1–10. doi:10.1186/1471-2288-14-124.View ArticleGoogle Scholar
 Vandenberg RJ, Lance CE. A review and synthesis of the measurement invariance literature: Suggestions, practices, and recommendations for organizational research. Organ Res Methods. 2000;3(1):4–70.View ArticleGoogle Scholar
 Hayduk L, Cummings G, Boadu K, PazderkaRobinson H, Boulianne S. Testing! Testing! One, two, three – Testing the theory in structural equation models. Personal Individ Differ. 2007;42(5):841–50.View ArticleGoogle Scholar
 Byrne BM, Shavelson RJ, Muthen B. Testing for the equivalence of factor covariance and mean structures: The issue of partial measurement invariance. Psychol Bull. 1989;105(3):456–66.View ArticleGoogle Scholar
 Cheung GW, Rensvold RB. Testing factorial invariance across groups: A reconceptualization and proposed new method. J Manag. 1999;25:1–27.Google Scholar
 Cheung GW, Lau RS. A Direct Comparison Approach for Testing Measurement Invariance. Organ Res Methods. 2012;15(2):167–98.View ArticleGoogle Scholar
 Little TD. Longitudinal Structural Equation Modeling. New York: Guilford Press; 2013.Google Scholar
 Borsboom D, Mellenbergh GJ, van Heerden J. The theoretical status of latent variables. Psychol Rev. 2003;110(2):203–19.View ArticlePubMedGoogle Scholar
 Borsboom D, Mellenbergh GJ, van Heerden J. The concept of validity. Psychol Rev. 2004;111(4):1061–71.View ArticlePubMedGoogle Scholar
 ISSP Research Group. International Social Survey Program: Work Orientations I – ISSP 1989. Cologne: GESIS Data Archive, Cologne; 1991. doi:10.4232/1.1840. ZA1840 Data file Version 1.0.0.Google Scholar
 IBM. IBMSPSS 22. Armonk: International Business Machines Inc; 2013.Google Scholar
 Joreskog K, Sorbom D. LISREL 9.1 March 2013. Skokie: Scientific Software International; 2013.Google Scholar
 Hayduk LA. LISREL Issues, Debates, and Strategies. Baltimore: Johns Hopkins University Press; 1996.Google Scholar
 Hayduk LA, Littvay L. Should Researchers Use Single Indicators, Best Indicators, or Multiple Indicators in Structural Equation Models? BMC Med Res Methodol. 2012;12:1–17. doi:10.1186/1471-2288-12-159.View ArticleGoogle Scholar
 Hayduk L. Structural Equation Modeling with LISREL. Baltimore: Johns Hopkins University Press; 1987.Google Scholar
 Schmitt N, Kuljanin G. Measuring invariance: Review of practice and implications. Hum Resour Manag Rev. 2008;18:210–22.View ArticleGoogle Scholar
 Meade AW, Johnson EC, Braddy PW. Power and sensitivity of alternative fit indices in tests of measurement invariance. J Appl Psychol. 2008;93(3):568–92.View ArticlePubMedGoogle Scholar
 Rensvold RB, Cheung GW. Testing measurement models for factorial invariance: A systematic approach. Educ Psychol Meas. 1998;58(6):1017–34.View ArticleGoogle Scholar
 Hayduk LA, PazderkaRobinson H, Cummings GG, Boadu K, Verbeek ELPTA. The weird world, and equally weird measurement models: Reactive indicators and the validity revolution. Struct Equ Model. 2007;14(2):280–310.View ArticleGoogle Scholar
 Hayduk LA, PazderkaRobinson H. Fighting to understand the world causally: Three battles connected to the causal implications of structural equation models. In: Outhwaite W, Turner S, editors. Sage Handbook of Social Science Methodology. London: Sage; 2007. p. 147–71.Google Scholar
 Borsboom D, Mellenbergh GJ. True scores, latent variables, and constructs: A comment on Schmidt and Hunter. Intelligence. 2002;30:505–14.View ArticleGoogle Scholar
 SEMNET. The Structural Equation Modeling Discussion Network. http://www2.gsu.edu/~mkteer/semnet.html. Accessed Apr 2015.
 Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2(8):0696–701.View ArticleGoogle Scholar