
Frameworks for estimating causal effects in observational settings: comparing confounder adjustment and instrumental variables

Abstract

To estimate causal effects, analysts performing observational studies in health settings utilize several strategies to mitigate bias due to confounding by indication. There are two broad classes of approaches for these purposes: use of confounders and instrumental variables (IVs). Because such approaches are largely characterized by untestable assumptions, analysts must operate under the indefinite paradigm that these methods will work imperfectly. In this tutorial, we formalize a set of general principles and heuristics for estimating causal effects in the two approaches when the assumptions are potentially violated. This crucially requires reframing the process of observational studies as hypothesizing potential scenarios where the estimates from one approach are less inconsistent than those of the other. While most of our discussion of methodology centers around the linear setting, we touch upon complexities in non-linear settings and flexible procedures such as targeted minimum loss-based estimation and double machine learning. To demonstrate the application of our principles, we investigate the use of donepezil off-label for mild cognitive impairment. We compare and contrast results from confounder and IV methods, traditional and flexible, within our analysis and to those of a similar observational study and clinical trial.


Background

A common goal in healthcare is to estimate the causal effect of a treatment or intervention on medical outcomes. The gold standard for this task remains a well-conducted randomized controlled trial (RCT). In this setting, randomization allows us to establish cause-and-effect estimates because, on average, observed differences between treatment arms will be due to either the assigned treatment or chance.

RCTs may not always be feasible due to ethical, logistical, or monetary constraints. When this is the case, we may turn to observational studies in an attempt to isolate causal effects of interest. Observational studies can provide hypothesis-generating evidence to help inform future investigations of treatments, such as extensions to different populations or use cases. In addition, observational studies give researchers real-world evidence surrounding the effectiveness and safety of a treatment to augment RCT findings (e.g. Phase 4 trials).

On the path to isolating causal effects, observational studies must address potential bias due to the non-randomization of the intervention. A common source of bias is confounding by indication, or treatment selection bias, where factors affect both the assignment of treatment and the targeted medical condition. These factors, called confounders, range from patient characteristics to other concurrent treatments.

There are two broad classes of approaches to mitigate treatment selection bias, based on confounding variables and instrumental variables (IVs), respectively. Briefly, confounder approaches aim to “adjust” for all factors that explain both treatment assignment and the outcome. In contrast, IVs determine only the assignment of the treatment but are otherwise not associated with the outcome. IVs are used to define a subset of the population whose treatment assignment is free from confounding.

Fundamentally, confounder and IV approaches are characterized by untestable assumptions in practice. For example, the notion that all possible confounders have been adjusted for cannot be verified with data. Therefore, analysts must be able to operate under the assumption that these methods will work imperfectly. In other words, neither approach will fully overcome treatment selection bias but can provide a less biased estimate than had they not been used. Navigating this indefinite paradigm requires a general set of reasoning and intuition surrounding observational studies.

In this paper, we formalize a set of general principles and heuristics for estimating causal effects under treatment selection bias. Stemming from the two approaches, we outline three general steps. Firstly, one must identify potential confounders and IVs in the scientific context of the study. Secondly, looking at the available dataset, one must judge which variables identified in step one are present, somewhat present (e.g. proxies), or missing. Thirdly, one must weigh the pros and cons of each methodology to consider which one could provide a more reasonable causal estimate. Our discussion is led by the criteria of internal validity (the ability of the approach to estimate the causal effect), external validity (the ability of the approach to generalize to relevant populations), and reproducibility (the ability of the results to be replicated in similar studies).

Throughout the paper, we will generally assume that the relationships in the data are linear in functional form so we may focus on issues related to unobserved confounding as opposed to model misspecification. We relax this linearity assumption when we speak about causal inference in non-linear settings and flexible modeling approaches.

To demonstrate the use of the principles outlined in the paper, as an illustrative example, we investigate the use of donepezil (brand name: Aricept) in patients with mild cognitive impairment (MCI) to mitigate cognitive decline due to dementia. In the present day, donepezil is indicated for mild to severe Alzheimer’s Disease (AD) but not MCI, due to failure to show efficacy in clinical trials [1,2,3]. Nevertheless, study of donepezil and related compounds in MCI has continued in both observational studies of off-label practice [4, 5] and clinical trials [6, 7].

We analyze data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), an observational, multi-center, natural history dataset that tracks cognitively normal, MCI, and AD subjects over time [8]. Because the use of donepezil is non-randomized in ADNI, there may be confounding by indication where MCI patients who are prescribed donepezil are potentially suffering from more severe cognitive decline than those who are not. Under this setting, we will compare confounder and IV model estimates to each other and to those of related observational studies and clinical trials.

In the literature, there exist other tutorials and textbooks for causal inference methodology. In particular, Baiocchi, Cheng, and Small give a detailed overview of IVs in health research and, in section two of their paper, share a similar philosophy to this paper in that choosing between confounder and IV approaches includes weighing unmeasured confounding against the hypothesized validity of the IV [9]. Nevertheless, the scope of their tutorial surrounds a pedagogical overview of IVs whereas ours incorporates this information as part of a larger goal to provide detailed guidelines and heuristics for conducting observational studies. As such, we cover a broader range of topics such as the interaction between the confounder and IV approaches, comparing confounder adjustment and propensity score weighting methods, non-collapsibility, and external validity, with each topic centered around potential assumption violations. We further include a full data analysis with a detailed comparison and discussion of each approach in order to clearly demonstrate the discussed principles.

Another important contribution of our tutorial is that we cover the recent trend of using machine learning (ML) for causal inference, such as targeted minimum loss-based estimation (TMLE) and double machine learning (DML) [10,11,12]. As these tools are now widely used in applied data analyses, any modern tutorial on causal inference methodology should include commentary on this topic. By first outlining general principles for causal inference in observational studies with “traditional” methodology, our tutorial can more effectively discuss the potential benefits and pitfalls of “data-driven” modeling for estimating causal effects. Furthermore, our applied data analysis, which includes the use of ML methods, provides a further critical evaluation of these novel methods. There exists little by way of practical comparisons and guidelines across traditional and novel confounder and IV methodology in the literature. Angrist and Frandsen consider this topic but focus on economic applications, and the work is limited in terms of methodology utilized relative to this review [13].

The remainder of the paper is organized as follows. We begin by outlining the assumptions related to confounder adjustment and IV methods as well as the consequences of violating each assumption. We then describe common methodologies for confounder adjustment and IVs and their machine learning extensions, discussing further assumptions and considerations. Next, we outline scenarios where one approach is preferable to another. After presenting our analysis of the ADNI donepezil data and related remarks, we conclude with an overall discussion of the content of this paper.

Main text

Two approaches: confounders and instruments

We begin by considering the simple scenario depicted in Fig. 1. Such figures are called directed acyclic graphs (DAGs) where the nodes are variables, the edges represent a directed causal effect, and the Greek letters represent the magnitude of the causal effect of the respective edge. Contextually, D is an indicator for which treatment was prescribed, Y represents the outcome of interest, U represents a confounding variable, and Z is an IV. In the donepezil example, D indicates the prescription of donepezil or not and Y is cognitive function as measured by the Alzheimer’s Disease Assessment Scale (ADAS-cog) [14].

Fig. 1 A directed acyclic graph with one confounder and one IV

Firstly, we must define the causal estimand of interest using the potential outcomes framework [15]. Let \(Y_{i,t}(1)\) be the potential outcome at time t had individual i taken the treatment (\(D = 1\)) and \(Y_{i,t}(0)\) be the potential outcome at time t had the individual not taken the treatment (\(D = 0\)). In contrast, \(Y_{t}\) is the observed outcome at time t. Therefore, in the situation characterized by Fig. 1, we want to isolate \(\beta\), the average treatment effect (ATE). In the donepezil example, the ATE would be the effect on change from baseline in ADAS-cog after two years, formalized below with \(t=0\) being baseline:

$$\begin{aligned} \beta &= E\left[ Y_{t=2}(1) - Y_{t=2}(0)\right] = E\left[ Y_{t=2}(1) - Y_{t=2}(0) + (Y_{t=0} - Y_{t=0})\right] \\ &= E\left[ (Y_{t=2}(1) - Y_{t=0}) - (Y_{t=2}(0) - Y_{t=0})\right]. \end{aligned}$$

We cannot observe both potential outcomes for any individual and thus we must use observed values from those who were prescribed either treatment option. Under the stable unit treatment value assumption (SUTVA), treatment assignment ignorability (i.e. \(Y(0),Y(1) \bot \!\!\!\bot D\)), and positivity (\(0< P(D=d) < 1\) for \(d=0,1\)), we can identify the estimand as follows:

$$\begin{aligned} \beta &= E\left[ (Y_{t=2}(1) - Y_{t=0}) - (Y_{t=2}(0) - Y_{t=0})\right] \\ &= E\left[ (Y_{t=2}(1) - Y_{t=0})|D=1\right] - E\left[ (Y_{t=2}(0) - Y_{t=0})|D=1\right] \\ &= E[Y_{t=2} - Y_{t=0}|D=1] - E[Y_{t=2} - Y_{t=0}|D=0]. \end{aligned}$$

Throughout this paper, we focus on mitigating issues when the assumption of ignorability fails to hold. Graphically, the \(\theta\) and \(\eta\) edge weights in Fig. 1 represent the magnitude of U’s influence on the assignment of donepezil and on ADAS-cog, respectively. When \(\theta \ne 0\) and \(\eta \ne 0\), U is a confounder and we clearly have a violation: Y(0), Y(1) \(\not\!\perp\!\!\!\perp\) D. Simply estimating the difference in group means (the final line above) will not recover \(\beta\) because the second equality fails. In short, indirect paths from D to Y (via U) pose major issues in obtaining causal estimates.

There are two ways we will describe how the confounder approach can isolate \(\beta\). First, we need to find a set of variables \({\textbf {X}}\) such that conditional ignorability, \(D \bot \!\!\!\bot Y(0),Y(1)|{\textbf {X}}\), is achieved. Alternatively, we need to condition upon variables \({\textbf {X}}\) such that all “backdoor” paths from D to Y are blocked [16]. Backdoor paths are non-causal, indirect paths that connect D and Y. Visually, if we think of a DAG as a set of pipes, blocking all backdoor paths restricts the flow of water or “information” to only go through the pipe flowing from D to Y. Hence, we recover our direct causal effect. These two notions of confounding lead us to conclude that in Fig. 1 it is sufficient and necessary to condition on U and use the sample analog of \(E\left[ E[Y_{t=2} - Y_{t=0}|D=1,U=u] - E[Y_{t=2}-Y_{t=0}|D=0,U=u]\right]\) to obtain \(\beta\). Note that there are many other descriptions of confounding and we will focus on the conditional ignorability and graphical definitions [17, 18].

Alternatively, we may identify the treatment effect through an IV denoted as Z. For simplicity, we assume for now that one IV is sufficient. Briefly, the definition of an IV is that Z must influence D (relevance, or \(\alpha \ne 0\)), must not affect Y except through D (exclusion restriction), and must not be associated with any unobserved confounders (independence). In Fig. 1, Z is an IV because \(\alpha \ne 0\) and there are no other arrows in or out of Z that lead to Y. Figure 2 demonstrates how these latter notions can be violated if any of \(\delta\), \(\epsilon\), or \(\phi\) is non-zero. In this case, Z is, in fact, a confounder; but if \(\delta = 0\) and we are able to condition upon U, then Z is re-classified as an IV. Compared with the confounder approach, if we have access to an IV, we may be able to obtain a causal estimate without having to account for all possible confounders, which is a major potential advantage of using IVs over confounder adjustment methods.

Fig. 2 Dotted lines disqualify Z from being an IV

The building of a valid causal network requires knowledge of both the variables in the network and the arrows between them. Unfortunately, we cannot be sure that a posited DAG is correct using data. For example, suggesting unobserved confounding requires knowledge that goes beyond the dataset at hand. Nevertheless, we can still use DAGs to capture assumption violations as clearly as possible and move towards the best option in a given scenario. By first thinking outside the scope of the current dataset, one captures a fuller picture of the study.

The untestability of causal assumptions suggests that users of the confounder and IV approaches must think relatively and not in absolutes. For example, rather than arguing a set of confounders is sufficient for conditional ignorability, one should instead find confounders to condition upon that potentially bring the estimate closer to the true causal estimand. For IVs, rather than justifying whether or not we have a true IV, we can think about how strongly the treatment is identified relative to potential violations in the assumptions of Z and the hypothesized overall magnitude of confounding.

Identifying and using confounders

To avoid multiplicity bias, a reasonable strategy to identify confounders without directly modeling the response is to first postulate variables that affect the response and then distinguish which of these variables may influence treatment assignment [19]. In the latter step, caution must be taken in the direction of causality: mistakenly adjusting for mediating variables on the pathway from D to Y may produce unintended consequences such as attenuation of the estimated treatment effect. To see this, consider Fig. 3, a scenario where W is a mediating variable. A simple example of a mediator is the ADAS-cog measurement at one year, \(Y_{t=1}\). Adjusting for \(Y_{t=1}\) will decrease a possible treatment effect because the measurement at one year is on the causal pathway to \(Y_{t=2}\) and we have blocked this path.

Fig. 3 DAG of a mediator, W

For a more formal attenuation scenario, denote the edge weights in Fig. 3 as \(\{\beta _{i}: i = 1,2,3\}\) (the decomposed \(\beta\) from Fig. 1). If we assume linear relationships between variables, then \(\beta = \beta _1\beta _2 + \beta _3\). If \(|\beta | > 0\) and \(sign(\beta _1\beta _2)=sign(\beta _3)\), then adjusting for W will yield \(\beta _3\), but there will be attenuation as \(|\beta |>|\beta _3|\).
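
To make the attenuation concrete, the following minimal Python sketch simulates Fig. 3; the coefficient values are illustrative assumptions of our own, not estimates from any real study:

```python
# A minimal sketch of the attenuation in Fig. 3; coefficient values are
# illustrative assumptions, not estimates from any real study.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
b1, b2, b3 = 0.5, 0.6, 0.4             # D -> W, W -> Y, D -> Y (direct)

d = rng.binomial(1, 0.5, n).astype(float)
w = b1 * d + rng.normal(size=n)        # mediator
y = b2 * w + b3 * d + rng.normal(size=n)

# Unadjusted regression recovers the total effect b1*b2 + b3 = 0.7 ...
X = np.column_stack([np.ones(n), d])
print(np.linalg.lstsq(X, y, rcond=None)[0][1])        # ~0.70

# ... while adjusting for W blocks the indirect path, yielding b3 = 0.4.
Xw = np.column_stack([np.ones(n), d, w])
print(np.linalg.lstsq(Xw, y, rcond=None)[0][1])       # ~0.40
```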

The first and strongest reference for determining causal links for confounders should be the underlying scientific mechanism. Such information can be based on prior basic science, epidemiological findings, and historical trials. These sources often help one to identify the vast majority of relevant confounding factors. Another source, though potentially of lesser quality, is past empirical studies of predictors of the response. One should assess the quality of these studies in terms of replicability, precision, and study design before choosing to use the associated information. This information can additionally be used to identify suitable proxies for key confounders.

Given a set of potential confounders, it may not always be advantageous to select all of them in the data analysis. In reality, much of the confounding may be captured by a few variables such as basic demographics (e.g. age and sex), commonly collected lifestyle factors (e.g. smoking and alcohol use), and comorbidities (e.g. chronic disease and corresponding medication use). With each confounder included in the analysis, we must weigh moving towards conditional ignorability against overfitting (i.e. increased imprecision and Type II error), interpretability, and reproducibility. Consider that under non-linearity, adjusting for confounders may change the interpretation of the estimate of the treatment effect [18]. In the linear setting, a similar scenario can occur under treatment effect heterogeneity, which is when the effect of the treatment differs across the values of one or more factors [20].

Identifying and using instrumental variables

Unlike confounders, the use of IVs is not as straightforward and often requires more technical knowledge to employ effectively, so we will first provide more background and intuition. The crux of the IV approach is that we use variation independent from confounding to identify treatment assignment. Because IVs, by definition, cannot be determined by other variables in the causal paradigm (there are exceptions: for example, see Fig. 4), we can assume that the IV values are “as good as” randomized in that they are not influenced by conditioned and unobserved confounders. As a result, the values of the treatment assignment generated by IVs, denoted \(D_{IV}\), are also randomized. It follows that the estimate using \(D_{IV}\), \(\hat{\beta }_{IV}\), is theoretically free of unobserved confounding.

Fig. 4 Controlling for U, Z is a proxy of effectively randomized variable A

Complexity arises in using IVs mainly because D and \(D_{IV}\) are not technically the same variable. This means that \({\beta }_{IV}\) is a different estimand than \(\beta\) and the IVs cannot be used to directly calculate the ATE but, rather, the “local average treatment effect” (LATE) where we are “local” to variation in the IVs [21]. Fortunately, if there is no treatment effect heterogeneity and the assumptions for an IV are met, \(\hat{\beta }_{IV}\) is consistent for \(\beta\).

As a simple demonstration of the above notions, the form of the LATE with a binary IV and binary treatment can be given by the Wald estimand in Eq. 1:

$$\begin{aligned} \beta _{IV} = \frac{E[Y|Z = 1] - E[Y|Z = 0]}{E[D|Z = 1] - E[D|Z = 0]}. \end{aligned}$$
(1)

Besides adding more intuition behind IV-derived treatment effects, this equation introduces the importance of IV strength or “predictive power,” captured by \(E[D|Z = 1] - E[D|Z = 0]\) or \(\alpha\) in Fig. 1. Heuristically, when \(\alpha\) is small the IV is “weak” and when \(\alpha\) is sufficiently large the IV is “strong.” In the linear setting, the finite sample bias of \(\hat{\beta }_{IV}\) is partially a function of \(\alpha\): we incur large bias for \(\beta\) with small values of \(\alpha\) [22]. For instance, in Eq. 1, a small denominator inflates the fraction.
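
As an illustration, the sketch below computes the Wald estimand of Eq. 1 on simulated data; all parameter values are our own illustrative assumptions. With a strong binary IV the estimates cluster near the true effect, while a weak IV yields an unstable, inflated fraction:

```python
# A minimal sketch of the Wald estimand (Eq. 1) under a strong vs. weak
# binary IV; all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, beta = 2_000, 0.5

def wald(alpha):
    """Simulate one dataset and return the Wald estimate for IV strength alpha."""
    z = rng.binomial(1, 0.5, n)
    u = rng.normal(size=n)                           # unobserved confounder
    d = rng.binomial(1, np.clip(0.3 + alpha * z + 0.2 * u, 0.01, 0.99))
    y = beta * d + u + rng.normal(size=n)
    num = y[z == 1].mean() - y[z == 0].mean()
    den = d[z == 1].mean() - d[z == 0].mean()        # first-stage strength
    return num / den

print([round(wald(0.40), 2) for _ in range(3)])      # strong IV: stable, near 0.5
print([round(wald(0.02), 2) for _ in range(3)])      # weak IV: erratic, inflated
```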

The impacts of weak IVs are not just limited to finite samples. Recall that we cannot confirm we have a true IV using observed data and so we must assume our IV estimate is inconsistent for \(\beta\). In this case, as an IV becomes weaker, the sensitivity of the corresponding estimate \(\hat{\beta }_{IV}\) to IV independence assumption violations increases [23]. To elucidate this, suppose we had two IV candidates \(Z_1\) and \(Z_2\) with corresponding strengths \(\alpha _1\) and \(\alpha _2\), where \(|\alpha _1| > |\alpha _2|\). For the same degree of violation in the independence assumptions (e.g. in Fig. 2, \(\delta = c > 0\) where c is some constant), the inconsistency of an estimate derived from \(Z_2\) would be greater than one derived from \(Z_1\).

Table 1 Potential treatment assignment

| Subpopulation | \(D(0)\) | \(D(1)\) |
| --- | --- | --- |
| Always-takers | 1 | 1 |
| Compliers | 0 | 1 |
| Defiers | 1 | 0 |
| Never-takers | 0 | 0 |

When there is treatment effect heterogeneity, even when we have a valid IV, \(\hat{\beta }_{IV}\) can be inconsistent for \(\beta\) because \(\beta _{IV}\) is an estimand for a subset of the original population. Table 1 summarizes four distinct sub-populations related to the IVs: always-takers, compliers, defiers, and never-takers. We can use potential outcomes once again but for treatment assignment: let Z be a binary instrument, D(0) be the treatment assignment had the value of the IV been 0, and D(1) had the value of the IV been 1.

In Table 1, it is clear that changing values of the IV results in changing values of the treatment assignment only for compliers and defiers. Therefore, we cannot identify always-takers and never-takers using IVs. In addition, we must impose a further assumption that the defier population does not exist for a given IV. This notion is called “monotonicity,” where \(D(1) \ge D(0)\) or vice-versa. Thus, we conclude that the subpopulation identified by the IVs is the compliers. To explain why we require monotonicity, we can rewrite the numerator of Eq. 1 as [24]

$$\begin{aligned} E[Y|Z = 1] - E[Y|Z = 0] &= E\left[\left( Y(1)-Y(0)\right) \left( D(1) - D(0)\right)\right] \\ &= E[Y(1)-Y(0)|D(1)> D(0)]\,P(D(1) > D(0)) \\ &\quad - E[Y(1)-Y(0)|D(1)< D(0)]\,P(D(1) < D(0)). \end{aligned}$$

Because of treatment effect heterogeneity, the ATE is differential depending on the subpopulation and there is the potential for a non-zero treatment effect in each group to cancel out. Of course, this does not occur if \(P(D(1) < D(0)) = 0\) (no defiers) or \(E[Y(1)-Y(0)|D(1) > D(0)] = E[Y(1)-Y(0)|D(1) < D(0)]\) (no treatment effect heterogeneity).

Through a similar derivation for the denominator of Eq. 1, we arrive at a clearer definition of the LATE: \(\beta _{IV} = E[Y(1)-Y(0)|D(1) > D(0)]\), or the ATE for compliers [21]. This LATE changes with the chosen IV. If there are multiple IVs, then the LATE is a weighted average of the LATEs characterized by each IV. When covariates are included to establish the validity of Z or to decrease error in predicting D, the LATE is an estimand defined on a population conditional on these covariates. Furthermore, unless the model is saturated, always-takers and never-takers are included [25, 26]. As most models in practice include covariates, the interpretability of IV models can be nebulous.

In practice, there usually exists treatment effect heterogeneity so before using IVs we must determine if it is reasonable to target the LATE as a proxy for the ATE estimand. Consider the following scenarios. First, when treatment effect heterogeneity is unrelated to the choice of treatment then the estimates for compliers will not be systematically different from the other subpopulations. Next, when the first-stage is strong, the population characterized by the IVs will be relatively close to the overall population of the study. For example, if we have found a strong genetic determinant of some condition that was a valid IV (i.e. a Mendelian randomization strategy), it is plausible that, for the vast majority of the population, the occurrence of the condition would vary with the assignment of the gene. It follows that if the IV is weak, the LATE will only capture a small subset of the original population, introducing significant inconsistency in estimating the ATE. Lastly, there is a developing set of literature that relaxes the assumptions for the Wald estimand to equal the ATE such as requiring heterogeneity in the outcome caused by the treatment assignment to be independent from heterogeneity in treatment assignment caused by the IV as well as the IV itself [27,28,29].

Identifying potential IVs is significantly less straightforward than identifying confounders, which is a main limitation of the approach. While there is usually abundant literature on predictors of a medical condition, the factors that determine the assignment of a treatment are difficult to study and rarely studied. One reason for writing this paper is to expose biomedical researchers to IVs so that potential IVs can be shared in the literature in the same way predictors are. In a similar vein to the confounder adjustment approach, we may begin by determining factors that predict treatment assignment and then prune those that affect the outcome. Determining confounders first is helpful as variables that were once invalid IVs may become valid after holding certain confounders constant.

One popular source of IVs is variation in medical practice as it is well known that practice differs across physicians and regions across a wide variety of medical conditions [30,31,32]. If appropriate, we could use factors such as regional variation, facility prescribing patterns, attitudes to certain contraindications, physician preference, and calendar time as IVs [33, 34]. For example, with access to the relevant data, physician preference can be quantified by tabulating the proportion of patients under each physician who were prescribed the treatment of interest. Following this, we can use these proportions to predict which treatment a new patient who sees any of these physicians will receive.
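
The tabulation just described is straightforward to implement; the following hypothetical pandas sketch computes such a preference measure, with the dataframe and column names (physician_id, treated) invented for illustration. We compute the proportions leave-one-out so that a patient's own treatment does not enter their instrument value:

```python
# A hypothetical sketch of a prescriber preference IV; the dataframe and
# column names are invented for illustration. Leave-one-out proportions
# keep a patient's own treatment out of their instrument value.
import pandas as pd

df = pd.DataFrame({
    "physician_id": [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "treated":      [1, 0, 1, 0, 0, 1, 1, 0, 1],
})

grp = df.groupby("physician_id")["treated"]
total, count = grp.transform("sum"), grp.transform("count")
df["preference_iv"] = (total - df["treated"]) / (count - 1)
print(df)
```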

Even still, the validity of prescriber preference as an IV can be questioned. It could be that certain types of patients tend to select a physician that they know is more likely to give them the treatment (graphically, Fig. 2 edges from Z to U). Furthermore, geographic variation in general population health could necessitate higher utilization of treatments in some regions compared to others. Herein lies the value of identifying confounders in IV analyses: perhaps controlling for patient characteristics will block these pathways and greatly reduce assumption violations (e.g. Fig. 4). One takeaway, however, is that IV analysis can easily suffer from issues related to unobserved confounding.

Given a set of IVs, we should characterize each subpopulation. For medical practice patterns, most likely, some patients would not comply with a doctor’s opinion; some patients could insist on getting the treatment (always-takers) and others would refuse under all circumstances (never-takers). A patient who does the opposite of what the doctor says (a defier) is possible, and we will have to assume that they do not exist, which is practically untestable but can be reasoned as unlikely. Under this assumption, the LATE would roughly cover those who follow the doctors’ orders. All of this considered, the analyst should determine whether the complier treatment effect is of scientific value.

Interactions of the confounder and IV approaches

The confounder and IV approaches are deeply related. Therefore, even if an analyst decided to pursue one approach over another, awareness of the principles of the other approach is important. One pervasive issue in this vein is adjusting for an IV as if it were a confounder. Widely-cited guidelines such as Hirano and Imbens (2001) state that variables that are predictive of treatment assignment should be selected for confounder methods like propensity scores [35], which risks adjusting for IVs and mediators. In the best case, treating IVs as confounders decreases precision because the IV does not explain variation in the response. Even worse, when there is unobserved confounding, existing inconsistency is amplified [36,37,38,39]. By adjusting for IVs, we reduce the variation in the treatment that is uncorrelated with the unobserved confounding. Thus, the proportion of variation in the treatment produced by unobserved confounding increases, which causes more bias in the treatment effect.
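
A minimal simulation (with parameter values chosen purely for illustration) makes the amplification visible: adjusting for a valid IV moves the biased OLS estimate even further from the truth:

```python
# A minimal sketch of bias amplification: adjusting for a valid IV as if it
# were a confounder worsens the bias from U. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, beta = 200_000, 0.5

z = rng.normal(size=n)                    # a valid IV
u = rng.normal(size=n)                    # unobserved confounder
d = 0.8 * z + u + rng.normal(size=n)
y = beta * d + u + rng.normal(size=n)

def slope_on_d(y, *covs):
    """OLS coefficient on the first covariate (the treatment)."""
    X = np.column_stack([np.ones(n), *covs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(slope_on_d(y, d))       # biased upward by U (~0.88 vs. truth 0.5)
print(slope_on_d(y, d, z))    # adjusting for the IV amplifies the bias (~1.0)
```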

The impact of adjusting for IVs and mediators demonstrates why one should avoid a purely “kitchen sink,” data-driven approach to variable selection for causal inference. Simply because the estimate of the treatment effect changes when a variable is introduced does not necessarily mean it should be adjusted for. This is one reason why we advocate that confounders largely be sourced a priori by first hypothesizing predictors of the outcome. If one is reasonably certain that a variable is predictive of the outcome but is unlikely to be associated with the predictor of interest, one has a “precision variable,” which still may be of use. Specifically in the linear model setting with no unmeasured confounding, adjusting for such a variable will decrease standard errors in the treatment effect estimate with no cost to bias [18, 40].

Methodology for estimating causal effects

In this section, we discuss three popular approaches for estimating causal effects: regression adjustment using ordinary least squares (OLS) or generalized linear models, propensity score weighting with inverse probability of treatment weighting (IPTW), and utilizing two-stage least squares (2SLS) with IVs. Regression and IPTW are used in the confounder approach while 2SLS serves as a methodology for the IV approach. Although there are many other methods such as g-computation, targeted learning, proximal causal learning, non-parametric 2SLS, and two-stage residual inclusion, we will spend most of our discussion on regression, IPTW, and 2SLS because they are most commonly used and well-studied. A summary of the points discussed in this section is presented in Table 2. We briefly speak about recent advances in later sections.

A more sophisticated version of Fig. 1 is presented in Fig. 5. We have added a vector of IVs of length j (\(Z_1,Z_2,...Z_j\)) and a vector of observed confounders of length k (\(X_1,X_2,...X_k\)). In addition, we have stochastic errors \(\tau\) and \(\epsilon\) for D and Y, respectively. For simplicity, assume the effect of each IV is of the same magnitude, and similarly for each confounder, so that the outgoing arrows capture the joint effects. Furthermore, U captures all unobserved confounding though, in reality, it likely comprises many variables. Assuming the relationships between variables are linear, we can write the following system of relevant structural equations:

$$\begin{aligned} d_i = \gamma _0 + x_i^{T}\gamma _X + z_i^T\gamma _Z + u_i\gamma _U + \tau _i, \end{aligned}$$
(2)
$$\begin{aligned} y_{i} = \beta _0 + \beta _D d_i + x_i^{T}\beta _X + u_i\beta _U + \epsilon _i. \end{aligned}$$
(3)

Equation 2 depicts the treatment assignment or “first stage” while Eq. 3 depicts the outcome or “second stage.” The estimand of interest is \(\beta _D\). For ease of exposition, we will assume a linear probability model (LPM) for the treatment assignment, as in Eq. 2. Nevertheless, because treatments are usually binary variables, the functional form of the treatment assignment is commonly characterized using a logit or probit model. Please see later sections for discussion on the use of LPMs for modeling binary treatment assignment.

Using the above two equations, we can provide a high-level overview of the three methods in practice. Regression methods fit Eq. 3 with the treatment and the observed confounders to estimate \(\beta _D\); U is not observed, so the estimate is inconsistent due to misspecification. Meanwhile, IPTW first fits Eq. 2 with only the confounders to predict the propensity score for all subjects. The propensity scores are then used to compute a weighted sum that allows us to estimate \(\beta _D\). Because U is missing, the predicted propensity scores will be neither correct nor adequate to achieve ignorability conditional on the propensity score.

Fig. 5 DAG with more IVs and observed confounders as well as stochastic errors

2SLS will fit Eq. 2 as stated (except for U) and use the predictions to construct \(\hat{D}\). We then effectively substitute \(\hat{D}\) for D in Eq. 3 and fit an OLS model to estimate the coefficient in front of \(\hat{D}\). Importantly, the omission of U does not affect the consistency of this estimate under the conditions of Fig. 5, and if there is no treatment effect heterogeneity then we have a consistent estimate of \(\beta _D\). If there is, then, at the least, the estimate is not affected by U.
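
To summarize the three estimators side by side, the following sketch simulates Eqs. 2 and 3 with an unobserved U and a valid IV; all simulation parameters are assumptions made for illustration. Regression adjustment and IPTW inherit bias from the omitted U, while the 2SLS point estimate does not (note that the naive substitution shown here yields correct point estimates but not correct standard errors):

```python
# A minimal sketch comparing OLS adjustment, IPTW, and 2SLS under unobserved
# confounding; all simulation parameters are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, beta_D = 200_000, 0.5

x = rng.normal(size=n)                          # observed confounder
u = rng.normal(size=n)                          # unobserved confounder
z = rng.normal(size=n)                          # instrument
d = rng.binomial(1, 1 / (1 + np.exp(-(0.5*x + 0.8*z + 0.8*u)))).astype(float)
y = beta_D*d + 0.5*x + 0.8*u + rng.normal(size=n)

# Regression adjustment (Eq. 3 without U): biased by the omitted confounder.
ols = sm.OLS(y, sm.add_constant(np.column_stack([d, x]))).fit()
print("OLS :", ols.params[1])

# IPTW (Eq. 4) with a propensity model that necessarily omits U: also biased.
ps = sm.Logit(d, sm.add_constant(x)).fit(disp=0).predict()
print("IPTW:", np.mean(d*y/ps) - np.mean((1 - d)*y/(1 - ps)))

# 2SLS: first stage on x and z, then substitute the fitted values into Eq. 3.
dhat = sm.OLS(d, sm.add_constant(np.column_stack([x, z]))).fit().predict()
tsls = sm.OLS(y, sm.add_constant(np.column_stack([dhat, x]))).fit()
print("2SLS:", tsls.params[1])   # point estimate only; SEs need correction
```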

Regression adjustment and propensity score methods

While OLS and IPTW are different mathematically, they are conceptually similar: both seek to isolate variation in the outcome caused by the treatment by eliminating variation caused by confounding factors. Regression adjustment can be thought of as blocking paths in Fig. 5, which is another way of stating that we are holding X constant in order to isolate the effect of D on Y and obtain the direct causal pathway with the following estimand:

$$\begin{aligned} \beta _D^{OLS} = E[Y|D=1,X=x] - E[Y|D=0,X=x]. \end{aligned}$$

On the other hand, IPTW weights outcomes based on the probability of receiving the treatment (\(p(X) = P(D=1|X)\)), creating a pseudo-population that balances confounders across the treatment groups in a similar rationale to randomization. IPTW has the following estimand:

$$\begin{aligned} \beta _D^{IPTW} = E\left[ \frac{DY}{p(X)}\right] - E\left[ \frac{(1-D)Y}{1-p(X)}\right] . \end{aligned}$$
(4)

For a more concrete example of how a pseudo-population is constructed, suppose an individual in the treatment group had a propensity score of 0.1. In other words, this individual is very likely to receive the control and has many similar subjects in the control group. The weighted outcome of this individual can represent the counterfactual outcomes of the comparable control group subjects and so we would weight that individual’s contribution to the treatment effect ten times. As a consequence of such a procedure, in Fig. 5 the pseudo-population will theoretically no longer contain edges for the X’s because the treatment assignment cannot be explained by covariate imbalance. A balance of observed confounders, however, does not imply a balance of unobserved confounders: the act of balancing observed confounders may increase the imbalance of these unobserved confounders [41].

Propensity scores are often used to match individuals across treatment groups. Once one or more suitable matches are found for each subject in the treatment groups, we can compute the differences in outcomes, and average them to obtain the average treatment effect on the treated (ATT), \(E[Y(1) - Y(0)|D = 1]\). We focus on IPTW and not propensity score matching because, under unobserved confounding, the ATT will not equal the ATE:

$$\begin{aligned} \beta ^{ATT} &= E[Y(1) - Y(0)|D = 1] = E[Y(1)|D = 1] - E[Y(0)|D = 1] \\ &\ne E[Y(1)|D = 1] - E[Y(0)|D = 0] \quad (Y(0) \not\!\perp\!\!\!\perp D) \\ &= \beta ^{ATE}. \end{aligned}$$

While our discussion of methodology in this paper mainly centers around the ignorability assumption, it is important to briefly touch upon the implications of positivity assumption violations. In propensity score methods this means each subject has a positive probability of receiving the treatment given each level of the covariates or \(0< P(D=d|\varvec{X} = \varvec{x}) < 1\) for all \(d \in D\) and \(\varvec{x} \in \varvec{X}\). Another way to conceptualize the positivity assumption is the notion of “common support” where there must be full overlap in each group’s distribution of propensity scores or, by extension, their observed covariates. Even when there is no unobserved confounding, positivity violations can arise when we fail to observe certain variables that are needed to create overlap. Therefore, for the subpopulations lacking overlap, extrapolating counterfactual claims can lead to erroneous conclusions.

Conceptually, a violation of the positivity assumption means that we are dividing by 0 in Eq. 4. In practice, this results in extreme weights, which both increase variability in the parameter estimates and impact finite sample bias because the estimate is weighted towards the few extreme observations [42]. In addition, near-positivity violations, where individuals are extremely unlikely to receive the treatment or placebo, pose similar issues for estimation. When using OLS, positivity violations pose a similar bias of the estimand as in IPTW. Nevertheless, OLS does not face the same degree of finite sample estimation issues as IPTW when there are positivity or near-positivity violations.

While IPTW and OLS target the same estimand, the ATE, and both are vulnerable to inconsistency via unobserved confounding, there are differences to consider in practice. If there are no extreme weights, by encapsulating many covariates in the propensity score, IPTW will generally be more efficient than OLS because of the degrees of freedom saved. If there are extreme weights, however, the variance of the IPTW estimator will be larger and more unstable than that of adjustment-based regression methods.

One common solution in propensity score methods is to trim extreme weights. This procedure, however, risks estimating the treatment effect for a population different from the original target population. In other words, for a decrease in variance, there is a potential increase in bias. Furthermore, the direction of this bias is difficult to determine because one must define the new population resulting from truncation. One could argue, though, that the bias due to positivity violations could be advantageously traded off against the bias due to truncation [43]. Another common solution to extreme weights is to use stabilized weights as opposed to conventional inverse probability weights [44].
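
A small helper along these lines (a sketch under our own conventions, with `d` the binary treatment and `ps` the fitted propensity scores) shows conventional, stabilized, and trimmed weights:

```python
# A sketch of conventional, stabilized, and trimmed IPTW weights; `d` is the
# binary treatment and `ps` the fitted propensity scores (our conventions).
import numpy as np

def iptw_weights(d, ps, stabilize=True, trim=None):
    """Return IPTW weights, optionally stabilized and/or trimmed."""
    w = np.where(d == 1, 1 / ps, 1 / (1 - ps))
    if stabilize:
        # Multiply by the marginal probability of the treatment received.
        w = w * np.where(d == 1, d.mean(), 1 - d.mean())
    if trim is not None:
        # e.g. trim=0.99 caps weights at their 99th percentile.
        w = np.clip(w, None, np.quantile(w, trim))
    return w
```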

OLS and IPTW also have potential differences in ease of reproducibility and interpretability. Because propensity scores are the result of fitting a model for treatment assignment, methods ranging from logistic regression to random forests may be used to generate them. An issue, however, arises when different models produce different sets of propensity scores, resulting in different pseudo-populations. This poses challenges for reproducibility across different studies of the same population. The simplicity of OLS arguably reduces this risk since basic adjustment can be easily communicated. On the other hand, when unaccounted-for treatment effect heterogeneity exists, OLS will generate a marginal treatment effect estimate that is implicitly weighted by the covariance structure in the observed data sample, as opposed to the explicit weighting in IPTW [20, 24].

One advantage of the propensity score is that data-driven selection of the confounders to model treatment assignment is done separately from the fitting of Eq. 3. In contrast, adding and removing confounders in OLS in a data-driven fashion will also affect the estimand, estimate, and corresponding inference for the treatment effect. Therefore, IPTW is able to control inflation in Type I error from repeated testing of the treatment effect coefficient as a result of fitting several models.

Though it may be tempting to cast estimating propensity scores as a prediction problem, this may lead to unintended consequences. The original philosophy of propensity scores from Rosenbaum and Rubin is not to fit the first stage as well as possible; rather, it is to find a balancing score sufficient to achieve ignorability [45,46,47,48]. Furthermore, measures of model performance like the C-statistic do not provide useful information to suggest unobserved confounding is mitigated more in one model than another [49]. IVs are predictors of treatment assignment and, yet, they are detrimental to causal estimation if included in the model [36, 46]. In addition, including variables that are predictive of the outcome but not the treatment can help improve the efficiency of treatment effect point estimates [40]. Therefore, we suggest that variable selection for propensity score modeling not be conducted purely by optimizing out-of-sample model prediction error.

In contrast, the simplicity of using one equation in regression methodology assists in avoiding confusion between prediction and inference goals because there is no intermediate step in obtaining an estimate for \(\beta _D\). Indeed, an optimized model MSE may reduce coefficient standard errors and yet could lead to issues in internal validity. For example, if we fit Eq. 3 with LASSO to select confounders that optimized out-of-sample prediction error, the elimination of confounders via shrinkage to zero could potentially induce further omitted variable bias due to no longer conditioning on an observed confounder [50].

A combination of the IPTW and regression methodology is the augmented IPTW (AIPTW) with the so-called “doubly robust” property: a consistent estimate is obtained if either the propensity score (Eq. 2) or the outcome equation (Eq. 3) are correctly specified. Certainly, AIPTW offers more robustness than IPTW but its practical advantage is unclear. Firstly, in the case where one suspects the propensity score equation is misspecified but the outcome equation is not, OLS also will result in a consistent estimate and could theoretically be more efficient. Secondly, an unobserved confounder would cause misspecification in both equations, rendering any estimate inconsistent. As such, it is unclear how the inconsistency due to unobserved confounding in the AIPTW compares to that of IPTW or OLS.
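
For reference, writing \(\hat{m}_1(x)\) and \(\hat{m}_0(x)\) for the fitted outcome regressions under treatment and control and \(\hat{p}(x)\) for the fitted propensity score (our notation), one standard form of the AIPTW point estimator is

$$\begin{aligned} \hat{\beta }^{AIPTW} = \frac{1}{n}\sum _{i=1}^{n}\left[ \hat{m}_1(x_i) - \hat{m}_0(x_i) + \frac{d_i\left( y_i - \hat{m}_1(x_i)\right) }{\hat{p}(x_i)} - \frac{(1-d_i)\left( y_i - \hat{m}_0(x_i)\right) }{1-\hat{p}(x_i)}\right] . \end{aligned}$$

The first two terms form the regression (plug-in) estimate while the remaining terms are propensity-weighted residual corrections; consistency of either component rescues the other, which is the source of the doubly robust property.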

Instrumental variable methods

2SLS is the most commonly used and well-studied IV method. The motivations of 2SLS largely stem from the inadequacy of OLS to provide a consistent estimate of \(\beta _D\). Under the reduced form in Eq. 5, omitting U leads to inconsistency because \(\alpha _D = \beta _D + \beta _U\frac{Cov(D,U)}{Var(D)}\).

$$\begin{aligned} (y_{i,t=2} - y_{i,t=0}) = \alpha _0 + \alpha _D d_i + x_i^{T}\alpha _X + \phi _i \end{aligned}$$
(5)

Another name for this scenario is “endogeneity,” or correlation of D with the error term. This is because \(\phi _i = u_i\beta _U + \epsilon _i\), which is correlated with D, so \(E[\phi _i|D,X] \ne 0\). IV methods identify a re-characterized treatment, \(D_{IV}\), using exogenous variation (uncorrelated with the error term) such that \(Cov(D_{IV},\phi _i) = 0\).

In the first stage of 2SLS, we regress D on the X’s and Z’s (design matrix \(\varvec{F}\)) and use the projection matrix \(P_Z = \varvec{F}(\varvec{F}^T\varvec{F})^{-1}\varvec{F}^T\) to obtain predicted values \(\hat{D}_{IV}\). In the second stage, we use \(P_Z\) to obtain the coefficients \(\varvec{\beta } = (\varvec{S}^TP_Z\varvec{S})^{-1}\varvec{S}^TP_ZY\) with design matrix \(\varvec{S}\) consisting of the X’s and D. By including the X’s in both the first and second stages, we can improve the prediction of D, and the covariates serve as their own IVs. If all assumptions are met, and there is no treatment effect heterogeneity, then \(\beta _D^{IV}\) is consistent for \(\beta _D\) but not necessarily unbiased. In fact, in the case where we have only one IV for one endogenous variable, the first moment of the estimator does not exist [51].
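
The algebra above translates directly into code; the following numpy sketch forms the projection explicitly for exposition (impractical for large samples, where one would solve the two stages sequentially):

```python
# A minimal sketch of the 2SLS algebra above; the explicit n-by-n projection
# is for exposition only and impractical for large samples.
import numpy as np

def two_stage_least_squares(y, d, X, Z):
    n = len(y)
    F = np.column_stack([np.ones(n), X, Z])    # first-stage design (X's, Z's)
    S = np.column_stack([np.ones(n), X, d])    # second-stage design (X's, D)
    P = F @ np.linalg.solve(F.T @ F, F.T)      # projection matrix P_Z
    beta = np.linalg.solve(S.T @ P @ S, S.T @ P @ y)
    return beta[-1]                            # coefficient on the treatment
```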

There are considerable trade-offs in IV analysis: consistency comes at the cost of increased standard errors compared to OLS since we only use the exogenous variation in the treatment and, thus, have less “information” with which to calculate the treatment effect [23]. This is partly because our treatment effect is determined only by compliers, a subset of the overall population. In the case where the IVs are weak, the finite-sample bias will move the estimate towards OLS as weakness increases (i.e. as first-stage coefficients go towards 0) and estimator standard errors will inflate [22, 52]. If the IVs are correlated with the second stage error term (i.e. with U), not only will the estimates be inconsistent but the magnitude of inconsistency is greatly affected by the IV strength [23]. This can be observed by deriving the form of the 2SLS estimand under this condition.

$$\begin{aligned} \beta _{D}^{IV} = \beta _D + \frac{Cov(Z,\epsilon )}{Cov(Z,D)}. \end{aligned}$$
(6)

\(Cov(Z,\epsilon )\) captures the degree of the violation in the independence of the IV, and \(Cov(Z,D)\) captures the first stage strength. Rewriting covariances as correlations in Eq. 6 and in the OLS estimand \(\alpha _D\), we obtain:

$$\begin{aligned} \beta _{D}^{IV} = \beta _D + \frac{\sigma _\phi }{\sigma _D}\frac{Corr(Z,\phi )}{Corr(Z,D)}, \end{aligned}$$
$$\begin{aligned} \alpha _D = \beta _D + \frac{\sigma _\phi }{\sigma _D}Corr(D,\phi ). \end{aligned}$$

It is clear that when the IVs are weak, 2SLS inconsistency can be greater than even that of OLS if \(|Corr(D,\phi )||Corr(D,Z)| < |Corr(Z,\phi )|\). Considering that we can never confirm the IV independence assumption, the use of weak IVs may be perilous.

Returning to our assumption of valid IVs, another helpful perspective is that the first-stage is chiefly a prediction task. In this interpretation, weak IVs lead to inaccurate first-stage predictions, which leads to finite sample bias because we are unable to adequately capture the treatment assignment of the original population of interest. Simply adding more weak IVs to the first-stage rarely improves the issue; indeed, packing the first-stage with too many instruments will lead to overfitting and, hence, finite sample bias [53]. These issues are partially mitigated by large sample sizes.

Unlike many of the assumptions discussed, the degree of instrument relevance is somewhat determinable using the data. Because 2SLS utilizes OLS for the first stage, the F-statistic is commonly used to measure the joint strength of the IVs, with 10 being a “rule of thumb” for sufficient strength. In the case of heteroskedasticity, a robust F-statistic can be used, but variance estimates may be noisy [54]. Nevertheless, using a sample statistic to infer upon assumptions can be problematic. For instance, Young (2017) points out that there is a relatively high chance of spuriously obtaining a high F-statistic and that the “guaranteed bounds” on the size and bias of weak-IV tests are unreliable. In addition, weak-IV tests assume that the IVs are valid in the first place or else coverage will be incorrect [55].
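
A sketch of the joint first-stage F-test using statsmodels follows; here x and z are assumed to be 2-D arrays of confounders and IVs, respectively:

```python
# A sketch of the joint first-stage F-statistic with statsmodels; x and z
# are assumed to be 2-D arrays of confounders and IVs, respectively.
import numpy as np
import statsmodels.api as sm

def first_stage_f(d, x, z):
    exog = sm.add_constant(np.column_stack([x, z]))
    res = sm.OLS(d, exog).fit()
    k_x, k_z = x.shape[1], z.shape[1]
    # Restriction matrix: all IV coefficients jointly zero.
    R = np.zeros((k_z, exog.shape[1]))
    R[:, 1 + k_x:] = np.eye(k_z)
    return res.f_test(R).fvalue      # compare to the rule-of-thumb value of 10
```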

Data-driven fitting of the first-stage may prompt concerns about external validity. In theory, we could select variables in the data that optimized a cross-validated F-statistic. In this process, however, we would fail to address the impacts on the interpretability of the LATE. This scenario introduces difficult situations where one set of IVs could have a worse F-statistic but is more interpretable for the desired use case. This is why we advocate for first thinking through the conceptual soundness of the selection of IVs given a set of theoretically strong predictors of the treatment.

Although the independence of the IVs is untestable, there are a few interesting falsification tests that one could employ. One straightforward test is to compare the values of potential confounders across values of the IVs similar to how one examines values of potential confounders across levels of the treatment [9]. Imbalance of IVs across an observed confounder is problematic because the observed confounders could be related to an unobserved confounder. Another test that assumes observed confounders may be related to unobserved confounders involves negative control outcomes, or populations constructed to falsify IV independence [56]. Of course, these tests cannot ensure IV assumptions are met and are subject to data availability.

In the multiple-IV case, given there is at least one valid IV, one may test whether additional IVs influence the outcome through Sargan’s J-statistic [57]. Regressing the 2SLS residuals on the IVs via OLS, if one or more coefficients are not 0, we have some sort of violation. This test may have deceivingly high p-values and poor power when one IV is weak but valid and the others are strong but invalid [58]. The scenario of a mix of strength and validity of IVs poses an interesting question about whether one would choose a strong but slightly invalid IV over a weak but valid IV.
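
A minimal sketch of the test, under the assumption of a single endogenous treatment; `resid_2sls` denotes the second-stage residuals evaluated at the 2SLS estimates:

```python
# A minimal sketch of Sargan's J test assuming one endogenous treatment;
# `resid_2sls` are second-stage residuals evaluated at the 2SLS estimates.
import numpy as np
from scipy import stats

def sargan_j(resid_2sls, X, Z):
    n = len(resid_2sls)
    F = np.column_stack([np.ones(n), X, Z])      # all exogenous variables
    fitted = F @ np.linalg.lstsq(F, resid_2sls, rcond=None)[0]
    r2 = 1 - ((resid_2sls - fitted) ** 2).sum() / \
             ((resid_2sls - resid_2sls.mean()) ** 2).sum()
    dof = Z.shape[1] - 1                         # overidentifying restrictions
    return n * r2, stats.chi2.sf(n * r2, dof)    # J statistic and p-value
```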

Table 2 Comparison of Regression, IPTW, and 2SLS under Unobserved Confounding

Complexities that arise with non-linearity

So far, our discussion has been focused on the case where the outcome is continuous and, thus, we can reasonably assume linear structural equations. When the outcome is not continuous, some of the statements we have previously made must be modified. In particular, we will revisit the consequences of adjusting for confounders, IVs, and precision variables, marginal versus conditional estimands, and the use of 2SLS for binary treatments and non-continuous outcomes.

First, in the non-linear model setting the estimand corresponding to the treatment effect will change by including not only confounders but also precision variables because of non-collapsibility [18]. Mathematically, because the covariates are encapsulated in a non-linear function (e.g. a link function), after adjusting for a precision variable, we cannot simply distribute the expected value such that we recover the “before adjustment” treatment effect [59]. A further consequence is that adjusting for any variable will increase coefficient standard errors of the variables already present. For instance, in logistic regression, the unexplained variance must stay fixed so the explained variance will increase upon adjustment for new variables, leading to coefficient values increasing in magnitude [60, 61]. Adjusting for precision variables will increase standard errors of the treatment effect but slightly increase the power to reject the null of no effect; this is due to the magnitude of the adjusted point estimate increasing relative to slight increases in the standard error of the estimate [62].
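
The following sketch (illustrative parameters only) shows non-collapsibility directly: even with a randomized treatment and a precision variable V independent of it, the unadjusted logistic coefficient is attenuated relative to the conditional one:

```python
# A minimal sketch of non-collapsibility: V is a precision variable
# independent of the randomized treatment, yet adjusting for it changes the
# treatment coefficient. Parameter values are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500_000
d = rng.binomial(1, 0.5, n).astype(float)        # randomized treatment
v = rng.normal(size=n)                           # precision variable
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * d + 1.5 * v))))

marg = sm.Logit(y, sm.add_constant(d)).fit(disp=0)
cond = sm.Logit(y, sm.add_constant(np.column_stack([d, v]))).fit(disp=0)
print(marg.params[1])   # marginal log-odds ratio, attenuated (~0.39)
print(cond.params[1])   # conditional log-odds ratio (~0.50)
```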

The presence of non-collapsibility means that the interpretation of coefficients before and after adjusting for variables differs even in cases where there is independence between the adjustment variable and the treatment variable. Marginal estimates, without confounders, will therefore be different from conditional (on the confounders and precision variables) estimates. Comparing methodologies, IPTW will give a marginal estimate whereas regression will give a conditional estimate [61]. For example, after adjusting for the confounders, IPTW produces the odds ratio for the population characterized by the sample while a logistic regression model would compute the odds ratio for someone with the average value of the confounders [63]. In the linear case, the difference between conditional and marginal effects was only present under treatment effect heterogeneity.

Whether the marginal or conditional estimand is preferable depends on the scenario. One could argue that the conditional estimand is more applicable to settings where a physician is already conditioning upon knowledge of various patient characteristics like sex, age, and comorbidities. In addition, conditional estimands may be better transported to other populations such as future populations. On the other hand, marginal estimands can be more interpretable and comparable across studies [64]. They may also be preferred when the covariates potentially included in the model are not easily observed or measured in practice.

The issue with using IVs as confounders persists in the non-linear setting: adjusting for IVs as if they were confounders amplifies existing bias and can even introduce bias where none previously existed [38, 39]. Amplification occurs for the same reasons as in the linear setting, while bias is introduced when the IVs are dependent on the outcome given the treatment [65]. Ding et al., though, find specific situations where this bias is not present under their proposed monotonicity conditions for the treatment selection and outcome models.

A primary challenge for standard IV methods is that 2SLS misspecifies the non-linear functional form in the case where we have a binary treatment. Nevertheless, analysts still may utilize 2SLS in non-linear settings such as through LPMs. Simulations have shown that the LPM can produce low inconsistency in estimates of the LATE [66]. A counterpart of 2SLS, two-stage residual inclusion (2SRI), where one takes the residuals from the first stage as a covariate in the second stage, did not perform nearly as well. Furthermore, claims that non-linear 2SRI (using a probit model, for example) is able to recover the ATE (as opposed to the LATE) are questionable [67]. Quantifying the effect of IV method misspecification relative to the confounder methods remains an avenue for future research.

Intuitively, using LPMs appears inappropriate as there is nothing preventing predicted values outside the interval [0, 1], which leads to bias and inconsistency of LPM estimates. However, if the population of interest only has true probabilities in a range not close to 0 or 1, then this will seldom be an issue because we will not have errant predicted values; thus, LPMs will be consistent and unbiased [68]. One could also argue that if there are true probabilities of 0 or 1, then we have a positivity violation and propensity score methods, whether they use a logit, probit, or LPM to calculate propensity scores, will also run into issues. One solution for any method is to truncate probabilities, but this will come at the cost of bias for the original causal effect.

We can rarely mimic the 2SLS substitution procedure with non-linear models, such as two-stage probit, and maintain consistency for the treatment effect (such an action is called “forbidden regression”) [23]. Instead, one can consider a three-step procedure: fit the first stage with a non-linear model, regress the treatment on the resulting predicted values in OLS excluding the IVs, and, lastly, fit the second stage with a linear or non-linear model.
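
One concrete version of this three-step procedure, sketched with statsmodels under the assumption of a probit first stage and a linear second stage (point estimate only; standard errors require further adjustment):

```python
# A sketch of the three-step procedure with a probit first stage, assuming a
# linear second stage; point estimate only (standard errors need adjustment).
import numpy as np
import statsmodels.api as sm

def probit_three_step(y, d, x, z):
    # Step 1: non-linear first stage for the binary treatment.
    probit = sm.Probit(d, sm.add_constant(np.column_stack([x, z]))).fit(disp=0)
    phat = probit.predict()
    # Step 2: linear projection of D on the fitted values, excluding the IVs.
    dhat = sm.OLS(d, sm.add_constant(np.column_stack([x, phat]))).fit().predict()
    # Step 3: second stage with the projected treatment.
    res = sm.OLS(y, sm.add_constant(np.column_stack([dhat, x]))).fit()
    return res.params[1]             # coefficient on the treatment
```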

Flexible confounder and IV methodology

Recent developments in causal inference methodology have been mostly focused on relaxing assumptions on common approaches like OLS, IPTW, and 2SLS. These developments are predominately motivated by the wish to reduce the impacts of model misspecification and the view that many subtasks of estimation are primarily prediction-based, allowing for more flexible modeling using ML. Another motivation that we will not discuss is sparsity (e.g. incorporating more confounders or IVs than observations). We will cover three approaches: targeted minimum loss-based estimation (TMLE), Post-LASSO (PL), and double machine learning (DML) [10, 12, 69].

TMLE improves upon AIPTW by modeling nuisance quantities flexibly in addition to achieving double robustness. Nuisance quantities are needed to calculate the treatment effect but are not of direct interest to the research question. For example, the propensity score can be considered a nuisance quantity. By using ML and a “targeting step,” we can avoid nuisance quantity misspecification and optimize the bias-variance tradeoff for the ATE [70, 71]. To estimate nuisance quantities, TMLE uses a cross-validation-optimized weighted ensemble of ML algorithms (e.g. penalized regression, random forests, gradient boosted trees) and “cross-fitting.” Cross-fitting bears some similarities to cross-validation in that “out-of-sample” data, or partitions of the data not used to fit the models, are utilized to reduce overfitting. Cross-fitting also allows one to avoid proving complicated Donsker conditions for asymptotic normality [72]. In addition, TMLE offers some robustness to near-violations of the positivity assumption.

Briefly, the estimation procedure of TMLE is as follows. Let D represent the assignment of a binary treatment and \({\textbf {X}}\) a set of confounders. First, perhaps using ML, we use the confounders to fit initial models for treatment assignment and outcome: \(g(D,{\textbf {X}})\) and \(\bar{Q}^0(D,{\textbf {X}})\), respectively. Following this, we combine predictions from these equations along with a fluctuation parameter \(\epsilon\) to give us double-robustness and local efficiency. Now, we will update the outcome equation to \(\bar{Q}^1(D,\varvec{X})\) and then estimate the ATE using a “plug-in” estimator of the form

$$\begin{aligned} \hat{\beta }^{TMLE} = \frac{1}{n} \sum \limits _{i=1}^{n} \left[ \bar{Q}^1(d_i=1,{\textbf {x}}_i)-\bar{Q}^1(d_i=0,{\textbf {x}}_i)\right] . \end{aligned}$$
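To make the procedure concrete, below is a minimal one-step TMLE sketch in R for a binary outcome, in the spirit of the tutorial in [70] (simulated data; plain logistic regressions stand in for the ML ensemble, and a continuous outcome such as a change in ADAS-cog would first be scaled to [0, 1]):

```r
set.seed(3)
n <- 1000
w <- rnorm(n)
d <- rbinom(n, 1, plogis(0.4 * w))
y <- rbinom(n, 1, plogis(-0.5 + d + 0.7 * w))

# Initial fits for the outcome and the propensity score (nuisance quantities)
Qfit <- glm(y ~ d + w, family = binomial)
gfit <- glm(d ~ w, family = binomial)
gW <- predict(gfit, type = "response")

QAW <- predict(Qfit, type = "response")
Q1W <- predict(Qfit, newdata = data.frame(d = 1, w = w), type = "response")
Q0W <- predict(Qfit, newdata = data.frame(d = 0, w = w), type = "response")

# Targeting step: estimate the fluctuation parameter epsilon using the
# "clever covariate" H and an offset equal to the initial fit (logit scale)
H <- d / gW - (1 - d) / (1 - gW)
eps <- coef(glm(y ~ -1 + H + offset(qlogis(QAW)), family = binomial))

# Updated outcome predictions and the plug-in ATE
Q1W_star <- plogis(qlogis(Q1W) + eps / gW)
Q0W_star <- plogis(qlogis(Q0W) - eps / (1 - gW))
mean(Q1W_star - Q0W_star)
```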

While TMLE takes a plug-in estimator approach, DML and its precursor PL utilize estimating equations, allowing them to compute both confounder and IV estimates. These methods use orthogonalization and the Frisch-Waugh-Lovell theorem: they first perform regressions on the confounders and IVs and then use the residuals in subsequent estimating equations for the treatment effect that are “locally insensitive” to misspecification. Similar to TMLE, ML algorithms are used to fit the residual-creating regressions. The Post-LASSO name comes from the fact that we first run a LASSO regression to select variables and then refit OLS on the selected variables, which mitigates issues in LASSO variable selection. DML extends this notion of estimating nuisance quantities to more ML algorithms and includes double robustness. Asymptotic considerations, namely Donsker conditions for normality, are handled through cross-fitting.

For the IV model, we use the moment condition \(E[\Phi _i(\beta _0,\eta _0)]=0\) where \(\beta _0\) is the coefficient for the treatment effect and \(\eta _0\) contains the coefficients for the nuisance quantities based on the confounders and IVs. The score function takes the form \(\Phi _i(\beta ,\eta ) = (\tilde{\rho }_i^y - \tilde{\rho }_i^d\beta )\tilde{v}_i\), which is simply the canonical form of a generalized method of moments estimator using \(\tilde{v}_i\) as an IV. The \(\tilde{\rho }_i^y\), \(\tilde{\rho }_i^d\), and \(\tilde{v}_i\) terms are all results of orthogonalization. For example, \(\tilde{\rho }_i^y =y_i - x_i^T\theta\) where \(\theta\) was estimated via PL or LASSO. DML extends the orthogonalization procedure to other ML algorithms by assuming partially linear models.

The “robustness” to misspecification, whether via unobserved confounders or regularization of relevant confounders, operates through the orthogonality condition:

$$\begin{aligned} \frac{\partial }{\partial r} E[\Phi _i(\beta _0,\eta _0 + r(\eta - \eta _0))]\Big |_{r=0} = 0, \quad \forall \eta . \end{aligned}$$
(7)

Equation 7 essentially states that if there is some local error in estimating the nuisance quantities, the resulting estimate will still be consistent for \(\beta _0\). The authors call this “Neyman orthogonality” as the concept of “locally” robust estimation dates back to work by Jerzy Neyman in 1959 [73]. How robust this local insensitivity is in real-world scenarios remains to be rigorously investigated.
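A base-R sketch of the orthogonalized IV estimating equation follows (simulated data; OLS stands in for the LASSO/ML residualizing regressions, and cross-fitting is omitted for brevity):

```r
set.seed(4)
n <- 2000
x <- rnorm(n)
u <- rnorm(n)  # unobserved confounder
z <- 0.5 * x + rnorm(n)  # IV, valid conditional on x
d <- z + 0.5 * x + u + rnorm(n)
y <- 1 * d + x + u + rnorm(n)  # true beta is 1

# Orthogonalization: residualize the outcome, treatment, and IV on the
# confounders; in PL/DML these would be LASSO or other ML regressions
rho_y <- resid(lm(y ~ x))
rho_d <- resid(lm(d ~ x))
v <- resid(lm(z ~ x))

# Solve the empirical moment condition sum((rho_y - rho_d * beta) * v) = 0
beta_hat <- sum(v * rho_y) / sum(v * rho_d)
beta_hat
```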

Fitting the first stage of an IV analysis using ML and cross-fitting can result in inference that is more robust to weak IVs. In 2SLS, the estimated coefficients and, by extension, the error term in the second stage are correlated with those of the first, resulting in finite-sample bias. Thus, if ML can generate better predictions than OLS, we can decrease the first-stage error. In addition, we can utilize sample-splitting to break the association between the two error terms, which reduces the impact of weak IVs [52]. We first partition the sample, for example in half, use the first half to fit the first stage, and compute predicted values for the treatment on the second half. The predictions based on the second half are then used in the second stage. Cross-fitting expands on sample-splitting by repeating the process on the “unused” partition (i.e. the partition used in the second stage) and then pooling the results across folds. Both DML and TMLE use cross-fitting while PL only uses sample-splitting.
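The following R sketch illustrates the split-sample mechanics and their cross-fit extension under simulated data (all names hypothetical; a weak IV is mimicked via a small first-stage coefficient):

```r
set.seed(5)
n <- 2000
x <- rnorm(n)
u <- rnorm(n)
z <- rnorm(n)
d <- 0.3 * z + 0.5 * x + u + rnorm(n)  # weak-ish first stage
y <- 1 * d + x + u + rnorm(n)
dat <- data.frame(y, d, z, x)

folds <- sample(rep(1:2, length.out = n))
est <- numeric(2)
for (k in 1:2) {
  fit_half <- dat[folds != k, ]  # fit the first stage here...
  use_half <- dat[folds == k, ]  # ...and predict the treatment here
  fs <- lm(d ~ z + x, data = fit_half)
  use_half$d_hat <- predict(fs, newdata = use_half)
  est[k] <- coef(lm(y ~ d_hat + x, data = use_half))["d_hat"]
}
est[1]     # split-sample IV estimate (one direction only)
mean(est)  # cross-fit version: repeat on the unused partition and pool
```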

Theoretically, the benefits of combining ML and causal inference methods are clear, but the extra complexity carries some caveats. There is a great danger of overfitting and spurious predictions, especially when the sample size is small. For example, in predicting binary outcomes, one simulation study found that 20 to 50 “events” per variable were necessary for logistic regression to achieve stable prediction performance [74]. For more sophisticated methods like neural networks and random forests, instability persisted above 200 events per variable. Translating this to causal inference, overfitting makes the external validity of ML prediction questionable, especially when the proportion of treatment assignment is near 0 or 1, as even larger samples will be needed. Furthermore, ML methods could diminish the reproducibility of causal effects across studies and can be computationally expensive without much gain over “traditional” methods.

The in-practice performance of these ML-based causal inference algorithms is still under question. In one simulation study, Angrist and Frandsen found meaningful gains from using ML to select confounders but not IVs [13]. One interesting phenomenon they highlighted was “pretest bias”: when LASSO decides whether to retain a coefficient, there is an implicit test being performed against a threshold contingent on the LASSO regularization parameter and the sample size. Consequently, when the true first-stage coefficients are zero or near-zero, bias in the treatment effect is introduced [75]. In the paper, sample-splitting or cross-fitting mitigated pretest bias, with ML not offering notable improvement in parameter estimation. The apparent importance of out-of-sample procedures further highlights the need for large sample sizes in sophisticated techniques.

Perhaps the work on ML and causal inference, though innovative, “buries the lede,” so to speak. Far and away, unobserved confounding is the greatest threat to the validity of observational studies. Though nuisance quantities can be more flexibly modeled, the bias due to omitted variables likely dwarfs potential model-fitting issues that arise when using traditional models such as OLS. Sophisticated methods are more robust to violations of modeling assumptions than their traditional counterparts, but this does not obviate the need to properly select confounders and IVs - evaluating the paradigm is paramount. For example, there is some literature on estimation with “imperfect” IVs that relaxes validity requirements through variable selection techniques, but these methods presuppose that one has at least some valid IVs [76].

In terms of external validity, in non-linear settings it is also unclear whether these methods estimate a marginal or conditional effect. For the IV setting, if the first stage is fit with ML and there is a significant amount of regularization, then the definition of the LATE can become vague. More perilous is the possibility that confounders needed to ensure conditional independence of the IVs are omitted from the first stage due to regularization. In PL, one could simply choose not to shrink certain variables to zero, but for DML it is unclear how this can be prevented in more sophisticated algorithms like random forests.

Comparing approaches for applied data analysis

In this section, we offer general principles and guidance in a series of steps to help an analyst weigh the validity of the confounder approach against the IV approach. Why not utilize both approaches in the same analysis? Certainly, if the appropriate variables are available, then one can use IV estimates as a sensitivity analysis of confounder adjustment estimates and vice versa. An issue arises, however, when the approaches offer conflicting results, such as oppositely signed effect estimates or one being statistically significant but not the other. In these cases, one will have to judge which estimate is more desirable in terms of consistency, utility, and generalizability. A visualization of the heuristic in this section is presented in Fig. 6.

Fig. 6 Flowchart presenting a possible heuristic for selecting an approach

The first step is to assess the severity of unobserved confounding. In our view, compared to a confounder adjustment approach, the IV approach trades increased complexity and decreased interpretability for potential consistency in the setting of unmeasured confounding. Therefore, unless there is evidence to suggest that the confounder approach will suffer from notable unobserved confounding, the more straightforward approach should be pursued. To evaluate unobserved confounding, one may first identify confounders independently of the dataset and then assess what is present in the data, what is missing, and which observed variables could serve as proxies for missing confounders. Furthermore, one should postulate the magnitude of unobserved confounding, as there could be scenarios where the lion’s share of variation due to confounding is captured by a few observed confounders. For example, a rich electronic health record dataset may not be missing important confounders of interest, whereas an insurance claims dataset is frequently missing valuable medical information. In addition, because the IV approach largely builds upon the confounder approach by incorporating confounders in the first and second stages, identifying confounders remains important for gauging the validity of potential IVs.

When performing this step, one should keep in mind how conditioning on certain confounders may affect interpretability and reproducibility. Further, one may consider the potential for treatment effect heterogeneity, as this also affects the generalizability of confounder methods via effect modification, keeping in mind that omitted confounders could also be effect modifiers [77]. When reviewing treatment effect heterogeneity, it may also be a good time to weigh the usefulness of marginal versus conditional treatment effects for the purposes of the study at hand. This will assist in interpreting regression adjustment and IPTW output.

In the second step, one should identify whether there are any potential IVs, first in the paradigm and then in the dataset. Even if a confounder approach will be used, adjusting for IVs can amplify inconsistency. One should then explore the strength of the IVs, theoretically and empirically. From this, we can hypothesize the degree of invalidity of the selected IVs in terms of the independence assumption. The order is important: if the IVs are strong but slightly invalid, the degree of inconsistency can still be better than a confounder approach; if the IVs are seemingly weak but valid, they can potentially be salvaged by more sophisticated algorithms. Just how invalid an IV can be while still outperforming confounder methods is a developing topic of research. Of note, Pearl investigated this question with one confounder and one imperfect IV using a threshold for when to choose the candidate variable as a confounder [39]. Nevertheless, the interpretability of this threshold could be improved to better accommodate applied analyses, and it does not consider the potential results from an IV estimation procedure such as 2SLS.
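Empirically, first-stage strength can be checked with a partial F-test of the IVs given the confounders; a sketch with simulated data and hypothetical names:

```r
set.seed(7)
n <- 1000
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n), z1 = rnorm(n), z2 = rnorm(n))
dat$d <- with(dat, 0.15 * z1 + 0.1 * z2 + 0.5 * x1 + rnorm(n))

# First-stage strength: partial F-test of the IVs given the confounders
fs_full <- lm(d ~ z1 + z2 + x1 + x2, data = dat)
fs_null <- lm(d ~ x1 + x2, data = dat)
anova(fs_null, fs_full)  # F statistic for the joint relevance of z1 and z2
```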

If there is notable treatment effect heterogeneity per the previous step, one should describe the subpopulation characterized by the LATE and any potential differences between the LATE and ATE. One should also justify whether this subpopulation is worth studying in the first place. If the subpopulation is too obscure, a biased confounder approach is perhaps of more scientific value.

The third and final step is to examine other relevant characteristics of the dataset: generally, the sample size and, if the treatment is binary, the proportion of individuals that received the treatment. Large sample sizes cannot salvage confounder-based estimates in the presence of substantial unobserved confounding but can greatly help the IV approach. Notably, IV methods have larger standard errors for the treatment effect than a confounder method like OLS, which is only exacerbated by weak IVs. In some cases, one could argue that IV methods produce Type II error rates so large that the analysis is not worth pursuing. Therefore, a larger sample size will not only create a more precise estimate and lessen finite-sample bias but will also open the door to more sophisticated methods, like ML and cross-fitting, being used reliably. Accordingly, we may strengthen the first stage and decrease sensitivity to exclusion restriction violations.

For a binary treatment, the distribution of the probability of receiving treatment is important both for the validity of the positivity assumption in propensity score-based methods and for the validity of using an LPM or 2SRI for the IV first stage. Probabilities near 0 or 1 are an issue for both approaches, but it remains to be investigated whether positivity violations cause more inconsistency in confounder methods than in 2SLS. Under near-positivity violations, we may also run into Type II error concerns, as we would for IV methods. We note these notions are particular to the scenario, but sample size does not necessarily mitigate the effects of near-positivity violations under extreme imbalance of treatment assignment.

A notable data-driven diagnostic of the relative performance of confounder adjustment methods versus IV methods under observed confounding (recall that unobserved confounding can also violate the independence assumption of an IV) is the bias ratio [78]. Suppose we have the OLS model in Eq. 8:

$$\begin{aligned} E[(y_{i,t=2} - y_{i,t=0})|D,X] = \beta _0 + \beta _1 d_i + \beta _2x_{1i}. \end{aligned}$$
(8)

If we have a binary IV, we can derive the bias from omitting the confounder \(X_1\) for OLS and 2SLS as

$$\begin{aligned} Bias_{OLS}(X_1) = \beta _2(E[X_1|D=1]-E[X_1|D=0]), \end{aligned}$$
(9)
$$\begin{aligned} Bias_{2SLS}(X_1) = \beta _2\frac{E[X_1|Z=1]-E[X_1|Z=0]}{E[D|Z=1]-E[D|Z=0]}. \end{aligned}$$
(10)

We can take the ratio of Eq. 10 to Eq. 9 to obtain a bias ratio, where a value greater than 1 implies that the 2SLS estimate is more sensitive to an omitted confounder than the OLS estimate. If we assume that unobserved confounding is correlated with the observed confounder that we intentionally omit, then we gain information about each method’s sensitivity to unobserved confounding. Improvements include adding variance to the bias ratio to represent uncertainty in estimates and considering the magnitude of \(\beta _2\), which cancels in the ratio [56, 79].
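Given a binary treatment d, a binary IV z, and an intentionally omitted observed confounder x1, the bias ratio is simple to compute; a sketch with a hypothetical helper name (note that \(\beta _2\) cancels when taking the ratio):

```r
# Ratio of Eq. 10 to Eq. 9: values above 1 suggest the 2SLS estimate is
# more sensitive to omitting x1 than the OLS estimate
bias_ratio <- function(d, z, x1) {
  ols_bias_term <- mean(x1[d == 1]) - mean(x1[d == 0])
  iv_bias_term <- (mean(x1[z == 1]) - mean(x1[z == 0])) /
    (mean(d[z == 1]) - mean(d[z == 0]))
  iv_bias_term / ols_bias_term
}
```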

Illustrative example: off-label donepezil for MCI patients

We now turn to our applied data analysis surrounding the off-label use of donepezil, a cholinesterase inhibitor, in MCI patients. We will pursue the causal inference analysis steps outlined throughout the paper: first, we will outline the scientific paradigm of the analysis and define confounders and instruments; next, we will examine our available dataset, its ability to answer the causal question of interest, and limitations on this goal; lastly, we will compare and contrast confounder and IV methodologies with respect to causal effect estimates.

We analyze data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), an observational dataset that longitudinally tracks cognitively normal, MCI, and AD volunteers aged 55-90 with good general health. Among other information, the data includes both general health and specific neurocognitive data collected roughly every six months until the patient leaves the study [80]. Our dataset contains patients from several ADNI recruitment waves beginning in October 2004 until May 2021 when the data was extracted for analysis. Of note, those already on cholinesterase inhibitors such as donepezil are allowed into ADNI if they have been stable on the medication for at least 12 weeks prior to entry.

Study paradigm

Our goal is to estimate the difference between donepezil users’ and non-users’ two-year change from baseline in ADAS-cog. Furthermore, we are potentially susceptible to confounding by indication, which warrants using the confounder and IV approaches to isolate the causal estimand of interest.

To identify confounders, we begin by listing potential general predictors of ADAS-cog scores such as age, sex, family history of neurodegenerative disease, socioeconomic status, physical disabilities, race, and comorbidities. In addition, we can list particular biological attributes: the number of APOE4 alleles and cerebral spinal fluid (CSF) measurements of phosphorylated tau (P-tau) and amyloid-\(\beta\) (A\(\beta\)). Except for physical disabilities, all of the aforementioned variables could be potential influences on the assignment of donepezil and, hence, we consider them confounders. A hypothesized DAG of causal relationships with a few of the potential confounders is presented (Fig. 7).

Fig. 7 Directed Acyclic Graph of Donepezil Analysis with Notable Confounders

If there are differential prescriber attitudes to donepezil, then physician and facility prescribing patterns could be used as IVs [33]. A similar possible IV is the time since Food and Drug Administration (FDA) approval of donepezil for AD patients in 2004. In the roughly two decades since approval, a large volume of research has been conducted on the MCI construct and on donepezil in MCI patients. As a result, the general perspectives of practitioners may have changed over time. Of course, we must assume the IV is monotonic; that is, attitudes towards donepezil use have become monotonically more favorable or less favorable over time.

Another potential category of IVs is related to contraindications such as bradycardia or gastrointestinal disorder, which may make off-label prescription unfavorable relative to benefits. Importantly, we should note that contraindication IVs are sometimes related to the general health of a patient, which is, in turn, related to cognitive decline. In this case, we should be particularly cautious in utilizing certain contraindications as IVs and should only do so if we can adjust for most of the related confounders.

Identifying possible IVs allows us to examine whether the LATE is of scientific interest. All of the IVs previously mentioned essentially capture variation caused by differential physician prescribing attitudes. Therefore, under these IVs, a complier is a patient who follows the judgment of the physician, and the LATE is this population’s treatment effect. The resulting complier treatment effect could be useful because a formal analysis evaluating off-label donepezil practice intends to influence guidelines for physicians, and such guidelines are constructed under the assumption of patient compliance with physician advice. Defiers are unlikely under these IVs, as individuals usually follow physicians’ judgments regarding risks and benefits.

Cohort selection

A flowchart of the selection criteria is depicted in Fig. 8. We ascertained off-label donepezil prescriptions from self-reported pharmaceutical use and commencement dates collected at every ADNI visit. Using a keyword search, we first found patients who were prescribed donepezil and then used diagnosis data to identify prescriptions during MCI. Patients who had never reported use of donepezil were considered members of the control group. Furthermore, individuals who received donepezil while having AD were also used as controls, but their data were artificially censored at the visit before the first donepezil prescription.

Fig. 8 Flowchart of selection of final cohort

For the treatment group, the start date was the self-reported commencement of donepezil, and we required that the baseline ADAS-cog measurement come from the closest visit within six months of this start date. If no such visit existed, the subject was dropped from the study. Under an intent-to-treat principle [81], if we observed a patient beginning donepezil, then they remained in the treatment group. The end date was computed by adding two years to the start date and taking the closest visit between 21 and 30 months after the start date. Once again, if no such visit existed, the subject was dropped.

The control group start date was the first visit after MCI diagnosis; for many, this was their ADNI entry visit. Like the treatment group, the end date was computed by adding two years to the start date and applying the same 21-to-30-month requirement.

Mimicking the Petersen et al. 2005 trial, on top of the ADNI inclusion criteria, we applied further inclusion-exclusion criteria from this trial depending on their availability in the data. Specifically, we only included those with an MMSE of 24-30 at baseline and excluded anyone with self-reported bipolar disorder, schizophrenia, suicidal ideation, or psychosis. Applying these criteria resulted in 134 donepezil subjects and 483 control subjects.

Examining the data: confounders and IVs

Considering Fig. 7, which depicts major confounders, we hypothesize that there are notable unobserved confounders. For example, we cannot adjust for socioeconomic status (SES) or family history: SES is not in the dataset, while family history contains too many missing values to impute (approximately 93% missing before applying inclusion criteria). We have information on comorbidities via three indicator variables: the presence of cardiovascular disease, other neurological disease, and renal disease. These indicators are limited, as they fail to capture severity and not all the conditions these categories include are relevant. Overall, we judge that the impact of these missing or low-quality confounders is not too large, with the exception of family history, which is likely elicited during a doctor’s visit.

Another potential area of unobserved confounding could come from the lack of historical information on a patient: we do not have neurocognitive test scores on these patients before entry into ADNI, but a prescriber may have this information and consider it before making a decision. Most subjects’ study baseline visits are either close to or at ADNI entry, which precludes calculating any useful measure of history from the data, such as a first-order trend. In addition, there are many difficult-to-measure evaluations of patient well-being that stem from the intuition of the attending clinician.

For observed variables that may serve as confounders, we have demographic information on sex, age, years of education, and APOE4 status. One necessary confounder induced by our sampling scheme is the length of time, in years, that the patient was in ADNI before the start of the analysis. This is associated with the outcome because the longer someone has MCI, the more opportunity there is for cognitive decline. Moreover, time in ADNI is associated with treatment assignment, as most of the start dates for the control group were at entry into ADNI while the start dates of the donepezil group tended to be several months into ADNI (Table 3).

We utilize CSF measurements for P-tau and A\(\beta\), but only at ADNI entry because missingness was too high otherwise (36.3% for both variables’ entry measurements and 84.3% for all recorded measurements). We believe that for individuals with study start dates further from ADNI entry, combining entry P-tau and A\(\beta\) measurements with time in ADNI is sufficient to extrapolate to the study baseline.

Table 3 Potential confounders stratified by donepezil use

Overall, while the observed confounders serve as useful proxies of the unmeasured components, we judge that there still remains unobserved confounding by indication. Therefore, it is warranted to investigate the IV approach. Unfortunately, ADNI did not lend itself to intuitive, high-quality IVs. Nevertheless, we utilized time, in years, since FDA approval of donepezil along with contraindication indicator variables encompassing cardiovascular issues (such as bradycardia and sick sinus syndrome), gastrointestinal disorder, and asthma. More details about these IVs, including investigations of strength and validity, can be found in the supplementary appendix. Notably, these IVs were weak, but the LATE is scientifically meaningful for the reasons discussed earlier.

We note that the weakness of the IVs combined with a lower sample size cautions us against using IV methods. However, for the purposes of comparison to confounder methods, we will still fit models with these IVs. Lastly, the proportion of individuals in the dataset who took the treatment is not extreme, so the use of the LPM in the IV methods is valid.

Statistical methodology

Before applying selection criteria, ADAS-cog, MMSE, A\(\beta\) at ADNI baseline, and P-tau at ADNI baseline were missing for roughly 30% of the observations. The comorbidity and contraindication (IV) indicators were also missing, but only for 0.4% of the observations. These values were imputed using multiple imputation via chained equations (MICE), resulting in five imputed datasets. Specifically, to account for within-subject correlation, we performed predictive mean matching using a linear mixed model with fixed effects for time in ADNI, clinical dementia rating scale sum of boxes, MMSE, ADAS-cog, age at ADNI entry, sex, APOE4 count, whether they had taken donepezil, the comorbidity indicators, A\(\beta\) at ADNI baseline, and P-tau at ADNI baseline. We included a random intercept and a random slope on time in ADNI.
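A simplified sketch of the imputation step using the mice package (adni_df is a hypothetical data frame holding the variables above; plain predictive mean matching is shown here, whereas the analysis described used a mixed-model variant to respect within-subject correlation):

```r
library(mice)

# Five imputed datasets via predictive mean matching
imp <- mice(adni_df, m = 5, method = "pmm", maxit = 10, seed = 2021)
completed_1 <- complete(imp, 1)  # extract the first completed dataset
```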

To obtain a “baseline” estimate of the treatment effect without any adjustment for confounders, we regressed the change in ADAS-cog on treatment assignment. Then, we utilized the variables in Table 3, with the exception of starting ADAS-cog, as confounders for OLS, IPTW, TMLE, and DML. For IPTW, the propensity score was computed using logistic regression; diagnostics are reported in the appendix. For TMLE, we utilized five-fold cross-fitting and random forests to fit nuisance quantities. For DML, random forests were also used for fitting models involving nuisance quantities. The IV analyses included the same confounders and the aforementioned IVs; for these models, we fit 2SLS, PL with a regularized first stage, and DML using random forests for nuisance quantities. To compare the strength of the first stages, pooled F-statistics were computed on the first-stage OLS involving all the IVs and confounders and on the IVs and confounders selected by PL. Estimates across imputed datasets were combined using Rubin’s rules. All analysis was conducted using R version 4.0.3.
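As a sketch of this estimation-and-pooling workflow (hypothetical variable and object names; AER::ivreg fits the 2SLS, and pool_rubin implements Rubin's rules manually, though mice::pool automates this):

```r
library(AER)

# Fit 2SLS on each of the m imputed datasets: confounders x1, x2 appear in
# both stages, while z1, z2 serve as IVs for the treatment d
fits <- lapply(imputed_list, function(dat) {
  ivreg(adas_change ~ d + x1 + x2 | z1 + z2 + x1 + x2, data = dat)
})
qhat <- sapply(fits, function(f) coef(f)["d"])
se <- sapply(fits, function(f) sqrt(vcov(f)["d", "d"]))

# Rubin's rules: total variance = within + (1 + 1/m) * between
pool_rubin <- function(qhat, se) {
  m <- length(qhat)
  W <- mean(se^2)
  B <- var(qhat)
  c(estimate = mean(qhat), se = sqrt(W + (1 + 1/m) * B))
}
pool_rubin(qhat, se)
```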

Results

The results, presented in Table 4, demonstrate a clear divide between the confounder and IV approaches. While the confounder methods found a point estimate that suggests donepezil has a deleterious effect on MCI patients, the IV methods produce an attenuated but beneficial effect estimate. Taking into account the standard errors, however, the IV methods cannot rule out a null effect of donepezil.

Table 4 Treatment effect estimate by methodology

Among the confounder methods, differing levels of efficiency led to some methods attaining statistical significance at the .05 level while others did not (setting aside the unadjusted OLS). TMLE yielded the lowest estimated standard error. In contrast, DML 1 with random forests had a large standard error, possibly due to the lower sample size and less efficient use of the data in cross-fitting. When we used DML 1 with logistic regression and OLS to estimate nuisance quantities, the point estimate and standard error decreased to be closer to the other confounder methods (Estimate: 1.48, SE: 1.16). Similarly, substituting linear models for random forests in TMLE barely modified the point estimate but increased the standard error such that the estimate was no longer statistically significant (Estimate: 1.33, SE: 0.99, \(p=0.178\)). OLS and IPTW had slightly different results, with IPTW having a larger point estimate (i.e. a more harmful effect) and a larger standard error. The different point estimates could be due to slight treatment effect heterogeneity, while the lower precision of the IPTW estimate could be due to near-positivity violations (see Appendix).

Regarding PL, we originally intended to utilize the default “data-driven” selection of \(\lambda\), the first-stage LASSO regularization parameter, from the authors’ R package hdm [82]. However, the package-selected value of \(\lambda\) led to all IVs being eliminated from the first stage and, subsequently, unreliable results. Figure 9 shows that at \(\lambda = 0.075\), when the instruments were all excluded, the point estimate and standard error change notably. Noting that the results remained largely the same whenever at least one IV was retained, we report the results from \(\lambda = 0.025\).

Fig. 9 Post-LASSO Estimates and 95% Confidence Intervals by Shrinkage Amount (\(\lambda\))

Across all imputed datasets, the first-stage LASSO with \(\lambda = 0.025\) selected time in ADNI, baseline A\(\beta\), and time since FDA approval. For two imputed datasets, LASSO additionally retained baseline P-tau and, in one of these two, asthma. Selecting variables strengthened the first stage, with the PL first-stage F-statistic being much higher than that of 2SLS (pooled \(F = 33.3\) versus pooled \(F = 8.5\)).
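This kind of first-stage sensitivity check can be sketched with glmnet standing in for the hdm routines (simulated data and hypothetical variable names; the point is to track which confounders and IVs survive at each \(\lambda\)):

```r
library(glmnet)
set.seed(6)
n <- 500
X <- cbind(conf1 = rnorm(n), conf2 = rnorm(n), iv1 = rnorm(n), iv2 = rnorm(n))
d <- 0.8 * X[, "conf1"] + 0.2 * X[, "iv1"] + rnorm(n)  # weak IV signal

# Fit the first-stage LASSO path once, then inspect the retained variables
# at each shrinkage value of interest
fit <- glmnet(X, d, alpha = 1)
for (lam in c(0.1, 0.075, 0.05, 0.025, 0.01)) {
  b <- as.matrix(coef(fit, s = lam))[-1, 1]  # drop the intercept row
  cat("lambda =", lam, "->", paste(names(b)[b != 0], collapse = ", "), "\n")
}
```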

Ostensibly because DML 2 builds upon PL with better practices and more sophisticated ML techniques, we would expect similar results, but this was not the case. The point estimate was the most negative and, besides the unadjusted OLS, the most extreme. One explanation could relate to Fig. 9, where “over-regularization” of the first stage produces extreme results. When we replaced random forests with OLS in DML 2, we obtained results more similar to the other IV methods (Estimate: 0.154, SE: 2.712), suggesting a sensitivity of DML point estimates to the choice of prediction model in both approaches. We were not able to fit DML with LASSO as the authors’ official package (dml) did not support non-numeric covariates specifically for the LASSO algorithm.

Takeaways from applied example

In this analysis, we found that the two different approaches presented two different conclusions. Contextualizing these results, the Petersen et al. clinical trial found a -0.27 point change in ADAS-cog after 2 years for the donepezil group compared to placebo, which failed to attain statistical significance (95% CI: -1.13 to 0.59 points) [1] and was similar to the IV estimates. Meanwhile, our confounder-based estimates mirror a similar ADNI analysis conducted by Schneider, Insel, and Weiner in 2011 [5]. Using a linear mixed model, the authors found a statistically significant increase in the rate of decline in donepezil users compared to control subjects. Using this slope to calculate the difference from baseline after two years, we arrive at a comparable estimate of 1.56.

Adding the more sophisticated ML-based algorithms did not produce notable changes over traditional methods and even created a few dilemmas. For PL, the default algorithm only included baseline A\(\beta\), selecting out all IVs, likely due to their weakness. This was “a feature, not a bug”: the first-stage F-statistic was indeed the highest at 60.94, indicating a “stronger” first stage. But without an IV in the first stage, the results are theoretically meaningless, which cautions against a purely “greedy” approach to fitting the first stage.

Another issue with PL was that the variables selected for the first stage differed across imputed datasets. Thus, both the confounder and IV estimands across imputed datasets are not exactly the same. For example, the estimated LATE from selecting only time since FDA approval as an IV was combined with the estimated LATE using both time since FDA approval and asthma. In this case, the Rubin’s rules pooled estimate is actually an average of different LATEs, which is ambiguous. This is yet another problem with a “greedy” approach to the first stage, and it is likely that a similar process occurred for TMLE and DML with random forests.

One interesting discussion surrounds the choice of hyperparameters for TMLE and DML, particularly which ML algorithms should be selected, given that the results changed depending on the algorithm chosen: TMLE-derived results became insignificant while the DML 2 point estimate flipped signs. This raises concerns about replicability issues that may arise from an arbitrary choice of ML algorithm. The criteria for selecting ML algorithms are presently unclear from the authors of both TMLE and DML. In our case, the choice of random forests was made before any results were computed.

This analysis contains several limitations. Firstly, the selection criteria were not exactly the same as those of Petersen et al. and Schneider et al., meaning the populations in which we estimated the effect of donepezil differed slightly. The exclusion of MCI individuals from ADNI who had not been stable on donepezil skews the results towards the null, as these individuals likely experienced harm from donepezil but were missing from the data. In addition, the control group in our analysis was aware they were not receiving donepezil, whereas in a trial the control group receives a placebo in a blinded fashion. Our selection of baseline and two-year measurement dates could also be improved; instead, we could have modeled the rate of decline in ADAS-cog longitudinally. Unfortunately, the more sophisticated ML methods are not yet equipped to handle clustered data, and, even so, our main purpose was to compare our results to Petersen et al., which utilized 24-month change. Start and end dates were often missing the day of the month, leading to inaccuracy in measuring exact 24-month changes. For the instrumental variable strategy, we relied on self-reported health issues, a reporting process that suffers from recall bias. Most likely, there were omitted or misdiagnosed conditions, which means our IVs were much weaker than they could have been given accurate medical history. Lastly, dosage of donepezil and compliance are two critical nuances in evaluating treatment effects that we were not able to measure here. As an example regarding compliance, it could be that already more severe MCI subjects would also have issues remembering to take their medication. Despite these limitations, the variation in estimands and resulting estimates between confounder adjustment and IV approaches remains of interest.

Conclusions

In this paper, we have outlined a paradigm for isolating causal effects in observational studies that centers around two approaches: confounders and IVs. We began by defining each approach and its often untestable assumptions. We then discussed a set of heuristics for executing the approaches from identifying relevant variables to examining external validity. In addition, we highlighted that the two approaches often overlap (e.g. potential bias amplification by IVs used as confounders) and, thus, that one approach cannot be undertaken without some consideration of the other. Following this, we discussed three popular methodologies, OLS, IPTW, and 2SLS, for conducting statistical modeling along the confounder and IV approach including comparing OLS to IPTW and why causal inference modeling should not simply be viewed as a prediction task. Further, we explored the implications of violations in the assumptions of each of these methodologies and how to operate when these assumptions could be violated. After touching upon causal inference in non-linear scenarios and more flexible models, we outlined a set of steps to consider when weighing each approach.

The principles and processes laid out in the above portions culminated in the applied example surrounding off-label donepezil, where each approach provided different challenges and, ultimately, conflicting estimates. A natural question to ask is: which approach was better at isolating the causal effect of interest? Of course, we do not have knowledge of the true causal effect in the general population, so it is impossible to have a definitive answer to this question. This mostly owes to the untestable assumptions discussed throughout the paper, such as whether we have accounted for virtually all confounding and whether our IVs are valid. Even then, the confounder and IV estimates may be targeting different causal estimands, and, furthermore, the process of outlining the scientific paradigm and causal mechanisms (e.g. identifying IVs) included many conjectures. Nevertheless, for the sake of argument, we can speculate on the above question by judging potential scenarios in our analysis and how they may impact each approach.

In surmising what the true ADNI causal effect could be, we can first consider the Petersen et al. trial, as we know this estimate is, on average, unbiased for the treatment effect in the trial population. But are the population and usage of donepezil in ADNI at the time we obtained the data similar to those of the RCT? On one hand, recruitment criteria for ADNI are akin to MCI clinical trials, and we imposed extra inclusion-exclusion criteria similar to Petersen et al. On the other hand, there may be cohort effects from 2005 to the present, as the understanding of MCI has developed since then, and, further, we cannot fully account for compliance and concomitant medicines.

Certainly, if we assumed the clinical trial was a good benchmark for our study, the IV methodology would be the better performer. However, we must weigh the fact that the lack of high-quality IVs and the low sample size likely led to large finite-sample bias and standard errors. The difficulty in finding IVs underscores the fact that confounders are much more intuitive to source than IVs. Yet, both are equally important as paths to consistent treatment effects, and, as such, one aim of this paper is to increase awareness and education surrounding IVs such that they can be identified and investigated as easily as confounders.

Considering the available observational study evidence, particularly Schneider, Insel, and Weiner’s ADNI study, we can pose two questions. Firstly, as in the previous paragraph, how different is the population of our study? Secondly, seeing that our confounder results are similar, can we postulate sources of unobserved confounding and hypothesize how these sources have skewed our results? To answer the first question, because we have more recent data, we can re-perform our analysis on the subset of ADNI that Schneider et al. had (they pulled their data in May 2009) to observe how the results change. Doing this, we observed similar observational study estimates. However, this is far from ruling out a cohort effect that would make our populations irreconcilable.

For the second question, one possible explanation for confounding by indication is that more severe patients receive donepezil, which can skew a confounder estimate that fails to fully account for severity towards suggesting harm. However, we are limited in this line of thinking by the difficulty of making accurate statements about how a physician decides whether a patient is “severe enough” to be prescribed donepezil. If we suppose the doctor looks at the steepness of the rate of decline, then a historical measure such as a first-order trend of cognitive test scores could mitigate some confounding. This highlights the need for more research to quantify the factors contributing to physician decision making in specific disease areas.

To conclude, our goals with this paper were, firstly, to highlight the key differences between the confounder and IV approaches and estimates, and, secondly, to lay out a set of principles and heuristics that can help an analyst choose the best tool under the potential violation of untestable causal assumptions. The key is to favor the approach judged to violate its assumptions to a lesser degree. Of course, if there are assumptions we can do nothing about, one could decide simply not to pursue causal estimates from observational studies. Yet, this would discount the compelling scientific reasons to conduct observational studies in health research, such as hypothesis generation. Therefore, skeptical yet practical discussions of how best to move forward, and avenues of further development such as those in this paper, are of utmost necessity.

Availability of data and materials

Data used in the preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). The ADNI was launched in 2003 as a public-private partnership, led by Principal Investigator Michael W. Weiner, MD. The primary goal of ADNI has been to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer’s disease (AD). The ADNI dataset is available to the public at http://adni.loni.usc.edu via the completion of an online application form and acceptance of a data use agreement.

Abbreviations

RCT:

Randomized control trial

IV:

Instrumental variable

MCI:

Mild cognitive impairment

AD:

Alzheimer’s disease

ADNI:

Alzheimer’s Disease Neuroimaging Initiative

ML:

Machine learning

TMLE:

Targeted minimum loss-based estimation

DML:

Double machine learning

DAG:

Directed acyclic graph

ADAS-cog:

Alzheimer’s Disease Assessment Scale-Cognitive Subscale

ATE:

Average treatment effect

SUTVA:

Stable unit treatment value assumption

LATE:

Local average treatment effect

OLS:

Ordinary least squares

IPTW:

Inverse probability of treatment weighting

2SLS:

Two-stage least squares

ATT:

Average treatment effect on the treated

AIPTW:

Augmented inverse probability of treatment weighting

LPM:

Linear probability model

2SRI:

Two-stage residual inclusion

PL:

Post-LASSO

CSF:

Cerebral spinal fluid

P-tau:

Phosphorylated tau

A\(\beta\) :

Amyloid-\(\beta\)

FDA:

Food and Drug Administration

SES:

Socioeconomic status

MICE:

Multiple imputation via chained equations

References

  1. Petersen RC, Thomas RG, Grundman M, Bennett D, Doody R, Ferris S, et al. Vitamin E and Donepezil for the Treatment of Mild Cognitive Impairment. New Engl J Med. 2005;352(23):2379–88. https://doi.org/10.1056/nejmoa050151.

  2. Salloway S, Ferris S, Kluger A, Goldman R, Griesing T, Kumar D, et al. Efficacy of donepezil in mild cognitive impairment: A randomized placebo-controlled trial. Neurology. 2004;63(4):651–7. https://doi.org/10.1212/01.wnl.0000134664.80320.92.

  3. Doody RS, Ferris SH, Salloway S, Sun Y, Goldman R, Watkins WE, et al. Donepezil treatment of patients with MCI: A 48-week randomized, placebo-controlled trial. Neurology. 2009;72(18):1555–61. https://doi.org/10.1212/01.wnl.0000344650.95823.03.

  4. Sokolow S, Li X, Chen L, Taylor KD, Rotter JI, Rissman RA, et al. Deleterious Effect of Butyrylcholinesterase K-Variant in Donepezil Treatment of Mild Cognitive Impairment. J Alzheimers Dis. 2017;56(1):229–37. https://doi.org/10.3233/jad-160562.

  5. Schneider LS, Insel PS, Weiner MW. Treatment With Cholinesterase Inhibitors and Memantine of Patients in the Alzheimer’s Disease Neuroimaging Initiative. Arch Neurol. 2011;68(1):58. https://doi.org/10.1001/archneurol.2010.343.

  6. Devanand DP, Pelton GH, D’Antonio K, Ciarleglio A, Scodes J, Andrews H, et al. Donepezil Treatment in Patients With Depression and Cognitive Impairment on Stable Antidepressant Treatment: A Randomized Controlled Trial. Am J Geriatr Psychiatry. 2018;26(10):1050–60. https://doi.org/10.1016/j.jagp.2018.05.008.

  7. Montero-Odasso M, Speechley M, Chertkow H, Sarquis-Adamson Y, Wells J, Borrie M, et al. Donepezil for gait and falls in mild cognitive impairment: A randomized controlled trial. Eur J Neurol. 2018;26(4):651–9. https://doi.org/10.1111/ene.13872.

  8. Petersen RC, Aisen PS, Beckett LA, Donohue MC, Gamst AC, Harvey DJ, et al. Alzheimer’s Disease Neuroimaging Initiative (ADNI). Neurology. 2010;74(3):201–209. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2809036/. https://doi.org/10.1212/WNL.0b013e3181cb3e25.

  9. Baiocchi M, Cheng J, Small DS. Instrumental variable methods for causal inference. Stat Med. 2014;33(13):2297–340. https://doi.org/10.1002/sim.6128.

  10. van der Laan MJ. Targeted Maximum Likelihood Based Causal Inference: Part I. Int J Biostat. 2010;6(2):2. https://doi.org/10.2202/1557-4679.1211.

  11. Schuler MS, Rose S. Targeted Maximum Likelihood Estimation for Causal Inference in Observational Studies. Am J Epidemiol. 2016;185(1):65–73. https://doi.org/10.1093/aje/kww165.

  12. Chernozhukov V, Chetverikov D, Demirer M, Duflo E, Hansen C, Newey W, et al. Double/debiased machine learning for treatment and structural parameters. Econom J. 2018;21(1):C1–68. https://doi.org/10.1111/ectj.12097.

  13. Angrist JD, Frandsen B. Machine Labor. J Labor Econ. 2022;40:S97–S140. Publisher: The University of Chicago Press. https://doi.org/10.1086/717933.

 14. Kueper JK, Speechley M, Montero-Odasso M. The Alzheimer’s Disease Assessment Scale-Cognitive Subscale (ADAS-Cog): Modifications and Responsiveness in Pre-Dementia Populations. A Narrative Review. J Alzheimers Dis. 2018;63(2):423–44. https://doi.org/10.3233/jad-170991.

  15. Rubin DB. Causal Inference Using Potential Outcomes. J Amer Statist Assoc. 2005;100(469):322–31. https://doi.org/10.1198/016214504000001880.

  16. Pearl J. Causal diagrams for empirical research. Biometrika. 1995;82(4):669–88. https://doi.org/10.1093/biomet/82.4.669.

  17. VanderWeele TJ, Shpitser I. On the definition of a confounder. Ann Statist. 2013;41(1):196–220. https://doi.org/10.1214/12-aos1058.

  18. Greenland S, Pearl J, Robins JM. Confounding and Collapsibility in Causal Inference. Stat Sci. 1999;14(1):29–46. https://doi.org/10.1214/ss/1009211805.

  19. Brookhart MA, Stürmer T, Glynn RJ, Rassen J, Schneeweiss S. Confounding Control in Healthcare Database Research. Med Care. 2010;48(6):S114–20. https://doi.org/10.1097/mlr.0b013e3181dbebe3.

  20. Aronow PM, Samii C. Does Regression Produce Representative Estimates of Causal Effects? Am J Polit Sci. 2015;60(1):250–67. https://doi.org/10.1111/ajps.12185.

  21. Imbens GW, Angrist JD. Identification and Estimation of Local Average Treatment Effects. Econometrica. 1994;62(2):467. https://doi.org/10.2307/2951620.

  22. Bound J, Jaeger DA, Baker RM. Problems with Instrumental Variables Estimation When the Correlation Between the Instruments and the Endogeneous Explanatory Variable is Weak. J Amer Statist Assoc. 1995;90(430):443. https://doi.org/10.2307/2291055.

  23. Wooldridge JM. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge: MIT Press; 2010.

 24. Angrist JD, Pischke JS. Mostly Harmless Econometrics: An Empiricist’s Companion. 1st ed. Princeton: Princeton University Press; 2009.

  25. Abadie A. Semiparametric instrumental variable estimation of treatment response models. J Econ. 2003;113(2):231–63. https://doi.org/10.1016/s0304-4076(02)00201-4.

 26. Blandhol C, Bonney J, Mogstad M, Torgovitsky A. When Is TSLS Actually LATE? SSRN Electron J. 2022:69. https://doi.org/10.2139/ssrn.4021804.

  27. Aronow PM, Carnegie A. Beyond LATE: Estimation of the Average Treatment Effect with an Instrumental Variable. Polit Anal. 2017;21(4):492–506. https://doi.org/10.1093/pan/mpt013. www.cambridge.org/core/journals/political-analysis/article/abs/beyond-late-estimation-of-the-average-treatment-effect-with-an-instrumental-variable/604E0803793175CF88329DB34DAA80B3.

  28. Wang L, Tchetgen Tchetgen E. Bounded, efficient and multiply robust estimation of average treatment effects using instrumental variables. J R Stat Soc Series B Stat Methodol. 2020;80(3):531–50. https://doi.org/10.1111/rssb.12262.

  29. Hartwig FP, Wang L, Smith GD, Davies NM. Average causal effect estimation via instrumental variables: the no simultaneous heterogeneity assumption. 2020. arXiv preprint arXiv:2010.10017.

  30. Wennberg JE. Dealing With Medical Practice Variations: A Proposal for Action. Health Affair. 1984;3(2):6–33. https://doi.org/10.1377/hlthaff.3.2.6.

  31. Wennberg J, Gittelsohn A. Small Area Variations in Health Care Delivery. Science. 1973;182(4117):1102–8. https://doi.org/10.1126/science.182.4117.1102.

  32. Corallo AN, Croxford R, Goodman DC, Bryan EL, Srivastava D, Stukel TA. A systematic review of medical practice variation in OECD countries. Health Policy. 2014;114(1):5–14. https://doi.org/10.1016/j.healthpol.2013.08.002.

  33. Chen Y, Briesacher BA. Use of instrumental variable in prescription drug research with observational data: A systematic review. J Clin Epidemiol. 2011;64(6):687–700. https://doi.org/10.1016/j.jclinepi.2010.09.006.

  34. Brookhart MA, Schneeweiss S. Preference-Based Instrumental Variable Methods for the Estimation of Treatment Effects: Assessing Validity and Interpreting Results. Int J Biostat. 2007;3(1):14. https://doi.org/10.2202/1557-4679.1072.

  35. Hirano K, Imbens GW. Estimation of Causal Effects using Propensity Score Weighting: An Application to Data on Right Heart Catheterization. Health Serv Outcomes Res Methodol. 2001;2(3/4):259–78. https://doi.org/10.1023/a:1020371312283.

  36. Bhattacharya J, Vogt W. Do Instrumental Variables Belong in Propensity Scores? Cambridge: National Bureau of Economic Research; 2007. Technical Report t0343. https://doi.org/10.3386/t0343.

  37. Wooldridge JM. Should instrumental variables be used as matching variables? Res Econ. 2016;70(2):232–7. https://doi.org/10.1016/j.rie.2016.01.001. www.sciencedirect.com/science/article/pii/S1090944315301678.

  38. Ding P, Vanderweele TJ, Robins JM. Instrumental variables as bias amplifiers with general outcome and confounding. Biometrika. 2017;104(2):291–302. https://doi.org/10.1093/biomet/asx009.

 39. Pearl J. On a Class of Bias-Amplifying Variables that Endanger Effect Estimates. 2012. arXiv preprint arXiv:1203.3503.

  40. Brookhart MA, Schneeweiss S, Rothman KJ, Glynn RJ, Avorn J, Stürmer T. Variable Selection for Propensity Score Models. Am J Epidemiol. 2006;163(12):1149–56. https://doi.org/10.1093/aje/kwj149.

  41. Brooks JM, Ohsfeldt RL. Squeezing the Balloon: Propensity Scores and Unmeasured Covariate Balance. Health Serv Res. 2012;48(4):1487–507. https://doi.org/10.1111/1475-6773.12020.

  42. Petersen ML, Porter KE, Gruber S, Wang Y, van der Laan MJ. Diagnosing and responding to violations in the positivity assumption. Stat Methods Med Res. 2010;21(1):31–54. https://doi.org/10.1177/0962280210386207.

  43. Westreich D, Cole SR. Invited Commentary: Positivity in Practice. Am J Epidemiol. 2010;171(6):674–7. https://doi.org/10.1093/aje/kwp436.

  44. Robins JM, Hernán MA, Brumback B. Marginal Structural Models and Causal Inference in Epidemiology. Epidemiology. 2000;11(5):550–60. https://doi.org/10.1097/00001648-200009000-00011.

  45. Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70(1):41–55. https://doi.org/10.1093/biomet/70.1.41.

  46. Austin PC, Stuart EA. Moving towards best practice when using inverse probability of treatment weighting (IPTW) using the propensity score to estimate causal treatment effects in observational studies. Stat Med. 2015;34(28):3661–79. https://doi.org/10.1002/sim.6607.

  47. Vegetabile BG, Gillen DL, Stern HS. Optimally Balanced Gaussian Process Propensity Scores for Estimating Treatment Effects. J R Stat Soc Ser A Stat Soc. 2020;183(1):355–377. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8360444/. https://doi.org/10.1111/rssa.12502.

  48. Imai K, Ratkovic M. Covariate balancing propensity score. J R Stat Soc Ser B Methodol. 2014;76(1):243–263. https://doi.org/10.1111/rssb.12027.

  49. Weitzen S, Lapane KL, Toledano AY, Hume AL, Mor V. Weaknesses of goodness-of-fit tests for evaluating propensity score models: The case of the omitted confounder. Pharmacoepidem Dr S. 2004;14(4):227–38. https://doi.org/10.1002/pds.986.

  50. Chernozhukov V, Hansen C, Spindler M. Post-Selection and Post-Regularization Inference in Linear Models with Many Controls and Instruments. Am Econ Rev. 2015;105(5):486–90. https://doi.org/10.1257/aer.p20151022.

  51. Kinal TW. The Existence of Moments of k-Class Estimators. Econometrica. 1980;48(1):241. https://doi.org/10.2307/1912027.

  52. Angrist JD, Krueger AB. Split-Sample Instrumental Variables Estimates of the Return to Schooling. J Bus Econ Stat. 1995;13(2):225–35. https://doi.org/10.1080/07350015.1995.10524597.

  53. Roodman D. A Note on the Theme of Too Many Instruments. Oxford B Econ Stat. 2009;71(1):135–58. https://doi.org/10.1111/j.1468-0084.2008.00542.x.

  54. Young A. Consistency without Inference: Instrumental Variables in Practical Application. Eur Econ Rev. 2022;147:104112. https://doi.org/10.1016/j.euroecorev.2022.104112. www.sciencedirect.com/science/article/pii/S001429212200054X.

  55. Guggenberger P. On The Asymptotic Size Distortion Of Tests When Instruments Locally Violate The Exogeneity Assumption. Economet Theor. 2011;28(2):387–421. https://doi.org/10.1017/s0266466611000375.

  56. Davies NM, Thomas KH, Taylor AE, Taylor GM, Martin RM, Munafò MR, et al. How to compare instrumental variable and conventional regression analyses using negative controls and bias plots. Int J Epidemiol. 2017;46(6):2067–77. https://doi.org/10.1093/ije/dyx014.

  57. Sargan JD. The Estimation of Economic Relationships using Instrumental Variables. Econometrica. 1958;26(3):393. https://doi.org/10.2307/1907619.

  58. Kiviet JF, Kripfganz S. Instrument approval by the Sargan test and its consequences for coefficient estimation. Econ Lett. 2021;205:109935. https://doi.org/10.1016/j.econlet.2021.109935.

  59. Pearl J. Causality. 2nd ed. Cambridge: Cambridge University Press; 2009. https://doi.org/10.1017/cbo9780511803161.

  60. Mood C. Logistic Regression: Why We Cannot Do What We Think We Can Do, and What We Can Do About It. Eur Sociol Rev. 2009;26(1):67–82. https://doi.org/10.1093/esr/jcp006.

  61. Schuster NA, Twisk JWR, ter Riet G, Heymans MW, Rijnhart JJM. Noncollapsibility and its role in quantifying confounding bias in logistic regression. BMC Med Res Methodol. 2021;21(1):136. https://doi.org/10.1186/s12874-021-01316-8.

  62. Schisterman EF, Cole SR, Platt RW. Overadjustment Bias and Unnecessary Adjustment in Epidemiologic Studies. Epidemiology. 2009;20(4):488–95. https://doi.org/10.1097/EDE.0b013e3181a819a1.

  63. Karlson KB, Popham F, Holm A. Marginal and Conditional Confounding Using Logits. Sociol Methods Res. 2021:004912412199554. https://doi.org/10.1177/0049124121995548.

  64. Vansteelandt S, Keiding N. Invited Commentary: G-computation-lost in Translation? Am J Epidemiol. 2011;173(7):739–42. https://doi.org/10.1093/aje/kwq474.

  65. Pearl J, Paz A. Confounding Equivalence in Causal Inference. J Causal Inference. 2014;2(1):75–93. https://doi.org/10.1515/jci-2013-0020.

    Article  Google Scholar 

  66. Basu A, Coe NB, Chapman CG. 2SLS versus 2SRI: Appropriate methods for rare outcomes and/or rare exposures. Health Econ. 2018;27(6):937–55. https://doi.org/10.1002/hec.3647.

    Article  PubMed  PubMed Central  Google Scholar 

  67. Chapman CG, Brooks JM. Treatment Effect Estimation Using Nonlinear Two-Stage Instrumental Variable Estimators: Another Cautionary Note. Health Serv Res. 2016;51(6):2375–94. https://doi.org/10.1111/1475-6773.12463.

    Article  PubMed  PubMed Central  Google Scholar 

  68. Horrace WC, Oaxaca RL. Results on the bias and inconsistency of ordinary least squares for the linear probability model. Econ Lett. 2006;90(3):321–7. https://doi.org/10.1016/j.econlet.2005.08.024.

    Article  Google Scholar 

  69. Belloni A, Chen D, Chernozhukov V, Hansen C. Sparse Models and Methods for Optimal Instruments With an Application to Eminent Domain. Econometrica. 2012;80(6):2369–429. https://doi.org/10.3982/ecta9626.

    Article  Google Scholar 

  70. Luque-Fernandez MA, Schomaker M, Rachet B, Schnitzer ME. Targeted maximum likelihood estimation for a binary treatment: A tutorial. Stat Med. 2018;37(16):2530–46. https://doi.org/10.1002/sim.7628.

    Article  PubMed  PubMed Central  Google Scholar 

  71. Díaz I. Machine learning in the estimation of causal effects: Targeted minimum loss-based estimation and double/debiased machine learning. Biostatistics. 2019;21(2):353–8. https://doi.org/10.1093/biostatistics/kxz042.

    Article  Google Scholar 

  72. van der Laan MJ, Robins JM. Unified Methods for Censored Longitudinal Data and Causality. Springer Series in Statistics. Springer; 2003. https://doi.org/10.1007/978-0-387-21700-0.

  73. Neyman J. Optimal asymptotic tests of composite statistical hypotheses. Probab Stat. 1959:416–44.

  74. van der Ploeg T, Austin PC, Steyerberg EW. Modern modelling techniques are data hungry: A simulation study for predicting dichotomous endpoints. BMC Med Res Methodol. 2014;14(1):137. https://doi.org/10.1186/1471-2288-14-137.

    Article  PubMed  PubMed Central  Google Scholar 

  75. Andrews I, Stock JH, Sun L. Weak Instruments in Instrumental Variables Regression: Theory and Practice. Annu Rev Econ. 2019;11(1):727–53. https://doi.org/10.1146/annurev-economics-080218-025643.

    Article  Google Scholar 

  76. Windmeijer F, Farbmacher H, Davies N, Davey Smith G. On the Use of the Lasso for Instrumental Variables Estimation with Some Invalid Instruments. J Amer Statist Assoc. 2018;114(527):1339–50. https://doi.org/10.1080/01621459.2018.1498346.

    Article  CAS  Google Scholar 

  77. VanderWeele TJ. Confounding and Effect Modification: Distribution and Measure. Epidemiol Methods. 2012;1(1):55–82. https://doi.org/10.1515/2161-962x.1004.

    Article  CAS  Google Scholar 

  78. Jackson JW, Swanson SA. Toward a Clearer Portrayal of Confounding Bias in Instrumental Variable Applications. Epidemiology. 2015;26(4):498–504. https://doi.org/10.1097/ede.0000000000000287.

    Article  PubMed  PubMed Central  Google Scholar 

  79. Zhao Q, Small DS. Graphical Diagnosis of Confounding Bias in Instrumental Variable Analysis. Epidemiology. 2018;29(4):e29–31. https://doi.org/10.1097/ede.0000000000000822.

    Article  PubMed  Google Scholar 

  80. Mueller SG, Weiner MW, Thal LJ, Petersen RC, Jack CR, Jagust W, et al. Ways toward an early diagnosis in Alzheimer’s disease: The Alzheimer’s Disease Neuroimaging Initiative (ADNI). Alzheimers Dement. 2005;1(1):55–66. https://doi.org/10.1016/j.jalz.2005.06.003.

    Article  PubMed  PubMed Central  Google Scholar 

  81. Gupta S. Intention-to-treat concept: A review. Perspect Clin Res. 2011;2(3):109. https://doi.org/10.4103/2229-3485.83221.

    Article  PubMed  PubMed Central  Google Scholar 

  82. Chernozhukov V, Hansen C, Spindler M. High-Dimensional Metrics in R. 2016. arXiv preprint arXiv:1603.01700.

Download references

Funding

RSZ and DLG are supported by NIH/NIA P30AG066519 and NIH/NIA RF1AG075107. JDG is supported by NIH/NIA AG066519. Data collection and sharing for this project was funded by the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie, Alzheimer’s Association; Alzheimer’s Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer’s Therapeutic Research Institute at the University of Southern California. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.

Author information


Contributions

RSZ and DLG contributed to the conception and design of the work and to the acquisition, analysis, and interpretation of data; they drafted the work and substantively revised it. JDG contributed to the acquisition, analysis, and interpretation of data and substantively revised the work. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Roy S. Zawadzki.

Ethics declarations

Ethics approval and consent to participate

All participants in ADNI provided informed consent. For details regarding the ethics statement for the ADNI study population, please see: https://adni.loni.usc.edu.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Data used in preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf.

Supplementary Information

Additional file 1.

Additional details on how the IVs were created, along with checks of their relevance and their sensitivity to unobserved confounders. We also investigate diagnostics related to IPTW: pre- and post-weighting covariate balance as well as the positivity assumption.
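To make the two IPTW diagnostics named above concrete, the sketch below (not the authors' code or supplement, and run on simulated data) estimates propensity scores, checks that they are bounded away from 0 and 1 (positivity), and compares standardized mean differences (SMDs) before and after weighting. The variable names, the simulated confounders, and the |SMD| < 0.1 rule of thumb are illustrative assumptions.

```python
# Illustrative sketch of IPTW diagnostics on simulated data:
# binary treatment t, confounder matrix X.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 1000, 4
X = rng.normal(size=(n, p))                      # hypothetical confounders
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # treatment depends on X[:, 0]

# 1) Estimate propensity scores and form stabilized IPTW weights.
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
w = np.where(t == 1, t.mean() / ps, (1 - t.mean()) / (1 - ps))

# 2) Positivity check: estimated propensity scores should not pile up
#    near 0 or 1 in either treatment group.
print("PS range:", ps.min().round(3), ps.max().round(3))

# 3) Balance check: standardized mean difference for one covariate,
#    optionally weighted; the pooled SD is left unweighted by convention.
def smd(x, t, w=None):
    w = np.ones_like(x) if w is None else w
    m1 = np.average(x[t == 1], weights=w[t == 1])
    m0 = np.average(x[t == 0], weights=w[t == 0])
    s = np.sqrt((x[t == 1].var() + x[t == 0].var()) / 2)
    return (m1 - m0) / s

for j in range(p):
    print(f"X{j}: SMD pre = {smd(X[:, j], t):.3f}, "
          f"post = {smd(X[:, j], t, w):.3f}")  # |SMD| < 0.1 is a common rule of thumb
```

In this simulation only X0 drives treatment, so its SMD should shrink markedly after weighting while the others remain near zero; in applied work one would tabulate or plot the pre- and post-weighting SMDs for every adjustment covariate.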

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Zawadzki, R.S., Grill, J.D., Gillen, D.L. et al. Frameworks for estimating causal effects in observational settings: comparing confounder adjustment and instrumental variables. BMC Med Res Methodol 23, 122 (2023). https://doi.org/10.1186/s12874-023-01936-2


Keywords