Correction for retest effects across repeated measures of cognitive functioning: a longitudinal cohort study of postoperative delirium

Background: Few studies have compared methods to correct for retest or practice effects in settings where an acute event, such as major surgery, could influence test performance. Our goal in this study was to evaluate different methods of correcting for the effects of practice or retest on repeated test administration in the context of an observational study of older adults undergoing elective surgery.

Methods: In a cohort of older surgical patients (N = 560) and a non-surgical comparison group (N = 118), we compared changes on repeated cognitive testing, using a summary measure of general cognitive performance (GCP), between patients who developed post-operative delirium and those who did not. Surgical patients were evaluated pre-operatively and at 1, 2, 6, 12, and 18 months following surgery. We compared inferences from linear mixed effects models under four approaches: 1) no retest correction, 2) mean-difference correction, 3) predicted-difference correction, and 4) model-based correction.

Results: Under Approaches 1 and 4, which use uncorrected data, both surgical groups appeared to improve or remain stable after surgery. In contrast, Approaches 2 and 3, which dissociate retest and surgery effects by using retest-adjusted GCP scores, revealed an acute decline in performance in both surgical groups followed by a recovery to baseline. Relative differences between delirium groups were generally consistent across all approaches: the delirium group showed greater short- and longer-term decline than the group without delirium, although differences were attenuated after 2 months. Standard errors and model fit were also highly consistent across approaches.

Conclusions: All four approaches would lead to nearly identical inferences regarding relative mean differences between groups experiencing a key post-operative outcome (delirium) but produced qualitatively different impressions of absolute performance following surgery. Each of the four retest correction approaches analyzed in this study has strengths and weaknesses that should be evaluated in the context of future studies. Retest correction is critical for interpretation of absolute cognitive performance measured over time and, consequently, for advancing our understanding of the effects of exposures such as surgery, hospitalization, acute illness, and delirium.

Electronic supplementary material: The online version of this article (10.1186/s12874-018-0530-x) contains supplementary material, which is available to authorized users.


Background
Studies of within-individual change in cognitive performance are critical to advancing our understanding of the impact of aging and disease on cognitive functioning. However, repeated test administration can produce observed gains in performance that reflect familiarity with test content or context rather than true change in underlying cognitive ability. These spurious gains, referred to as practice or retest effects, may be due to familiarity with the testing situation, reduced anxiety, or changes in the environment [1]. Retest effects are observed across a variety of cognitive domains [2,3] and can last for several years [1]. If practice or retest effects are not considered, longitudinal studies may underestimate the magnitude of decline attributable to aging or age-related disorders, or even erroneously suggest longitudinal gains in performance [1,4,5].
Although multiple methods have been proposed to correct for retest effects in longitudinal studies, there is no clear consensus on the best approach [6]. Moreover, measuring or adjusting for practice or retest effects is further complicated when an acute event or insult, such as major surgery, acute illness, or an intervention, is anticipated to influence test performance. The cohort examined in this observational study provides a particularly instructive example of this challenge: older adults undergoing major surgery, with cognitive test performance assessed immediately before and for some time after surgery. Patients' observed cognitive performance over time is expected to be influenced by retest effects, as well as by the combined effects of their baseline cognitive abilities, individual variation over time, older age, major surgery (including hospitalization, anesthesia, psychoactive medications, and postoperative complications), and, in some cases, delirium, an acute confusional state that is common after surgery in older adults [7]. Both surgery [8-10] and delirium specifically [11-14] have been shown to be associated with acute and long-term cognitive decline.
Our goal in this study was to evaluate different methods of correcting for the effects of practice or retest on repeated test administration in the context of an observational study of older adults undergoing elective surgery. It is important to point out that our objective is related to, but distinct from, methods of characterizing change using reliable change indices. We are specifically interested in assessing change when there are more than two repeated observations and in investigating the impact of an acute insult or exposure on short- and long-term cognitive change. We first examined the raw data without retest correction, then applied three methods of retest correction that have been utilized in previous studies. Two retest correction approaches, mean difference correction [15] and predicted difference correction [16-18], rely on retest-adjusted cognitive scores based on performance in a non-surgical comparison group. A model-based correction [19,20], in which data from the non-surgical comparison group were modeled directly as a reference group, was also assessed. We contrast results and inferences from the four approaches with regard to overall trends and differences in short-term (e.g., 1-2 months) and longer-term (e.g., 6-18 months) cognitive change between patients who developed post-operative delirium and those who did not. The objective of this study was to compare the different retest correction methods and to provide insight into the strengths and weaknesses of each method for future longitudinal studies of older adults employing serial cognitive testing in the setting of acute insults, such as surgery, hospitalization, acute illness, or delirium.

Study populations
We examined data from the Successful Aging after Elective Surgery (SAGES) study cohort along with a non-surgical comparison (NSC) sample measured at the same time [21,22]. Written informed consent was obtained from all participants according to procedures approved by the institutional review boards of Beth Israel Deaconess Medical Center and Brigham and Women's Hospital, the two study hospitals, and Hebrew SeniorLife, the study coordinating center, all located in Boston, Massachusetts. The institutional review boards approved this study and all mandatory laboratory health and safety procedures were complied with in the course of conducting this research.

Surgical sample (SAGES)
SAGES is an ongoing prospective cohort study of older adults without dementia undergoing major elective surgery. The study design and methods have been described in detail previously [21,22]. In brief, eligible participants were age 70 years and older, English speaking, scheduled to undergo elective surgery at one of two Harvard-affiliated academic medical centers and with an anticipated length of stay of at least 3 days. Eligible surgical procedures included: total hip or knee replacement, lumbar, cervical, or sacral laminectomy, lower extremity arterial bypass surgery, open abdominal aortic aneurysm repair, and open or laparoscopic colectomy. Exclusion criteria were evidence of dementia, delirium (within 12 months), hospitalization within 3 months, terminal condition, legal blindness, severe deafness, history of schizophrenia or psychosis, or history of alcohol abuse or withdrawal. A total of 566 patients met all eligibility criteria and were enrolled between June 18, 2010 and August 8, 2013. Six subjects were excluded after enrollment due to suspected dementia, determined by neuropsychological testing and clinical review by an expert multi-disciplinary panel, leaving a final sample of 560 participants.

Non-surgical comparison (NSC) sample
The NSC sample (N = 119) was recruited concurrently with the SAGES sample to evaluate cognitive trajectories over time in the absence of hospitalization, surgery, and delirium, and specifically to quantify retest effects. The approximate size of the NSC sample was selected to provide sufficient precision to quantify the magnitude of retest effects, and is comparable to other studies [23]. NSC participants were enrolled from Beth Israel Deaconess Medical Center [13]. Other than not undergoing surgery, they met the same inclusion and exclusion criteria as the surgical group. One subject was excluded after enrollment due to suspected dementia, leaving a final sample of 118.

Data collection
Surgical participants underwent baseline assessment within 30 days before surgery (median, 9 days) and daily delirium assessment during hospitalization (detailed below). After discharge, participants were followed at 1, 2, 6, and 12 months, and every 6 months thereafter. Our analysis is limited to cognitive test performance data observed over the 18 months following surgery. NSC participants were administered the same neuropsychological battery as the surgical sample at baseline and at 1, 2, 6, and 18 months (there was no 12-month assessment due to logistical constraints). Limited data were collected from their medical records to assess overall level of comorbidity. Delirium was not assessed in the NSC sample.

General cognitive performance
A complete neuropsychological test battery was administered at baseline and at each follow-up interview, including: Trail-Making Tests A and B, Phonemic F-A-S Fluency, Category Fluency, Visual Search and Attention Test, Hopkins Verbal Learning Test-Revised, Digit Span Forward/Backward, Boston Naming Test, and the Repeatable Battery for the Assessment of Neuropsychological Status Digit Symbol Substitution. From this battery, we created a composite summary measure, General Cognitive Performance (GCP), which served as our primary outcome measure of longitudinal cognitive decline. GCP is a weighted composite measure calibrated to a nationally representative sample of adults ≥70 years to yield a mean score of 50 and standard deviation (SD) of 10 [36,37] at the U.S. population level. The GCP is sensitive to change, has minimal floor and ceiling effects, and has been used in many prior studies [13, 38-40].
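The rescaling step behind a composite such as the GCP can be sketched as follows. This is a simplified illustration, not the published scoring algorithm: the actual GCP is a weighted composite co-calibrated to a national reference sample, whereas here each test is simply z-scored against normative means and SDs and the equal-weighted average is placed on the mean-50, SD-10 metric. All function and parameter names are ours.

```python
import numpy as np

def composite_score(test_scores, norm_means, norm_sds,
                    target_mean=50.0, target_sd=10.0):
    """Combine several cognitive test scores into one composite, rescaled so
    a person exactly at the normative mean on every test scores target_mean,
    and one normative SD above on every test scores target_mean + target_sd.

    test_scores : (n_subjects, n_tests) array of raw scores
    norm_means, norm_sds : per-test means/SDs from the reference sample
    """
    z = (np.asarray(test_scores, dtype=float) - norm_means) / norm_sds  # z-score each test
    composite_z = z.mean(axis=1)  # equal-weight average (a simplification)
    return target_mean + target_sd * composite_z
```

In the published measure the weighting and calibration are more sophisticated, but the interpretation of the resulting scale (population mean 50, SD 10) is the same.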

Data analysis
Linear mixed effects (LME) models were used to model baseline levels of, and longitudinal changes in, cognitive test scores, and to assess differences between those who developed delirium and those who did not in terms of both short- and longer-term cognitive performance. We selected an LME model to measure longitudinal change because we assumed the outcome variable (GCP) to be continuous and cognitive decline to follow a linear trajectory after the second month. Specifically, we used LME models with maximum likelihood parameter estimation and an unstructured covariance, using either the surgical cohort only (N = 560) or the surgical + NSC cohorts (N = 678). Conditional models included a random intercept, a random time-slope from 2 to 18 months, fixed time indicator variables for months 1 and 2, a fixed time-slope from 2 to 18 months, fixed effects for covariates (mean-centered age, female sex, nonwhite race, and years of education), a main effect of delirium group, and interactions between the time indicator variables and delirium group (1 m × group and 2 m × group) and between the time-slope and delirium group (2-18 m × group). Delirium group is defined with two levels, surgery delirium-negative (delirium-, the reference group) and surgery delirium-positive (delirium+), for Approaches 1, 2, and 3. For Approach 4, delirium group is defined with three levels: NSC (the reference group), surgery delirium-, and surgery delirium+. The four methods for retest correction are described below and summarized in Table 1. The coefficient of determination (R²) was calculated as the squared correlation between the observed and predicted GCP values for each person at each occasion of measurement from each model [41]. R² is reported for the overall model and within each group. All models assumed that incomplete data were missing at random.
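The time coding described above (indicator terms for the month-1 and month-2 visits, plus a linear slope that begins accruing after month 2) can be sketched in a few lines. This is an illustrative reconstruction; the column and function names are ours, not from the SAGES analysis code.

```python
import pandas as pd

def add_time_coding(df, time_col="month"):
    """Add the time variables used in the LME specification: indicator
    terms for the month-1 and month-2 visits and a linear slope that
    starts accumulating after month 2 (scaled to years)."""
    out = df.copy()
    out["m1"] = (out[time_col] == 1).astype(int)  # acute term at month 1
    out["m2"] = (out[time_col] == 2).astype(int)  # acute term at month 2
    # Slope term: 0 through month 2, then (month - 2)/12, so the
    # coefficient is interpretable as change in GCP per year.
    out["slope_2_18"] = (out[time_col].clip(lower=2) - 2) / 12.0
    return out
```

With these variables, the conditional model corresponds to a mixed-model formula along the lines of GCP ~ m1 + m2 + slope_2_18 + covariates + delirium group and its interactions with the three time terms, with a random intercept and random slope_2_18 per participant (fit, for example, with statsmodels MixedLM or R lme4).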
Previous studies have shown the greatest retest gains between the first and second test administrations [42,43], with diminishing practice effects leveling off after about 3-4 assessments [3,44]. While retest effects may last for up to 7 years [1], at least one study has shown that with frequent serial testing, retest effects tend to diminish after about three months across multiple cognitive domains [43]. Moreover, the assumption that changes observed over time reflect primarily the effect of repeated test exposure and minimally reflect maturational changes is justified for only relatively short periods. Therefore, the level of retest correction applied in this study varies for follow-up assessments at months 1, 2, and 6. The same level of correction at month 6 (the fourth assessment) is applied to all subsequent assessments, when retest effects are expected to level off.
Group differences between the surgical and NSC groups were assessed with t-tests for continuous variables and chi-square test for dichotomous variables.

Approach 1: No retest correction
Approach 1 does not apply any retest correction. Raw GCP is the dependent variable, and the analysis was limited to the surgical sample (N = 560).

Approach 2: Mean difference correction
This approach was used in the International Study on Postoperative Cognitive Dysfunction (ISPOCD) [15]. The approach first subtracts the observed baseline GCP score from the GCP score at each time point in both the NSC and surgical samples. Then, the mean difference in the NSC sample is subtracted from the observed difference scores seen in the surgical sample at matching time points.
This correction shifts the distribution of the repeatedly observed cognitive performance scores across the entire cohort, but does not impact the rank order of persons or inter-individual variation at any visit. Within occasion, the correction is a constant and therefore uncorrelated with any variable under study. The ISPOCD method also allows for division of the corrected score by the SD of the NSC group mean to create a unitless score, but to preserve comparability of methods we have not done so here. Additionally, to readily compare approaches and maximize interpretability, we returned the corrected value to the GCP scale by adding the individual's baseline GCP score to the difference score. The mean difference retest-corrected GCP score was then used as the dependent variable in the LME model to test differences between the delirium+ and delirium-groups in the surgical cohort (N = 560).
The 6-month assessment in the comparison sample is used as the centering point for the 6 month and all subsequent observations in the surgical sample. This approach relies upon the assumption that differences in the mean across the repeat performances in the comparison sample represent the mean practice or retest effect free of normative cognitive change. We considered this assumption reasonable given evidence from the literature (described in section "Data analysis") and the relatively short time interval between assessments in this stable outpatient comparison group.
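A minimal sketch of the mean-difference correction, assuming long-format data with the baseline visit coded as month 0; all column and function names are illustrative, not from the SAGES dataset:

```python
import pandas as pd

def mean_difference_correction(surg, nsc, score="GCP",
                               id_col="id", time_col="month"):
    """Approach 2 sketch: subtract the NSC group's mean retest gain at each
    follow-up month from each surgical participant's observed score.
    Months 12 and 18 reuse the month-6 retest effect."""
    # Mean within-person change from baseline in the comparison group.
    nsc = nsc.copy()
    base = nsc[nsc[time_col] == 0].set_index(id_col)[score]
    nsc["change"] = nsc[score] - nsc[id_col].map(base)
    retest = nsc[nsc[time_col].isin([1, 2, 6])].groupby(time_col)["change"].mean()

    # Map each surgical visit month to its correction; retest effects are
    # assumed constant after month 6.
    corr = {0: 0.0, 1: retest[1], 2: retest[2],
            6: retest[6], 12: retest[6], 18: retest[6]}
    out = surg.copy()
    out[score + "_rc"] = out[score] - out[time_col].map(corr)
    return out
```

Note that subtracting the NSC mean difference from the surgical difference score and then adding back the individual's baseline (as described above) reduces algebraically to observed score minus the NSC mean retest effect, which is what the sketch computes directly.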

Approach 3: Predicted difference correction
The predicted difference method is a regression-based approach that uses a patient's baseline performance to predict his or her expected score at retest, using a regression equation derived from a reference sample [16-18]. First, a series of simple linear regressions was performed in the NSC sample to derive regression equations for predicting GCP performance at months 1, 2, and 6 (dependent variable) from baseline GCP performance (independent variable). No other variables were included in these initial models. Next, the estimated regression coefficients were used to generate predicted GCP performance at months 1, 2, 6, 12, and 18 for each individual in the surgical sample (Additional file 1: Table S1). The model estimated for month 6 in the NSC group was used to generate expected scores from month 6 onward in the surgical sample. The predicted GCP score was then subtracted from the observed GCP score to derive the retest effect. Finally, an individual's retest effect was added to his or her observed baseline GCP score to calculate the predicted difference retest-corrected GCP at each visit. These retest-corrected GCP scores were then used as the dependent variable in the LME model to test differences between the delirium+ and delirium- groups in the surgical cohort (N = 560).
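The steps above can be sketched as follows, again assuming long-format data with baseline coded as month 0 and using illustrative column names of our own choosing:

```python
import numpy as np
import pandas as pd

def predicted_difference_correction(surg, nsc, score="GCP",
                                    id_col="id", time_col="month"):
    """Approach 3 sketch: for each follow-up month, regress NSC follow-up
    scores on NSC baseline scores, predict each surgical participant's
    expected follow-up score, and define the retest-corrected score as
    baseline + (observed - predicted).  Months beyond 6 reuse the
    month-6 regression."""
    def wide(df):
        return df.pivot(index=id_col, columns=time_col, values=score)

    nsc_w, surg_w = wide(nsc), wide(surg)
    # np.polyfit returns (slope, intercept) for a degree-1 fit.
    fits = {t: np.polyfit(nsc_w[0], nsc_w[t], 1) for t in (1, 2, 6)}

    corrected = surg_w.copy()
    for t in surg_w.columns:
        if t == 0:
            continue  # baseline is left as observed
        slope, intercept = fits[min(t, 6)]     # month-6 model for t >= 6
        predicted = intercept + slope * surg_w[0]
        corrected[t] = surg_w[0] + (surg_w[t] - predicted)
    return corrected
```

Unlike Approach 2, the correction here varies with each participant's baseline score through the fitted regression line.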

Approach 4: Model-based correction
Rather than using the NSC group to derive a retest-corrected GCP score, the model-based approach uses the NSC group as the reference group in the LME model [19,20]. Raw GCP scores were the dependent variable, and the surgical and NSC cohorts were combined before analysis (N = 678). This approach also uses an interaction between group and time, but in this analysis there are three groups: NSC, surgery delirium-, and surgery delirium+. The coefficients of the surgery delirium- and surgery delirium+ groups were compared with those of the NSC group. Post hoc tests were performed to assess whether the delirium+ group differed from the delirium- group at months 1 and 2, and whether their slopes differed from months 2-18.
Table 1 summarizes the computational steps for each approach. For Approach 2, the retest effect at time t (t ∈ {1, 2, 6}, the per-protocol observation time points) is the mean within-person difference between the time-t and baseline GCP scores in the NSC group, computed under the assumption that no true change occurs within a six-month time frame, or that any such change is vanishingly small relative to the practice or retest effect. The retest-corrected score for person i at time t is then that person's observed score at time t minus the mean retest effect in the NSC group at time t; the corrections at the 12- and 18-month follow-ups are set equal to the six-month retest effect, reflecting our assumption that practice or retest effects are constant after the six-month follow-up. For Approach 3, expected GCP performance is computed from the NSC group regression equations, the retest effect in the surgical (SRG) group is the observed minus the expected score, and the retest-corrected score is the observed baseline score plus this retest effect.
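The group coding for the model-based approach can be sketched as follows. We construct only the fixed-effects design columns (a three-level group factor with NSC as the reference, plus group-by-time interactions); the group labels and time-variable names are hypothetical, and in practice these columns would enter a linear mixed model with the random effects described in the Data analysis section.

```python
import pandas as pd

def model_based_design(df, group_col="group", ref="NSC"):
    """Approach 4 sketch: dummy-code the three-level group factor with the
    reference group (NSC) absorbed into the intercept and main time terms,
    and form group-by-time interaction columns.  Assumes the time variables
    m1, m2, and slope_2_18 are already present."""
    out = df.copy()
    for g in sorted(out[group_col].unique()):
        if g == ref:
            continue  # reference group gets no dummy of its own
        ind = (out[group_col] == g).astype(int)
        out[g] = ind
        for t in ("m1", "m2", "slope_2_18"):
            out[f"{g}_x_{t}"] = ind * out[t]
    return out
```

The delirium+ vs. delirium- contrast at a given time point is then a linear combination of the two groups' interaction coefficients, matching the post hoc tests described above.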

Sample characteristics
Baseline sample characteristics are summarized in Table 2.

Overall cognitive trends
Visual examination of our raw data (Fig. 1 displays raw time-series data for a random sample of individuals) showed a consistent pattern: the greatest gains in the NSC group were observed between baseline and the second assessment at month 1, and smaller gains were observed between the second and third assessments, leveling off after month 2. This is consistent with previous studies showing little to no retest gain after the third assessment, and further justifies our decision to apply a variable level of retest correction only up to month 6 and the same level of correction to all subsequent assessments. GCP performance for both groups started at about the same level, on average, and mean GCP scores increased at the 1- and 2-month observations and remained stable thereafter (modeled data from uncorrected GCP scores are displayed in Fig. 2). The gain from baseline to month 6 was, on average, 2.6 GCP points in the surgical group and 2.3 GCP points in the NSC group. The GCP is calibrated to have a SD of 10 in a representative sample of U.S. community-dwelling elders aged 70 and older; this performance gain from baseline to 6 months therefore represents a small effect size. The mean score differed between the surgical and NSC groups only at the 1-month assessment (1.04 GCP points lower in the surgical group, p = .002).

Comparison of approaches
The parameter estimates and model fit for each retest correction method are summarized in Table 3 and Fig. 3. Overall, we found remarkable consistency in estimates of mean fit and group differences in means; however, some differences, particularly in Approach 3, were also apparent. All approaches indicated that the delirium+ group exhibited greater mean decline in GCP than the delirium- group at month 1, but that this effect was somewhat attenuated at month 2: net differences in GCP scores between the delirium+ and delirium- groups across the four approaches ranged from −2.9 to −3.0 GCP points at baseline, from −1.3 to −1.6 GCP points for change from baseline to month 1, and from 0.94 to 1.14 GCP points for change from month 1 to month 2. Estimated time-slopes from 2 to 18 months did not differ substantially between the delirium- and delirium+ groups (differences in slopes ranged from −0.32 to −0.34 GCP points per year). Standard errors, model R² (ranging from 0.33 to 0.37), and R² by group (ranging from 0.31 to 0.32 in the delirium- group and from 0.25 to 0.27 in the delirium+ group) were also highly consistent across approaches. Figure 4 and Table 4 display the average GCP performance at each time point in the delirium+ and delirium- groups using raw (uncorrected) GCP (used in Approaches 1 and 4), mean difference-corrected GCP (used in Approach 2), and predicted difference-corrected GCP (used in Approach 3). While relative differences between the delirium- and delirium+ groups were mostly unaffected by retest correction, the model-implied GCP scores and corresponding evidence of cognitive change varied by method. For instance, in Approach 1 (no correction) and Approach 4 (model-based correction), which used raw GCP data, it appears that the delirium- and delirium+ groups improved or remained stable one month after surgery and continued to improve at month 2.
Although the standard errors (SE) and confidence intervals (CI) differ slightly, the estimates are identical: raw scores for both Approach 1 and Approach 4 indicate that the delirium- group improved by 1.08 GCP points, while the delirium+ group declined slightly, by −0.20 GCP points. The delirium- group improved from month 1 to 2 by an additional 1.39 GCP points, and the delirium+ group improved by 2.33 GCP points. In other words, raw GCP scores suggest that surgical patients improve by 2 or more GCP points, equivalent to more than a fifth of a population SD, two months after surgery. Comparable increases in the NSC group (1.80 GCP points at month 1 and an additional 0.32 at month 2) strongly indicate that these apparent improvements are due to factors unrelated to surgery, and demonstrate the importance of correcting for retest effects if model-implied means are to be evaluated in addition to relative differences.
In contrast, Approach 2 (mean-difference correction) and Approach 3 (predicted-difference correction), which dissociate retest and surgery effects by using retest-adjusted GCP scores, revealed a decline in performance in both surgical groups at month 1 and a recovery to baseline at month 2. Specifically, in Approach 2, the delirium- group declined at month 1 by −0.73 GCP points, then improved from month 1 to month 2 by 1.08 GCP points; the delirium+ group declined even more markedly at month 1, by −2.01 GCP points, but recovered at month 2, improving by 2.02 GCP points. Similarly, in Approach 3, the delirium- group declined by −0.67 GCP points at month 1 but improved from month 1 to month 2 by 1.04 GCP points; the delirium+ group declined at month 1 by −2.27 GCP points, then recovered at month 2, improving by 2.17 GCP points. Slopes from months 2-18 did not differ by approach; this was expected because, as in previous studies, the largest retest gains occurred over the first two repeated administrations (at months 1 and 2). (Raw time-series data for all three groups generally show a plateauing of cognitive performance after month 6, with the most variability in the delirium group.)

Discussion
Retest effects are a known source of bias in longitudinal studies. Our goal was to contrast results and inferences derived from four existing methods in the literature for addressing practice or retest effects in an observational study of cognitive performance following elective surgery. Overall, we found that all four approaches provided nearly equivalent information and would lead to identical inferences regarding relative mean differences between groups experiencing a key post-operative outcome (delirium). Conversely, the approaches produced qualitatively different impressions of absolute performance over the 18 months following surgery: uncorrected GCP scores increased by more than one-fifth of a population SD two months after surgery in both the surgical and NSC groups, suggesting that this increase was primarily due to retest rather than surgery; in contrast, retest-corrected GCP scores revealed a decline in performance in both surgical groups at month 1 and a recovery to baseline by month 2.
Our main finding, that substantive inferences and effect size estimates relating to the relative impact of exposure variables on cognitive change are robust to different approaches to retest effects, echoes similar findings reported by Vivot et al. (2016), despite those authors considering a different study context (long-running observational cohort studies of cognitive aging vs. follow-up studies of a clinical cohort), different exposures (diabetes and depression vs. delirium), and different approaches to handling practice and retest effects (model-based approaches with no control group vs. data manipulation and model-based approaches with a control group) [20]. Our findings are also congruent with those of Salthouse (2016) [45], who concluded that the primary impact of practice and retest effects was to distort mean age-trend trajectories, whereas slopes over time are relatively unconfounded by retest effects. It is also notable that, despite reaching similar conclusions to our study, Vivot et al. and Salthouse used different approaches to quantify practice and retest effects (quasi-longitudinal or sequential cohort designs).
Our study, combined with those of Vivot et al. and Salthouse, all of which used different practice/retest effect adjustment methods suited to different study designs and research questions, demonstrates that retest effects influence age-related trends but are less important for understanding the relative impact of risk factors. Correction for retest effects is therefore especially important for descriptive and natural history studies; for analytical epidemiology studies (for example, those characterizing the impact of discrete risk factors such as delirium, depression, or diabetes), the impact of addressing retest effects lies in interpretation and in building the narrative that sets the context of observed differences and effects of exposures. Of the four approaches examined, only Approach 1 (no correction) contradicts our a priori assumption, based on clinical observations, that surgery causes an acute decline in cognition. Furthermore, Approach 2 (mean difference correction) is the most straightforward method for interpreting cognitive trends, since Approach 3 (predicted difference correction) changes the rank order of scores and Approach 4 (model-based correction) requires extra post hoc transformations to generate interpretable results.
There are other important strengths and weaknesses of each retest correction method that should be considered before choosing the best approach for a particular study (Table 1). The primary strengths of Approach 1 are that it does not manipulate the observed data and does not require a control group. However, because it is difficult (or impossible) to separate the effects of retest and exposure, it is appropriate only for studies examining relative differences between groups, or long-term cognitive change alone, provided that a slope is fit only to data after the first 2-3 administrations, once the most significant retest gains have already occurred. Approach 4 utilizes raw GCP data from the surgical group as well as from the NSC group to model retest effects and variation in retest effects across a population. Although the latter is a strength of this approach, its importance is attenuated by our finding that model fit indices and standard errors were nearly identical across approaches, suggesting that variance in retest effects does not substantially impact inferences. A primary limitation of Approach 4 is that using a NSC group as the reference group in a statistical model restricts the types of hypotheses that can be tested. For instance, surgery type, anesthesia type, or other exposure-related factors cannot be modeled as covariates or predictors of cognitive decline, since these variables cannot be collected in patients who are not undergoing surgery. This significantly limits the applicability of this method for many studies.
Rather than modeling the NSC data directly, Approaches 2 and 3 use the NSC data to generate retest-corrected GCP scores. The key difference between them is that Approach 2 applies a constant retest effect correction to every participant, while Approach 3 allows the magnitude of the correction to vary with the participant's baseline GCP. A primary strength of Approach 3 is that variables that could influence retest effects can be used to predict them on an individual basis. Indeed, various characteristics have been suggested to influence retest effects [46,47], including baseline cognitive ability [5,44]. However, this literature is inconsistent, making it difficult to reliably select appropriate prediction variables [46,48]. Because we did not expect to draw more definitive conclusions about predictors of retest in our sample than prior work, and to keep the four approaches as consistent as possible for comparison, we chose not to include other potential predictors of retest in Approach 3. Should consistent drivers of retest emerge, however, this would become a considerable advantage of Approach 3, since those variables could be included in the retest prediction model. In building the predictive model for Approach 3, we observed that baseline GCP scores were negatively correlated with change in GCP scores in the NSC sample. The phenomenon of regression to the mean [49,50] leads to the expectation that the change score between two positively correlated variables with similar variance (e.g., baseline and follow-up GCP) will be negatively correlated with baseline performance. So, while this observation is expected given regression to the mean, it may also signal limitations of using a linear model to describe the dependence of follow-up scores on baseline scores. Approach 2, by contrast, is a relatively straightforward method that uses a constant transformation derived from the NSC group.
This transformation is applied uniformly across all participants and is therefore uncorrelated with any other variable. This property likely explains the primary differences in model output between Approaches 2 and 3: Approach 3 allows the correction to vary as a function of a participant's baseline GCP, but cognitive performance is known to be correlated with delirium, our independent variable of interest in the present analyses. Approach 2 is therefore preferred over Approach 3 when the variable of interest is correlated with cognitive performance, as the correction in Approach 3 may then produce biased estimates of group differences. The primary advantages of Approach 2, mean-difference correction, are its relatively straightforward application, that it enables interpretation of both relative differences and absolute performance, and that its hypothesized limitation, failing to account for variability due to the precision of retest correction estimation, had a negligible impact on inferences and model fit.
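The negative baseline-change correlation expected under regression to the mean can be demonstrated with a small simulation (the parameters are illustrative only, not estimates from our data):

```python
import numpy as np

# Two noisy measurements of the same underlying ability: baseline and
# follow-up are positively correlated and share the same variance.
rng = np.random.default_rng(0)
n = 5000
true_ability = rng.normal(0.0, 1.0, n)
baseline = true_ability + rng.normal(0.0, 0.5, n)
followup = true_ability + rng.normal(0.0, 0.5, n)

change = followup - baseline
r = np.corrcoef(baseline, change)[0, 1]
# r is negative: high baseline scores tend to fall back toward the mean,
# low baseline scores tend to rise, even with no true change.
```

With these variances the theoretical correlation is cov(baseline, change)/sqrt(var(baseline)·var(change)) = −0.25/sqrt(1.25 × 0.5) ≈ −0.32, which the simulation closely reproduces.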
Our study also compared shorter- and longer-term cognitive performance between the surgical and NSC groups. Mean scores differed across groups only at the 1-month assessment, where the lingering effects of surgery, including postoperative delirium, presumably depressed scores for some participants. Also notable are the slightly lower score of the surgical group at baseline and the equivalent scores from month 6 onward, raising the possibility that baseline performance in the surgical group was depressed by factors related to the impending surgery (e.g., stress, pain, use of pain medications) rather than by true differences in cognitive ability. If surgical cohorts systematically have lower baseline cognitive scores than an NSC group, then a retest prediction model derived in the NSC sample from baseline cognitive scores (as in Approach 3) may produce biased predictions when applied to a surgical sample. This potential source of bias should be considered in future studies that plan to use Approach 3 to correct for retest effects.
This study offers an innovative contribution to the study of retest effects because it specifically assesses approaches applicable to observational studies with longitudinal cognitive assessment at two or more time points aimed at investigating the impact of an acute insult or exposure on short- or long-term cognitive change. Indeed, some of the most common methods for controlling retest effects cannot be evaluated with this type of study design. For instance, although many studies have evaluated the "reliable change index" [51-56], these methods are less applicable to studies with more than two assessments. Additionally, "boost" correction [4,57], which typically uses a step function to model improvement after the first assessment (i.e., a 0-1-1-1…1 indicator across assessment time points), will not work as designed if factors other than retest affect performance at the second assessment. In the present study, surgery occurred between the first and second assessments; thus the retest "boost" would be biased (likely attenuated) by surgery effects. In contrast, the four approaches evaluated here are appropriate for study populations in which the second or third test administration co-occurs with the acute exposure under study.
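The problem with boost correction in this design can be seen directly in the model's design matrix: when the acute event falls between the first and second assessments, the 0-1-1-…-1 retest indicator takes exactly the same values as a post-event indicator, so the two effects are not separately estimable. A minimal sketch (the assessment schedule matches the study; the coding is our illustration):

```python
import numpy as np

# Assessment occasions: baseline, then 1, 2, 6, 12, and 18 months.
occasions = np.array([0, 1, 2, 6, 12, 18])

# Boost correction codes retest as a step function that turns on
# after the first assessment: 0, 1, 1, 1, 1, 1.
boost = (np.arange(len(occasions)) >= 1).astype(int)

# Surgery occurs between the first and second assessments, so a
# post-surgery indicator takes exactly the same values.
post_surgery = (occasions >= 1).astype(int)

# The two columns are identical (perfectly collinear), so a model
# cannot attribute the jump at the second assessment to retest
# rather than to surgery.
print(np.array_equal(boost, post_surgery))  # True
```

The retest-corrected-score approaches avoid this collinearity because the retest effect is estimated externally, in the NSC group, rather than from the surgical sample's own design matrix.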
This study also has several limitations. First, it was based on an observational cohort and does not provide a "gold standard" against which to measure retest effects. Fortunately, all four approaches provided similar estimates of effect, and inferences were qualitatively indistinguishable for our primary point of inference: differences between the delirium and no-delirium groups in longitudinal trends of cognitive performance. Second, all models examined here assumed that incomplete data were missing at random. It is possible that bias due to non-random dropout affected both our retest effect estimates in the NSC group and our modeling of the effect of delirium in the surgical sample. The latter has been investigated in prior work, which found that estimates of long-term decline in SAGES were robust to multiple different assumptions about missing data (see the Supplementary Appendix of reference [13]). In the NSC sample, all participants returned for the second assessment at month 1, five participants (4%) did not return for their third assessment at month 2, and an additional eight participants (11%) did not return for their fourth assessment at month 6. Although all participants returned for the second assessment, when the greatest retest gains are usually observed, it remains possible that dropout influenced our retest effect estimates. Third, because it remains unclear which variables consistently influence retest effects, our findings may not generalize to cohorts that are younger, more racially and ethnically diverse, or less educated. Moreover, it is possible that important differences between our NSC group and the surgical cohort (e.g., greater baseline cognitive performance, more years of education, fewer physical and functional impairments) affected our results.
However, a recent study of a community-based cohort of older adults (mean age 77, N = 4073) found that, similar to other studies [48], retest effects did not differ as a function of individual differences in race/ethnicity, sex, language, years of education, literacy, apolipoprotein E ε4 status, or cardiovascular risk [46]. Fourth, because retest effects may vary by cognitive domain [58], the optimal retest correction may also vary by cognitive domain. This is an important area for future study. Finally, the retest correction approaches analyzed here have been studied previously in various fields and were selected specifically for this study design; the chosen methods are not an exhaustive list, and alternative approaches may exist. In fact, a gold standard for this type of study might be repeated observations prior to the acute event, such that retest effects are exhausted before the event or insult of interest occurs [44,59]. However, given that SAGES participants were enrolled in anticipation of an impending surgical procedure, such a design was not feasible here, and it may likewise be infeasible for other studies.

Conclusion
In conclusion, this study addressed an important question about potential bias due to uncorrected retest effects in observational, longitudinal studies of surgical populations. Our results show that retest correction is critical for interpretation of absolute cognitive performance measured over time and, consequently, for advancing our understanding of the effects of exposures such as surgery, hospitalization, acute illness, and delirium. Each of the four retest