 Research
 Open Access
A pairwise pseudolikelihood approach for regression analysis of left-truncated failure time data with various types of censoring
BMC Medical Research Methodology volume 23, Article number: 82 (2023)
Abstract
Background
Failure time data arise frequently in medical studies and are often accompanied by various types of censoring. In some applications, left truncation may also occur and can induce biased sampling, which makes the practical data analysis more complicated. Existing methods for left-truncated data have some limitations: they either focus only on a special type of censored data or fail to flexibly utilize the distribution information of the truncation times for inference. Therefore, it is essential to develop a reliable and efficient method for the analysis of left-truncated failure time data with various types of censoring.
Method
This paper concerns regression analysis of left-truncated failure time data under the proportional hazards model with various types of censoring mechanisms, including right censoring, interval censoring and a mixture of them. The proposed pairwise pseudolikelihood estimation method is built on a combination of the conditional likelihood and the pairwise likelihood, which eliminates the nuisance truncation distribution function or avoids its estimation. To implement the presented method, a flexible EM algorithm is developed by utilizing the idea of self-consistent estimating equations. A main feature of the algorithm is that it involves closed-form estimators of the large-dimensional nuisance parameters and is thus computationally stable and reliable. In addition, an R package LTsurv is developed.
Results
The numerical results obtained from extensive simulation studies suggest that the proposed pairwise pseudolikelihood method performs reasonably well in practical situations and, as expected, is clearly more efficient than the conditional likelihood approach. The analysis of the MHCPS data with the proposed pairwise pseudolikelihood method indicates that males have a significantly higher risk of losing active life than females. In contrast, the conditional likelihood method finds this effect nonsignificant, presumably because the conditional likelihood method loses some estimation efficiency relative to the proposed method.
Conclusions
The proposed method provides a general and helpful tool for Cox regression analysis of left-truncated failure time data under various types of censoring.
Introduction
Failure time data are frequently encountered in various scientific areas, including clinical trials, epidemiology surveys, and biomedical studies. A key feature of such data is the presence of censoring, which usually poses great computational challenges for their analysis [1, 2]. The type of censoring that has been investigated most is apparently right censoring [3,4,5,6]. Other types of censored data that often occur in practice include interval-censored and partly interval-censored data [7,8,9,10,11,12,13]. In particular, Gao et al. [10] recently proposed an efficient semiparametric estimation approach for the analysis of partly interval-censored data under the accelerated failure time model. Zhou et al. [13] also studied the analysis of partly interval-censored failure time data, but via transformation models.
For failure time data, in addition to censoring, left truncation also often arises due to the use of a cross-sectional sampling strategy and can substantially complicate the analysis. For example, in the Canadian Study of Health and Aging, the failure time of interest is defined as the duration from the onset of dementia to death [14]. Since only dementia patients who had not died by enrollment are included in the study, the patient's death time is subject to left truncation, where the truncation time is the gap time between the onset of dementia and the enrollment. Therefore, the sampled patients are no longer representative of the whole population under study, and it is well-known that ignoring left truncation in the data analysis often leads to biased parameter estimation.
Due to the ubiquity of left truncation in failure time studies, extensive efforts have been devoted to developing methods for the analysis of left-truncated failure time data under various censoring schemes [15,16,17,18,19,20,21,22,23,24,25]. For instance, Wang et al. [16] considered left-truncated and right-censored data and developed a conditional estimation approach under the proportional hazards (PH) model, while Pan and Chappell [17] investigated the analysis of left-truncated and interval-censored data and suggested a marginal likelihood approach and a monotone maximum likelihood approach for the PH model. Gao and Chan [24] discussed the same model and data structure as Pan and Chappell [17], but further assumed that the truncation times follow the uniform distribution, which is usually referred to as the stationarity or length-biased assumption in the literature. However, it is worth noting that this approach may produce biased parameter estimates when the length-biased assumption is violated in practice. For left-truncated and partly interval-censored data, Wu et al. [25] provided a conditional likelihood approach for the PH model in the presence of a cured subgroup.
In addition to the work described above, Huang and Qin [14] also studied left-truncated and right-censored data and proposed an estimation procedure for the additive hazards model by combining a pairwise pseudo-score function with the conditional estimating function. This approach is appealing since it utilizes the marginal likelihood of the truncation times and can thus improve the estimation efficiency. In addition, the employed pairwise pseudolikelihood eliminates nuisance parameters from the marginal likelihood of the truncation times, leading to an estimating function with a tractable form, and yields more efficient estimation than the conditional estimating equation approach. Inspired by the work of Huang and Qin [14], Wu et al. [26] proposed a pairwise-likelihood-augmented estimator for the PH model with left-truncated and right-censored data. Furthermore, Wang et al. [27] considered the analysis of left-truncated and interval-censored data with the PH model, and developed a sieve maximum likelihood estimation procedure by accommodating the pairwise likelihood function of the truncation times.
In the following, we consider regression analysis of left-truncated failure time data under the PH model and various types of censoring mechanisms, including interval censoring, right censoring and a mixture of them. Specifically, motivated by Huang and Qin [14] and Wu et al. [26], we propose a nonparametric maximum likelihood estimation (NPMLE) approach that combines the conditional likelihood of the failure times with the pairwise likelihood obtained from the marginal likelihood of the truncation times, rendering efficient estimation for the PH model. A flexible EM algorithm that can accommodate various types of censored data is developed to implement the NPMLE. Through a suitable data augmentation, the objective function in the M-step of the algorithm has a tractable form, and one can estimate the regression coefficients and the nuisance parameters related to the cumulative baseline hazard function separately. In particular, by utilizing the idea of self-consistent estimating equations, we obtain explicit estimators of the possibly large-dimensional nuisance parameters, which greatly relieves the computational burden of the optimization procedure. The numerical results obtained from extensive simulation studies demonstrate that the proposed method is computationally stable and reliable and improves the estimation efficiency of the conditional likelihood approach. In other words, the proposed method provides a general and helpful tool for Cox regression analysis of left-truncated failure time data under various types of censoring.
The remainder of this paper is organized as follows. In Section Notation, model, and likelihood, we will first introduce some notation, data structure and the model, and then present the observed data likelihood function. Section Estimation procedure presents the developed EM algorithm to implement the NPMLE. In Section Simulation studies, extensive simulation studies are conducted to evaluate the empirical performance of the proposed method, followed by an application to a set of real data in Section An application. Section Discussion and concluding remarks gives some discussion and concluding remarks.
Notation, model, and likelihood
Consider a failure time study involving left truncation, and for a subject from the target population, let \(T^{*}\) denote the underlying failure time, that is, the time to the onset of the failure event. Let \(A^{*}\) be the underlying truncation time (i.e., the time to study enrolment), which is assumed to be independent of \(T^{*}\), and let \(\varvec{Z}^{*}\) be the p-dimensional vector of covariates. For a subject enrolled in the study (i.e., satisfying \(T^{*}\ge A^{*}\)), denote by T, A and \(\varvec{Z}\) the failure time, the truncation time and the vector of covariates, respectively. Then \((T, A,\varvec{Z})\) has the same joint distribution as \((T^{*}, A^{*},\varvec{Z}^{*})\) conditional on \(T^{*}\ge A^{*}\).
Let f and S denote the density and survival functions of \(T^{*}\), respectively, and let h be the density function of \(A^{*}\). Then the joint density function of (T, A) at (t, a), for \(t \ge a\), is
\(\frac{f(t)}{S(a)} \times \frac{S(a)h(a)}{\int _0^{\infty }S(u)h(u)du},\)
where f(t)/S(a) is the conditional density of T given A, and \(S(a)h(a)/\int _0^{\infty }S(u)h(u)du\) is the marginal density of A. To describe the effect of \(\varvec{Z}^{*}\) on the failure time \(T^{*}\), we assume that \(T^{*}\) follows the PH model with the conditional cumulative hazard function of \(T^{*}\) given \(\varvec{Z}^{*}\) taking the form
\(\Lambda (t \mid \varvec{Z}^{*}) = \Lambda (t)\exp (\varvec{Z}^{*\top } \varvec{\beta }). \quad (1)\)
In the above, \(\Lambda (t)\) is an unspecified baseline cumulative hazard function and \(\varvec{\beta }\) denotes a p-dimensional vector of regression coefficients.
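As a concrete illustration of the PH specification, the following minimal Python sketch evaluates the conditional cumulative hazard and the implied survival function \(S(t\mid \varvec{Z})=\exp \{-\Lambda (t)\exp (\varvec{Z}^{\top }\varvec{\beta })\}\). The function names and the default baseline \(\Lambda (t)=t\) are ours, chosen only for illustration.

```python
import math

def cum_hazard(t, beta, z, baseline=lambda u: u):
    """Lambda(t | Z) = Lambda(t) * exp(Z' beta) under the PH model."""
    return baseline(t) * math.exp(sum(b * zk for b, zk in zip(beta, z)))

def survival(t, beta, z, baseline=lambda u: u):
    """S(t | Z) = exp(-Lambda(t | Z))."""
    return math.exp(-cum_hazard(t, beta, z, baseline))
```

Any other baseline, e.g. \(\Lambda (t)=t^2\), can be passed through the `baseline` argument.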
As mentioned above, censoring always exists in failure time studies. Define \(\Delta =1\) if T can be observed exactly and \(\Delta =0\) otherwise. If \(\Delta =0\), let (L, R] be the smallest interval that brackets T with \(L \ge A\). Clearly, T is left-censored if \(L = A\), right-censored if \(R = \infty\), and interval-censored if \(R < \infty\). In the sequel, notation with the subscript \(i\) represents the corresponding sample analogues. Therefore, we have partly interval-censored data if the observed data consist of n independent observations given by \((A_i, T_i, \Delta _i, \varvec{Z}_i)\) if \(\Delta _i=1\) and \((A_i, L_i, R_i, \Delta _i, \varvec{Z}_i)\) if \(\Delta _i=0\), for \(i = 1, \ldots , n\). Notably, these data reduce to interval-censored data if \(\Delta _i = 0\) for \(i = 1, \ldots , n\), and to right-censored data if \(R_i = \infty\) whenever \(\Delta _i = 0\).
Let \(S(t \mid \varvec{Z}_i) = \exp \{-\Lambda (t)\exp (\varvec{Z}_i^{\top } \varvec{\beta })\}\) and \(\lambda (t)= d\Lambda (t)/dt\). Assuming that \((L_i, R_i)\) is conditionally independent of \((A^{*}, T^{*})\) given \(A^{*} \le T^{*}\) and \(\varvec{Z}^{*}\), and that \(A^{*}\) is independent of \(\varvec{Z}^{*}\), the observed-data likelihood function takes the form
\(L_n(\varvec{\beta },\Lambda ,h) = L^C_n(\varvec{\beta },\Lambda ) \times L^M_n(\varvec{\beta },\Lambda , h),\)
where
\(L^C_n(\varvec{\beta },\Lambda ) = \prod _{i=1}^{n}\left\{ \frac{\lambda (T_i)\exp (\varvec{Z}_i^{\top } \varvec{\beta })\, S(T_i \mid \varvec{Z}_i)}{S(A_i \mid \varvec{Z}_i)}\right\} ^{\Delta _i}\left\{ \frac{S(L_i \mid \varvec{Z}_i)-S(R_i \mid \varvec{Z}_i)}{S(A_i \mid \varvec{Z}_i)}\right\} ^{1-\Delta _i}\)
and
\(L^M_n(\varvec{\beta },\Lambda , h) = \prod _{i=1}^{n}\frac{S(A_i \mid \varvec{Z}_i)\, h(A_i)}{\int _0^{\infty }S(u \mid \varvec{Z}_i)h(u)du}.\)
In the above, \(L^C_n(\varvec{\beta },\Lambda )\) is the conditional likelihood of \(\{\Delta _i T_i, (1-\Delta _i)L_i, (1-\Delta _i)R_i, \Delta _i\}\) given \((A_i, \varvec{Z}_i)\), and \(L^M_n(\varvec{\beta },\Lambda , h)\) is the marginal likelihood of \(A_i\) given \(\varvec{Z}_i\). Note that the observed-data likelihood \(L_n(\varvec{\beta },\Lambda ,h)\) has an intractable form due to the complex data structure and the involvement of the nuisance functions \(\Lambda\) and h. For the estimation, direct maximization of \(L_n(\varvec{\beta },\Lambda ,h)\) with respect to all parameters is quite challenging and unstable even after approximating \(\Lambda\) and h by smooth functions with finite-dimensional parameters. To address this issue, in the next section we develop a flexible EM algorithm by introducing Poisson latent variables in the data augmentation procedure, which greatly simplifies the form of \(L^C_n(\varvec{\beta },\Lambda )\). In addition, following Liang and Qin [28] and others, we employ the pairwise likelihood approach to eliminate the nuisance function h from the marginal likelihood \(L^M_n(\varvec{\beta },\Lambda ,h)\). These two manipulations make the estimation procedure appealing and easy to implement.
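To fix ideas, the conditional-likelihood contribution of a single subject, conditioning on \(T \ge A\), can be sketched as follows. This is an illustrative Python fragment (not the LTsurv implementation); the convention \(S(\infty \mid \varvec{Z})=0\) covers the right-censored case.

```python
import math

def cond_contrib(a, delta, z, beta, Lam, lam, t=None, l=None, r=None):
    """Single-subject conditional likelihood under the PH model, given T >= A.

    delta = 1: exactly observed failure at t, contribution f(t|Z)/S(a|Z).
    delta = 0: T bracketed by (l, r], contribution {S(l|Z) - S(r|Z)}/S(a|Z).
    """
    eta = math.exp(sum(b * zk for b, zk in zip(beta, z)))
    S = lambda u: 0.0 if u == float("inf") else math.exp(-Lam(u) * eta)
    if delta == 1:
        return lam(t) * eta * S(t) / S(a)
    return (S(l) - S(r)) / S(a)
```

Here `Lam` and `lam` are placeholders for the baseline cumulative hazard and its derivative; dividing by \(S(a\mid \varvec{Z})\) is what accounts for the left truncation.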
Estimation procedure
To estimate \(\varvec{\beta }\) and \(\Lambda\), we adopt the NPMLE approach and develop an EM algorithm for its implementation. For this, we first discuss the data augmentation and then present the pairwise likelihood method as well as the E-step and M-step of the algorithm.
Data augmentation
First note that the likelihood function above depends on \(\Lambda (t)\) only through its values at the finite observation times, exactly observed failure times and truncation times. Let \(t_1<\cdots<t_{K_n}<\infty\) denote the ordered sequence of these unique time points, and assume that \(\Lambda (t)\) is a step function with nonnegative jump size \(\lambda _k\) at \(t_k\) for \(k = 1,\ldots , K_n\). Then the conditional likelihood \(L^C_n(\varvec{\beta },\Lambda )\) can be re-expressed as \(L^C_{1n}(\varvec{\theta })\),
where \(\varvec{\theta } = (\varvec{\beta }^{\top }, \lambda _1, \ldots , \lambda _{K_n})^{\top }\).
To simplify \(L^C_{1n}(\varvec{\theta })\), for the ith subject, we introduce a set of new independent latent variables \(\{W_{ik}; k=1,2,\cdots ,K_n\}\) relating to \(t_1, t_2,\cdots , t_{K_n}\) respectively, where \(W_{ik}\) is a Poisson random variable with mean \(\lambda _k\exp (\varvec{Z}_i^{\top } \varvec{\beta })\). Then \(L^C_{1n}(\varvec{\theta })\) can be equivalently expressed as
where \(W_{ik}\big |_{t_k = T_i}\) denotes the variable in \(\{W_{ik}; k=1,2,\cdots ,K_n\}\) that satisfies \(t_k = T_i\).
Define \(R_i^{*} = (1-\Delta _i) \{L_i I(R_i=\infty )+ R_i I(R_i<\infty )\} + \Delta _i T_i\), and let \(p\{W_{ik} \mid \lambda _k\exp (\varvec{Z}_i^{\top } \varvec{\beta })\}\) be the probability mass function of \(W_{ik}\) with mean \(\lambda _k\exp (\varvec{Z}_i^{\top } \varvec{\beta })\). By treating the latent variables \(W_{ik}\)'s as observable, the augmented likelihood function is given by
which is subject to the constraints that \(\sum _{A_i\le t_k < T_i}W_{ik}=0\) and \(W_{ik}\big |_{t_k = T_i}=1\) if \(\Delta _i = 1\); \(\sum _{A_i\le t_k\le L_i}W_{ik}=0\) and \(\sum _{L_i< t_k\le R_i}W_{ik}>0\) if \(\Delta _i = 0\) and \(R_i < \infty\); and \(\sum _{A_i\le t_k\le L_i}W_{ik}=0\) if \(\Delta _i = 0\) and \(R_i = \infty\).
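The augmentation works because, under the step-function \(\Lambda\), a product of Poisson zero-probabilities reproduces a conditional survival probability: \(P(W_{ik}=0 \text{ for all } A_i\le t_k\le l) = \exp \{-\sum _{A_i\le t_k\le l}\lambda _k\exp (\varvec{Z}_i^{\top }\varvec{\beta })\}\). A small numerical check of this identity, with a hypothetical grid and hypothetical jump sizes:

```python
import math

def prob_all_zero(tgrid, lam, a, l, eta):
    """P(W_k = 0 for every grid point a <= t_k <= l), W_k ~ Poisson(lam_k * eta).

    Equals exp(-sum of the means), i.e. the conditional survival probability
    under the step-function baseline cumulative hazard.
    """
    total = sum(lk * eta for tk, lk in zip(tgrid, lam) if a <= tk <= l)
    return math.exp(-total)
```

For instance, with jumps 0.2 and 0.3 falling in \([a, l]\) and \(\exp (\varvec{Z}^{\top }\varvec{\beta })=2\), the probability is \(\exp (-1)\).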
Pairwise likelihood
Since the density function h in the marginal likelihood \(L^M_n(\varvec{\beta },\Lambda , h)\) is a nuisance function, we follow the work of Liang and Qin [28] and apply the pairwise likelihood method to \(L^M_n(\varvec{\beta },\Lambda ,h)\) to eliminate h. Note that, for \(i \ne j\), by conditioning on \((\varvec{Z}_i, \varvec{Z}_j)\) and having observed \((A_i, A_j)\) but without knowing the order of \(A_i\) and \(A_j\), the pairwise pseudolikelihood of the observed \((A_i, A_j)\) is given by
where
Therefore, the pairwise likelihood \(L^P_n(\varvec{\theta })\) of all pairs is given by
Notably, through the above manipulation, \(L^P_n(\varvec{\theta })\) depends on the parameters in the survival model, \(\varvec{\beta }\) and \(\lambda _1, \ldots , \lambda _{K_n}\), but not on the density function h of the truncation time \(A^{*}\).
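As an illustration of how h drops out, consider the probability of the observed assignment of a pair conditional on the unordered pair \(\{A_i, A_j\}\) and \((\varvec{Z}_i, \varvec{Z}_j)\): the density h appears in both the numerator and the denominator and cancels. The sketch below is our reconstruction of such a standard pairwise term under the PH model, for illustration only.

```python
import math

def pairwise_term(a_i, a_j, z_i, z_j, beta, Lam):
    """P(observed assignment | unordered pair {a_i, a_j}); h(.) cancels out."""
    def S(a, z):
        eta = math.exp(sum(b * zk for b, zk in zip(beta, z)))
        return math.exp(-Lam(a) * eta)
    num = S(a_i, z_i) * S(a_j, z_j)
    return num / (num + S(a_j, z_i) * S(a_i, z_j))
```

When \(\varvec{Z}_i=\varvec{Z}_j\) the two assignments are equally likely and the term equals 1/2, so only covariate-discordant pairs carry information about \(\varvec{\beta }\).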
EM algorithm
Combining the augmented likelihood \(L^C(\varvec{\theta })\) with the pairwise likelihood \(L^P (\varvec{\theta })\), and taking into account the different magnitudes of \(L^C(\varvec{\theta })\) and \(L^P (\varvec{\theta })\), we can derive the composite complete-data log-likelihood as follows
In the E-step of the algorithm, we take conditional expectations with respect to the latent variables \(W_{ik}\)'s in \(l (\varvec{\theta })\); for notational simplicity, we suppress the conditioning arguments, namely the observed data and the estimate \(\varvec{\theta }^{(l)}\) of \(\varvec{\theta }\) at the \(l\)th iteration, in all conditional expectations. This step yields the following objective function
We now present the expressions of the \(E(W_{ik})\)'s in \(l_E(\varvec{\theta })\). Specifically, in the case of \(\Delta _i = 1\) (exactly observed \(T_i\)), we have \(E(W_{ik})=0\) if \(A_i\le t_k < T_i\), and \(E(W_{ik})=1\) if \(t_k=T_i\). In the case of \(\Delta _i = 0\) and \(A_i\le T_i\le L_i\) (left censoring), we have
In the case of \(\Delta _i = 0\) and \(R_i<\infty\) (interval censoring), we have \(E(W_{ik})=0\) if \(A_i\le t_k\le L_i\), and
In the case of \(\Delta _i = 0\) and \(R_i=\infty\) (right censoring), we have \(E(W_{ik})=0\) if \(A_i\le t_k\le L_i\).
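For the interval-censored case, the needed conditional expectation follows from conditioning independent Poisson variables on a positive sum over \((L_i, R_i]\): writing \(\mu _{ik}=\lambda _k\exp (\varvec{Z}_i^{\top }\varvec{\beta })\), one obtains \(E(W_{ik}) = \mu _{ik}/\{1-\exp (-\sum _{L_i< t_j\le R_i}\mu _{ij})\}\) for \(L_i< t_k\le R_i\). This is our own derivation, written out as a sketch to make the E-step concrete (since \(W_{ik}I(\sum _j W_{ij}=0)=0\), the numerator is simply \(\mu _{ik}\)):

```python
import math

def estep_interval(mu):
    """E(W_k | sum_k W_k > 0) for independent W_k ~ Poisson(mu_k).

    Because W_k * 1{sum = 0} = 0, we get
    E[W_k | sum > 0] = mu_k / P(sum > 0) = mu_k / (1 - exp(-sum(mu))).
    """
    denom = 1.0 - math.exp(-sum(mu))
    return [m / denom for m in mu]
```

The inflation factor \(1/\{1-\exp (-\sum \mu )\}\) reflects that the failure is known to have occurred somewhere inside \((L_i, R_i]\).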
Differentiating \(l_E(\varvec{\theta })\) with respect to \(\varvec{\beta }\) and \(\lambda _k\)’s yields the following composite score functions
and
where \(Q_{ij}^{(m)}(t;\varvec{\beta })=\Big \{\varvec{Z}_i^{\otimes m}\exp (\varvec{Z}_i^{\top } \varvec{\beta }) - \varvec{Z}_j^{\otimes m}\exp (\varvec{Z}_j^{\top } \varvec{\beta }) \Big \}\Big \{I(t\le A_i) - I(t\le A_j)\Big \}\) for \(m=0\) or 1, with \(\varvec{Z}^{\otimes 0}=1\) and \(\varvec{Z}^{\otimes 1}=\varvec{Z}\).
Specifically, at the \((l+1)\)th iteration, based on the estimating equation \(U_{\lambda _k}(\varvec{\theta })=0\), one can derive a self-consistent solution to update each \(\lambda _k\):
By combining the discussion above, the proposed EM algorithm can be summarized as follows:
Step 0: Choose initial values \(\varvec{\beta }^{(0)}\) and \(\lambda _k^{(0)}\), \(k = 1, \ldots , K_n\), and set \(l=0\).

Step 1: At the \((l+1)\)th iteration, calculate each \(E(W_{ik})\) based on the observed data and the parameter estimates at the \(l\)th iteration.

Step 2: Update each \(\lambda _k\) with the closed-form expression (3).

Step 3: Update \(\varvec{\beta }\) by solving the estimating equation \(U_{\varvec{\beta }}(\varvec{\theta })=0\) with the one-step Newton-Raphson method, and increase \(l\) by 1.

Step 4: Repeat Steps 1-3 until convergence is achieved.
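The steps above can be organized as a generic loop. In the sketch below, the three callbacks stand in for the paper's E-step, the closed-form \(\lambda _k\) update (3), and the one-step Newton-Raphson update for \(\varvec{\beta }\); their signatures are hypothetical placeholders, not the LTsurv interface.

```python
def em_fit(e_step, update_lam, update_beta, beta0, lam0, tol=1e-3, max_iter=1000):
    """Skeleton of Steps 0-4; the callbacks are placeholders for the updates."""
    beta, lam = list(beta0), list(lam0)                      # Step 0
    for _ in range(max_iter):
        ew = e_step(beta, lam)                               # Step 1: E(W_ik)
        new_lam = update_lam(ew, beta, lam)                  # Step 2: closed form
        new_beta = update_beta(ew, beta, new_lam)            # Step 3: one-step NR
        diff = sum(abs(a - b) for a, b in zip(beta + lam, new_beta + new_lam))
        beta, lam = new_beta, new_lam
        if diff < tol:                                       # Step 4: convergence
            break
    return beta, lam
```

The convergence criterion mirrors the one used in the paper: the sum of absolute differences between successive estimates of all parameters falling below a small constant.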
The resulting estimators of \(\varvec{\beta }\) and \(\Lambda (t)\) are denoted by \(\hat{\varvec{\beta }}\) and \(\hat{\Lambda }(t)=\sum _{t_k\le t}\hat{\lambda }_k\), respectively, where \(\hat{\lambda }_k\) is the estimate of \(\lambda _k\) for \(k = 1, \ldots , K_n\). For the standard error estimation of \(\hat{\varvec{\beta }}\) and \(\hat{\Lambda }(t)\), we propose to employ the nonparametric bootstrap approach (see, for example, [29]), and the numerical results below suggest that it works well in finite samples. The numerical results also indicate that the performance of the proposed algorithm is quite robust to the choice of the initial values of \(\varvec{\beta }\) and the \(\lambda _k\)'s. In practice, one can simply set the initial value of each regression parameter to 0 and the initial value of each \(\lambda _k\) to \(1/K_n\). The algorithm is declared to have converged when the sum of the absolute differences between two successive estimates of all parameters is less than a small positive constant, say 0.001. We implement the proposed algorithm under the Rcpp environment, which makes the computation efficient and tractable.
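The nonparametric bootstrap just mentioned resamples subjects with replacement and refits the model on each bootstrap sample. A generic sketch for a scalar estimator (the `fit` callback is a placeholder for the full estimation procedure):

```python
import random
import statistics

def bootstrap_se(data, fit, n_boot=100, seed=2023):
    """Bootstrap standard error: resample subjects, refit, take the SD."""
    rng = random.Random(seed)
    n = len(data)
    estimates = [fit([data[rng.randrange(n)] for _ in range(n)])
                 for _ in range(n_boot)]
    return statistics.stdev(estimates)
```

Resampling whole subjects (rather than individual examination times) preserves the within-subject censoring structure.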
Simulation studies
Simulation studies were conducted to assess the empirical performance of the proposed estimation procedure. In the study, the failure time \(T^{*}\) was generated from model (1) with \(\varvec{Z} = (Z_1, Z_2)^{\top }\), \(Z_1\sim Bernoulli(0.5)\), \(Z_2\sim Uniform(-0.5, 0.5)\), \(\varvec{\beta } = (\beta _1, \beta _2)^{\top } = (1,1)^{\top }\), and \(\Lambda (t)=t^2\), which corresponds to the Weibull distribution with scale parameter 1 and shape parameter 2. The truncation time \(A^{*}\) was generated from either Uniform(0, \(\tau ^{*}\)) or the exponential distribution with rate \(\theta ^{*}\), where \(\tau ^{*}\) or \(\theta ^{*}\) was chosen to yield about a \(50\%\) average truncation rate. Note that when the truncation time follows the uniform distribution, or satisfies the stationarity assumption, we have length-biased data, a special type of left-truncated data as discussed above. Under the left truncation mechanism, the observed failure time T was equal to \(T^{*}\) if \(T^{*} > A^{*}\). We first considered the situation with left-truncated and partly interval-censored data. To construct censoring, for each subject, we mimicked a periodic follow-up study and generated a sequence of examination times, with the first examination time being \(A^{*}\) and the gap times between two successive examination times being \(0.05+Uniform(0, 0.5)\). We then replaced the interval-censored observation with the simulated failure time T whenever the interval length was less than 0.2, which produced the uncensored or exactly observed failure times. The length of study was set to 1.5, beyond which no further examinations were conducted.
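Under this design, with \(\Lambda (t)=t^2\), the failure time can be drawn by inverse transform: \(S(t\mid \varvec{Z})=\exp \{-t^2 e^{\varvec{Z}^{\top }\varvec{\beta }}\}\) gives \(T^{*} = \{-\log U / e^{\varvec{Z}^{\top }\varvec{\beta }}\}^{1/2}\) for \(U\sim Uniform(0,1)\). The following sketch illustrates the left-truncated sampling step; the value \(\tau ^{*}=1.2\) here is an arbitrary illustration, not the value calibrated to a 50% truncation rate.

```python
import math
import random

def gen_left_truncated(n, beta=(1.0, 1.0), tau=1.2, seed=1):
    """Draw (A, T, Z) conditional on T* > A* under model (1) with Lambda(t)=t^2."""
    rng = random.Random(seed)
    sample = []
    while len(sample) < n:
        z = (float(rng.random() < 0.5), rng.uniform(-0.5, 0.5))
        eta = math.exp(beta[0] * z[0] + beta[1] * z[1])
        u = 1.0 - rng.random()                         # U in (0, 1]
        t = math.sqrt(-math.log(u) / eta)              # inverse transform
        a = rng.uniform(0.0, tau)                      # uniform truncation time
        if t > a:                                      # keep enrolled subjects only
            sample.append((a, t, z))
    return sample
```

Rejecting pairs with \(T^{*}\le A^{*}\) is exactly what induces the biased sampling that the "Ignoring truncation" method fails to correct.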
For comparison, we considered the following three competing methods: the proposed pairwise pseudolikelihood method (Proposed method), the NPMLE method without adjusting for the left truncation (Ignoring truncation), and the conditional likelihood method (CL method). Specifically, in the supplementary materials, we develop an EM algorithm with Poisson latent variables to implement the conditional likelihood method; the "Ignoring truncation" method can be implemented with the same EM algorithm by setting each \(A_i = 0\). We set \(n = 100\), 300 or 500, and used 1000 replicates. Under the above configurations, the proportions of exactly observed failure times ranged from \(4\%\) to \(26\%\), left censoring rates from \(16\%\) to \(37\%\), right censoring rates from \(7\%\) to \(33\%\), and interval censoring rates from \(24\%\) to \(58\%\).
Table 1 presents the simulation results for the estimated regression parameters and the cumulative hazard function at \(t=0.4\), 0.8 or 1.2 with partly interval-censored data. They include the estimated bias (Bias), given by the average of the 1000 estimates minus the true value; the sample standard error (SSE) of the 1000 estimates; the average of the 1000 standard error estimates (SEE); and the 95% empirical coverage probability (CP) based on the normal approximation. Specifically, the standard errors of the proposed pairwise pseudolikelihood estimators were calculated via nonparametric bootstrapping with 100 bootstrap samples. For the CL and "Ignoring truncation" methods, we followed Zeng et al. [30] and adopted the profile likelihood approach for the variance estimation. This approach is simple and easy to implement, but it can only provide variance estimates for the estimated regression parameters, the finite-dimensional parameters of interest. Thus, the SEEs of the cumulative hazard function estimates of the CL and "Ignoring truncation" methods are not available in Table 1. Given that \(\Lambda (t)\) is always positive, we used the log-transformation and constructed its confidence band with the delta method, as in Mao and Lin [31] among others. For any t, the confidence interval of \(\Lambda (t)\) is given by \([\hat{\Lambda }(t) \exp \{-z_{0.975}\hat{\sigma }(t)/\hat{\Lambda }(t)\},\ \hat{\Lambda }(t) \exp \{z_{0.975}\hat{\sigma }(t)/\hat{\Lambda }(t)\}]\), where \(\hat{\sigma }(t)\) is the standard error estimate of \(\hat{\Lambda }(t)\), and \(z_{0.975}\) is the 97.5th percentile of the standard normal distribution.
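The log-transformed interval for \(\Lambda (t)\) can be computed as a direct transcription of the displayed formula (with \(z_{0.975}\approx 1.96\)); the function name below is ours.

```python
import math

def log_ci(lam_hat, sigma_hat, z=1.959964):
    """95% CI for Lambda(t) via the delta method on the log scale;
    both endpoints are positive by construction."""
    half = z * sigma_hat / lam_hat
    return lam_hat * math.exp(-half), lam_hat * math.exp(half)
```

The interval is symmetric on the log scale, so the product of the two endpoints equals \(\hat{\Lambda }(t)^2\).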
One can see from Table 1 that the estimators of the proposed pairwise pseudolikelihood method are virtually unbiased, the sample standard errors are close to the average standard error estimates, and the empirical coverage probabilities are all around the nominal value of 95%, implying that the normal approximation to the asymptotic distribution of the proposed estimator seems reasonable. In addition, one can clearly find that the proposed method is more efficient than the conditional likelihood method; this efficiency gain is anticipated since the proposed method utilizes the information in the marginal distribution of the truncation times. Since the generated data are subject to biased sampling, the "Ignoring truncation" method yields much larger estimation biases than the proposed and conditional likelihood methods, as seen from Table 1.
In the second study, we considered left-truncated and interval-censored data. For this, we generated the truncation time \(A^{*}\) in the same way as before and set the first examination time to be \(A^{*}\). The gap time between two successive examination times was set to \(0.05+Uniform(0, 0.5)\), and the other model specifications were kept the same as above. We then obtained left-truncated and interval-censored data by comparing the generated T with the examination times. Under these simulation setups, the left censoring rates ranged from \(20\%\) to \(56\%\), the right censoring rates from \(7\%\) to \(32\%\), and the interval censoring rates from \(27\%\) to \(67\%\). The simulation results summarized in Table 2 again indicate that the proposed method performs reasonably well and has some advantages over the conditional likelihood and "Ignoring truncation" methods.
Note that Wu et al. [26] considered left-truncated and right-censored data and proposed an iterative estimation procedure to implement the pairwise pseudolikelihood method. The proposed method can clearly deal with such data too, so one may be interested in comparing its performance with that of Wu et al. [26]. To investigate this, we generated the failure time \(T^{*}\) from model (1) with \(\varvec{Z} = (Z_1, Z_2)^{\top }\), \(Z_1\sim Bernoulli(0.5)\), \(Z_2\sim Uniform(-1,1)\), \(\beta _1 = \beta _2 = 1\), and \(\Lambda (t)=t^2\). The truncation time \(A^{*}\) was generated in the same way as before. The right censoring time C was generated independently from \(Uniform(0,C_{max})\), where \(C_{max}\) was chosen to yield about a \(30\%\) right censoring rate. The results given in Table 3 imply that the two methods both perform well and give similar results.
An application
We apply the proposed method to a set of real data arising from the Massachusetts Health Care Panel Study (MHCPS), discussed in Pan and Chappell [17], Gao and Chan [24] and others. In 1975, the MHCPS enrolled elderly people in Massachusetts who had not lost active life, with the aim of evaluating the effect of gender (male or female) on the time to loss of active life. To determine when individuals in the study lost active life, three subsequent follow-ups were conducted at 1.25, 6, and 10 years after study enrolment. Therefore, the age at loss of active life, the failure time of interest \(T^{*}\), could not be recorded exactly and was subject to interval censoring. Moreover, since subjects who had lost active life before the study were not enrolled, the age at loss of active life was subject to left truncation, with the truncation time \(A^{*}\) being the age at enrolment [17]. We thus have left-truncated and interval-censored data. After deleting a small number of unrealistic records from the raw data, 1025 subjects with ages ranging from 65 to 97.3 were included in the current analysis. In particular, the right censoring rate is \(45.8\%\).
Define \(Z =1\) if the individual is male and 0 otherwise. For the analysis of the MHCPS data, as in the simulation studies, we considered three competing methods: the proposed pairwise pseudolikelihood method (Proposed method), the conditional likelihood approach (CL method), and the NPMLE method that ignores the left truncation (Ignoring truncation). Table 4 presents the obtained results, including the estimated covariate effect (Est), the standard error estimate (Std) and the p-value for testing whether the covariate effect is zero. For the proposed pairwise pseudolikelihood method, as in the simulation studies, we employed nonparametric bootstrapping with 100 bootstrap samples to calculate the standard error of the estimated regression parameter.
One can see from Table 4 that the estimated coefficient and the standard error estimate of the proposed method are 0.122 and 0.060, respectively, meaning that males have a significantly higher risk of losing active life than females. This conclusion is in accordance with that of Gao and Chan [24], where the length-biased assumption was made for the truncation time. One can also find from Table 4 that the CL method recognizes the covariate effect as nonsignificant, which differs from the conclusion obtained by the proposed method. This phenomenon may arise partly from the fact that the CL method often loses some estimation efficiency compared with the proposed method. Moreover, the results in Table 4 suggest that the NPMLE method ignoring the left truncation tends to overestimate the covariate effect, and this effect was also recognized as nonsignificant.
Discussion and concluding remarks
In the preceding sections, we proposed a general, unified pairwise pseudolikelihood approach for the analysis of left-truncated failure time data under the PH model. The proposed method is quite general and flexible since it applies to various types of censored data, including partly interval-censored, interval-censored, and right-censored data. We devised an EM algorithm to calculate the nonparametric maximum likelihood estimators, which was shown to be computationally stable and reliable in finite samples. Numerical results indicated that, by utilizing the pairwise order information of the truncation times, the proposed method can indeed yield more efficient estimators than the conventional conditional likelihood approach. An application to the MHCPS data demonstrated the practical utility of the proposed method.
Notably, in the proposed algorithm, the derivation of the self-consistent solution (3) for \(\lambda _k\) is a desirable feature, as it avoids high-dimensional optimization. In addition, the estimating equation \(U_{\varvec{\beta }}(\varvec{\theta })=0\) for \(\varvec{\beta }\) has a tractable form and can be readily solved with routine optimization procedures, such as the Newton-Raphson method. These two features together make the proposed algorithm computationally stable and reliable. The proposed method also has some shortcomings. One is that the self-consistent solution (3) may not ensure that the estimate of \(\lambda _k\) is always nonnegative. However, in our experience, given a reasonable initial value, a negative estimate of \(\lambda _k\) is unlikely to occur in the simulations. As an alternative, following Zhou et al. [32] and others, one can reparameterize each \(\lambda _k\) as \(\exp (\lambda _k^{*})\), where \(\lambda _k^{*}\) is an unconstrained parameter to be estimated. Another shortcoming is that we adopted the nonparametric bootstrap method to calculate the variances of the parameter estimates, which involves repeated data sampling. This procedure becomes computationally intensive when the sample size is extremely large. Future efforts will be devoted to developing a simple variance estimation procedure.
There are also several potential directions for future research. One is that the proposed method makes a noninformative or independent censoring assumption [33, 34]; that is, the failure times of interest are assumed to be conditionally independent of the observation times given the covariates. This assumption may not hold in some applications, and generalizing the proposed method to the situation of informative censoring deserves further investigation. In some applications, one may also encounter bivariate or multivariate failure time data [35], and it would be helpful to generalize the proposed method to deal with such data. Extensions of the proposed method to other regression models, such as the transformation or additive hazards models, could also be useful.
Availability of data and materials
The MHCPS data set used in this study can be downloaded at https://onlinelibrary.wiley.com/action/downloadSupplement?doi=10.1111%2Fj.0006-341X.2002.00064.x&file=BIOM_64_sm_010423.txt. The proposed algorithm is implemented in the R package LTsurv, which is publicly available at https://github.com/lishuwstat/Left-truncation-Cox-Pairwise-likelihood.
Abbreviations
PH: Proportional hazards
NPMLE: Nonparametric maximum likelihood estimation
EM: Expectation-maximization
CL: Conditional likelihood
SSE: Sample standard error
SEE: Standard error estimate
CP: Coverage probability
MHCPS: Massachusetts Health Care Panel Study
References
Kalbfleisch JD, Prentice RL. The statistical analysis of failure time data. New York: Wiley; 2002.
Sun J. The statistical analysis of interval-censored failure time data. New York: Springer; 2006.
Cox DR. Regression models and life-tables (with Discussion). J R Stat Soc Ser B. 1972;34(2):187–220.
Lin DY, Ying Z. Semiparametric analysis of the additive risk model. Biometrika. 1994;81(1):61–71.
Zeng D, Lin DY. Efficient estimation of semiparametric transformation models for counting processes. Biometrika. 2006;93(3):627–40.
Chiou SH, Kang S, Yan J. Rank-based estimating equations with general weight for accelerated failure time models: an induced smoothing approach. Stat Med. 2015;34:1495–510.
Huang J. Efficient estimation for the Cox model with interval censoring. Ann Stat. 1996;24(2):540–68.
Huang J. Asymptotic properties of nonparametric estimation based on partly interval-censored data. Stat Sin. 1999;9:501–19.
Kim JS. Maximum likelihood estimation for the proportional hazards model with partly interval-censored data. J R Stat Soc Ser B. 2003;65(2):489–502.
Gao F, Zeng D, Lin DY. Semiparametric estimation of the accelerated failure time model with partly interval-censored data. Biometrics. 2017;73(4):1161–8.
Li J, Ma J. Maximum penalized likelihood estimation of additive hazards models with partly interval censoring. Comput Stat Data Anal. 2019;137:170–80.
Pan C, Cai B, Wang L. A Bayesian approach for analyzing partly interval-censored data under the proportional hazards model. Stat Methods Med Res. 2020;29(11):3192–204.
Zhou Q, Sun Y, Gilbert PB. Semiparametric regression analysis of partly interval-censored failure time data with application to an AIDS clinical trial. Stat Med. 2021;40(20):4376–94.
Huang CY, Qin J. Semiparametric estimation for the additive hazards model with left-truncated and right-censored data. Biometrika. 2013;100(4):877–88.
Wang MC. Nonparametric estimation from cross-sectional survival data. J Am Stat Assoc. 1991;86(413):130–43.
Wang MC, Brookmeyer R, Jewell NP. Statistical models for prevalent cohort data. Biometrics. 1993;49:1–11.
Pan W, Chappell R. Estimation in the Cox proportional hazards model with left-truncated and interval-censored data. Biometrics. 2002;58(1):64–70.
Shen Y, Ning J, Qin J. Analyzing length-biased data with semiparametric transformation and accelerated failure time models. J Am Stat Assoc. 2009;104(487):1192–202.
Qin J, Ning J, Liu H, Shen Y. Maximum likelihood estimations and EM algorithms with length-biased data. J Am Stat Assoc. 2011;106(496):1434–49.
Shen PS. Proportional hazards regression with interval-censored and left-truncated data. J Stat Comput Simul. 2014;84(2):264–72.
Shen PS. Conditional MLE for the proportional hazards model with left-truncated and interval-censored data. Stat Probab Lett. 2015;100:164–71.
Wang P, Tong X, Zhao S, Sun J. Efficient estimation for the additive hazards model in the presence of left-truncation and interval censoring. Stat Interface. 2015;8(3):391–402.
Shen Y, Ning J, Qin J. Nonparametric and semiparametric regression estimation for length-biased survival data. Lifetime Data Anal. 2017;23(1):3–24.
Gao F, Chan KCG. Semiparametric regression analysis of length-biased interval-censored data. Biometrics. 2019;75(1):121–32.
Wu Y, Chambers CD, Xu R. Semiparametric sieve maximum likelihood estimation under cure model with partly interval censored and left truncated data for application to spontaneous abortion. Lifetime Data Anal. 2019;25(3):507–28.
Wu F, Kim S, Qin J, Saran R, Li Y. A pairwise likelihood augmented Cox estimator for left-truncated data. Biometrics. 2018;74(1):100–8.
Wang P, Li D, Sun J. A pairwise pseudo-likelihood approach for left-truncated and interval-censored data under the Cox model. Biometrics. 2021;77(4):1303–14.
Liang KY, Qin J. Regression analysis under non-standard situations: a pairwise pseudo-likelihood approach. J R Stat Soc Ser B. 2000;62(4):773–86.
Efron B. Censored data and the bootstrap. J Am Stat Assoc. 1981;76:316–9.
Zeng D, Mao L, Lin D. Maximum likelihood estimation for semiparametric transformation models with interval-censored data. Biometrika. 2016;103(2):253–71.
Mao L, Lin DY. Efficient estimation of semiparametric transformation models for the cumulative incidence of competing risks. J R Stat Soc Ser B. 2017;79:573–87.
Zhou Q, Hu T, Sun J. A sieve semiparametric maximum likelihood approach for regression analysis of bivariate interval-censored failure time data. J Am Stat Assoc. 2017;112:664–72.
Ma L, Hu T, Sun J. Sieve maximum likelihood regression analysis of dependent current status data. Biometrika. 2015;102:731–8.
Li S, Hu T, Wang P, Sun J. Regression analysis of current status data in the presence of dependent censoring with applications to tumorigenicity experiments. Comput Stat Data Anal. 2017;110:75–86.
Piao J, Ning J, Shen Y. Semiparametric model for bivariate survival data subject to biased sampling. J R Stat Soc Ser B. 2019;81:409–29.
Acknowledgements
We would like to thank the editorial office for its efforts in handling this submission. We also thank the editor, the associate editor, and the reviewers for their helpful comments and suggestions, which greatly improved this article.
Funding
Shuwei Li’s research was partially supported by the Science and Technology Program of Guangzhou, China (Grant No. 202102010512), the National Natural Science Foundation of China (Grant No. 11901128), and the Natural Science Foundation of Guangdong Province of China (Grant Nos. 2021A1515010044 and 2022A1515011901). Li Shao’s work was supported by the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2021A1515110926).
Author information
Authors and Affiliations
Contributions
SL proposed the idea. LS wrote the R code and created the R package. HL conducted the simulation and real data analysis. SL, LS and HL wrote the original version of the manuscript together and JS polished the manuscript. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
12874_2023_1903_MOESM1_ESM.pdf
Additional file 1.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Shao, L., Li, H., Li, S. et al. A pairwise pseudo-likelihood approach for regression analysis of left-truncated failure time data with various types of censoring. BMC Med Res Methodol 23, 82 (2023). https://doi.org/10.1186/s12874-023-01903-x
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/s12874-023-01903-x
Keywords
 Cox model
 EM algorithm
 Interval censoring
 Left truncation
 Partly interval-censored data