- Research article
- Open Access

# Using observational data to estimate an upper bound on the reduction in cancer mortality due to periodic screening

*BMC Medical Research Methodology*
**volume 3**, Article number: 4 (2003)

## Abstract

### Background

Because randomized cancer screening trials are very expensive, observational cancer screening studies can play an important role in the early phases of screening evaluation. Periodic screening evaluation (PSE) is a methodology for estimating the reduction in population cancer mortality from data on subjects who receive regularly scheduled screens. Although PSE does not require assumptions about the natural history of cancer, it requires other assumptions, particularly progressive detection: the assumption that once a cancer is detected by a screening test, it will always be detected by the screening test.

### Methods

We formulate a simple version of PSE and show that it leads to an upper bound on screening efficacy if the progressive detection assumption does not hold (and any effect of birth cohort is minimal). To determine whether the upper bound is reasonable, we compared, for three randomized screening trials, PSE estimates based only on screened subjects with PSE estimates based on all subjects.

### Results

In the three randomized screening trials, PSE estimates based on screened subjects were fairly close to PSE estimates based on all subjects.

### Conclusion

PSE has promise for obtaining an upper bound on the reduction in population cancer mortality rates based on observational screening data. If the upper bound estimate is found to be small and any birth cohort effects are likely minimal, then a definitive randomized trial would not be warranted.

## Background

Because randomized cancer screening trials are very expensive and sometimes difficult to implement, observational cancer screening studies can play an important role in estimating the efficacy of cancer screening during the early phases of evaluation of a screening test. However, the standard methodologies for observational cancer screening studies have various limitations. Case-control studies require adequate case identification, eligibility criteria ensuring equal access of cases and controls to screening, a means of distinguishing symptomatic and diagnostic tests, and adjustments for self-selection bias [1]. Cohort studies often involve natural history models, which rest upon assumptions about the duration of preclinical cancer or the growth rate of the tumor, the sensitivity of the screening test, and how screening affects cancer mortality. Some examples can be found in [2–6]. Importantly, natural history models based only on observational data must implicitly assume no selection bias, a very tenuous assumption.

In contrast, periodic screening evaluation (PSE), which combines estimates from screened subjects to estimate the reduction in population cancer mortality associated with periodic cancer screening [7–9], does not involve natural history models and the associated assumptions. However, a different set of assumptions is required. In certain situations these assumptions may be more plausible than the natural history assumptions, so in some circumstances the method may be complementary, and possibly superior, to the natural history modeling approach.

PSE starts with the following estimates based directly on observed data from a few screenings at regular intervals over various ages: (1) age-specific rates of cancer detection on the first screening, of interval cancers, and of cancer detection on subsequent screenings, and (2) cancer fatality rates following cancer detection at the first screening, in interval cases, and following cancer detection on subsequent screenings. For evaluation it is also necessary to estimate (1) the age-specific rate of cancer detection in the *absence* of screening and (2) the cancer mortality rate following detection in the *absence* of screening. Because there are no randomized controls, the challenge is to estimate rates in the absence of screening in a manner that mitigates selection bias.

In estimating the cancer detection rate in the absence of screening PSE mitigates selection bias in a unique manner. As will be discussed, PSE estimates the age-specific detection rate in the absence of screening as the sum of the age-specific rates of detection for cancers on the first screening, interval cancers, and cancers on subsequent screenings, minus the age-specific rate of detection for cancer on the first screening in subjects one year older. This estimation assumes progressive detection, namely that once a cancer is detected on screening it will always be detected on screening. Previous versions of PSE made this assumption. However progressive detection is not likely to hold for many types of screening modalities. Fortunately, as we discuss, if progressive detection is violated, the estimated detection rate in the absence of screening is an upper bound and this can lead to useful estimates.

In estimating the cancer fatality rate following detection in the absence of screening, earlier versions of PSE used data from refusers and simply assumed no selection bias. To avoid this assumption (and the need to collect data from refusers), we estimated the cancer mortality rate following detection in the absence of screening by the cancer mortality rate in interval cancers. As we discuss, this also leads to an upper bound (i.e. optimistic) estimate of screening efficacy.

Thus, this version of PSE circumvents the problem of selection bias by estimating an upper bound. The specific estimates of cancer detection rates and cancer fatality rates after cancer detection are not meaningful as separate quantities. Fortunately, one can longitudinally combine the estimates to estimate an upper bound on the reduction in population cancer mortality associated with periodic screening. (Given these data, it is not possible to estimate reduction in population cancer mortality for other intervals between screenings or after periodic screening has stopped). The longitudinal combination of cross-sectional estimates, which also appears in earlier versions of PSE [7–9], is similar to G-computation [10] and the method of Flanders and Longini [11].

If an upper bound estimate of screening efficacy is small, a definitive randomized trial to evaluate the effect of screening on cancer mortality would not be warranted. Thus the upper bound estimate is helpful only if it is not unreasonably large. To determine if the upper bound is reasonable, we estimated its value using data from screened subjects in randomized trials of colorectal cancer screening [12, 13], breast cancer screening [14], and lung cancer screening [15, 16]. We then compared this estimate to a modified PSE estimate using data from all subjects, so that estimates of age-specific cancer detection in the absence of screening and cancer mortality in the absence of screening are based on data from randomized controls and refusers.

We also compared the PSE estimates with estimates based on a comparison of outcomes in the two randomized groups, adjusting for refusers and mitigating the effect of dilution after stopping screening [17]. It is important to bear in mind that the two estimates are answering different questions. For PSE, the question is "What is the effect of periodic screening starting at a given age and ending at a later age?" For comparing randomized groups, the question is "What is the effect of the particular screening program in the intervention group?"

## Methods

### Simple formulation of PSE

We derive a simple formulation of PSE and show that it gives an upper bound on the estimated reduction in population cancer mortality.

PSE requires two types of data from subjects who receive two or more screenings at regular intervals. The first type of data are the numbers of subjects who receive each screen and who are detected with cancer as a result of screening or in the interval between screens (see Tables 1, 2, 3, 4). The second type of data are the numbers of subjects with cancer who die from cancer and are in the risk set each year after diagnosis (see Table 5).

PSE involves three steps: (1) estimate the age-specific incidence of cancer associated with different types of detection: first screen, interval between screens, subsequent screen, refusers, and controls, if available, (2) estimate cancer fatality rates after cancer detection, and (3) combine these estimates to estimate the reduction in population cancer mortality associated with periodic cancer screening.

Because PSE requires regular intervals, the analysis is restricted to screenings that occur "on-time", namely, within a window of time close to the length of the interval. The length of the window for on-time screenings is somewhat arbitrary. A very wide window discards relatively few subjects; however, it may introduce bias into calculations that are based on assuming the screening interval equals the midpoint of the window. Alternatively, a narrow window might discard too much data, increasing the chance of bias from nonrandom exclusion of subjects. In designing a study for PSE analysis, the screening intervals should be as regular as possible. Without loss of generality, in the following discussion we assume a regular interval between screens of 1 year.
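The on-time restriction above amounts to a simple filter. A minimal sketch follows; the 9 to 15 month window and the example gaps are illustrative assumptions, not data from any of the trials discussed later.

```python
# Flag screenings as "on-time": within a window around the nominal
# 12-month interval (a 9-15 month window is used as an illustration).

def on_time(months_since_previous, lower=9.0, upper=15.0):
    """Return True if a screening falls inside the on-time window."""
    return lower <= months_since_previous <= upper

# Months elapsed since each subject's previous screening (hypothetical).
gaps = [11.5, 16.0, 12.2, 8.5, 14.9]
print([on_time(g) for g in gaps])  # [True, False, True, False, True]
```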

### Step 1: Age-specific cancer incidence

PSE requires estimates of age-specific cancer incidence for the following types of cancer detection: type *F*, detection on the first screening; type *I*, detection in the interval between screenings; type *S*, detection on screening subsequent to the first; and type *A*, detection in the absence of screening. The incidence of type *F* detection at age *a* is *q*_{F}(*a*) = *x*_{F}(*a*) / *n*_{F}(*a*), where *x*_{F}(*a*) is the number of subjects detected with cancer as a result of a first screening at age *a* and *n*_{F}(*a*) is the number who received the first screening at age *a*. The incidence of type *I* detection at age *a* is *q*_{I}(*a*) = *x*_{I}(*a*) / *n*_{I}(*a*), where *x*_{I}(*a*) is the number of cases in the interval after screening at age *a* and *n*_{I}(*a*) is the number at risk at the start of the interval. The incidence of type *S* detection at age *a* is *q*_{S}(*a*) = *x*_{S}(*a*) / *n*_{S}(*a*), where *x*_{S}(*a*) is the number of subjects detected with cancer as a result of an "on-time" screening after a previous screening at age *a* and *n*_{S}(*a*) is the number of subjects who received an "on-time" screening after the previous screening at age *a*. Although a type *S* detection occurs on screening at age *a* + 1, for mathematical convenience, it is associated with screening at age *a*. We cannot observe type *A* detection from data on subjects screened. However, as derived in [8], one can estimate the probability of type *A* detection by

*q*_{A}(*a*) = *q*_{F}(*a*) + *q*_{I}(*a*) + *q*_{S}(*a*) - *q*_{F}(*a* + 1), (1)

if the following key assumption holds.

*Assumption 1. Progressive Detection*

Once a subject is detectable on screening the subject will always be detectable on screening.

(The quantity *q*_{F}(*a* + 1) in (1) is the cancer detection rate on the first screening among subjects age *a* + 1 at first screening.) The graphical proof of (1) in Figure 1 generalizes the graphical proof in [8] to allow some subjects with preclinical cancer to be missed on screening. In Figure 1, *Assumption 1* corresponds to δ = 0, where δ is the probability that an individual with preclinical cancer would be detected if screened at age *a* but missed if screened at age *a* + 1. As an example of *Assumption 1*, consider a woman who would have been detected with breast cancer at age 50, but is not screened at age 50 for reasons unrelated to the screening or any possibility of cancer. Under *Assumption 1*, if the woman were screened at age 51, she would be detected with cancer. There is sometimes confusion about how this relates to the sensitivity of the screening test. As shown in Figure 1, *Assumption 1* implies that the sensitivity of the screening test equals 1 if a previous screening test would have detected cancer.

Ideally *q*_{F}(*a* + 1) estimates the probability of detecting cancer on a first screen at age *a* + 1 one year *after* the start of the study. Because there are no data on subjects first screened after a one year delay, we compute *q*_{F}(*a* + 1) from subjects age *a* + 1 at the start of the study. This procedure requires the following additional assumption.

*Assumption 2. No Birth Cohort Effect*

Given age, year of birth provides no additional information for predicting cancer incidence on the first screen.

### Step 2: Cancer-fatality rates among cases

PSE also requires estimates of cancer fatality rates among cases. The estimated probability of death from cancer within 5 years of type *d* detection at age *a* is

*pr* (cancer death in cases | type *d* detection at age *a*) = Σ_{i = 1}^{5} *Surv*(*a*, *a* + *i*) *h*_{di} Π_{j = 1}^{i - 1} (1 - *h*_{dj}), (2)

where *h*_{di} is the estimated case-fatality rate from cancer in year *i* after type *d* detection, and *Surv*(*a*, *a* + *i*) is the probability of surviving competing risks from age *a* to age *a* + *i*. See also Gooley et al [18]. For year *j* after type *d* detection (*d* = *F*, *I*, *S*), the estimated case-fatality rate from the cancer under study is *h*_{dj} = *x*_{dj} / *r*_{dj}, where *x*_{dj} is the number of cancer deaths among cases at year *j* after type *d* detection, and *r*_{dj} is the number of cases with type *d* detection who are at risk at year *j* since detection.

We approximate *Surv*(*a*, *a* + *i*) by the probability of surviving from age *a* to age *a* + *i* obtained from demographic data stratified by sex [19]. We approximate the probability of surviving competing risks in each year over the five years by the probability of surviving competing risks to the midpoint of the five years, i.e., *Surv*(*a*, *a* + *i*) = *Surv*(*a*, *a* + 3) for *i* = 1, 2, 3, 4, 5. This lets us approximate (2) by

*pr* (cancer death in cases | type *d* detection at age *a*) = *Surv*(*a*, *a* + 3) *m*_{d}, (3)

where

*m*_{d} = Σ_{i = 1}^{5} *h*_{di} Π_{j = 1}^{i - 1} (1 - *h*_{dj})

is the estimated probability of cancer death within five years of type *d* detection conditional on no death from a competing risk, and *Surv*(*a*, *a* + 3) is the approximate probability of surviving competing risks within five years of type *d* detection.

A major challenge is how to estimate *m*_{A}, the probability of cancer fatality within five years of type *A* detection (i.e. in the absence of screening) conditional on no death from a competing risk. Previous approaches [7–9] used data from refusers, substituting *m*_{R} for *m*_{A}. However, this requires a strong and unreasonable assumption of no selection bias, as well as data from refusers, which are often not available.

As an alternative, we estimate an upper bound on the reduction in the population cancer mortality rate from screening by estimating the cancer fatality rate in the absence of screening using data from interval cancers, namely, substituting *m*_{I} for *m*_{A}. The reason this is an upper bound is that cancers arising in the absence of screening are composed of cancers that would have arisen in the interval after a negative screening (had there been screening) and cancers that would have been detected on a previous screening (had there been screening). The latter cases are presumably slower growing (a type of length-biased sampling) with better survival, so using only the interval cancers artificially increases the estimated cancer fatality rate in the absence of screening. (One caveat is that the survival of interval cancers may be improved due to increased awareness of treatment options that would occur as part of a screening program. In that case, if the effect of length bias is relatively small, substituting *m*_{I} for *m*_{A} might not give an upper bound, although we believe it would be a reasonable approximation.)

### Step 3. Reduction in population cancer mortality rates due to periodic screening

PSE estimates the reduction in population cancer mortality rates due to starting periodic cancer screening at age *a* instead of age *b*, where ages *a* and *b* lie in the range of ages at initial screening. (This estimate accounts for competing risks through the use of *Surv*(*a*, *a* + 3).) To avoid different rates of overdiagnosis between comparison groups, PSE compares population cancer mortality rates in two hypothetical scenarios involving full compliance: the screening scenario, periodic screening from age *a* until age *b*, and the no-screening scenario, no periodic screening from age *a* to age *b* - 1 followed by screening at age *b*. The screening scenario involves either detection on the first screen at age *a* or detection in the interval or on subsequent screens up to age *b*. The no-screening scenario involves either detection in the absence of screening from ages *a* to *b* - 1 or detection on a first screen at age *b*. Screening at age *b* in both scenarios avoids differential overdiagnosis rates because, if *Assumption 1* holds, both scenarios specify equal probabilities of detecting cancer by age *b*.

More formally, we can write the reduction in population cancer mortality rates associated with starting periodic cancer screening at age *a* instead of age *b* as

*g* = *pr* (cancer mortality under the no-screening scenario) - *pr* (cancer mortality under the screening scenario), (4)

where

*pr* (cancer mortality under the no-screening scenario)

= Σ_{i = a}^{b - 1} *q*_{A}(*i*) *pr* (cancer death in cases | type *A* detection at age *i*)

+ *q*_{F}(*b*) *pr* (cancer death in cases | type *F* detection at age *b*),

*pr* (cancer mortality under the screening scenario)

= *q*_{F}(*a*) *pr* (cancer death in cases | type *F* detection at age *a*)

+ Σ_{i = a}^{b - 1} *q*_{I}(*i*) *pr* (cancer death in cases | type *I* detection at age *i*)

+ Σ_{i = a}^{b - 1} *q*_{S}(*i*) *pr* (cancer death in cases | type *S* detection at age *i*),

and cancer death in cases refers to death from cancer in cases within five years of cancer detection. Accounting for deaths from competing risks, the estimated probability of type *d* detection at age *i*, conditional on being alive at age *a*, is

*p* (type *d* detection at age *i*) = *Surv*(*a*, *i*) *q*_{d}(*i*). (5)

Substituting (5) and (3) into (4) gives

*ĝ* = {Σ_{i = a}^{b - 1} *Surv*(*a*, *i*) *q*_{A}(*i*) *Surv*(*i*, *i* + 3) *m*_{A} + *Surv*(*a*, *b*) *q*_{F}(*b*) *Surv*(*b*, *b* + 3) *m*_{F}} - {*q*_{F}(*a*) *Surv*(*a*, *a* + 3) *m*_{F} + Σ_{i = a}^{b - 1} *Surv*(*a*, *i*) *q*_{I}(*i*) *Surv*(*i*, *i* + 3) *m*_{I} + Σ_{i = a}^{b - 1} *Surv*(*a*, *i*) *q*_{S}(*i*) *Surv*(*i*, *i* + 3) *m*_{S}}. (6)

To simplify (6), we define

*Q*_{d} = Σ_{i = a}^{b - 1} *Surv*(*a*, *i*) *Surv*(*i*, *i* + 3) *q*_{d}(*i*) for *d* = *A*, *I*, *S*; *q*_{F0(basic)} = *Surv*(*a*, *a* + 3) *q*_{F}(*a*); *q*_{F1(basic)} = *Surv*(*a*, *b*) *Surv*(*b*, *b* + 3) *q*_{F}(*b*). (7)

Substituting (7) into (6) gives the following simple estimate,

*ĝ*_{basic} = (*Q*_{A} *m*_{A} + *q*_{F1(basic)} *m*_{F}) - (*q*_{F0(basic)} *m*_{F} + *Q*_{I} *m*_{I} + *Q*_{S} *m*_{S}). (8)
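The combination in (8) is a few lines of arithmetic. A sketch follows; all inputs are hypothetical stand-ins for the estimates of Steps 1 and 2.

```python
# Equation (8): basic estimate of the reduction in population cancer
# mortality, combining detection quantities (Q_A, q_F1, q_F0, Q_I, Q_S)
# with case-fatality probabilities (m_A, m_F, m_I, m_S). All hypothetical.

def g_basic(Q_A, q_F1, q_F0, Q_I, Q_S, m_A, m_F, m_I, m_S):
    """(Q_A*m_A + q_F1*m_F) - (q_F0*m_F + Q_I*m_I + Q_S*m_S)."""
    return (Q_A * m_A + q_F1 * m_F) - (q_F0 * m_F + Q_I * m_I + Q_S * m_S)

g = g_basic(Q_A=0.05, q_F1=0.004, q_F0=0.002, Q_I=0.015, Q_S=0.034,
            m_A=0.30, m_F=0.15, m_I=0.33, m_S=0.12)
print(round(g, 5))  # 0.00627
```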

To increase the stability of *ĝ*_{basic}, we used averages over *k* = 3 successive ages for the probabilities of detection on the first screening at age *a* and at age *b*; we denote the resulting averaged quantities by *q*_{F0} and *q*_{F1}. (9)

Substituting (9) into (8) gives the modified estimate

*ĝ*_{modified} = (*Q*_{A} *m*_{A} + *q*_{F1} *m*_{F}) - (*q*_{F0} *m*_{F} + *Q*_{I} *m*_{I} + *Q*_{S} *m*_{S}). (10)

Invoking (1), we substitute *q*_{F0} + *Q*_{I} + *Q*_{S} - *q*_{F1} for *Q*_{A} in (10). As discussed previously, to obtain an upper bound, we also set *m*_{A} = *m*_{I}. This gives the following estimated upper bound on the reduction in population cancer mortality from periodic screening and its asymptotic variance,

*ĝ*_{upp} = (*q*_{F0} - *q*_{F1}) (*m*_{I} - *m*_{F}) + *Q*_{S} (*m*_{I} - *m*_{S}), (11)

### Upper bound if Assumption 1 is violated

There are two basic scenarios in which *Assumption 1* could be violated. First, the cancer, or at least the detectable part of the cancer, could regress over time. This would most likely occur if the cancer were at a very early stage; of course, early lesions are the principal targets of screening tests. Second, chance fluctuations in the results of the screening test might mask cancer detection, particularly if the interval between the screening tests were small. For example, if the screening test were based on a sampling of cells, the screening test may, by chance, not include any of the tumor cells. Thus, for many screening modalities, *Assumption 1* may not hold.

As shown in Figure 1, if *Assumption 1* did not hold, PSE would estimate the cancer incidence rate in the absence of screening as *q*_{A}(*a*) + δ, for δ > 0, instead of *q*_{A}(*a*). Thus if *Assumption 1* were not satisfied, PSE would overestimate the cancer incidence rate in the absence of screening, which *overestimates* the reduction in the population cancer mortality rate. On the other hand, a violation of *Assumption 1* would also imply that some cancers detected on screening in the screening scenario would not have been detected on the screening at age *b* in the no-screening scenario, which would lower the reduction in the population cancer mortality rate in (4). However, we think this latter situation would have a small impact relative to the former, which involves the entire span of ages at screening and not just the last age at screening.

Thus (11) is an upper bound for two reasons. First it uses interval cancers to estimate fatality rates following cancer diagnosis in the absence of screening. Second it is an upper bound if *Assumption 1* is violated.

## Validation methodology

If an upper bound is too large it will not be useful. To investigate the upper bound, we used data from three randomized screening trials to compare PSE estimates based on screened subjects with PSE estimates based on all subjects.

In computing PSE estimates for all subjects, the progressive detection assumption is not necessary. To estimate *q*_{A}(*j*) we use a simple noncompliance adjustment for randomized trials (see [17] and references therein), *q*_{A}(*j*) = (*q*_{C}(*j*) - *q*_{R}(*j*) π) / (1 - π), where *q*_{C}(*j*) is the age-specific cancer incidence rate in controls, *q*_{R}(*j*) is the age-specific cancer incidence rate in refusers, and π is the fraction of subjects who refused screening. Also, with data from a randomized trial it is not necessary to use interval cancer cases to estimate the case fatality rate in the absence of screening. Instead we estimate *m*_{A} = (*m*_{C} - *m*_{R} π) / (1 - π), where subscript *C* refers to randomized controls and *R* refers to refusers. Substituting (*Q*_{C} - *Q*_{R} π) / (1 - π) for *Q*_{A} in (10) and using this *m*_{A} gives the following estimated reduction in population mortality from cancer screening and its variance,

where π is treated as known, which is reasonable due to the large sample size.
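The noncompliance adjustment is the same algebra for incidence rates and fatality probabilities. A minimal sketch, with all numbers hypothetical:

```python
# Noncompliance adjustment: recover the rate among would-be compliers
# from the control rate, the refuser rate, and the refusal fraction pi.
# All numbers below are hypothetical.

def adjust_for_refusers(rate_controls, rate_refusers, pi):
    """(rate_C - rate_R * pi) / (1 - pi)."""
    return (rate_controls - rate_refusers * pi) / (1.0 - pi)

q_A = adjust_for_refusers(rate_controls=0.0030, rate_refusers=0.0040, pi=0.14)
print(round(q_A, 5))  # 0.00284
```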

We computed (12)-(15) using data from the following three randomized screening trials.

### Minnesota Colon Cancer Control Study (MCCCS)

Between 1975 and 1978 investigators randomized approximately 45,000 subjects to either 5 annual fecal occult blood screenings, 3 biennial screenings, or no screening [12, 13]. Due to a lower than expected death rate among controls, the investigators resumed screening between 1982 and 1986. After a hiatus in screening of between 3 and 5 years, the annual screened group received 5 additional annual screenings and the biennial screened group received 3 additional biennial screenings. Approximately 14 percent of subjects randomized to screening did not receive screening. Each screening cycle consisted of six Hemoccult slides with planned definitive work-up if any slide showed evidence of occult blood. Screenings for the annual group were labeled as on-time if they were done in the 9 to 15 month window since the previous screening. Screenings for the biennial group were labeled as on-time if they were done 20 to 28 months since the previous screening. (The longer window was used to keep the loss of data to no more than 15 percent.) Excluding the first screening after the resumption of screening, approximately 93 percent of the annual subsequent screenings and 85 percent of the biennial subsequent screenings were on-time. The age range for the analysis was 50 to 75. For estimating the age-specific incidence of cancer among controls, we used data collected up to the time of the last screen, which was 16 years after the start of the study. We increased the precision of the estimated age-specific cancer incidence on the first screen by pooling data on the first screening in the annual and biennial arms. For annual cancer screening, age was divided into intervals of 1 year. For biennial screening, age was divided into intervals of 2 years.

### Health Insurance Plan of Greater New York (HIP) Study

Starting in 1963, approximately 60,000 women were randomly assigned to either a study group invited for four annual mammograms and physical examinations or to a control group that received no screening within the study [14]. Approximately 1/3 of the subjects in the study group refused the first screening and received no screenings. Screenings were labeled on-time if they were done 9 to 15 months after a previous screening. Approximately 79 percent of second screenings, 76 percent of third screenings, and 73 percent of fourth screenings were on time. The age range for the analysis was 40 to 64. For estimating the age-specific incidence of cancer among controls, we used data collected up to the time of the last screen, which was 4 years after the start of the study.

### Mayo Lung Project (MLP)

Between 1971 and 1976 approximately 9,200 male heavy smokers who tested negative on a prevalence (initial) screening were randomized to either a study group urged to undergo radiologic and cytological screening examinations every 4 months for 6 years or a control group that at study entry received a recommendation for annual chest X-rays with no further reminders [15, 16]. Approximately 7 percent of the study group subjects did not receive any screenings.

Because PSE requires a single screening time at each round of screening, we restricted PSE to the screenings in which the time between cytology and x-ray was less than 3 weeks. Screenings were labeled as on-time if they were done within 3.5 to 5.5 months of the previous screening and the time between cytology and x-ray was less than 3 weeks. Approximately 85 percent of the subsequent screenings were on-time.

Only yearly age data were available. Because the screenings in the Mayo Lung Project were scheduled at 4 month intervals, yearly cancer incidence data for types *I* and *S* detection are approximated using the sums of counts for three successive screens. Because all subjects in the control group had an initial screening, we pooled data for detection rates on initial screenings in the study and control groups.

Unfortunately, the initial screening in the control group greatly complicated the validation, which requires that no screening be performed in the control group. To better approximate a control group that received no screening, we only used data from controls starting 6 years after randomization. The underlying assumption is that by 6 years, most cancers detected on the prevalence screening would have progressed to clinical cancer in the absence of intervention. (This may not be true because of the likely possibility of overdiagnosis [16] and lead times that may exceed 6 years, but it may serve as a useful approximation if the amount of overdiagnosis is small.) Due to the 6-year washout period, we start the age range for PSE at 51 instead of 45. We chose 6 years for the washout period as a compromise: a longer washout period would have greatly restricted the age range under study, and a shorter washout period would have had a much more limited effect.

We illustrate the calculations for the analysis of the HIP data on breast cancer screening. The probability of a woman surviving competing risks from age 40 to each successive age up to age 64 is 1.00, 1.00, 0.99, 0.99, 0.99, 0.99, 0.99, 0.98, 0.98, 0.98, 0.97, 0.97, 0.96, 0.96, 0.95, 0.95, 0.94, 0.93, 0.92, 0.92, 0.91, 0.90, 0.89, 0.87. Using these probabilities and data from Tables 1 and 5, we computed *q*_{F0} = .00166, *q*_{F1} = .00373, *Q*_{I} = .0151, *Q*_{S} = .0338, *m*_{I} = .334, and *m*_{S} = .123. Substituting into (11) gave *ĝ*_{upp} = .00676. To estimate the variance we computed *v*_{I} = .00497, *v*_{S} = .00189, *V*_{F0} = .00000056, *V*_{F1} = .0000023. Substituting into (12) gave *var*(*ĝ*_{upp}) = .0000090.

## Results

To determine if the PSE estimated upper bound from screened subjects is reasonable, we computed its value (and 95% confidence interval) along with the PSE estimate (and 95% confidence interval) based on all subjects (Figure 2). To account for the correlation we also computed the estimated difference (and 95% confidence interval) between the two types of PSE estimates (Figure 3).

In addition, we computed the estimated cancer mortality reduction between the two randomized groups (and 95% confidence intervals) (Figure 2). To compute this estimate we first computed the estimated efficacy of receiving screening among subjects who would receive screening if offered. This equals the intent-to-treat estimate divided by the fraction in the screening group who did not refuse any screening [17]. For the HIP study and the Mayo Lung Project, in which screening stopped well before the end of follow-up, we also computed an adaptive estimate to mitigate the effect of dilution [17]. The adaptive estimate is the estimate at the time after screening stops when the estimate divided by its standard error is largest. Confidence intervals for the adaptive estimate are based on bootstrapping.
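The adaptive estimate described above amounts to an argmax over follow-up times. A sketch with hypothetical estimates and standard errors:

```python
# Adaptive estimate: among follow-up times after screening stops, select
# the estimate maximizing estimate / standard error. Values are hypothetical.

def adaptive_estimate(estimates, std_errors):
    """Return the estimate with the largest z-ratio (estimate / SE)."""
    best = max(range(len(estimates)), key=lambda i: estimates[i] / std_errors[i])
    return estimates[best]

est = [0.10, 0.18, 0.15]  # mortality-reduction estimates at successive times
se = [0.08, 0.09, 0.05]   # their standard errors
print(adaptive_estimate(est, se))  # 0.15
```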

### Minnesota Colon Cancer Control Study (MCCCS)

There is a large overlap in the confidence intervals for the PSE estimated upper bound from screened subjects and the PSE estimates from all subjects (Figure 2). Also, the 95% confidence interval for the estimated difference includes zero (Figure 3), indicating that the upper bound estimate is reasonable. The estimated mortality reduction in the screening program is similar to the PSE estimates (Figure 2) because the long duration of screening (5 annual or 3 biennial screenings, followed by a 3–5 year hiatus, followed by 5 additional annual or 3 additional biennial screenings) in the trial approximated the 21 years of periodic screening.

### Health Insurance Plan of Greater New York (HIP) Study

As in the previous example, there is a large overlap in the confidence intervals for the PSE estimated upper bound from screened subjects and PSE estimates from all subjects (Figure 2). Also, similar to the previous example, the 95% confidence interval for the difference included zero (Figure 3) indicating that the upper bound estimate is reasonable. The PSE estimates were higher than the estimated effect of the screening program in the trial because the former was based on 24 annual screenings and the latter was based on only 4 annual screenings.

### Mayo Lung Project (MLP)

Unlike the other examples, the confidence intervals for PSE estimates from screened subjects and PSE estimates from all subjects differed considerably (Figure 2). We think *Assumption 1* may not have held due to the short interval between screens and to the fact that the performance of sputum cytology screening depends on sampling of the tumor cells. Although the 95% confidence interval for the difference included zero (Figure 3), its large width means that a substantial bias cannot be ruled out. The PSE estimates were higher than the estimated effect of the screening program in the trial because (*i*) the former is based on 24 years of screening while the latter is based on only 6 years and (*ii*) the effect in the latter was reduced by a prevalence screening in the controls. PSE for screened subjects does not use any data from the control group. With PSE for all subjects, we assumed a wash-out period to try to remove the effect of the prevalence screen.

The results indicate that a PSE estimated upper bound based on subjects screened is not unreasonable when compared to the PSE estimate based on all subjects in the randomized trial. Because of sampling variability it is not surprising that the point estimate of the upper bound can be smaller than the point estimate based on all subjects.

We caution that violations of *Assumption 2* could have a substantial impact. *Assumption 2* depends on the cumulative effect of birth cohort from ages *a* to *b*. According to Moran [21] the relative bias due to violation of *Assumption 2* is particularly large if the age-specific incidence on the first screen changes little with age and interval and subsequent cancers are relatively rare. In that case Moran advised that other methods be applied. One way to reduce bias from *Assumption 2* is to only estimate the effect of screening for at most 5 years. That way the cumulative birth cohort effect would be limited to only 5 years.

## Conclusion

We think the major role of PSE is to rule out screening modalities that have little benefit. This information is useful when making policy decisions about screening, or when considering a large randomized trial to definitively compare the benefits and harms of screening strategies. Because PSE estimates an upper bound when *Assumption 1* is violated, if PSE estimates little reduction in population cancer mortality, the true reduction in population cancer mortality due to periodic screening is likely small. If any effects of birth cohort are minimal, further evaluation with a randomized trial would not be warranted.

## References

- 1.
Cronin KA, Weed DL, Connor RJ, Prorok PC: Case-control studies of cancer screening: Theory and Practice. J Natl Cancer Inst. 1998, 90: 498-504. 10.1093/jnci/90.7.498.

- 2.
Duffy SW, Chen H, Prevost TC, Tabar L: Markov chain models of breast tumour progression and its arrest by screening. In: Quantitative Methods for the Evaluation of Cancer Screening. Edited by: Duffy SW, Hill C, Esteve J. 2001, London: Edward Arnold Limited, 42-60.

- 3.
van Oortmarssen GJ, Boer R, Habbema JDF: Modelling issues in cancer screening. Stat Methods Med Res. 1995, 4: 33-54.

- 4.
Stevenson CE: Statistical models for cancer screening. Stat Methods Med Res. 1995, 4: 18-32.

- 5.
Paci E, Boer R, Zappa M, de Koning HJ, van Oortmarssen GJ, Crocetti E, Giorig D, Rosselli Del Turco M, Habbema JDF: A model-based prediction of the impact on reduction in mortality by a breast cancer screening programme in the city of Florence, Italy. Eur J Cancer. 1995, 31A: 348-353. 10.1016/0959-8049(95)94001-F.

- 6.
Parmigiani G: Decision models in screening for breast cancer. Bayesian Statistics. 1999, 6: 525-546.

- 7.
Baker SG, Chu KC: Evaluating screening for the early detection and treatment of cancer without using a randomized control group. J Am Stat Assoc. 1990, 85: 321-327.

- 8.
Baker SG: Evaluating the age to begin periodic breast cancer screening using data from a few regularly scheduled screens. Biometrics. 1998, 54: 1569-1578.

- 9.
Baker SG: Evaluating periodic cancer screening without a randomized control group: a simplified design and analysis. In: Quantitative Methods for the Evaluation of Cancer Screening. Edited by: Duffy SW, Hill C, Esteve J. 2001, London: Edward Arnold Limited, 34-41.

- 10.
Robins JM: A new approach to causal inference in mortality studies with sustained exposure periods – application to control of the health worker survivor effect. Mathematical Modelling. 1986, 7: 1393-1512. 10.1016/0270-0255(86)90088-6.

- 11.
Flanders WD, Longini IM: Estimating benefits of screening from observational cohort studies. Stat Med. 1990, 9: 969-980.

- 12.
Mandel JS, Church TR, Ederer F, Bond JH: Colorectal cancer mortality: effectiveness of biennial screening for fecal occult blood. J Natl Cancer Inst. 1999, 91: 434-7. 10.1093/jnci/91.5.434.

- 13.
Mandel JS, Bond JH, Church TR, Snover DC, Bradley M, Schuman LM, Ederer F: Reducing mortality from colorectal cancer by screening for fecal occult blood. N Engl J Med. 1993, 328: 1365-1371. 10.1056/NEJM199305133281901.

- 14.
Shapiro S, Venet W, Strax P, Venet L: Periodic Screening for Breast Cancer, The Health Insurance Plan Project and Its Sequelae, 1963–1986. Baltimore: Johns Hopkins University Press. 1988

- 15.
Fontana RS, Sanderson DR, Woolner LB, Taylor WF, Miller WE, Muhm JR, et al: Screening for lung cancer: a critique of the Mayo Lung Project. Cancer. 1991, 67 (4 Suppl): 1155-64.

- 16.
Marcus PM, Bergstralh EJ, Fagerstrom RM, Williams DE, Fontana R, Taylor WF, Prorok PC: Lung Cancer Mortality in the Mayo Lung Project: Impact of Extended Follow-up. J Natl Cancer Inst. 2000, 92: 1308-1316. 10.1093/jnci/92.16.1308.

- 17.
Baker SG, Kramer BS, Prorok PC: Statistical issues in randomized trials of cancer screening. BMC Medical Research Methodology. 2002, 2: 11-10.1186/1471-2288-2-11. [http://www.biomedcentral.com/1471-2288/2/11]

- 18.
Gooley TA, Leisenring W, Crowley J, Storer BE: Estimation of failure probabilities in the presence of competing risks: new representations of old estimators. Stat Med. 1999, 18: 695-706. 10.1002/(SICI)1097-0258(19990330)18:6&lt;695::AID-SIM60&gt;3.3.CO;2-F.

- 19.
Anderson RN: United States Life Tables, National Vital Statistics Reports, from the Centers for Disease Control and Prevention, 47, Number 2. 1999

- 20.
Cox DR, Oakes D: Analysis of Survival Data. London: Chapman and Hall. 1984

- 21.
Moran M: Periodic Screening Evaluation Under Cohort Effect. Master's Thesis, School of Public Health, University of South Carolina. 1998

### Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/3/4/prepub

## Acknowledgments

The authors thank Pamela Marcus and Ping Hu for their assistance and comments. The manuscript reflects the opinions of the authors and does not necessarily reflect the official opinion of the U.S. Department of Health and Human Services.


## About this article

### Cite this article

Baker, S.G., Erwin, D., Kramer, B.S. *et al.* Using observational data to estimate an upper bound on the reduction in cancer mortality due to periodic screening.
*BMC Med Res Methodol* **3**, 4 (2003). https://doi.org/10.1186/1471-2288-3-4


### Keywords

- Cancer Screening
- Screen Test
- Cancer Detection
- Interval Cancer
- Cancer Detection Rate