
Suppose I have a "study" that occurs each year. Subjects can enter the study at any point in a given year (although most enter at the very beginning), and once they're in the study, we monitor whether they experience some event e. We are concerned only with whether subjects experience e within the same year – we don't care about what happens next year (or, more generally, beyond the study's time boundary, which is one year). So, at the end of the year, we know exactly who "survived" (i.e. didn't experience e) and who didn't (i.e. did experience e).

While time-to-event could be a useful analysis here, I don't believe there is actually any censoring at all in this study, since we know for a fact whether e was experienced in the study period for each subject. If I were modeling time-to-event, would it matter whether I used a survival analysis method (e.g. Cox's proportional hazards) vs. a standard ordinary least squares (OLS) approach? Or perhaps even a more segmented model in which the likelihood of experiencing the binary event e is modeled on a monthly basis for each subject...

kjetil b halvorsen
blacksite

1 Answer


Survival analysis is a powerful tool even if there is no censoring. Because participants can enter the study partway through a given year, however, you do have a form of censoring that needs to be taken into account.

For those who enter after the start of the year and don't experience the event, you only know that they did not have the event during their time in the study that year. They were "exposed" to the "risk" of the study for less time than those who entered at the beginning of the year, so that needs to be taken into account or you will have bias. All you know is that the time to event for those later-entering participants was greater than their duration in the study for that year, which is right censoring of the time to event.*

For example, say it typically takes about 9 months to have the event. If in one year everyone starts at the beginning of the year, most of that year's participants will have the event. If in another year most participants don't enter until the middle of the year, you will have a lower fraction experiencing the event by year's end even if the characteristics of the participants are otherwise the same. So the "survival" results can seem different just due to different patterns of study entry, unless you take the censoring into account.
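
As a rough illustration, here is a minimal simulation sketch, assuming exponential event times with a 9-month mean and using the `lifelines` package; the cohort size and entry months are made up. Two cohorts share the same event-time distribution but enter at different points in the year:

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
n = 5000
mean_time = 9.0  # assumed: ~9 months on average until the event

def simulate_cohort(entry_month):
    """Simulate one cohort of subjects who all enter at `entry_month`."""
    event_time = rng.exponential(mean_time, n)    # months from entry to event
    follow_up = 12.0 - entry_month                # months at risk before year end
    duration = np.minimum(event_time, follow_up)  # what we actually observe
    had_event = event_time <= follow_up           # False -> right-censored
    return duration, had_event

for entry in (0.0, 6.0):
    duration, had_event = simulate_cohort(entry)
    kmf = KaplanMeierFitter().fit(duration, event_observed=had_event)
    km_risk_6m = 1 - kmf.survival_function_at_times(6.0).iloc[0]
    print(f"entry month {entry:4.1f}: naive event fraction by year end = "
          f"{had_event.mean():.2f}, KM-estimated 6-month risk = {km_risk_6m:.2f}")
```

The naive end-of-year fractions differ substantially between the two cohorts even though the underlying risk is identical, while the Kaplan-Meier estimates of 6-month risk, which treat the short follow-up as right censoring, agree.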

One way to handle different exposure times could be to use a Poisson-type model that uses exposure time as an offset, thus modeling an event rate. That would assume, however, that the cumulative risk is directly proportional to the exposure time, which is a pretty strong assumption. An advantage of more flexible parametric or semi-parametric (Cox) models is that they don't require such strong assumptions about the time course of events.
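
A Poisson model with the log of exposure time as an offset might look like the following sketch, which uses `statsmodels` on made-up data (the covariate, event rate, and sample size are all hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "x": rng.normal(size=n),                 # some covariate of interest
    "exposure": rng.uniform(1, 12, size=n),  # months at risk within the year
})
rate = 0.05 * np.exp(0.5 * df["x"])          # assumed true events per month at risk
# Event occurs within the exposure window with probability 1 - exp(-rate * exposure)
df["event"] = (rng.uniform(size=n) < 1 - np.exp(-rate * df["exposure"])).astype(int)

# Poisson GLM with an exposure offset: models events per unit of follow-up time
X = sm.add_constant(df[["x"]])
fit = sm.GLM(df["event"], X,
             family=sm.families.Poisson(),
             exposure=df["exposure"]).fit()
print(fit.summary())
# exp(coef) on x is a rate ratio, valid only if risk accumulates in direct
# proportion to exposure time (the strong assumption noted above); a Cox model
# would instead use the observed durations and event indicators directly.
```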


*Those who started at the beginning of the year but didn't have the event might be modeled either as right-censored at 1 year, or as "cured" in a cure model.

EdM