Suppose I have a "study" that runs each year. Subjects can enter the study at any point in a given year (although most enter at the very beginning), and once they're in, we monitor whether they experience some event e. We care only about whether subjects experience e within that same year (more generally, within the study's time boundary, which here is one year) – what happens next year is irrelevant. So, at the end of the year, we know exactly who "survived" (i.e. didn't experience e) and who didn't (i.e. did experience e).
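For concreteness, here's a minimal sketch of what one year of such data might look like. The numbers, the exponential event-time distribution, and the covariate x are all made up for illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 500

# Most subjects enter in month 0; the rest are spread over months 1-11.
entry_month = rng.choice(12, size=n, p=[0.6] + [0.4 / 11] * 11)

# A covariate and a hypothetical latent time-to-event (months from entry);
# the exponential form is purely illustrative.
x = rng.normal(size=n)
latent_time = rng.exponential(scale=18.0 * np.exp(-0.5 * x))

# Time at risk within the year: from entry until the year-end boundary.
exposure = 12 - entry_month

# By year end we know the outcome for everyone: the event "counts" only
# if it happened before the boundary.
event = (latent_time <= exposure).astype(int)
time_observed = np.minimum(latent_time, exposure)

df = pd.DataFrame({"entry_month": entry_month, "x": x,
                   "time": time_observed, "event": event})
print(df.head())
```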
While a time-to-event analysis could be useful here, I don't believe there is actually any censoring in this study, since we know for a fact, for each subject, whether e was experienced during the study period. If I were modeling time-to-event, would it matter whether I used a survival analysis method (e.g. the Cox proportional hazards model) vs. a standard ordinary least squares (OLS) regression? Or perhaps even a more segmented model in which the probability of experiencing the binary event e is modeled on a monthly basis for each subject (i.e. a discrete-time hazard model)...
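Concretely, the three options I'm weighing would look something like the sketch below, fit to the hypothetical `df` simulated above. It assumes the lifelines and statsmodels packages are installed, and the column names come from my simulation, not from anything canonical:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

# 1. Cox proportional hazards on time at risk within the year.
cph = CoxPHFitter()
cph.fit(df[["time", "event", "x"]], duration_col="time", event_col="event")
cph.print_summary()

# 2. OLS on the event times. Note it can only use subjects who actually
#    experienced e; for everyone else "time" is just the distance to the
#    year-end boundary, not an event time.
events_only = df[df["event"] == 1]
ols = sm.OLS(events_only["time"], sm.add_constant(events_only["x"])).fit()
print(ols.summary())

# 3. Monthly ("segmented") discrete-time model: one row per subject-month
#    at risk, with a binary indicator for whether e occurred that month.
rows = []
for _, r in df.iterrows():
    months = max(int(np.ceil(r["time"])), 1)
    for m in range(months):
        rows.append({"month": m, "x": r["x"],
                     "e": int(m == months - 1 and r["event"] == 1)})
pm = pd.DataFrame(rows)

logit = sm.Logit(pm["e"], sm.add_constant(pm[["month", "x"]])).fit()
print(logit.summary())
```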