I have a dataset with a number of records. For any record $i$, I have:
$(T_i, X_i^{fixed}, X_i^{vary})$
representing an event time $T_i$, a set of time-invariant features $X_i^{fixed}$, and a set of time-varying features $X_i^{vary}$ recorded every minute for every record. Note that I have no censoring in the data: every single record has a known event time.
I want to model the event time, conditioned on the features I have. Accelerated failure time (AFT) models are the most popular in my application, and in the literature I have read, authors typically model the log of the event time using only fixed features. That is:
$\log(T) = \mu + \beta'X^{fixed} + \epsilon$.
They choose some distribution for $\epsilon$ and then fit the model by maximizing the log-likelihood.
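For concreteness, here is a minimal sketch of the uncensored case with Gaussian $\epsilon$ (i.e., a log-normal AFT). With no censoring, maximizing the log-likelihood is equivalent to ordinary least squares on $\log(T)$; the variable names and the synthetic data are illustrative assumptions, not from any particular library.

```python
import numpy as np

# Synthetic data: log-normal AFT with fixed covariates only.
rng = np.random.default_rng(0)
n, p = 500, 3
X = rng.normal(size=(n, p))                # fixed covariates X^{fixed}
beta_true = np.array([0.5, -0.3, 0.2])
mu_true, sigma_true = 1.0, 0.25
log_T = mu_true + X @ beta_true + rng.normal(scale=sigma_true, size=n)
T = np.exp(log_T)                          # fully observed event times (no censoring)

# Fit: with Gaussian epsilon and no censoring, the MLE is OLS on log(T).
A = np.column_stack([np.ones(n), X])       # intercept column for mu
coef, *_ = np.linalg.lstsq(A, np.log(T), rcond=None)
mu_hat, beta_hat = coef[0], coef[1:]
sigma_hat = np.std(np.log(T) - A @ coef)   # residual scale estimate
```

For other error distributions (e.g. extreme-value, giving a Weibull AFT) one would maximize the corresponding log-likelihood numerically instead of using the OLS shortcut.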
My question is, are there known methods to incorporate the time-varying features into this framework? I see a lot of discussion of time-varying covariates for Cox models, but not so much for accelerated failure time models. The simplest thing I can think to do is to treat this as a standard regression problem (which is possible because there is no censoring): at some time $t$, given $X_i^{fixed}$ and the time series $X_i^{vary}$ observed up to $t$, predict the remaining time until $T_i$. But this means each event is present multiple times in the data, so I am unsure whether the usual independence assumption across observations still holds.
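To make the regression idea concrete, this is roughly how I imagine constructing the expanded dataset (the function name, the per-minute landmark grid, and the choice of summaries of the history are my own illustrative assumptions):

```python
import numpy as np
import pandas as pd

def make_landmark_rows(T, X_fixed, X_vary, step=1):
    """One row per (record, landmark time t): fixed features, summaries of
    the time-varying series up to t, and the remaining time T_i - t.
    Rows sharing the same `id` come from one record and are correlated."""
    rows = []
    for i, (t_event, xf, series) in enumerate(zip(T, X_fixed, X_vary)):
        for t in np.arange(0, t_event, step):
            hist = series[: int(t) + 1]        # minutes observed up to t
            rows.append({
                "id": i,                        # same record appears many times
                "landmark": t,
                **{f"xfix{j}": v for j, v in enumerate(xf)},
                "vary_last": hist[-1],          # last observed value
                "vary_mean": hist.mean(),       # running mean up to t
                "remaining": t_event - t,       # regression target
            })
    return pd.DataFrame(rows)

# Tiny illustration: two records with event times of 3 and 2 minutes.
df = make_landmark_rows(
    T=np.array([3, 2]),
    X_fixed=[np.array([1.0]), np.array([2.0])],
    X_vary=[np.array([0.1, 0.2, 0.3]), np.array([0.5, 0.4])],
)
```

One could then regress `remaining` (or its log) on the other columns, but the multiple rows per `id` are exactly where my independence worry comes from.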
Any pointers on how to handle time-varying covariates in AFT models, or other helpful advice on this situation, would be greatly appreciated.