I'm running a Cox PH model using the lifelines package in Python. I fit the model on a training set and evaluated it on both the training set and a holdout set. These are the scores it gave:
```
cph.score(holdout_x, scoring_method='concordance_index') = 0.6892575904820802
cph.score(holdout_x, scoring_method='log_likelihood')    = -5.160975066321637
cph.score(train_x, scoring_method='concordance_index')   = 0.6948730045908684
cph.score(train_x, scoring_method='log_likelihood')      = -5.719069733545282
```
I was wondering: is there a good rule of thumb (if any) for interpreting the partial log-likelihood? For comparison, in the binary-classification case a "dumb" baseline log-loss would range roughly between 0.1 and 0.8, depending on the prevalence. My values are very different, but I'm aware the partial log-likelihood is also calculated quite differently.
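To make the binary-case comparison concrete: the "dumb" baseline log-loss I have in mind is the log-loss of a classifier that always predicts the prevalence p, which is just the entropy of p. A quick sketch of where the 0.1–0.8 range comes from (my own illustration, not from lifelines):

```python
import numpy as np

def baseline_log_loss(p):
    """Log-loss of always predicting probability p when the
    true positive rate (prevalence) is also p."""
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

# Balanced classes give the maximum, ln(2) ~ 0.693;
# rare events give much smaller baselines.
ll_balanced = baseline_log_loss(0.5)
ll_rare = baseline_log_loss(0.02)
```

So the baseline depends only on prevalence, which is what makes it a convenient yardstick there; I'm asking whether anything analogous exists for the Cox partial log-likelihood.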