I agree with the answers above that the Cox model was not primarily meant for absolute risk calculations, but I believe that is largely for historical reasons (the field focused on relative rather than absolute risks), and I don't see a legitimate reason not to use it for prediction. There is already work in prediction modelling that uses Cox successfully, although it tends to prefer the glmnet package for the Cox family (for the flexibility of its regularization terms), together with the hdnom package for survival probabilities (hdnom:::glmnet_survcurve).
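For reference, a minimal sketch of such a penalized Cox fit with glmnet (the covariate columns x1 and x2 and the object names are illustrative assumptions, not from the original code; survtimed and outcome are the column names used further below):

library(glmnet)  # penalized regression, including family = "cox"
x <- as.matrix(data[, c("x1", "x2")])  # hypothetical numeric covariate columns
y <- cbind(time = data$survtimed, status = data$outcome)  # two-column response expected by glmnet's Cox family
cvfit <- cv.glmnet(x, y, family = "cox")  # cross-validated lasso-penalized Cox model
lp_pen <- predict(cvfit, newx = x, s = "lambda.min", type = "link")  # penalized linear predictors at the selected lambda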
Back to the question: after going in circles with survfit for a basic Cox model, I came up with the code below to back out estimated individual probabilities of an event, using S(t | x) = exp(-H0(t))^exp(lp), i.e. P(event by t | x) = 1 - exp(-H0(t))^exp(lp):
bh <- basehaz(cox_model_fit, centered = FALSE)  # baseline cumulative hazard H0(t) with all covariates set to 0 (centered = FALSE)
lps <- predict(cox_model_fit, newdata = data, type = "lp", reference = "zero")  # uncentered linear predictors (betas x covariates) for each observation; reference = "zero" so they match the centered = FALSE baseline
time_of_interest <- 6  # if you are interested in survival at t = 6
i_time_of_interest <- match(TRUE, round(bh$time, 1) == time_of_interest)  # annoyingly, basehaz has no time argument, so find that time in bh$time (it is evaluated at all event and censoring times, so either pick a time that is in the list, as here, or interpolate between available points)
event_prob_t <- 1 - exp(-bh$hazard[i_time_of_interest])^exp(lps)  # risk of an event by time t from the baseline hazard and linear predictors, 1 - S0(t)^exp(lp); returns a vector of probabilities, one per row of "data"
You can then compare event_prob_t between cases and controls (censored observations) and test the difference, etc.
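As a sanity check, you can also get the same individual survival probabilities directly from survfit applied to the fitted Cox model and compare (a sketch using the same objects as above):

sf <- survfit(cox_model_fit, newdata = data)  # one predicted survival curve per row of data
surv_at_t <- summary(sf, times = time_of_interest)$surv  # survival probabilities at t = 6, one entry per row of data
event_prob_check <- 1 - as.vector(surv_at_t)  # should closely match event_prob_t above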
Or, use the function below to calculate the concordance statistic for survival data. This gives the probability that two random observations, one a case and one a control, have predicted probabilities in the right order (a lower chance of the event for the control than for the case). For a binary outcome the c-index is the same as the ROC AUC, so it is a very common measure of how good a classification/binary model is.
survConcordance(Surv(data$survtimed, data$outcome) ~ lps)$concordance
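Note that survConcordance() is deprecated in recent versions of the survival package in favour of concordance(); a sketch of the equivalent call (reverse = TRUE because a larger linear predictor means higher risk, i.e. shorter survival):

concordance(Surv(data$survtimed, data$outcome) ~ lps, reverse = TRUE)$concordance  # same c-index via the newer interface
concordance(cox_model_fit)$concordance  # or directly from the fitted model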