Main question 1:
Please take a look at the highest-voted data-transformation posts on this site. For regression models, the idea is to have a linear relationship between the outcome (on an appropriate scale) and the predictor. For logistic regression, that means a linear relationship between the log-odds of the outcome and the predictor. Splines can be useful, as they let the data tell you what the transformation should be.
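For a concrete illustration, here's a minimal Python sketch (statsmodels plus patsy's `bs()` spline basis; the data are simulated and the names are placeholders, so treat it as a sketch rather than a recipe) comparing a plain linear term against a spline basis in a logistic regression:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: the true log-odds is a nonlinear function of x,
# so a plain linear term in x is misspecified.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
log_odds = np.sin(x) + 0.3 * x - 1.5
y = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))
df = pd.DataFrame({"y": y, "x": x})

# A small spline basis lets the data choose the shape of the
# x -> log-odds relationship instead of forcing linearity.
linear = smf.logit("y ~ x", data=df).fit(disp=False)
spline = smf.logit("y ~ bs(x, df=4)", data=df).fit(disp=False)

print(f"AIC, linear term:  {linear.aic:.1f}")
print(f"AIC, spline basis: {spline.aic:.1f}")  # typically much lower here
```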
Main question 2:
That's wrong. See this answer, for example: "One transforms the dependent variable to achieve approximate symmetry and homoscedasticity of the residuals." Residuals about the regression are key, not the outcome values themselves. Normality of residuals is nice, but not strictly necessary; it mainly matters for hypothesis tests and confidence intervals at small sample sizes.
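To see "residuals, not outcome values" in practice, here's a small Python sketch (statsmodels, simulated data; the multiplicative-error setup is chosen purely for illustration) where raw-scale residuals fan out with the fitted values and a log transform of the outcome removes that pattern:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated multiplicative-error data: residuals from a model of raw y
# fan out with the fitted values; modeling log(y) gives residuals with
# roughly constant spread.
rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 300)
y = np.exp(0.5 * x + rng.normal(0, 0.4, 300))
df = pd.DataFrame({"y": y, "x": x})

raw = smf.ols("y ~ x", data=df).fit()
logged = smf.ols("np.log(y) ~ x", data=df).fit()

# Crude heteroscedasticity check: compare residual spread in the lower
# and upper halves of the fitted values.
for name, fit in [("raw y", raw), ("log y", logged)]:
    mid = np.median(fit.fittedvalues)
    lo = fit.resid[fit.fittedvalues <= mid].std()
    hi = fit.resid[fit.fittedvalues > mid].std()
    print(f"{name}: residual SD, low fits = {lo:.3f}; high fits = {hi:.3f}")
```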
Follow-up question 1:
There's no need to transform an unordered categorical variable in an unpenalized regression: with indicator (dummy) coding, each level of the predictor gets its own additive contribution to the overall linear predictor.
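A quick Python sketch (statsmodels, simulated data with placeholder names) showing that the formula machinery handles this coding for you:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: an unordered three-level predictor with a different
# mean shift per level. No transformation is needed.
rng = np.random.default_rng(2)
group = rng.choice(["a", "b", "c"], size=200)
shift = pd.Series(group).map({"a": 0.0, "b": 2.0, "c": -1.0}).to_numpy()
y = shift + rng.normal(0, 1, 200)
df = pd.DataFrame({"y": y, "group": group})

# C(group) expands into indicator (dummy) columns, one per
# non-reference level, each with its own coefficient.
fit = smf.ols("y ~ C(group)", data=df).fit()
print(fit.params)  # intercept = level "a"; other terms are shifts from it
```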
If your categories are ordered then you could consider orthogonal polynomial coding, or penalized maximum likelihood or Bayesian shrinkage. See the sections of Frank Harrell's course notes or textbook on "fitting ordinal predictors."
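If you want to see orthogonal polynomial coding in action, here's a rough Python analogue using patsy's `Poly` contrasts (Harrell's own examples are in R; the simulated data and names here are illustrative assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: an ordered four-level predictor whose true effect is a
# purely linear trend across its levels.
rng = np.random.default_rng(3)
dose = rng.choice(["low", "mid", "high", "max"], size=240)
level = pd.Series(dose).map({"low": 0, "mid": 1, "high": 2, "max": 3})
y = 1.5 * level + rng.normal(0, 1, 240)
df = pd.DataFrame({
    "y": y,
    "dose": pd.Categorical(dose, categories=["low", "mid", "high", "max"],
                           ordered=True),
})

# C(dose, Poly) decomposes the ordered factor into orthogonal linear,
# quadratic, and cubic trend terms; here only .Linear should stand out.
fit = smf.ols("y ~ C(dose, Poly)", data=df).fit()
print(fit.params)
```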
Follow-up question 2:
Ordinal regression does something close to what you want: it maintains the ordering of the numeric outcome values while letting the effective step sizes between adjacent outcomes differ from the equal spacing a simple numeric outcome scale would impose. Although it's often described for outcomes with a small number of ordered levels, ordinal regression can be applied to much more general continuous outcomes. The Harrell references above also cover that topic.
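Harrell's examples use R's rms package; as a rough Python analogue, statsmodels provides `OrderedModel` (a proportional-odds fit; the simulated outcome below, with deliberately uneven spacing between recorded values, is just an illustrative assumption):

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Simulated data: the recorded outcome values (0, 10, 11, 50) are ordered
# but unevenly spaced on the underlying latent scale. The ordinal model
# uses only their ordering and estimates the spacing via thresholds.
rng = np.random.default_rng(4)
x = rng.normal(0, 1, 400)
latent = 1.2 * x + rng.logistic(0, 1, 400)
y = pd.Series(pd.cut(latent, bins=[-np.inf, -2.0, -0.5, 1.5, np.inf],
                     labels=[0, 10, 11, 50]))

# Proportional-odds (cumulative logit) model: one slope for x plus three
# estimated threshold parameters for the four ordered outcome values.
fit = OrderedModel(y, x[:, None], distr="logit").fit(method="bfgs", disp=False)
print(fit.params)
```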