In general I standardize my features before regression by subtracting the mean and dividing by the standard deviation, so each feature has zero mean and unit variance: $$ \hat{X} = \frac{X - \bar{X}}{\sigma_X}$$ With this basic standardization, interpreting the regression coefficients is quite straightforward.
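As a minimal sketch of that standardization step (the toy matrix `X` is made up for illustration):

```python
import numpy as np

# Toy feature matrix (rows = samples, columns = features); values are illustrative.
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0],
              [4.0, 40.0]])

# Standardize: subtract the column mean, divide by the column standard deviation.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_std.mean(axis=0))  # ~0 per column
print(X_std.std(axis=0))   # 1 per column
```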
Now I've got some not-really-normally-distributed data, for which a power transform is a good choice. Since some features contain negative values, I chose a Yeo-Johnson transformation. (I could also shift the features containing negative values to all-positive values if Box-Cox helps the cause...)
After this transform, the data is additionally standardized, again by subtracting the mean and dividing by the standard deviation.
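For reference, scikit-learn's `PowerTransformer` performs both steps at once (Yeo-Johnson fit plus standardization of the output); a small sketch on made-up skewed data containing negatives:

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.default_rng(0)
# Skewed toy data shifted to include negative values,
# so Box-Cox would not apply directly.
X = rng.exponential(scale=2.0, size=(200, 2)) - 1.0

# Yeo-Johnson handles negative inputs; standardize=True also
# zero-centers and unit-scales the transformed features.
pt = PowerTransformer(method="yeo-johnson", standardize=True)
X_t = pt.fit_transform(X)

print(pt.lambdas_)        # fitted lambda per feature
print(X_t.mean(axis=0))   # ~0 per column
print(X_t.std(axis=0))    # ~1 per column
```

The fitted `lambdas_` are what make coefficient interpretation hard: each feature is bent by its own estimated exponent before the linear model sees it.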
Question:
Is there an easy way to interpret the regression coefficients after a power transform (be it Yeo-Johnson or Box-Cox...)?
A similar question is asked here, and the answer helps with the fact that feature scaling changes the coefficient interpretation in general, but it is specific to a fourth-root transformation.
Beyond the question:
After the power transformation, I apply a polynomial transformation of degree 2 or 3 with interaction terms and then perform feature extraction with PCA to reduce the dimensionality of the problem. Finally, the design matrix is passed to a regression algorithm.
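The full chain described above can be sketched as a scikit-learn pipeline (the data, degree, and component count here are illustrative assumptions, not the actual setup):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PowerTransformer, PolynomialFeatures
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Synthetic data with an interaction effect, purely for illustration.
X = rng.normal(size=(100, 4))
y = X[:, 0] - 2 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=100)

model = make_pipeline(
    PowerTransformer(method="yeo-johnson", standardize=True),
    PolynomialFeatures(degree=2, include_bias=False),
    PCA(n_components=5),
    LinearRegression(),
)
model.fit(X, y)

# Each coefficient now refers to a principal component of the
# polynomial features of the power-transformed inputs --
# three layers removed from the original variables.
coefs = model.named_steps["linearregression"].coef_
print(coefs.shape)  # (5,)
```

This makes the interpretability problem concrete: every coefficient is attached to a PCA direction, which is a dense mixture of polynomial and interaction terms of transformed features.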
I guess this further reduces the interpretability compared to only standardizing and/or applying the power transform? Is there any other way to interpret the coefficients in words? (A graphical interpretation using ICE plots and the like is imho still fairly easy.)