In stats class, the professor talked about the value of transforming skewed data sets to make them more "normal".
From what I've understood so far, the idea is that the normal curve has nice mathematical properties we'd like to work with, so if we have a strongly skewed data set, we can apply a non-linear transformation to bring its distribution closer to a normal distribution.
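To make that concrete, here's a toy sketch of what I mean (made-up data, using NumPy and SciPy; the log-normal generator is just a convenient way to produce something skewed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A strongly right-skewed sample; it's log-normal, so log() should "fix" it by construction
x = rng.lognormal(mean=10, sigma=1, size=10_000)

print(stats.skew(x))          # large positive skew (several units)
print(stats.skew(np.log(x)))  # close to 0: the transformed data looks near-normal
```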
A few examples:
Linear transformations make sense: if we had data in feet and wanted it in inches, we could just apply $y = 12x$ to the data set.
The same goes for the case where we have feet but want square feet; that's a non-linear transformation, but the units still make sense (maybe "making sense" is just a question of degree of familiarity).
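As a sanity check on why the non-linear case already feels different, here's a toy example (made-up numbers): the mean passes straight through a linear transformation, but not through a squaring:

```python
import numpy as np

feet = np.array([3.0, 5.0, 8.0])

inches = 12 * feet                            # linear: y = 12x
print(inches.mean(), 12 * feet.mean())        # both 64.0: the mean transforms linearly

square_feet = feet ** 2                       # non-linear: y = x^2
print(square_feet.mean(), feet.mean() ** 2)   # ~32.67 vs ~28.44: not the same thing
```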
But now, let's imagine we have a data set of car prices or employee salaries in dollars. What would be the meaning of applying a log transformation to it? Or an inverse transformation? What are log dollars, or inverse dollars?
Also, even if we can draw conclusions more easily about the transformed data set, how relevant are those conclusions to our original data set? Can we just assume that our conclusions hold? How do the mean, SD, or variance of a transformed data set relate to those of the original?
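For instance (again with made-up numbers): if I log-transform some salaries, take the mean, and transform back with exp(), I get the geometric mean, which is not the arithmetic mean of the original data. So which one is "the" mean?

```python
import numpy as np

salaries = np.array([30_000, 45_000, 50_000, 60_000, 250_000])  # skewed by one big earner

back_transformed = np.exp(np.log(salaries).mean())
print(back_transformed)     # geometric mean, ~63,000
print(salaries.mean())      # arithmetic mean, 87,000
```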
Or, for example (I'm seeing this question in the sidebar right now), it seems you can transform a data set to make it more easily linearly separable (which makes sense geometrically, I guess).
But does that really work? It feels weird, like "cheating" in a sense: we're messing with the data and then drawing conclusions or building predictive models from that messed-up data. How does that work?
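Here's the kind of thing I have in mind (a toy sketch; I'm assuming scikit-learn's make_circles and LogisticRegression just for illustration, nothing from the lecture): two concentric rings that no straight line can separate become separable once you add the squared radius as a feature.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression

# Two concentric rings: no straight line in the (x, y) plane separates the classes
X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)

print(LogisticRegression().fit(X, y).score(X, y))      # ~0.5: no better than guessing

# Non-linear transform: append the squared radius x^2 + y^2 as a third feature
X_t = np.column_stack([X, (X ** 2).sum(axis=1)])
print(LogisticRegression().fit(X_t, y).score(X_t, y))  # ~1.0: now linearly separable
```

The classifier is still linear; only the feature space changed, and that change is exactly the part that feels like cheating to me.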