I almost always use scikit-learn's StandardScaler to normalize my data for machine learning. However, I noticed that simply taking the log of the variables I wanted to normalize often resulted in better accuracy than using StandardScaler.
To give some more context, I have built several binary classifiers for different purposes, with both ANNs and XGBoost, and log-normalizing the data always led to better accuracy.
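To make clear what I mean by the two approaches, here is a minimal sketch (the real pipelines are larger, and I'm assuming strictly positive features here since the log isn't defined otherwise):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy feature matrix with strictly positive, skewed values
X = np.array([[1.0, 200.0],
              [10.0, 50.0],
              [100.0, 5.0]])

# Approach 1: Z-score standardization via StandardScaler
X_standardized = StandardScaler().fit_transform(X)

# Approach 2: what I call "log-normalization" -- just taking the log of each feature
X_logged = np.log(X)

print(X_standardized)
print(X_logged)
```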
I'm a little puzzled by this, as nobody ever mentions log-normalization as a valid normalization technique. Everyone talks about min-max normalization and Z-score standardization (scikit-learn's StandardScaler), but no one even mentions log-normalization.
How is that possible? Am I doing something wrong?