
Consider the two best-known techniques for feature scaling in Machine Learning:

  • Normalizing a feature to a $[0, 1]$ range, through $\frac{x - x_{\min}}{x_{\max} - x_{\min}}$

or

  • Standardizing the feature (also known as z-score standardization), through $\frac{x - \mu}{\sigma}$, where $\mu$ is the mean and $\sigma$ is the standard deviation.

Is there any reason to prefer one over the other? Does either outperform the other when used with certain algorithms?
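For concreteness, here is a minimal NumPy sketch of both transformations on a made-up feature vector (scikit-learn's `MinMaxScaler` and `StandardScaler` implement the same formulas):

```python
import numpy as np

# Toy feature values, made up for illustration
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])

# Min-max normalization: maps the feature onto [0, 1]
x_norm = (x - x.min()) / (x.max() - x.min())

# Z-score standardization: zero mean, unit standard deviation
x_std = (x - x.mean()) / x.std()

print(x_norm)  # [0.   0.25 0.5  0.75 1.  ]
print(x_std)   # [-1.414 -0.707  0.     0.707  1.414] (approximately)
```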

kfn95
  • It depends on the objective. – user2974951 Sep 14 '18 at 12:32
  • @user2974951 Is there any case where one is preferred over the other? Or does one have any useful properties compared to the other? – kfn95 Sep 14 '18 at 12:35
  • Your comment makes me believe this is a possible duplicate of [Is it a good practice to always scale/normalize data for machine learning?](https://stats.stackexchange.com/questions/189652/is-it-a-good-practice-to-always-scale-normalize-data-for-machine-learning) – Frans Rodenburg Sep 14 '18 at 12:53
  • @FransRodenburg The author asks *which feature scaling to perform*, while the question you referenced is about *whether or not to use feature scaling*. I think it's a valid question. – John Doe Sep 14 '18 at 14:17
  • @JohnDoe I realise the difference in titles, but if you read the accepted answer you'll see that it is more similar than it might seem at first. – Frans Rodenburg Sep 14 '18 at 14:19

0 Answers