I read this link about Bergstra and Bengio's random search, which discusses why 60 randomly chosen parameter settings are enough.

But I do not fully understand the formula used

$$1 - (1 - 0.05)^n > 0.95$$

$$n \geqslant 60$$
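(To show where my confusion lies, here is how I read the bound: if the "good" region covers the top 5% of the search space, each random draw misses it with probability $1-0.05$, so $n$ independent draws all miss with probability $(1-0.05)^n$. A quick numerical check of the smallest $n$ satisfying the inequality, assuming that reading is correct:)

```python
import math

p = 0.05       # a single random draw lands in the top 5% of configurations
target = 0.95  # desired probability that at least one of n draws does

# Smallest integer n with 1 - (1 - p)**n > target,
# solved from (1 - p)**n < 1 - target
n = math.ceil(math.log(1 - target) / math.log(1 - p))
print(n)                        # 59, commonly rounded up to 60
print(1 - (1 - p) ** n)         # just above 0.95
```

So the exact threshold is 59 draws, and 60 appears to be a rounded-up figure.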

I have read the Bergstra and Bengio paper, but I have not found that formula anywhere in it. Also, I was wondering whether random search is only good for neural networks, since I am using RandomForest, XGBoost, and SVM.

Thank you

Aizzaac
  • Did you read the explanation at this answer? What part of it do you not understand? http://stats.stackexchange.com/questions/160479/practical-hyperparameter-optimization-random-vs-grid-search – Sycorax Feb 24 '17 at 00:49
  • @Sycorax Hi. Yes I did read the answer, I also read Bengio's paper. I simply do not understand the formula. Can the 60 observations be used with any number of parameters? – Aizzaac Feb 24 '17 at 17:09
  • 2
    Don't you mean "$1 - (1-0.05)^n$"? – whuber Feb 24 '17 at 17:57
  • @Sycorax Is it still true if I optimize 2 or 8 parameters? – Aizzaac Feb 24 '17 at 19:25

0 Answers