
I am thinking of building a secondary neural net to learn the relation between the hyper-parameters and the test-set accuracy of my primary neural net, so that I can maximize the accuracy efficiently.

There are two ways I could approach this:

1. Let the input features be the hyper-parameters (X) and the output be the test accuracy (Y). After the network has learned this mapping, I could supply a target value of test accuracy and recover the optimal hyper-parameters by inverting the weights and activation functions, working backwards from Y.

But I realized that the relation X -> Y is many-to-one: different sets of X can give the same value of Y. So is the above method even possible?
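To make the many-to-one point concrete, here is a tiny sketch using a hypothetical toy relation y = x1 * x2 standing in for the hyper-parameter-to-accuracy map:

```python
# Several distinct input settings (X) can yield the same output (Y),
# so the map X -> Y has no unique inverse.
# Toy illustration with the hypothetical relation y = x1 * x2:
pairs = [(4, 3), (6, 2), (12, 1)]
outputs = {x1 * x2 for x1, x2 in pairs}
print(outputs)  # three different inputs, one output: {12}
```

Given only y = 12, there is no principled way to pick one of these preimages over the others.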

  2. If the above method isn't possible, is it plausible to train my NN with the test accuracy as X and the hyper-parameters as Y, and hope that it works?
  • 1
    I'm not sure if I'm following you: if you have a binary classification problem, then you also have a map from many different values to only two possible values. – Tim Jul 14 '20 at 10:20
  • @Tim - indeed, all classifications and most regressions with more than one independent variable are many-to-one – Henry Jul 14 '20 at 10:42
  • Suppose the input features are x1 and x2, the output is y, and the actual mathematical relation is y = x1*x2. I can train my neural network to learn that relation. But after the network has learned, if I try to obtain the values of x1 and x2 for y = 12 by inverting the forward propagation, what will I get: 4 and 3, or 6 and 2, or 12 and 1, or something else, or an error? That is the question. – Lelouche Lamperouge Jul 14 '20 at 11:52
  • One equation with two unknown values and no further information isn't solvable. Neural networks aren't magic. – Sycorax Jul 14 '20 at 12:43
  • Ah, yes, you're right. So is there any other way to automate the hyperparameter optimization process? – Lelouche Lamperouge Jul 14 '20 at 12:47
  • There are lots. Some examples: https://stats.stackexchange.com/questions/193306/optimization-when-cost-function-slow-to-evaluate/193310#193310 – Sycorax Jul 14 '20 at 12:55
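One of the simplest automated alternatives from that family is random search over the hyper-parameter space: optimize in the forward direction (sample X, evaluate Y, keep the best) rather than trying to invert the network. A minimal sketch, where `evaluate` is a hypothetical stand-in for the expensive train-and-score run of the primary network:

```python
import random

def evaluate(lr, hidden_units):
    # Hypothetical objective: stands in for "train the primary net with
    # these hyper-parameters and return its test accuracy". Here it is a
    # toy surface with an optimum near lr=0.01, hidden_units=64.
    return 1.0 - abs(lr - 0.01) * 10 - abs(hidden_units - 64) / 256

def random_search(n_trials=100, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-4, -1)              # sample lr log-uniformly
        hidden = rng.choice([16, 32, 64, 128, 256])  # sample width from a grid
        acc = evaluate(lr, hidden)
        if best is None or acc > best[0]:
            best = (acc, lr, hidden)
    return best

best_acc, best_lr, best_hidden = random_search()
```

More sample-efficient methods from the linked thread (e.g. Bayesian optimization) follow the same forward-search pattern but replace blind sampling with a surrogate model that proposes promising X values.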

0 Answers