2

I once read about a kind of neural network (for classification or feature selection) with supervised training where you start with all input features, run a training step, and then remove one input feature (randomly or by some criterion) before training again. If removing the feature improves the model, it stays removed; if not, you reinstate it, remove a different one, and evaluate again.

There is also a variant that starts with a single feature and runs a training step. As in the process above, you then add new features one at a time and evaluate whether each addition helps the model or not.
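The first procedure described above can be sketched as a greedy loop in plain Python. The `score` function here is a hypothetical stand-in: in practice it would retrain the network on the given feature subset and return a validation metric.

```python
def backward_elimination(features, score):
    """Greedy backward elimination: start with all features and repeatedly
    drop any single feature whose removal improves the score; stop when no
    removal helps."""
    current = set(features)
    best = score(current)
    improved = True
    while improved and len(current) > 1:
        improved = False
        for f in sorted(current):
            trial = current - {f}
            s = score(trial)
            if s > best:  # removal helped, so the feature stays removed
                best, current, improved = s, trial, True
                break
            # otherwise the feature is reclaimed and the next one is tried
    return current, best

# Toy score: only features 'a' and 'b' are informative; extras are penalized.
useful = {"a", "b"}
score = lambda feats: len(feats & useful) - 0.1 * len(feats - useful)

selected, s = backward_elimination({"a", "b", "c", "d"}, score)
print(selected)  # the two informative features survive
```

Forward selection is the mirror image: start from one feature and greedily add whichever feature improves the score most.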

Do you remember the name of this kind of algorithm, and do you have an article describing or using it?

I have done my own research and searched my browsing history without success. The term was something like "pull back" or "push forward", but not back-propagation.

Sycorax
Luis Felipe
    This is stepwise regression and tends not to be recommended by statisticians. – Dave Sep 29 '21 at 00:03
  • @Dave thanks, it's not for development, only for communicative reasons. Thanks for the advice – Luis Felipe Sep 29 '21 at 00:05
  • Pullbacks and pushforwards have nothing to do with this, these are terms from automatic differentiation. – Firebug Sep 29 '21 at 00:14
  • Yeah, I know @Firebug. I am a mathematician but not a native English speaker, so I was unable to find the correct term for "adding one input to the next model / deleting one input from the next model". That's why I said "but not back-propagation", meaning that those terms do not refer to the weight-update process of back-propagation. – Luis Felipe Sep 29 '21 at 00:17
  • What you want is probably related to Recursive Feature Elimination (RFE). Look it up; if it's not RFE itself, it will be listed next to it in machine-learning libraries – Firebug Sep 29 '21 at 00:28
  • I remember the structure as a neural network; your answer is very close to what I remember! – Luis Felipe Sep 29 '21 at 00:30

2 Answers

2

It's called stepwise selection and is generally not recommended. Moreover, with computationally intensive models like neural networks it is a rather inefficient strategy. The standard way of doing it, which gives much better results, is to use some kind of regularization: dropout, $L_1$ or $L_2$ regularization, or many other approaches.
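As a minimal sketch of why regularization can replace stepwise selection: an $L_1$ penalty drives the coefficients of uninformative features to exactly zero, so selection happens inside a single fit instead of many retraining rounds. This assumes scikit-learn is available and uses synthetic data.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# Only the first two features carry signal.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)

# L1 penalty (Lasso) zeroes out coefficients of features that don't help.
model = Lasso(alpha=0.1).fit(X, y)
kept = np.flatnonzero(model.coef_)  # indices of surviving features
print(kept)  # the informative features 0 and 1 survive
```

The same idea carries over to neural networks, where an $L_1$ penalty on the first-layer weights, or dropout on the inputs, plays an analogous role.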

Tim
1

I guess you are referring to a family of feature selection methods called wrapper methods.

In particular, what you call "pull back" and "push forward" are, respectively, Backward Elimination and Forward Selection. I also suggest you have a look at Recursive Feature Elimination. The following link gives a fairly clear description of them, plus a comparison with other feature selection techniques, so that you can better choose what suits you:

https://www.analyticsvidhya.com/blog/2016/12/introduction-to-feature-selection-methods-with-an-example-or-how-to-select-the-right-variables/

The example code there is in R.

In case you prefer Python, here is another useful link:

https://machinelearningmastery.com/feature-selection-with-real-and-categorical-data/
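If you want to try all three wrapper methods directly, here is a hedged sketch using scikit-learn (version 0.24 or later, which introduced `SequentialFeatureSelector`); the dataset is synthetic.

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE, SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

# Synthetic data: 8 features, of which only 3 are informative.
X, y = make_regression(n_samples=200, n_features=8, n_informative=3,
                       noise=0.1, random_state=0)
est = LinearRegression()

# Forward selection: start empty, greedily add features.
forward = SequentialFeatureSelector(
    est, n_features_to_select=3, direction="forward").fit(X, y)
# Backward elimination: start full, greedily drop features.
backward = SequentialFeatureSelector(
    est, n_features_to_select=3, direction="backward").fit(X, y)
# Recursive Feature Elimination: drop the weakest feature each round.
rfe = RFE(est, n_features_to_select=3).fit(X, y)

print(forward.get_support())   # boolean mask over the 8 features
print(backward.get_support())
print(rfe.support_)
```

Each selector returns a mask with exactly three features kept; on easy data like this they typically agree on the informative ones.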

Hope this helps

Dark2018