
When learning about neural networks, one often hears that Hinton proposed backpropagation in 1986.

After this big leap forward, we could train neural networks efficiently.

But I have a question: How did we train a neural network without backpropagation before 1986?

Oren Milman
  • I'm not saying you should always check Wikipedia, but [you should check Wikipedia](https://en.wikipedia.org/wiki/Artificial_neural_network#History) before you ask questions so you can gather more info. – Thomas Wagenaar Jul 01 '17 at 19:16
  • [This](https://stats.stackexchange.com/questions/235862/is-it-possible-to-train-a-neural-network-without-backpropagation) may be of interest. – GeoMatt22 Jul 02 '17 at 02:09

1 Answer


I'm not sure how widely people used neural networks back then, given the limits on computational resources. There are, however, other methods with which one can train neural nets!

  • Conjugate gradient, first proposed in 1952
  • Newton's method (I did not find a reference for when the formula we use today in NNs was published)
  • Quasi-Newton methods (again, I found no reference for the original publication date)
  • Levenberg-Marquardt algorithm, first published in 1944

For more information on each method, refer to the following link: https://www.neuraldesigner.com/blog/5_algorithms_to_train_a_neural_network
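
As a rough illustration (not from the original answer), here is a minimal sketch of training a tiny network without backpropagation: SciPy's conjugate-gradient optimizer is used, and since no analytic gradient (`jac`) is supplied, the gradients are estimated by finite differences. The network size, the XOR data, and the squared-error loss are arbitrary choices for the example.

```python
import numpy as np
from scipy.optimize import minimize

# XOR data: a tiny toy problem (arbitrary choice for this sketch)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

n_hidden = 4
shapes = [(2, n_hidden), (n_hidden,), (n_hidden, 1), (1,)]  # W1, b1, W2, b2
sizes = [int(np.prod(s)) for s in shapes]

def unpack(theta):
    """Split the flat parameter vector into W1, b1, W2, b2."""
    parts, i = [], 0
    for shape, n in zip(shapes, sizes):
        parts.append(theta[i:i + n].reshape(shape))
        i += n
    return parts

def predict(theta):
    """Forward pass; returns the network's outputs on X."""
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    return (1.0 / (1.0 + np.exp(-(h @ W2 + b2)))).ravel()  # sigmoid output

def loss(theta):
    """Mean squared error on the XOR data."""
    return np.mean((predict(theta) - y) ** 2)

rng = np.random.default_rng(0)
theta0 = rng.normal(scale=0.5, size=sum(sizes))

# Conjugate gradient; with jac=None SciPy estimates the gradient numerically,
# so no backpropagation is involved anywhere.
res = minimize(loss, theta0, method='CG')
print("final loss:", res.fun)
print("predictions:", predict(res.x).round(2))
```

This also illustrates the point in the comment below: the optimizer still needs gradients; backprop is just the efficient way of computing them, and finite differences is the (much slower) alternative.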

Alex P
  • Note that backprop is not an optimizer, but an algorithm to compute gradients. All the approaches you mention require gradients. – GeoMatt22 Jul 02 '17 at 02:12