
I'm doing some research into neural networks, and it seems like every single one I've come across implements a back-propagation algorithm. Is this because backpropagation is particularly easy to implement, or are there other reasons?

Also, are there any papers comparing different neural network implementations?

Thank you.

Franck Dernoncourt

1 Answer


Backpropagation, essentially, is just an algorithm that computes the gradient of a neural network's loss with respect to its parameters by applying the chain rule. Please refer to this question for details.
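
To make that concrete, here is a minimal sketch (all names and shapes here are illustrative, not from the question) of backpropagation for a one-hidden-layer network with a sigmoid activation and squared-error loss. The forward pass computes the loss; the backward pass applies the chain rule, layer by layer, to obtain the gradient with respect to each weight matrix:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_backward(x, y, W1, W2):
    # Forward pass: x -> hidden h -> prediction y_hat -> scalar loss
    a1 = W1 @ x                      # pre-activation of the hidden layer
    h = sigmoid(a1)                  # hidden activation
    y_hat = W2 @ h                   # linear output layer
    loss = 0.5 * np.sum((y_hat - y) ** 2)

    # Backward pass: the chain rule, from the loss back to the parameters
    d_y_hat = y_hat - y              # dL/dy_hat
    dW2 = np.outer(d_y_hat, h)       # dL/dW2
    d_h = W2.T @ d_y_hat             # dL/dh
    d_a1 = d_h * h * (1.0 - h)       # dL/da1, using sigmoid'(a1) = h * (1 - h)
    dW1 = np.outer(d_a1, x)          # dL/dW1

    return loss, dW1, dW2
```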

If your loss (error) function is differentiable with respect to your model's parameters, there's no reason not to use the gradient: at each point in parameter space it tells you which direction to move in order to decrease the loss.
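
As a hedged illustration of that point, plain gradient descent on the sketch above (the toy data, sizes, and learning rate are all assumptions for the example): each step moves the parameters a small distance along the negative gradient, i.e. in the direction that locally decreases the loss.

```python
rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=2)          # toy input and target
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
lr = 0.1                                               # learning rate (assumed)

for step in range(100):
    loss, dW1, dW2 = forward_backward(x, y, W1, W2)
    W1 -= lr * dW1                                     # descend along -gradient
    W2 -= lr * dW2
```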

Artem Sobolev