Consider a multi-layer neural network that learns its weights with backpropagation (and gradient descent). Since the loss surface is non-convex, there is a chance that training gets trapped in a local minimum.
Will adding more neural units solve the problem?
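For concreteness, here is a minimal sketch of the problem (the toy loss function, learning rate, and starting points are illustrative assumptions, not taken from any particular network): plain gradient descent on a one-dimensional non-convex function settles into whichever minimum its initialization points toward, and can miss the global one entirely.

```python
import numpy as np

# Toy non-convex loss with two minima: a global one near x ~ -1.47
# and a shallower local one near x ~ 1.35.
def loss(x):
    return x**4 - 4 * x**2 + x

def grad(x):
    return 4 * x**3 - 8 * x + 1

def gradient_descent(x0, lr=0.01, steps=500):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # plain gradient step, no momentum or restarts
    return x

# Starting on the right-hand slope, gradient descent gets trapped in
# the local minimum; starting on the left, it reaches the global one.
x_right = gradient_descent(x0=2.0)
x_left = gradient_descent(x0=-2.0)
print(f"start  2.0 -> x = {x_right:.3f}, loss = {loss(x_right):.3f}")  # local minimum
print(f"start -2.0 -> x = {x_left:.3f}, loss = {loss(x_left):.3f}")    # global minimum
```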
In practice, yes: adding more neurons makes it much less likely that training gets stuck in a bad local minimum, though it is not a guarantee. Adding more neurons increases the volume of the solution space exponentially, but at the same time it increases the number of equivalent solutions factorially: permuting the n hidden units of a layer (together with their incoming and outgoing weights) yields n! distinct weight configurations that compute exactly the same function, so every good minimum is replicated factorially many times across the loss surface.
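To see that factorial symmetry concretely, here is a minimal sketch (the layer sizes, the tanh activation, and the `forward` helper are illustrative assumptions): permuting the hidden units of a one-hidden-layer network, together with the matching rows of the first weight matrix and columns of the second, produces a different point in weight space that computes exactly the same function.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny one-hidden-layer network: y = W2 @ tanh(W1 @ x + b1) + b2
n_in, n_hidden, n_out = 3, 5, 2
W1 = rng.normal(size=(n_hidden, n_in))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(n_out, n_hidden))
b2 = rng.normal(size=n_out)

def forward(W1, b1, W2, b2, x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

# Permute the hidden units: reorder the rows of W1 and b1 and,
# consistently, the columns of W2. Any of the 5! = 120 permutations
# gives a different weight vector computing the identical function.
perm = rng.permutation(n_hidden)
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=n_in)
print(forward(W1, b1, W2, b2, x))
print(forward(W1p, b1p, W2p, b2, x))  # identical output
```

With only 5 hidden units there are already 120 such copies of every solution, so the landscape's symmetry grows much faster than the layer itself: more neurons change the geometry of the search in your favor rather than removing local minima outright.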