Does it make any sense to have two (or more) neurons in a neural network with the same weights? (Intuitively it seems to make no sense, since all such neurons would behave identically.)
Please consider both the input and hidden layers. What if the weights are all equal to 0?
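To make the intuition concrete, here is a minimal sketch (all names, dimensions, and the MSE loss are my assumptions, not from the question) of a one-hidden-layer network whose two hidden neurons start with identical weights. One backward pass shows both neurons receive identical gradients, so gradient descent can never make them diverge:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))   # 4 samples, 3 input features (assumed shapes)
y = rng.normal(size=(4, 1))   # regression targets

# Both hidden neurons share the same incoming weights (duplicated column),
# and the same outgoing weight, creating the symmetric situation asked about.
W1 = np.tile(rng.normal(size=(3, 1)), (1, 2))
W2 = np.full((2, 1), 0.5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass: both hidden activations are identical by construction.
h = sigmoid(x @ W1)
pred = h @ W2

# Backward pass for mean squared error.
d_pred = 2 * (pred - y) / len(y)
d_h = d_pred @ W2.T
d_W1 = x.T @ (d_h * h * (1 - h))

# The two hidden neurons receive identical gradients, so after any number
# of updates they remain copies of each other.
print(np.allclose(d_W1[:, 0], d_W1[:, 1]))
```

The same argument explains why all-zero initialization is usually harmful: every hidden neuron then computes the same output and gets the same gradient, so the layer effectively collapses to a single neuron.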