Is the following hypothesis true?
If a simple neural network cannot overfit a single training sample, there is something wrong with its architecture or its implementation.
To give some background on why I am asking: I am working on a network consisting of a single convolution layer that aims to segment the input image (classifying every pixel as either class 0 or class 1). The network fails to overfit a single training sample, so I suspect something is wrong with what I have done.
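To make the unit test concrete, here is a minimal sketch of the kind of check I have in mind, in PyTorch. All names are mine, and I deliberately chose a target that a single convolution layer can represent exactly (class 1 wherever the input pixel is positive), so that the overfitting test is at least achievable in principle:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# One random "image" and a per-pixel binary target that a single
# conv layer can represent exactly: class 1 where the pixel is > 0.
x = torch.randn(1, 1, 16, 16)
y = (x > 0).float()

# Hypothetical minimal model: a single conv layer producing one
# logit per pixel, trained with pixel-wise binary cross-entropy.
model = nn.Conv2d(1, 1, kernel_size=1)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

losses = []
for step in range(200):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())

# The unit test: on this one sample, the loss should collapse and
# the thresholded logits should reproduce the target almost exactly.
acc = ((model(x) > 0).float() == y).float().mean().item()
print(losses[0], losses[-1], acc)
```

Note that if the target were arbitrary random labels instead, a single small conv layer might simply lack the capacity to fit it, which is part of why I am unsure whether a failed overfitting test really implies a bug.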
Edit: This is not a duplicate of What should I do when my neural network doesn't learn?. That post (which is very informative) suggests, among other things, unit testing the network to check that it is bug-free. I am essentially asking how to unit test my network. The hypothesis I stated is the premise on which my unit test rests; if the hypothesis is wrong, the unit test makes no sense, hence the question.