
Why do neural networks outperform SVMs if SVMs have lower generalization error according to Vapnik?

Is generalization error only a useful criterion in data-scarce environments?

Is it because neural networks are unfairly given an advantage by GPUs?

  • 1
    The main reason is convolution layers, which take local image structure into account. – Michael M Aug 23 '20 at 11:33
  • 2
    I think the answer turns on the specific meaning of "less generalization error according to Vapnik." I assume there is some important qualifying information about the conditions under which the claim is true. Perhaps you could outline this, and provide a citation? – Sycorax Aug 23 '20 at 14:47

1 Answer

0

One of the most difficult problems in image recognition is feature extraction. When the image is large, you can't treat every pixel as a feature. It is very difficult for an SVM to process the image directly without feature extraction, because the data dimension is too high (too many pixels). A neural network, or more precisely a convolutional neural network, does feature extraction in its convolution layers: the original image is mapped to a low-dimensional vector, so the dimensionality of the image data is reduced, and classification then becomes easier for the fully connected layers that follow the convolution layers.
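The dimension reduction described above can be sketched in plain NumPy. This is a minimal illustration, not a trained network: the 3×3 filter is random where a real CNN would learn it, and the layer sizes (a 32×32 input, two conv+ReLU+max-pool stages) are assumptions chosen only to show how many raw pixel features collapse into a short vector before classification.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img, size=2):
    """Non-overlapping max pooling; trailing rows/cols that don't fit are dropped."""
    h2, w2 = img.shape[0] // size, img.shape[1] // size
    return img[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))     # a 32x32 "image": 1024 raw pixel features
kernel = rng.standard_normal((3, 3))      # one 3x3 filter (random here, learned in a real CNN)

x = np.maximum(conv2d(image, kernel), 0)  # conv + ReLU -> 30x30
x = max_pool(x)                           # pool       -> 15x15
x = np.maximum(conv2d(x, kernel), 0)      # conv + ReLU -> 13x13
x = max_pool(x)                           # pool       -> 6x6
features = x.ravel()                      # 36-dim vector fed to the classifier

print(image.size, "->", features.size)    # 1024 -> 36
```

The 1024 raw pixels an SVM would otherwise have to handle directly are reduced to a 36-dimensional feature vector, which the fully connected layers (or any classifier) can separate far more easily.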

Gid
  • GPUs only accelerate the training of neural networks; they have no effect on the accuracy of the model – Gid Aug 24 '20 at 07:11
  • I heard SVMs use HOG features. But personally, representing the image as a vector field seems like a mathematically unjustified idea, and intuitively it won't give us any information about the image. –  Aug 24 '20 at 12:03
  • What about PCA, manifold learning, or Isomap for dimensionality reduction? –  Aug 24 '20 at 12:05
  • @Germania The beauty of deep learning is that it learns the features rather than having the engineer determine them. – Dave Dec 27 '20 at 00:22