I have read in many places, such as Stanford's CS231n course notes on convolutional neural networks (and also here, here, and here), that a pooling layer does not have any trainable parameters!
And yet today I was informed by someone that in some paper (here it is) they say, and I quote:
S1 layer for sub sampling contains six feature maps; each feature map contains 14 x 14 = 196 neurons. The sub sampling window is a 2 x 2 matrix and the sub sampling step size is 1, so the S1 layer contains 6 x 196 x (2 x 2 + 1) = 5880 connections. Every feature map in the S1 layer contains a weight and a bias, so a total of 12 parameters can be trained in the S1 layer.
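To make sure I'm reading that correctly, here is my own back-of-the-envelope check of the numbers in the quote. This is just my arithmetic for a LeNet-style subsampling layer, not code from the paper, and the variable names are mine:

```python
# My attempt to reproduce the counts quoted from the paper for the S1 layer.
# (Not from the paper's code; just rechecking the arithmetic.)

num_feature_maps = 6                      # "contains six feature maps"
map_height = map_width = 14               # each map is 14 x 14
neurons_per_map = map_height * map_width  # 14 * 14 = 196

window = 2 * 2                            # 2 x 2 subsampling window
# Each output neuron connects to its 2x2 window plus one bias connection.
connections = num_feature_maps * neurons_per_map * (window + 1)
print(connections)                        # 6 * 196 * 5 = 5880, matching the quote

# Each feature map has a single trainable coefficient (weight) and a bias,
# shared across all neurons in that map.
trainable_params = num_feature_maps * 2
print(trainable_params)                   # 6 * 2 = 12, matching the quote
```

So if I follow the quote, the 12 parameters come from one shared weight and one bias per feature map, which is exactly what the course notes seem to say a pooling layer does not have.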
What is going on here? Can anyone please enlighten me on this apparent contradiction?