
I am trying to use a simple neural network to predict numerical values of certain properties of sensor data (essentially a regression problem). My network has a single output with a Tanh activation, so the output is bounded to [-1, 1]. I use MSE loss and the Adam optimizer. Training went well and the model converged. The problem I observe at test time is that whenever the ground truth for a test sample is close to the edges (i.e., near -1 or 1), the prediction gets significantly worse than in other ranges. I tried a trick where I "expand" the edge regions by multiplying the network output by a number slightly larger than 1 (e.g. 1.02), but it didn't solve the problem.
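To make the effect concrete, here is a minimal NumPy sketch (illustrative only, not my actual training code; the target values are just examples) of how the tanh gradient behaves near the edges: hitting a target close to +/-1 requires a very large pre-activation z, where the derivative tanh'(z) = 1 - tanh(z)^2 is nearly zero, so the MSE gradients there become tiny:

    import numpy as np

    # For a tanh output head, the pre-activation z needed to emit a target t
    # exactly is z = arctanh(t), and the local gradient is tanh'(z) = 1 - t^2.
    # Near t = +/-1 the required z blows up and the gradient vanishes.
    for target in [0.0, 0.9, 0.99, 0.999]:
        z = np.arctanh(target)        # pre-activation needed to hit the target
        grad = 1.0 - np.tanh(z) ** 2  # derivative of tanh at that point
        print(f"target={target:6.3f}  required z={z:7.3f}  tanh'(z)={grad:.6f}")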

I am curious what is happening here theoretically, and what would be a way to address this problem. Any insight would be very helpful.

  • Are the sensor values really bounded to [-1, 1] or have you transformed them to be so? Do you really need the output to be bounded to (-1, 1)? – Igor F. Sep 24 '21 at 06:39
  • Thanks for the comment. I did indeed transform the output to be bounded to [-1, 1]. However, even without the transformation, the output would still be bounded. – StapleStable Sep 24 '21 at 16:59

0 Answers