I have implemented a custom loss function that is supposed to extend binary cross entropy by weighting incorrect decisions according to an opportunity cost.
The whole code is listed here (unfortunately I don't see any way to post it on Stack Exchange): https://gist.github.com/dickreuter/dcca63d0699b9b195c88c81530f74c5f
You can see the following: if LTP at t-0 is greater than the starting price, the cumulative sum of 'back' is positive.
I'm trying to build a neural network that maximizes the payoff (the sum of 'back').
For that I have implemented a custom loss function that punishes the network by the lost opportunity (the value of 'back'). For false positives, 'back' will be -1, and the punishment to the network will be scaled according to that value as well.
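For context, here is a minimal NumPy sketch of one way such a payoff-weighted binary cross entropy could look. The function name, signature, and the exact weighting scheme (scaling each sample's loss by |back|) are my assumptions for illustration, not the code from the gist:

```python
import numpy as np

def payoff_weighted_bce(y_true, y_pred, back, eps=1e-7):
    """Hypothetical sketch: binary cross entropy where each sample's
    loss is weighted by the magnitude of its payoff/opportunity cost.

    y_true: 0/1 labels
    y_pred: predicted probabilities
    back:   payoff column (e.g. -1 for the cost of a false positive)
    """
    # Clip probabilities to avoid log(0)
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    bce = -(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
    # Scale each sample's loss by the size of the missed payoff,
    # so costly mistakes are punished harder than cheap ones.
    return np.mean(np.abs(back) * bce)
```

A Keras version would wrap the same arithmetic in backend ops and pass 'back' in alongside the labels (e.g. via a stacked y_true), but the weighting idea is the same.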
When training the neural network I would expect it to easily discover that it simply needs to look at the column 'starting_price_bigger_than_ltp'. But for some reason this doesn't appear to happen.
Any suggestions on what I could do better are highly appreciated.