
I was wondering how to interpret and tweak a model with 10% accuracy in binary classification. With common feature engineering, I get around 65% accuracy. When I apply a transformation to the features (FFT), it drops to around 10%. This means that if I flipped my predictions, I would get 90% accuracy. As a stock-exchange example: when the model says buy, I sell, and vice versa. A very poor binary classifier becomes very good when I reverse it. My question is: what can I do to actually make the model look reasonable? Or, what is the correct next step in this situation, toward improving the model? It just doesn't feel right to flip the predictions.
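(For context, the "flip" described above is just label inversion: in binary classification, inverting every predicted label turns accuracy $a$ into $1 - a$. A minimal sketch with made-up labels, not the asker's actual data:)

```python
# Hypothetical ground truth and (mostly wrong) predictions for illustration.
y_true = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]  # only the last prediction is correct

def accuracy(y_true, y_pred):
    """Fraction of positions where prediction matches the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

acc = accuracy(y_true, y_pred)            # 0.1 on this toy data

# Flipping every binary prediction: every former error becomes a hit
# and vice versa, so accuracy becomes 1 - acc.
flipped = [1 - p for p in y_pred]
acc_flipped = accuracy(y_true, flipped)   # 0.9 on this toy data
```

(The interesting question is why the classifier ended up systematically anti-correlated with the labels in the first place, rather than whether the flip itself works.)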

  • If your accuracy in a binary task is 10%, something has gone badly wrong in your model. I don't think we can diagnose the problem without knowing *a lot* more about your data, your model etc. Also: [Why is accuracy not the best measure for assessing classification models?](https://stats.stackexchange.com/q/312780/1352) – Stephan Kolassa Nov 12 '21 at 17:05
  • Thanks @StephanKolassa. I actually measure all the scores, and all of them are bad (i.e. good when flipped). I agree that something has gone wrong, but not very badly wrong; otherwise the accuracy would be around 50%, right? I made sure it is not random by changing random_state many times. – Nabat Farsi Nov 12 '21 at 17:14

0 Answers