I built an artificial neural network to predict a binary dependent variable called "Suspicious", so there are only two possible outcomes. In my data, 297,771 rows are labeled "0" (not suspicious / known good) and only 1,100 rows are labeled "1" (suspicious / bad). On the test set, the confusion matrix looks like this:
cm
array([[59552,     0],
       [  148,    75]])
This gives me a test accuracy of 99.75240%, which seems way too high. Is there a rule of thumb for what fraction of the data should be bad ("1"s) before I run it through the model, like 1/3 or 1/2?
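For reference, here is how I'm computing that accuracy from the matrix (a minimal NumPy sketch; I'm assuming the usual scikit-learn layout where rows are actual classes and columns are predicted classes):

import numpy as np

# Test-set confusion matrix: rows = actual class, columns = predicted class.
cm = np.array([[59552,   0],
               [  148,  75]])

accuracy = np.trace(cm) / cm.sum()          # (59552 + 75) / 59775 ≈ 0.99752
recall_suspicious = cm[1, 1] / cm[1].sum()  # 75 / (148 + 75) ≈ 0.336

print(f"accuracy: {accuracy:.5%}")
print(f"recall on the suspicious class: {recall_suspicious:.1%}")

The recall line is what makes me doubt the accuracy number: the model only catches about a third of the actual "1"s, even though overall accuracy is above 99%.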