This is entirely domain specific; there is no single "best threshold" that applies to all possible scenarios. You can usually move a classification threshold to gain sensitivity at the cost of specificity and vice versa, so understanding that tradeoff in your particular classifier is a good place to start. Where you ultimately want to draw the threshold depends very much on your use case and the relative "cost" of false positives vs. false negatives. It sounds like the cost of false negatives is high (not detecting real fraud is a big problem for customers) but that false positives are not so costly (reviewing a no-fraud case costs the company some review time but does not impact customers).
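To make the tradeoff concrete, here is a minimal sketch on synthetic data (not real fraud data): it assumes the classifier emits a fraud probability per case, with fraud cases tending to score higher, and shows how sensitivity falls and specificity rises as the threshold moves up. The score distributions and the `confusion_rates` helper are illustrative assumptions, not part of any particular library.

```python
import numpy as np

# Synthetic illustration (assumed data, not a real model's output):
# 100 fraud cases and 900 legitimate ones, with fraud scoring higher on average.
rng = np.random.default_rng(0)
y_true = np.concatenate([np.ones(100), np.zeros(900)])
scores = np.concatenate([rng.beta(5, 2, 100), rng.beta(2, 5, 900)])

def confusion_rates(threshold):
    """Sensitivity and specificity when flagging every score >= threshold."""
    pred = scores >= threshold
    tp = np.sum(pred & (y_true == 1))   # fraud correctly flagged
    fn = np.sum(~pred & (y_true == 1))  # fraud missed
    tn = np.sum(~pred & (y_true == 0))  # legitimate left alone
    fp = np.sum(pred & (y_true == 0))   # legitimate flagged for review
    return tp / (tp + fn), tn / (tn + fp)

for t in (0.3, 0.5, 0.7):
    sens, spec = confusion_rates(t)
    print(f"threshold={t:.1f}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```

Plotting these pairs across all thresholds gives the classifier's ROC curve, which is the standard way to visualize this tradeoff.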
You will need to quantify the relative cost of these different misclassifications and combine it with the sensitivity/specificity characteristics of your classifier to identify the threshold with the highest overall utility with respect to misclassification cost. For example, if a false negative is twice as costly as a false positive, a fairly high threshold may minimize total cost, but if it's 1000 times as costly, you'd be better served by lowering the threshold and sending far more cases for fraud review. In the limit where false negatives are catastrophic, you can't afford to miss any real fraud cases and are forced to lower the threshold all the way, sending everything to fraud review.
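A sketch of that cost-weighted threshold search, under the same assumptions as before (synthetic scores, a hypothetical `best_threshold` helper, and made-up cost ratios): it sweeps candidate thresholds, totals the misclassification cost at each, and returns the cheapest one. Raising the false-negative cost should push the chosen threshold down, as described above.

```python
import numpy as np

# Synthetic illustration (assumed data): 100 fraud cases, 900 legitimate ones.
rng = np.random.default_rng(1)
y_true = np.concatenate([np.ones(100), np.zeros(900)])
scores = np.concatenate([rng.beta(5, 2, 100), rng.beta(2, 5, 900)])

def best_threshold(cost_fn, cost_fp):
    """Return the threshold minimizing total misclassification cost,
    where cost_fn is the cost of a missed fraud and cost_fp the cost
    of an unnecessary fraud review."""
    thresholds = np.linspace(0.01, 0.99, 99)
    costs = []
    for t in thresholds:
        pred = scores >= t
        fn = np.sum(~pred & (y_true == 1))  # missed fraud
        fp = np.sum(pred & (y_true == 0))   # needless reviews
        costs.append(cost_fn * fn + cost_fp * fp)
    return thresholds[int(np.argmin(costs))]

print("FN twice as costly:      ", best_threshold(cost_fn=2, cost_fp=1))
print("FN 1000 times as costly: ", best_threshold(cost_fn=1000, cost_fp=1))
```

With a 1000:1 cost ratio, a single missed fraud outweighs reviewing every legitimate case in this sample, so the search drives the threshold down until essentially no fraud slips through, matching the limiting case described above.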