If you don't know which cases in your test set are actually positive and which are actually negative, then you cannot say whether a particular classification is a true or false positive (or negative).
Therefore, you cannot estimate the False Positive Rate (FPR) on new data, nor the expected number of false positives, false_positives = N * FPR (where N is the number of actual negatives), because you don't know the FPR.
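To make the formula concrete (with purely illustrative numbers): if a held-out test sample gave you an estimated FPR of 0.05 and your new data contained N = 2000 actual negatives, you would expect about 0.05 * 2000 = 100 false positives.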
If you truly need this, then you can go back one step: partition your labeled training data (where you do know the true positives and negatives, right?) into a training and a test sample, fit your model on the former, and assess the FPR on the latter.
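Here is a minimal sketch of that in Python, assuming scikit-learn; the synthetic data, the LogisticRegression model, and all variable names are illustrative, not part of your setup:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Illustrative labeled data; substitute your own training data with known labels.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# Hold out part of the labeled data as a test sample.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Fit on the training part only.
model = LogisticRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)

# Estimate the FPR on the held-out test sample: FPR = FP / (FP + TN).
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"Estimated FPR: {fp / (fp + tn):.3f}")
```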
I recommend you take a look here, and at the linked blog posts by Frank Harrell, for more on why choosing a threshold for hard zero-one classification is a bad idea.

In addition, if you have a balanced training sample but an unbalanced test sample, then your training sample differs systematically from the true population you want to apply your model to, which will bias your model. It is better to use a representative training sample together with probabilistic predictions.
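To illustrate that last point, continuing the sketch above (same assumed model, X_test, and y_test), predict_proba gives probabilistic predictions, and the FPR you would observe depends entirely on which threshold you impose afterwards:

```python
import numpy as np

# Probabilistic predictions: P(positive class) for each test case.
proba = model.predict_proba(X_test)[:, 1]

# Any hard classification requires a threshold, and the FPR changes with it:
for threshold in (0.3, 0.5, 0.7):
    flagged = proba >= threshold
    fp = np.sum(flagged & (y_test == 0))
    tn = np.sum(~flagged & (y_test == 0))
    print(f"threshold={threshold}: FPR = {fp / (fp + tn):.3f}")
```

Reporting the probabilities themselves sidesteps the threshold choice entirely, leaving it to the decision-maker who knows the costs of each kind of error.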