The output of logistic regression is probabilities, or more precisely, point estimates of probabilities. Since we are dealing with random variables, these estimates are uncertain, and there are several metrics that quantify this uncertainty. Standard errors tell us how uncertain we are about our estimates, while metrics such as accuracy, preferably measured on a separate test set, tell us how uncertain we are about future predictions made with the model. It is the same as measuring temperature with a thermometer: it says that the temperature is X, but the measurement error is $\pm$Y. Non-statisticians usually want point estimates and don't want to hear about the uncertainty, while statisticians prefer to report all of this to make their reports more precise.
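A minimal sketch of the two kinds of uncertainty, using synthetic data and scikit-learn (both the data-generating model and the bootstrap approach are my illustrative choices, not something prescribed by logistic regression itself): the fitted model gives a point estimate of a probability, and a bootstrap gives a standard error for that estimate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Synthetic data: true P(y=1 | x) = 1 / (1 + exp(-(0.5 + x1 - x2)))
X = rng.normal(size=(n, 2))
p_true = 1 / (1 + np.exp(-(0.5 + X[:, 0] - X[:, 1])))
y = rng.binomial(1, p_true)

x_new = np.array([[0.3, -0.2]])  # a hypothetical new observation

model = LogisticRegression().fit(X, y)
point_estimate = model.predict_proba(x_new)[0, 1]  # the point estimate

# Bootstrap: refit on resampled data to see how much the estimate varies.
boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression().fit(X[idx], y[idx])
    boot.append(m.predict_proba(x_new)[0, 1])
se = np.std(boot, ddof=1)  # standard error of the predicted probability

print(f"estimated probability: {point_estimate:.3f} +/- {se:.3f}")
```

This is exactly the thermometer analogy: the model reports "the probability is X", and the bootstrap adds "give or take Y".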
Notice that a number of factors contribute to the accuracy of your predictions. Some are related to your data, some to the fact that you will be making predictions on data different from that used to build the model, etc. Moreover, when you use logistic regression for classification, you need to choose a decision rule (e.g., if the predicted probability is greater than 0.5, predict success, otherwise predict failure; this choice may or may not be correct), which transforms the predictions and makes them less precise. Finally, the metrics you quote say things like "the overall accuracy of the model is X%", while what a logistic regression model predicts is the conditional probability of observing your outcome given the predictors, so this is a very different kind of information!
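To see how the decision rule transforms the predictions, here is a toy example (the probabilities and labels are made up for illustration): the same predicted probabilities yield different accuracies depending on the chosen threshold, and the hard 0/1 predictions discard the graded information the model actually produced.

```python
import numpy as np

# Hypothetical true labels and predicted probabilities from some fitted model.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
p_hat = np.array([0.9, 0.4, 0.6, 0.55, 0.35, 0.2, 0.45, 0.7])

for threshold in (0.5, 0.6):
    # The decision rule collapses each probability to a single 0/1 label.
    y_pred = (p_hat > threshold).astype(int)
    acc = (y_pred == y_true).mean()
    print(f"threshold {threshold}: accuracy {acc:.3f}")
```

Note that a case predicted at 0.55 and a case predicted at 0.9 are treated identically by the 0.5 rule, even though the model is far less sure about the former; accuracy summarizes the classifications, not the probabilities.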
What you can say is that your best estimate of the probability that Y will happen given X1, X2, X3, ... is Z% (the predicted probability), and that when this information is used for making future predictions, tests have shown that in V% of cases (an error measure of your choice) it leads to correct classifications.