What you have is, on the one hand, a distribution of observed values, and, on the other hand, a one-number summary (ONS) that attempts to condense your knowledge of this distribution into a single number. And what you are looking for is a way to assess whether your ONS is a "good" one.
The first question you need to ask yourself is what a "good" ONS would be. This depends on what you will do with your ONS, i.e., which decisions you will make based on it. For instance, you may want an unbiased expectation prediction. Or you may want an ONS such that half of the actuals fall above it and half below, i.e., the median of the distribution. If you want to plan some kind of flight schedule (i.e., capacity), then it makes sense to build some slack into it, and a "good" ONS would be something like a 90% quantile.
Once you know what kind of ONS you are aiming at, you can choose an appropriate error measure. For instance, the (root) mean squared error between your single value and the observations will be minimized in expectation by an unbiased expectation prediction, so if that is the kind of ONS you want, you should use the RMSE. If you want the median of the distribution as your ONS, you should use the mean absolute error, which is minimized in expectation by the median. If you want a quantile, you should use an asymmetrically weighted linear loss (often called the "pinball" loss), where the weighting parameter depends on which quantile you want. This is exactly the loss used in quantile regression; see the textbook Quantile Regression by Roger Koenker or any of his publications.
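Here is a small numerical sketch of the correspondence above, using a simulated skewed distribution (the distribution and all parameters here are purely illustrative). For each loss, we search a grid of candidate ONSs and check which candidate minimizes the average loss: the squared error picks out (roughly) the mean, the absolute error the median, and the pinball loss with parameter 0.9 the 90% quantile.

```python
import numpy as np

rng = np.random.default_rng(42)
# Illustrative skewed "actuals" (e.g., demand-like data); any distribution works.
actuals = rng.gamma(shape=2.0, scale=3.0, size=50_000)

# Candidate one-number summaries on a grid spanning the observed range.
candidates = np.linspace(actuals.min(), actuals.max(), 2_000)

def rmse(c):
    return np.sqrt(np.mean((actuals - c) ** 2))

def mae(c):
    return np.mean(np.abs(actuals - c))

def pinball(c, q=0.9):
    # Asymmetrically weighted linear ("pinball") loss for the q-quantile:
    # under-predictions are weighted q, over-predictions (1 - q).
    e = actuals - c
    return np.mean(np.maximum(q * e, (q - 1.0) * e))

best_rmse = candidates[np.argmin([rmse(c) for c in candidates])]
best_mae = candidates[np.argmin([mae(c) for c in candidates])]
best_pinball = candidates[np.argmin([pinball(c) for c in candidates])]

print(best_rmse, actuals.mean())                 # squared error -> ~mean
print(best_mae, np.median(actuals))              # absolute error -> ~median
print(best_pinball, np.quantile(actuals, 0.9))   # pinball(0.9) -> ~90% quantile
```

Each printed pair should agree up to the grid resolution and sampling noise, which is the whole point: the error measure you score with implicitly decides which functional of the distribution your "best" ONS is.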
I illustrate some of these points in What are the shortcomings of the Mean Absolute Percentage Error (MAPE)? and in a forthcoming commentary on the M4 forecasting competition, to appear in the International Journal of Forecasting - feel free to contact me for the manuscript if you believe it would be helpful.