The statistical part of the prediction exercise ends when you output a predictive distribution $\mathcal{D}$. Incidentally, this is a perfect end point, much better than a point prediction.
What follows after you hand over your predictive distribution is the decision that someone will make based on it. However, more than just your distribution enters into that decision: the costs of "wrong" decisions, the costs of "correct" decisions, just how "correct" or "wrong" a given decision turns out to be, and so forth. These can typically be encoded in a loss function, and the task of the decision maker is to minimize the expected loss, given your $\mathcal{D}$ and the cost structure.
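As a minimal sketch of that decision step, suppose $\mathcal{D}$ is represented by Monte Carlo samples, the decision is a single number, and (purely for illustration) under-predicting is three times as costly per unit as over-predicting. The forecast distribution and the cost ratio below are made up; only the mechanics matter:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assume the predictive distribution D is available as Monte Carlo samples
# (here: an illustrative lognormal forecast, e.g. for demand).
samples = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)

def expected_loss(decision, samples, under_cost=3.0, over_cost=1.0):
    """Expected loss of a one-number decision under an asymmetric cost
    structure: under-predicting costs `under_cost` per unit short,
    over-predicting costs `over_cost` per unit excess (illustrative costs)."""
    shortfall = np.maximum(samples - decision, 0.0)  # actual exceeds decision
    excess = np.maximum(decision - samples, 0.0)     # decision exceeds actual
    return np.mean(under_cost * shortfall + over_cost * excess)

# Grid-search the decision that minimizes expected loss under D and these costs.
grid = np.linspace(samples.min(), samples.max(), 2_000)
losses = [expected_loss(d, samples) for d in grid]
best = grid[int(np.argmin(losses))]

print(f"mean of D:        {samples.mean():.2f}")
print(f"optimal decision: {best:.2f}")  # above the mean: under-prediction is costlier
```

Because under-prediction is penalized more heavily here, the loss-minimizing decision lands above the mean of $\mathcal{D}$; a different cost structure would move it elsewhere, which is exactly why the decision step cannot be reduced to a single canonical point prediction.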
Sometimes the "decision" is just a one-number summary of $\mathcal{D}$, and the loss can be assumed to be proportional to the squared difference between the decision and the actual outcome. In that case, the optimal decision is the one that minimizes the expected squared error, and that minimizer is the expectation (mean) of $\mathcal{D}$.
Or the loss may be proportional to the absolute difference between this one-number summary and the actual outcome. Then the optimal decision is the median of $\mathcal{D}$, which minimizes the expected absolute error.
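Both facts are easy to verify numerically. The sketch below assumes, purely for illustration, that $\mathcal{D}$ is a skewed gamma distribution represented by samples; numerically minimizing the expected squared loss recovers the mean of $\mathcal{D}$, and minimizing the expected absolute loss recovers its median:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
# Illustrative samples from a skewed predictive distribution D.
samples = rng.gamma(shape=2.0, scale=5.0, size=100_000)

# Decision minimizing expected squared error -> matches the mean of D.
sq = minimize_scalar(lambda d: np.mean((d - samples) ** 2),
                     bounds=(0.0, samples.max()), method="bounded")
# Decision minimizing expected absolute error -> matches the median of D.
ab = minimize_scalar(lambda d: np.mean(np.abs(d - samples)),
                     bounds=(0.0, samples.max()), method="bounded")

print(f"argmin squared loss:  {sq.x:.3f}   mean of D:   {samples.mean():.3f}")
print(f"argmin absolute loss: {ab.x:.3f}   median of D: {np.median(samples):.3f}")
```

Note that for a skewed $\mathcal{D}$ the two answers differ, so even these two "obvious" loss functions already lead to different optimal point predictions.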
Bottom line: a point prediction makes no sense without considering the cost or loss function it aims to minimize. A predictive density, in contrast, can be reported and evaluated (using scoring rules) even without such a cost structure.
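For example, the Continuous Ranked Probability Score (CRPS) is a proper scoring rule that evaluates a full predictive distribution against an observed outcome without any cost structure. Below is a rough sample-based sketch; in practice one would reach for a package such as `properscoring`, and the two forecasts compared here are made up for illustration:

```python
import numpy as np

def crps_from_samples(samples, observation):
    """Sample-based estimate of the Continuous Ranked Probability Score,
    CRPS = E|X - y| - 0.5 * E|X - X'|, where X, X' ~ forecast distribution
    and y is the observed outcome. Lower is better."""
    samples = np.asarray(samples)
    term1 = np.mean(np.abs(samples - observation))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

rng = np.random.default_rng(1)
observation = 12.0
sharp = rng.normal(loc=12.0, scale=1.0, size=2_000)  # well-centred, sharp forecast
vague = rng.normal(loc=12.0, scale=5.0, size=2_000)  # same centre, much wider
print(f"CRPS, sharp forecast: {crps_from_samples(sharp, observation):.3f}")
print(f"CRPS, vague forecast: {crps_from_samples(vague, observation):.3f}")
```

The sharper forecast receives the lower (better) score, even though no costs of any downstream decision were specified.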
I have written before on similar topics, usually shamelessly stealing from Frank Harrell and his blog, e.g.: Why use a certain measure of forecast error (e.g. MAD) as opposed to another (e.g. MSE)?