Assume we have the posterior distribution $P(w \mid D,\theta)$ of the linear regression model $y = w^T x$, where $D = \{(x_i,y_i)\}_{i \in \{1,\dots,n\}}$, $n$ is the number of data instances, and $\theta$ collects all the hyperparameters. For a test point $\hat{x}$, we compute the posterior predictive distribution as follows: $$P(\hat{y} \mid \hat{x}, D, \theta) = \int P(\hat{y} \mid \hat{x}, w)\, P(w \mid D, \theta)\, dw.$$ Since $w$ takes continuous values, how can we "discretize" the values of $w$?
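To make the question concrete, here is a minimal sketch of the kind of discretization I have in mind, assuming a one-dimensional weight, a Gaussian prior on $w$, and Gaussian observation noise (the variable names and hyperparameter values are illustrative, not from any particular library):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data for y = w_true * x + noise (1-D weight for simplicity)
w_true, noise_sd = 2.0, 0.5
x = rng.normal(size=50)
y = w_true * x + noise_sd * rng.normal(size=50)

# Hyperparameters (part of theta): prior sd of w and observation noise sd
prior_sd, sigma = 1.0, noise_sd

# Discretize w on a finite grid of k candidate values w_1, ..., w_k
k = 201
w_grid = np.linspace(-5.0, 5.0, k)

# Unnormalized log posterior on the grid: log P(D | w) + log P(w)
log_lik = np.array([-0.5 * np.sum((y - w * x) ** 2) / sigma**2 for w in w_grid])
log_prior = -0.5 * w_grid**2 / prior_sd**2
log_post = log_lik + log_prior

# Normalize so the grid probabilities sum to 1: P(w = w_j | D, theta)
post = np.exp(log_post - log_post.max())
post /= post.sum()
```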
Assume we have already done that, so we have several values of $w$, $\{w_1,\dots,w_k\}$, each with a posterior probability $P(w = w_j \mid D,\theta)$, $j \in \{1,\dots,k\}$. The predictions based on these different values $w_j$ should then be weighted by their posterior probabilities. For instance, I'd trust the prediction $y_1$ (i.e., the one based on $w_1$) more than $y_3$ since $P(w = w_1 \mid D,\theta) > P(w = w_3 \mid D,\theta)$. How can such confidence be reflected in the predicted $y$?
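This standalone sketch shows what I imagine: averaging the per-$w_j$ predictions, each weighted by $P(w = w_j \mid D, \theta)$. The numbers and names are illustrative assumptions, not a definitive implementation:

```python
import numpy as np

# Suppose the discretization step produced these (toy numbers, k = 3)
w_values = np.array([1.8, 2.0, 2.3])        # w_1, ..., w_k
post_probs = np.array([0.25, 0.60, 0.15])   # P(w = w_j | D, theta), sums to 1
noise_sd = 0.5                              # observation noise (a hyperparameter)

x_test = 1.5

# Prediction under each w_j: y_j = w_j * x_test (1-D case for simplicity)
y_per_w = w_values * x_test

# Posterior-weighted prediction: the posterior predictive mean on the grid
y_hat = np.sum(post_probs * y_per_w)

# Posterior-weighted spread plus noise variance gives an uncertainty estimate
y_var = np.sum(post_probs * (y_per_w - y_hat) ** 2) + noise_sd**2
print(y_hat, np.sqrt(y_var))
```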