The question appears to ask for a predicted value and for a prediction interval about that value.
The predicted value is obtained from the fitted formula, using the estimated parameters and a specified value of $x$:
$$\hat{y}(x) = \hat{a} + \hat{b} x + \hat{c} x^2 + \hat{d} x^3.$$
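For concreteness, here is a minimal sketch in Python; the coefficient values are made-up illustrations, not quantities from the question:

```python
# Hypothetical least-squares estimates of (a, b, c, d); in practice they
# come from fitting the cubic to the data (e.g., numpy.polyfit(x, y, 3),
# which returns them in highest-degree-first order).
a_hat, b_hat, c_hat, d_hat = 1.0, -0.5, 0.25, 0.1

def y_hat(x):
    """Predicted value of the fitted cubic at x."""
    return a_hat + b_hat * x + c_hat * x**2 + d_hat * x**3

print(y_hat(2.0))  # prediction at a specified x, here x = 2
```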
In general, the uncertainty in $\hat{y}(x)$ comes from three sources:
(1) Uncertainty in the parameter estimates $\hat{a}$, ..., $\hat{d}$ due to the assumed randomness of the original data. An explicit probability model is needed to make sense of this. In most settings the model is additive; that is, for any datum $(x_i,y_i)$ it assumes
$$y_i = a + b x_i + c x_i^2 + d x_i^3 + \varepsilon_i$$
where $\varepsilon_i$ is a random variable with zero expectation. Furthermore, absent any information to the contrary, the $\varepsilon_i$ are usually assumed to be independent of one another and to share a common distribution. These assumptions allow us to estimate that common distribution from the data. (A simulation sketch of this model appears after this list.)
(2) Uncertainty in the actual value that would be observed at a given value of $x$. This uncertainty is directly evident in the preceding model: it is the contribution of $\varepsilon_i$.
(3) Uncertainty in the true value of $x$. In the original question this is explicitly assumed to be inconsequential.
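To make the probability model in (1) concrete, here is a minimal simulation sketch; the parameter values, design points, and Normal error distribution are all illustrative assumptions, not givens of the question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "true" parameters and design points (assumptions for the sketch).
a, b, c, d = 1.0, -0.5, 0.25, 0.1
x = np.linspace(-2.0, 2.0, 30)

# Additive model: y_i = a + b*x_i + c*x_i^2 + d*x_i^3 + eps_i, where the
# eps_i are independent draws from a common zero-mean distribution
# (Normal here, purely for illustration).
eps = rng.normal(loc=0.0, scale=0.3, size=x.size)
y = a + b * x + c * x**2 + d * x**3 + eps
```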
Therefore we would expect the prediction interval formula to depend on three estimates: (i) the predicted value, (ii) the uncertainty in the predicted value due to the parameter uncertainty in (1) above, and (iii) the estimated variance of $\varepsilon$ (from (2) above). The parameter uncertainty plays a relatively small role for values of $x$ close to the average of the data $x_i$, but as $x$ moves away from this average, the parameter uncertainty makes an ever greater contribution.
Formulas can be found by searching this site for "prediction interval." In one thread, @Rob Hyndman gives a general formula that is directly applicable to this question. It assumes the parameters are estimated using least squares. For a design matrix $\mathbf{X}$, the formula takes the form
$$\hat{y} \pm k_\alpha \hat{\sigma} \sqrt{1 + \mathbf{X}^* (\mathbf{X}'\mathbf{X})^{-1} (\mathbf{X}^*)'}.$$
(Here $\mathbf{X}^*$ is the row vector $(1, x, x^2, x^3)$ determined by the value of $x$, and $\mathbf{X}'\mathbf{X}$ encodes the variances and covariances of the $x_i$.)
This contains parts we can explicitly match to (i), (ii), and (iii):
(i) The interval is centered around the predicted value $\hat{y}$.
(ii) The parameter uncertainty appears in the term $\hat{\sigma} \sqrt{\mathbf{X}^* (\mathbf{X}'\mathbf{X})^{-1} (\mathbf{X}^*)'}$. The quantity inside the square root measures a (suitably normalized) squared distance between $x$ and the average of the $x_i$.
(iii) The contribution of $\varepsilon$ appears as the "$1$" (multiplied by $\hat{\sigma}$) added to the parameter uncertainty inside the square root. We recognize this as the usual Pythagorean formula for the standard deviation of a sum of independent random variables: take the square root of the sum of their squared standard deviations.
As usual, the coefficient $k_\alpha$ is chosen to achieve a desired level of confidence. It depends on that level, $1-\alpha$, and on assumptions about how the $\varepsilon_i$ are distributed; when the $\varepsilon_i$ are assumed Normal, for instance, $k_\alpha$ is the $1-\alpha/2$ quantile of a Student $t$ distribution with degrees of freedom equal to the number of data values minus the four estimated parameters.
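Putting (i), (ii), and (iii) together, here is a sketch of the entire computation on simulated data like that above. It assumes Normal errors, so that $k_\alpha$ is a Student $t$ quantile with $n - 4$ degrees of freedom; the loop at the end illustrates the earlier point that the interval widens as $x$ moves away from the average of the $x_i$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data from the illustrative cubic model used above.
a, b, c, d, sigma = 1.0, -0.5, 0.25, 0.1, 0.3
x = np.linspace(-2.0, 2.0, 30)
y = a + b * x + c * x**2 + d * x**3 + rng.normal(0.0, sigma, x.size)

# Design matrix with columns 1, x, x^2, x^3; least-squares fit.
X = np.vander(x, N=4, increasing=True)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Estimate sigma from the residuals, with n - 4 degrees of freedom
# because four parameters were estimated.
n, p = X.shape
resid = y - X @ beta_hat
sigma_hat = np.sqrt(resid @ resid / (n - p))

def prediction_interval(x0, alpha=0.05):
    """Prediction interval at x0 (default 95%), assuming Normal errors."""
    x_star = np.array([1.0, x0, x0**2, x0**3])            # the row vector X*
    leverage = x_star @ np.linalg.solve(X.T @ X, x_star)  # X* (X'X)^{-1} (X*)'
    k_alpha = stats.t.ppf(1.0 - alpha / 2.0, df=n - p)
    half_width = k_alpha * sigma_hat * np.sqrt(1.0 + leverage)
    y0 = x_star @ beta_hat
    return y0 - half_width, y0 + half_width

# The interval widens as x0 moves away from the mean of the x_i.
for x0 in (0.0, 2.0, 4.0):
    lo, hi = prediction_interval(x0)
    print(f"x0 = {x0:4.1f}: [{lo:8.3f}, {hi:8.3f}]")
```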
A comment to the original question prompts me to remark that considerations of correlation among the variables $1$, $x$, $x^2$, and $x^3$ are not relevant. Such correlations do of course exist in general (it is a rare situation in which our variables are mutually orthogonal), but they are already accounted for in the least squares regression machinery. It is wise, though, to check that the correlations are not so great that they introduce numerical instability into the solutions; that is unlikely in this particular problem.
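As a quick sketch of such a check (again with illustrative design points): inspect the correlations among the columns of the design matrix and its condition number. An enormous condition number would warn of trouble, in which case centering $x$ or switching to an orthogonal polynomial basis would help:

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 30)          # illustrative design points
X = np.vander(x, N=4, increasing=True)  # columns 1, x, x^2, x^3

# Correlations among the non-constant columns x, x^2, x^3: large in
# magnitude is normal; only near-perfect correlation is a concern.
print(np.corrcoef(X[:, 1:], rowvar=False))

# The condition number of X; values of, say, 1e8 or more would signal
# potential numerical instability in the least-squares solution.
print(np.linalg.cond(X))
```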
Another comment prompts a follow-up remark to emphasize one point: because the second source of uncertainty (the contribution of $\varepsilon$) is usually material, it is not enough to consider the variances and covariances of the estimates $\hat{a}$, ..., $\hat{d}$; we must also incorporate the contribution of $\varepsilon$ into the estimated uncertainty of $\hat{y}(x)$.