If you know the model was fit by least squares, then we have:
$$\hat{a}_0=\overline{y}-\hat{a}_1\overline{x}\;\;\;\;\;var(\hat{a}_0)=\hat{\sigma}^2\frac{s_x^2+\overline{x}^2}{ns_x^2}$$
$$\hat{a}_1=r_{xy}\frac{s_y}{s_x}\;\;\;\;\;var(\hat{a}_1)=\hat{\sigma}^2\frac{1}{ns_x^2}$$
$$cov(\hat{a}_0,\hat{a}_1)=-\hat{\sigma}^2\frac{\overline{x}}{ns_x^2}$$
$$R^2=r_{xy}^2=1-\frac{(n-2)\hat{\sigma}^2}{ns_y^2}$$
where $\overline{x}=\frac{1}{n}\sum_ix_i$ and $s_x^2=\frac{1}{n}\sum_i(x_i-\overline{x})^2$ (and similarly for $\overline{y}$ and $s^2_y$), and $r_{xy}$ is the sample correlation between $x$ and $y$.
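As a sanity check, here is a short numerical sketch of these formulas (assuming Python with numpy; the simulated data and variable names are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(10.0, 2.0, size=n)
y = 3.0 + 0.5 * x + rng.normal(0.0, 1.0, size=n)

xbar, ybar = x.mean(), y.mean()
sx2 = ((x - xbar) ** 2).mean()          # s_x^2 with the 1/n convention used above
sy2 = ((y - ybar) ** 2).mean()
r_xy = ((x - xbar) * (y - ybar)).mean() / np.sqrt(sx2 * sy2)

a1 = r_xy * np.sqrt(sy2 / sx2)          # slope:     a1_hat = r_xy * s_y / s_x
a0 = ybar - a1 * xbar                   # intercept: a0_hat = ybar - a1_hat * xbar

resid = y - (a0 + a1 * x)
sigma2 = (resid ** 2).sum() / (n - 2)   # unbiased residual variance estimate

var_a0 = sigma2 * (sx2 + xbar ** 2) / (n * sx2)
var_a1 = sigma2 / (n * sx2)
cov_a0a1 = -sigma2 * xbar / (n * sx2)
R2 = 1 - (n - 2) * sigma2 / (n * sy2)   # equals r_xy**2 in simple regression

assert np.isclose(R2, r_xy ** 2)
```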
Now the formula for the prediction error is:
$$mse(\hat{y})=\hat{\sigma}^2(1+\frac{1+z^2}{n})$$
where $z=\frac{x_p-\overline{x}}{s_x}$ and $x_p$ is the predictor value used. Hence you need to know $\hat{\sigma}^2$, $n$, $\overline{x}$, and $s_x$.
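A minimal sketch of that formula as a function (the name `pred_mse` and its arguments are my own, not anything standard):

```python
def pred_mse(x_p, sigma2, n, xbar, sx):
    """mse(y_hat) = sigma2 * (1 + (1 + z^2) / n) with z = (x_p - xbar) / sx."""
    z = (x_p - xbar) / sx
    return sigma2 * (1 + (1 + z ** 2) / n)

# e.g. an approximate 95% prediction interval at x_p:
#   y_hat = a0 + a1 * x_p
#   y_hat +/- 1.96 * pred_mse(x_p, sigma2, n, xbar, sx) ** 0.5
```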
However, you need $s_y^2$ in order to rescale $R^2$ properly. One way to get around this is to note that:
$$\hat{\sigma}^2=\frac{n}{n-2}s_y^2(1-R^2)=\frac{n}{n-2}\frac{\hat{a}_1^2s_x^2}{R^2}(1-R^2)$$
One rough approximation is to use $\hat{y}^2$ in place of $s_y^2$, which gives $\hat{\sigma}^2\approx \frac{n}{n-2}\hat{y}^2(1-R^2)$. We can also safely take $z^2\approx 4$ (i.e. $x_p$ within two standard deviations of $\overline{x}$) provided $x_p$ is "typical" of the units used in the model fitting.
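As a sketch of this rough route (assuming the reported quantities are $\hat{y}$, $R^2$ and $n$; the function names are illustrative, and how well $\hat{y}^2$ stands in for $s_y^2$ depends on the scale of the data):

```python
def sigma2_rough(y_hat, R2, n):
    # y_hat^2 used as a crude stand-in for s_y^2
    return n / (n - 2) * y_hat ** 2 * (1 - R2)

def mse_rough(y_hat, R2, n):
    # takes z^2 = 4, i.e. x_p within two standard deviations of xbar
    return sigma2_rough(y_hat, R2, n) * (1 + (1 + 4) / n)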
An alternative (better?) approach is to estimate $\overline{x}$ and $s_x^2$ from the predictors you have available, say as $\hat{\overline{x}}=\frac{1}{n_p}\sum_{j}x_{pj}$ and $\hat{s}_x^2=\frac{1}{n_p}\sum_j(x_{pj}-\hat{\overline{x}})^2$, where $n_p$ is the number of observations you are producing predictions for and $x_{pj}$ is the predictor for the $j$th observation. Then you use $\hat{z}_j=\frac{x_{pj}-\hat{\overline{x}}}{\hat{s}_x}$ and $\hat{\sigma}^2\approx \frac{n}{n-2}\hat{a}_1^2\hat{s}_x^2\frac{1-R^2}{R^2}$.
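A sketch of this alternative, again assuming numpy, with `x_p` holding the $n_p$ predictor values you want predictions for:

```python
import numpy as np

def mse_from_predictors(x_p, a1, R2, n):
    # estimate xbar and s_x^2 from the prediction inputs themselves
    xbar_hat = x_p.mean()
    sx2_hat = ((x_p - xbar_hat) ** 2).mean()
    # recover sigma2 from the slope and R^2
    sigma2 = n / (n - 2) * a1 ** 2 * sx2_hat * (1 - R2) / R2
    z = (x_p - xbar_hat) / np.sqrt(sx2_hat)
    return sigma2 * (1 + (1 + z ** 2) / n)   # one mse per prediction point
```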
Note that if you add $\overline{x}$ and $s_x^2$ to your available information, then you have everything you need to know about the regression fit.