Your intuition is correct. A linear regression model provides a variance-covariance matrix for the coefficients that can be used with the formula for sums of correlated variables to estimate errors in predictions made from the model.*
This answer describes the general multiple-regression result and then works through its application to your simple intercept/slope, single-predictor situation. In general, the variance-covariance matrix among linear regression coefficients depends on $\hat\sigma^2$, the estimated residual variance unexplained by the regression, and on the design matrix representing the predictor values. The variance-covariance matrix for the estimates of the intercept and the slope in your single-predictor situation is:
$$\frac{\hat\sigma^2 }{n\sum x_i^2 - (\sum x_i)^2}
\left(
\begin{array}{cc}
\sum x_i^2 & -\sum x_i \\
-\sum x_i & n
\end{array}
\right)$$
where the $x_i$ are the values of the independent variable and $n$ is the number of observations. The variances of the intercept and the slope are the diagonal elements of the matrix; the covariance between them is either of the (identical) off-diagonal terms.
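As an illustration (not part of the original answer; the numbers below are made up), here's a minimal Python sketch that builds this matrix from the formula and compares it against the coefficient variance-covariance matrix reported by statsmodels' OLS fit:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = np.linspace(20, 100, 15)                           # hypothetical temperatures
y = 1.2e-5 + 3.0e-8 * x + rng.normal(0, 2e-7, x.size)  # noisy made-up measurements

fit = sm.OLS(y, sm.add_constant(x)).fit()
sigma2_hat = fit.scale                                 # residual variance, SSR / (n - 2)

n = x.size
denom = n * np.sum(x**2) - np.sum(x)**2
cov_formula = (sigma2_hat / denom) * np.array([[np.sum(x**2), -np.sum(x)],
                                               [-np.sum(x),    n       ]])

print(cov_formula)        # the formula above
print(fit.cov_params())   # statsmodels' coefficient variance-covariance matrix
# The off-diagonal (intercept, slope) covariance is negative here because
# the mean of the x values is positive.
```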
So the errors in the estimates of the intercept and the slope are related to the $y$-value measurement errors (of the thermal expansion coefficients in your case) via $\hat\sigma^2$, the variance unexplained by the linear relationship. The covariance between the two coefficient estimates depends on the mean of the independent-variable values. Note that the covariance between the coefficients is negative if the mean $x$ value, $\bar x$, is positive.
That relationship of the coefficient covariance to your particular choice of $x$ values might seem strange, but it comes down to a pretty simple result when you plug through the formula for the variance of a $y$ value predicted from the model. As shown on this page, the variance in a $y$ value estimated at any specified $x$ value, $x_d$, is:
$$
\hat\sigma^2\left(1+\frac{1}{n} + \frac{\left(x_d - \bar{x}\right)^2}{\sum (x_i - \bar{x})^2}\right)
$$
So the error in a value predicted from your linear regression depends on the measurement error via $\hat\sigma^2$, the number of observations $n$ that went into your model, and the distance of your specified $x$ value for the prediction, $x_d$, from the mean of your original predictor values, $\bar x$.
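To make that concrete, here is a small self-contained sketch (again my own illustration, with made-up data, not something from the original answer) that computes this prediction variance directly from the formula for a simple linear regression:

```python
import numpy as np

def prediction_variance(x, y, x_d):
    """Variance of a single new y value predicted at x_d from a simple
    linear regression of y on x, using the formula above."""
    n = x.size
    x_bar = x.mean()
    s_xx = np.sum((x - x_bar)**2)
    slope = np.sum((x - x_bar) * (y - y.mean())) / s_xx   # least-squares slope
    intercept = y.mean() - slope * x_bar
    resid = y - (intercept + slope * x)
    sigma2_hat = np.sum(resid**2) / (n - 2)               # residual variance
    return sigma2_hat * (1 + 1/n + (x_d - x_bar)**2 / s_xx)

# Hypothetical measurements of a thermal expansion coefficient at a few temperatures
x = np.array([20., 40., 60., 80., 100.])
y = np.array([1.26e-5, 1.31e-5, 1.38e-5, 1.44e-5, 1.49e-5])
print(np.sqrt(prediction_variance(x, y, x_d=120.)))       # SE of a prediction at x = 120
```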
Those all make intuitive sense as contributions to the error in an estimate. Respectively, they are the estimated measurement error itself, the uncertainty in the height of the fitted line (which shrinks as the number of observations grows), and the added extrapolation error as you move away from the mean of your original observations.
You shouldn't have to do all these calculations by yourself; any respectable statistical software suite should be able to provide the variance of a prediction from a linear regression model. You can then use that variance for your further error-propagation analyses.
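For example, with statsmodels (a sketch assuming a reasonably recent version; the data are again made up), you can ask the fitted model for the prediction standard errors directly:

```python
import numpy as np
import statsmodels.api as sm

x = np.array([20., 40., 60., 80., 100.])
y = np.array([1.26e-5, 1.31e-5, 1.38e-5, 1.44e-5, 1.49e-5])   # made-up measurements

fit = sm.OLS(y, sm.add_constant(x)).fit()

x_new = np.array([110., 120.])
exog_new = np.column_stack([np.ones_like(x_new), x_new])      # constant column plus x
pred = fit.get_prediction(exog_new)

print(pred.se_mean)           # SE of the estimated mean response at each new x
print(pred.se_obs)            # SE of a single new observation (the formula above)
print(pred.summary_frame())   # confidence and prediction intervals in one table
```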
*This is a somewhat more complicated issue in observational studies with multiple regression, in which several "independent" variables are often highly correlated among each other. Your single-predictor situation is fairly simple in practice, so that complication doesn't arise here.