As I understand it, the OLS regression model rests on the assumption that there exists a set of coefficients $β_1$, $β_2$, $β_3$ ... $β_k$ that makes the error $ϵ$ satisfy the OLS assumptions (linearity, homoscedasticity, etc.), and that running the regression on a sample is how we approximate this set of coefficients.
However, I am confused about the logic behind testing for their existence. In particular, how can we test each assumption separately and still conclude that there is a single set of $β_1$, $β_2$, $β_3$ ... $β_k$ that makes all the assumptions hold simultaneously? For example, suppose we first run a test for autocorrelation and thereby validate the existence of a set of $β_1$, $β_2$, $β_3$ ... $β_k$ for which the errors $ϵ$ implied by the model satisfy the independence assumption. Then we run another test and validate the existence of a set of $β_1$, $β_2$, $β_3$ ... $β_k$ for which $ϵ$ satisfies homoscedasticity. But how can we conclude that these are the same set of $β_1$, $β_2$, $β_3$ ... $β_k$?
I know my understanding might be totally wrong, and I am very confused about the logic right now. Thanks in advance for any explanation!