I assume that you are asking about the p-value on the estimated coefficient $\hat{\beta}_1$ (the reasoning would be similar for $\hat{\beta}_0$).
The theory of linear regression tells us that, if the necessary conditions are fulfilled, we know the distribution of that estimator: it is normal, its mean is the ''true'' (but unknown) $\beta_1$, and its standard deviation $\sigma_{\hat{\beta}_1}$ can be estimated. I.e. $\hat{\beta}_1 \sim N(\beta_1, \sigma_{\hat{\beta}_1})$.
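This sampling distribution can be checked by simulation (a sketch I am adding; the constants are chosen to match the simulation code at the end of this answer):

```r
set.seed(1)
n      <- 100
x      <- rnorm(n, 5, 1)
beta_0 <- 2.5
beta_1 <- 0.5
sigma  <- 3

# refit the regression on many simulated samples and collect the slope estimates
b1_hat <- replicate(5000, {
  y <- beta_0 + beta_1 * x + rnorm(n, 0, sigma)
  coef(lm(y ~ x))["x"]
})

mean(b1_hat)   # close to the true beta_1 = 0.5
sd(b1_hat)     # close to sigma / (sqrt(n) * sd(x))
```

The histogram of `b1_hat` would look approximately normal, centered at the true $\beta_1$.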
If you want to ''demonstrate'' (see What follows if we fail to reject the null hypothesis? for more detail) that the true $\beta_1$ is non-zero, then you assume the opposite, i.e. $H_0: \beta_1=0$.
Then, by the above, you know that if $H_0$ is true, $\hat{\beta}_1 \sim N(\beta_1=0, \sigma_{\hat{\beta}_1})$.
In your regression result you observe a value for $\hat{\beta}_1$ and you can compute its p-value. If that p-value is smaller than the significance level that you chose (e.g. 5%), then you reject $H_0$ and consider $H_1$ as ''proven''.
In your case the ''true'' $\beta_1$ is $\beta_1=0.5$, so obviously $H_0$ is false, and you expect p-values below 0.05.
However, if you look at the theory of hypothesis testing, it defines ''type-II'' errors: accepting $H_0$ when it is in fact false. So in some cases you may accept $H_0$ even though it is false, which means you may observe p-values above 0.05 even though $H_0$ is false.
Therefore, even though in your true model $\beta_1=0.5$, it can happen that you accept $H_0: \beta_1=0$, i.e. that you make a type-II error.
Of course you want to minimize the probability of making such a type-II error, where you accept that $H_0: \beta_1=0$ holds while in reality $\beta_1=0.5$.
The probability of a type-II error is linked to the power of your test: minimizing the type-II error means maximising the power of the test.
You can simulate the type-II error as in the R-code below:
Note that:
- if you take $\beta_1$ further from the value under $H_0$ (zero), then the type-II error decreases (run the R-code with e.g. beta_1=2), which means that the power increases.
- if you set beta_1 equal to the value under $H_0$, then you find $1-\alpha$.
R-code:
set.seed(1)                        # for reproducibility
n <- 100
x <- rnorm(n, 5, 1)                # fixed design: x is drawn once
beta_0 <- 2.5
beta_1 <- 0.5
nIter <- 10000
alpha <- 0.05
accept.h0 <- 0
for (i in 1:nIter) {
  e <- rnorm(n, 0, 3)              # error term
  y <- beta_0 + beta_1*x + e
  m1 <- lm(y ~ x)
  p.value <- summary(m1)$coefficients["x", 4]   # p-value of the slope
  if (p.value > alpha) accept.h0 <- accept.h0 + 1
}
cat(paste("type II error probability: ", accept.h0/nIter))
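As a rough cross-check of the simulated value (a normal approximation I am adding, not part of the simulation above): with error standard deviation $\sigma=3$, $n=100$ observations and $sd(x) \approx 1$, the standard error of $\hat{\beta}_1$ is about $\sigma/(\sqrt{n}\,sd(x)) = 0.3$, and the power of the two-sided 5% test follows directly:

```r
n      <- 100
sigma  <- 3        # standard deviation of the error term
sd_x   <- 1        # standard deviation of x
beta_1 <- 0.5
alpha  <- 0.05

se     <- sigma / (sqrt(n) * sd_x)    # approximate standard error of the slope
z_crit <- qnorm(1 - alpha / 2)
power  <- pnorm(beta_1 / se - z_crit) + pnorm(-beta_1 / se - z_crit)

power        # roughly 0.38
1 - power    # type-II error probability, roughly 0.62
```

This approximate type-II error probability should be close to what the simulation loop above reports.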