The detour via linear regression seems unnecessary to me if you can get paired data. You could simply subtract the paired measurements of the two instruments and run a one-sample t-test on the resulting single column of differences.
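A minimal sketch of that paired approach, assuming scipy; the instrument data here are simulated purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical paired measurements of the same objects by two instruments
instrument_a = rng.normal(loc=100.0, scale=5.0, size=30)
instrument_b = instrument_a + rng.normal(loc=0.5, scale=1.0, size=30)

differences = instrument_a - instrument_b            # single column of data
# One-sample t-test of H0: the mean difference is zero
t_stat, p_value = stats.ttest_1samp(differences, 0.0)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

This is equivalent to a paired t-test (`stats.ttest_rel`) on the two columns directly.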
If you want to understand why you cannot prove the null hypothesis, read up on power analysis. The problem can broadly be framed as one of conflict of interest. As a researcher, you have a research hypothesis in mind that you want to prove; this is $H_a$. To get there, you need to produce evidence that rejects $H_0$. (Per set theory, $H_a$ and $H_0$ together must cover all possible outcomes.) This evidence needs to be strong enough for your p-value to show that the data would have been unlikely to occur under the assumption that $H_0$ is true.
If you want to prove $H_0$, you are rewarded for sloppy work: the less evidence you collect, the less likely you are to reject $H_0$, which was your goal in the first place. In the worst case, you actively ignore existing evidence against $H_0$. Taken to the extreme, a researcher who never collects any data at all can "prove" every single one of his precious $H_0$ research hypotheses and get published without doing any actual work.
Due to the mathematical nature of such tests, you also cannot simply swap $H_0$ and $H_a$. For your purpose of showing that a quantity does not differ from a given value, equivalence testing has been developed. Its disadvantage is that you need to define a margin $\delta$, the maximum acceptable distance from the value of interest at which the measurements are still considered equivalent. With two one-sided tests (TOST; no multiplicity correction is needed, since both tests must reject), you can show that your mean is significantly greater than the value of interest $- \delta$ and significantly smaller than the value of interest $+ \delta$, thus equivalent within $\pm\delta$.
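A minimal sketch of the TOST procedure described above, assuming scipy; the target value, margin $\delta$, and data are invented for illustration:

```python
import numpy as np
from scipy import stats

def tost(sample, target, delta, alpha=0.05):
    """Two one-sided tests: is the sample mean equivalent to `target`
    within the pre-specified margin `delta`?"""
    # Lower test, H0: mean <= target - delta (reject -> mean is above the lower bound)
    _, p_lower = stats.ttest_1samp(sample, target - delta, alternative="greater")
    # Upper test, H0: mean >= target + delta (reject -> mean is below the upper bound)
    _, p_upper = stats.ttest_1samp(sample, target + delta, alternative="less")
    p_tost = max(p_lower, p_upper)  # both one-sided tests must reject
    return p_tost, p_tost < alpha

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.05, scale=0.2, size=50)  # measurements near the target of 10
p, equivalent = tost(sample, target=10.0, delta=0.5)
print(f"TOST p = {p:.4f}, equivalent: {bool(equivalent)}")
```

Taking the maximum of the two p-values is what makes the extra correction unnecessary: the overall test only rejects when both one-sided tests do. statsmodels offers ready-made versions of this (e.g. `statsmodels.stats.weightstats.ttost_mean`).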
That $\delta$ needs to be defined before looking at the data. Otherwise you run into a similar conflict of interest: the statistician would simply select the smallest $\delta$ for which equivalence can still be shown. Women are as heavy as men if you take a $\delta$ of $\pm 15$ kg around the weight of the average man... In this case the scale is intuitive and the scam obvious, but in most research scenarios a reader could be fooled by a manipulated $\delta$. Equivalence testing is acceptable if you have principled reasons to justify $\delta$; otherwise it does violence to the epistemic principle of hypothesis testing.