For any linear hypothesis (i.e., testing whether a coefficient in a linear model differs from a particular value), one method is to subtract the hypothesized value times the $x$ value from each $y$ value. So for your example:
fit <- lsfit(log10(M), log10(RS) - .5*log10(M), wt)
You still test a null hypothesis of 0, but you change where the 0 point is.
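If it helps to see the whole workflow, here is a minimal sketch with simulated stand-ins for `M`, `RS`, and `wt` (your real data aren't shown, so the numbers below are purely illustrative):

```r
set.seed(1)
M  <- 10^runif(50, 1, 3)                            # hypothetical predictor
RS <- 10^(1 + 0.45*log10(M) + rnorm(50, sd = 0.1))  # hypothetical response
wt <- rep(1, length(M))                             # equal weights, just for illustration

## shift the response by the hypothesized slope, then test the slope against 0
fit <- lsfit(log10(M), log10(RS) - .5*log10(M), wt)
ls.print(fit)   # the t test on the slope now tests beta1 = .5 on the original scale
```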
To figure out why this works, there are a few ways to think about it. I'll explain two.
To me, the most intuitive explanation is the model comparison approach. Your model (ignoring the log transformations... or assuming they have already been done and saved as new variables) is:
$$y=\beta_0+\beta_1x_1+\epsilon$$
To test the null hypothesis, $H_0: \beta_1=0$, you compare the full model to the following model:
$$y=\beta_0+0*x_1+\epsilon$$
or equivalently
$$y=\beta_0+\epsilon$$
The hypothesis you want to test is $H_1:\beta_1=.5$. To test this, you compare the full model above to the following model:
$$y=\beta_0+.5x_1+\epsilon$$
or equivalently
$$y-.5x_1=\beta_0+\epsilon$$
The difference between the residual sums of squares of these two models is your explained sum of squares for the test, and your error term is based on the residuals of the full model. While I find this the most intuitive way to think about it, it doesn't map directly onto your R function, because you are not explicitly creating two models and comparing them (though what R is doing is equivalent to that). You could do this by creating two `lm()` models and comparing them with `anova()`, but you seem to be restricted to `lsfit()`.
You can also think of it in terms of calculating a customized coefficient. For the full model (the first model written above), least squares regression (or ANOVA) will compute the value of the coefficient $\beta_1$ that best fits the data. Let's say that value is $0.3$. Now you want to test whether $0.3$ is different from $0.5$, which is to say test $H_1: \beta_1=.5$. Let's adjust your model just a bit by adding .5 to the parameter (in lay terms, let's "spot" the coefficient .5 points).
$$y=\beta_0+(\beta_{1,H1}+.5)x_1+\epsilon$$
I renamed $\beta_1$ as $\beta_{1,H1}$ to distinguish it from $\beta_1$ in your full model. If you were to fit this model, the coefficients would still be chosen so that the estimates best fit your observed data. Since you already know that the $\beta_1$ from your full model best fits the data, it must be that
$$(\beta_{1,H1}+.5)=\beta_1$$
Since $\beta_1=.3$, then $\beta_{1,H1}=-.2$. This makes sense, because your estimated $\beta_1$ from the full model is .2 below your hypothesized value, and your goal is to figure out whether being .2 below your hypothesis is a significant difference. To finish it off, you work through the math of the model again.
$$y=\beta_0+(\beta_{1,H1}+.5)x_1+\epsilon$$
$$=\beta_0+\beta_{1,H1}x_1+.5x_1+\epsilon$$
which, after moving the $.5x_1$ term to the left-hand side, gives
$$y-.5x_1=\beta_0+\beta_{1,H1}x_1+\epsilon$$
This is the model you give to `lsfit()`, or fit as a single `lm()` model. While it looks different from the model comparison approach, it is not: it's just that the way you test the coefficient is slightly different (but you get exactly the same results).
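As a quick check that the two approaches agree (again reusing the objects from the sketches above), the shifted model's slope is simply $\beta_1-.5$, and its squared $t$ statistic equals the $F$ statistic from the model comparison:

```r
shifted <- lm(I(y - 0.5*x) ~ x, weights = wt)      # the "spotted" model

coef(shifted)["x"]                                  # equals coef(full)["x"] - 0.5
summary(shifted)$coefficients["x", "t value"]^2     # equals the F from anova(restricted, full)
```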