
I have two coefficient estimates from a regression, each with an estimated standard error.
I would like the quotient of these two estimates -- that is, one estimate divided by the other. What would be the corresponding standard error?

Would this be a candidate for the Delta Method?

If so, how should the formula be applied? If this is not something that can be computed in a straightforward manner, is there a way to do this in Stata?

In particular, I am interested in the Wald estimator. The Delta Method typically involves a covariance term. Is it zero or non-zero in this case; that is, when can we assume that the two estimates are independent?

Nick Cox
user1690130
  • [This](http://en.wikipedia.org/wiki/Taylor_expansions_for_the_moments_of_functions_of_random_variables#Second_moment) might be of some use – Glen_b Jun 04 '13 at 23:48

2 Answers


Here's an example in Stata of how to create the ratio and test a hypothesis using nlcom:

. webuse regress

. regress y x1 x2 x3

      Source |       SS       df       MS              Number of obs =     148
-------------+------------------------------           F(  3,   144) =   96.12
       Model |   3259.3561     3  1086.45203           Prob > F      =  0.0000
    Residual |  1627.56282   144  11.3025196           R-squared     =  0.6670
-------------+------------------------------           Adj R-squared =  0.6600
       Total |  4886.91892   147  33.2443464           Root MSE      =  3.3619

------------------------------------------------------------------------------
           y |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          x1 |   1.457113    1.07461     1.36   0.177     -.666934    3.581161
          x2 |   2.221682   .8610358     2.58   0.011     .5197797    3.923583
          x3 |   -.006139   .0005543   -11.08   0.000    -.0072345   -.0050435
       _cons |   36.10135   4.382693     8.24   0.000     27.43863    44.76407
------------------------------------------------------------------------------

. nlcom ratio:_b[x1]/_b[x2], post

       ratio:  _b[x1]/_b[x2]

------------------------------------------------------------------------------
           y |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       ratio |   .6558606   .4221027     1.55   0.122    -.1784571    1.490178
------------------------------------------------------------------------------

. test ratio=.5

 ( 1)  ratio = .5

       F(  1,   144) =    0.14
            Prob > F =    0.7125

There are formulas in the PDF manual entry for nlcom. A terse explanation can be found in the Stata FAQ on the delta method.
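The delta-method variance for a ratio (the formula Glen_b linked to) can also be checked by hand. Here is a minimal sketch in Python; `ratio_se` is a hypothetical helper, not a Stata command. With the covariance set to zero it will *not* reproduce nlcom's 0.4221, because nlcom plugs in the actual (here nonzero) covariance of the two coefficients:

```python
import math

def ratio_se(b1, se1, b2, se2, cov12=0.0):
    """First-order delta-method standard error of b1/b2.

    Var(b1/b2) ~= (b1/b2)^2 * (Var(b1)/b1^2 + Var(b2)/b2^2
                               - 2*Cov(b1, b2)/(b1*b2))
    """
    r = b1 / b2
    var = r**2 * (se1**2 / b1**2 + se2**2 / b2**2 - 2 * cov12 / (b1 * b2))
    return math.sqrt(var)

# Coefficients and standard errors for x1 and x2 from the regression above.
# With the covariance (incorrectly) set to zero, the SE comes out larger
# than nlcom's 0.4221:
print(round(ratio_se(1.457113, 1.07461, 2.221682, 0.8610358), 4))  # 0.5464
```

The gap between 0.5464 and 0.4221 is exactly the contribution of the covariance term, which is why reading the full variance-covariance matrix (or letting nlcom do it) matters.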


Added in response to the OP's comment below:

If you have two separate regressions, you have all the ingredients for the formula that Glen_b linked to, other than the covariance term. Here you have two choices. You can assume it's zero if that makes sense with your model and do the calculation "manually". Or you can estimate the two equations as a system, which will give you cross-equation covariances between the coefficients. It's hard to know which is better without the details. One way (out of several possible ways) to do the latter is with Seemingly Unrelated Regression:

. webuse regress

. sureg (eq1:y x1 x2) (eq2:y x1 x3)

Seemingly unrelated regression
----------------------------------------------------------------------
Equation          Obs  Parms        RMSE    "R-sq"       chi2        P
----------------------------------------------------------------------
eq1               148      2     4.54006    0.3758      91.48   0.0000
eq2               148      2    3.770546    0.5694     211.94   0.0000
----------------------------------------------------------------------

------------------------------------------------------------------------------
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
eq1          |
          x1 |   7.472932     .98949     7.55   0.000     5.533568    9.412297
          x2 |  -.4768772   .7799875    -0.61   0.541    -2.005625     1.05187
       _cons |  -1.374358   2.883296    -0.48   0.634    -7.025514    4.276798
-------------+----------------------------------------------------------------
eq2          |
          x1 |   4.338581   .7852935     5.52   0.000     2.799434    5.877728
          x3 |  -.0026865   .0003774    -7.12   0.000    -.0034261   -.0019468
       _cons |   16.32873   3.214735     5.08   0.000     10.02797     22.6295
------------------------------------------------------------------------------

. nlcom ratio:[eq1]_b[x1]/[eq2]_b[x1]

       ratio:  [eq1]_b[x1]/[eq2]_b[x1]

------------------------------------------------------------------------------
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       ratio |   1.722437   .2773696     6.21   0.000     1.178803    2.266071
------------------------------------------------------------------------------
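As a cross-check on the "manual" zero-covariance route (the first choice above): applying the standard delta-method formula to the two x1 coefficients while ignoring their covariance gives a noticeably different standard error from nlcom's 0.2774, which illustrates why the cross-equation covariance that sureg estimates matters. A sketch in Python, with the numbers copied from the sureg output above:

```python
import math

# [eq1]_b[x1] and [eq2]_b[x1] with their standard errors, from the sureg output:
b1, se1 = 7.472932, 0.98949
b2, se2 = 4.338581, 0.7852935

r = b1 / b2
# Delta-method SE assuming Cov(b1, b2) = 0 -- an assumption sureg does NOT make:
se_r = abs(r) * math.sqrt(se1**2 / b1**2 + se2**2 / b2**2)
print(round(r, 4), round(se_r, 4))  # ratio matches 1.7224; SE differs from 0.2774
```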
dimitriy
  • If you want to skip to the hypothesis test, you can just `testnl _b[x1]/_b[x2]=.5` after the regression. – dimitriy Jun 04 '13 at 23:41
  • Thank you!! Is there a way to do something similar if the estimates are based on coefficients from two separate regressions? – user1690130 Jun 05 '13 at 01:54
  • See my response above. – dimitriy Jun 05 '13 at 16:34
  • What if the two regressions are not conducive to SUR? Is there a formula that I could use? Or, must the two equations be conducive to SUR? – user1690130 Jun 06 '13 at 13:55
  • By SUR, I meant seemingly unrelated regressions, as you said. I guess what I meant is: is there a formula using the delta method? I am hesitant about SUR because, if I am not normally doing my regressions with SUR, why would I want to now? – user1690130 Jun 06 '13 at 14:50
  • I am not sure what conducive to SUR means. Without some sort of systems approach, there's no covariance term. Where would it come from? – dimitriy Jun 06 '13 at 14:54
  • You suggested sureg. So you are saying to get a sureg term to get a covariance between the two estimators? I did not realize I needed a covariance term. – user1690130 Jun 06 '13 at 15:05
  • That's the $Cov[X,Y]$ term in Glen_b's formula. The $E[]$s are the coefficients, which Stata calls _b[varname]. The $Var[]$s are *squares* of the standard errors of the coefficients, which Stata calls _se[varname]. – dimitriy Jun 06 '13 at 16:26
  • You can also just read these off from the regression output. The covariances are not automatically shown, nor is there a particularly quick way of getting them. But you can get the variance-covariance matrix by typing "estat vce" and just read them off. Note that this already does the squaring for you for the variances. Compare what you see after each regression to what you get after sureg. You'll see that you get the covariance only with sureg. – dimitriy Jun 06 '13 at 16:28
  • I should have mentioned this at the onset, but in particular, I'm interested in constructing a Wald estimator. In that case, is the covariance zero or non-zero? – user1690130 Jun 09 '13 at 13:38
  • There are two posts on Statalist that deal with this exact problem: http://www.stata.com/statalist/archive/2009-10/index.html#00467 and http://www.stata.com/statalist/archive/2009-10/index.html#00670. The second one deals with the standard errors. – dimitriy Jun 09 '13 at 19:53
  • I don't believe the covariance term should be zero. – dimitriy Jun 09 '13 at 21:00

Another method of handling the ratio is due to Fieller. An excellent post is: How to compute the confidence interval of the ratio of two normal means
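Fieller's interval can be computed directly from the two estimates, their variances, and the covariance. A minimal sketch in Python, under stated assumptions -- the function name and the example numbers are purely illustrative, not taken from the linked post:

```python
import math

def fieller_ci(b1, v1, b2, v2, cov, crit):
    """Fieller confidence interval for the ratio b1/b2.

    Solves the quadratic in rho implied by
        (b1 - rho*b2)^2 <= crit^2 * (v1 - 2*rho*cov + rho^2*v2),
    where crit is the t or z critical value and v1, v2 are variances
    (squared standard errors).
    """
    a = b2**2 - crit**2 * v2
    if a <= 0:
        # Denominator not significantly different from zero:
        # the Fieller confidence set is unbounded.
        raise ValueError("unbounded Fieller set")
    b = -2.0 * (b1 * b2 - crit**2 * cov)
    c = b1**2 - crit**2 * v1
    disc = b**2 - 4.0 * a * c
    lo = (-b - math.sqrt(disc)) / (2.0 * a)
    hi = (-b + math.sqrt(disc)) / (2.0 * a)
    return lo, hi

# Illustrative numbers (not from the answers above): b1 = 2, b2 = 4,
# standard errors 0.1 and 0.2, zero covariance, 95% z critical value.
lo, hi = fieller_ci(2.0, 0.1**2, 4.0, 0.2**2, 0.0, 1.96)  # lo ~ 0.435, hi ~ 0.575
```

Unlike the delta method, Fieller's interval is generally asymmetric around the point estimate and behaves better when the denominator estimate is imprecise.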

soakley