
I have 50 data points in the form of (x, y) tuples, and I am trying to fit a quadratic:

y = ax^2 + bx + c

My goal is to find the location of the maximum of the quadratic, x_max = -b/(2a), and also to find a confidence interval around it. I find the maximum after fitting the quadratic. For the confidence interval I chose two methods:

1) Bootstrapping: resample 50 (x, y) pairs from the data with replacement, refit the quadratic and find its maximum, repeat, and compute the standard deviation of the estimated maxima (a sketch of this is given after the list).

2) Fieller's theorem: http://en.wikipedia.org/wiki/Fieller%27s_theorem (a sketch of this also appears below).

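For method 1, roughly, the bootstrap step looks like this. This is only a minimal sketch assuming an ordinary least-squares fit via numpy.polyfit; the helper names (vertex, bootstrap_vertex_sd) are just illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    def vertex(x, y):
        # Fit y = a*x^2 + b*x + c; np.polyfit returns [a, b, c]
        a, b, c = np.polyfit(x, y, 2)
        return -b / (2 * a)

    def bootstrap_vertex_sd(x, y, n_boot=2000):
        # Resample the 50 (x, y) pairs with replacement, refit,
        # and collect the estimated maximum location each time
        n = len(x)
        maxima = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, size=n)
            maxima[i] = vertex(x[idx], y[idx])
        return maxima.std(ddof=1), np.percentile(maxima, [2.5, 97.5])

The standard deviation of the bootstrap maxima is what I compare against the Fieller interval.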
The two are generally close, but they are drastically different when the errors are high and the model fit is poor. Is that expected?
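For reference, here is a sketch of how the Fieller interval for x_max = -b/(2a) can be computed, treating it as the ratio of the coefficients -b and 2a, with the coefficient covariance matrix from the usual OLS formula. Again this is only a sketch (the language and function name are assumptions, not part of the question):

    import numpy as np
    from scipy import stats

    def fieller_ci_vertex(x, y, alpha=0.05):
        # OLS fit of y = a*x^2 + b*x + c
        n = len(x)
        X = np.column_stack([x**2, x, np.ones(n)])   # columns correspond to a, b, c
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        a_hat, b_hat = beta[0], beta[1]

        # residual variance and covariance matrix of the coefficient estimates
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - 3)
        cov = sigma2 * np.linalg.inv(X.T @ X)

        # x_max = num/den with num = -b, den = 2a
        num, den = -b_hat, 2.0 * a_hat
        v_nn = cov[1, 1]            # Var(-b)      = Var(b)
        v_dd = 4.0 * cov[0, 0]      # Var(2a)      = 4 Var(a)
        v_nd = -2.0 * cov[0, 1]     # Cov(-b, 2a)  = -2 Cov(a, b)

        t = stats.t.ppf(1 - alpha / 2, df=n - 3)
        rho = num / den
        g = t**2 * v_dd / den**2
        if g >= 1:
            # 2a is not significantly different from zero, so the
            # Fieller confidence set is unbounded: no finite interval
            return rho, (np.nan, np.nan)

        disc = v_nn - 2 * rho * v_nd + rho**2 * v_dd - g * (v_nn - v_nd**2 / v_dd)
        half = (t / abs(den)) * np.sqrt(disc)
        centre = rho - g * v_nd / v_dd
        return rho, ((centre - half) / (1 - g), (centre + half) / (1 - g))

The g = t^2 Var(2a) / (2a)^2 term in this formula is what blows up when the quadratic coefficient is poorly determined.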

Secondly, I sometimes find that the confidence interval is narrow (for example, roughly 15 wide for a maximum of about 110) even though the overall R-squared is negative. Does that even make sense?

Thanks.

  • Could you clarify your last sentence/question? Not sure what you mean – shadowtalker Jan 26 '15 at 18:17
  • Sure, all I meant was that sometimes I get a 95% confidence interval that is not very wide in comparison to the optimal value itself, but the R-squared is negative. This seems odd; in other words, I was expecting to see a larger confidence interval. – gbh. Jan 26 '15 at 19:12
  • That's what I don't understand. A confidence interval is a range of numbers, not a number. Do you mean the interval is _narrow_? – shadowtalker Jan 26 '15 at 22:28
  • Yes I mean narrow vs wide. – gbh. Jan 26 '15 at 22:30

0 Answers