I have 50 data points in the form of (x, y) tuples, and I am fitting a quadratic:

y = ax^2 + bx + c

My goal is to find the location of the maximum of the quadratic, x_max = -b/(2a), and a confidence interval around it. I find the maximum after fitting the quadratic. For the confidence interval I tried two methods:
1) Bootstrapping: resample 50 points from the data with replacement, fit the quadratic, find the maximum, repeat, and compute the standard deviation of the estimated maxima.
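In case it helps, here is roughly what my bootstrap looks like. The data here are simulated stand-ins for mine (a noisy quadratic whose true maximum is at x = 2); everything else is the procedure described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for my data: 50 noisy points from a quadratic with max at x = 2
x = np.linspace(0.0, 4.0, 50)
y = -1.0 * (x - 2.0) ** 2 + 5.0 + rng.normal(scale=0.5, size=x.size)

def fit_max(x, y):
    # np.polyfit returns coefficients highest power first: [a, b, c]
    a, b, _ = np.polyfit(x, y, 2)
    return -b / (2.0 * a)

n_boot = 2000
maxima = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, x.size, size=x.size)  # resample 50 points with replacement
    maxima[i] = fit_max(x[idx], y[idx])

est = fit_max(x, y)
se = maxima.std(ddof=1)                        # bootstrap SE of the estimated max
lo, hi = np.percentile(maxima, [2.5, 97.5])    # percentile 95% CI as an alternative
print(f"max estimate {est:.3f}, bootstrap SE {se:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

I report the standard deviation of the bootstrap maxima as the uncertainty; the percentile interval is just there for comparison.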
2) Fieller's theorem: http://en.wikipedia.org/wiki/Fieller%27s_theorem
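And this is roughly how I apply Fieller's theorem to the ratio -b/(2a): the numerator is -b, the denominator is 2a, and the coefficient covariance comes from the least-squares fit (again a sketch with simulated data, not my exact code):

```python
import numpy as np
from scipy import stats

def fieller_max_ci(x, y, alpha=0.05):
    # Fit the quadratic and get the coefficient covariance (order: a, b, c)
    coefs, cov = np.polyfit(x, y, 2, cov=True)
    a, b, _ = coefs
    m1, m2 = -b, 2.0 * a          # ratio rho = m1 / m2 = -b / (2a)
    v11 = cov[1, 1]               # Var(-b) = Var(b)
    v22 = 4.0 * cov[0, 0]         # Var(2a)
    v12 = -2.0 * cov[0, 1]        # Cov(-b, 2a)
    t = stats.t.ppf(1.0 - alpha / 2.0, df=x.size - 3)
    # Fieller: CI endpoints are the roots of
    # (m2^2 - t^2 v22) rho^2 - 2 (m1 m2 - t^2 v12) rho + (m1^2 - t^2 v11) = 0
    A = m2**2 - t**2 * v22
    B = -2.0 * (m1 * m2 - t**2 * v12)
    C = m1**2 - t**2 * v11
    disc = B**2 - 4.0 * A * C
    if A <= 0 or disc < 0:
        return None  # denominator not significantly nonzero: interval is unbounded
    r = np.sqrt(disc)
    return ((-B - r) / (2.0 * A), (-B + r) / (2.0 * A))

# Simulated stand-in for my data, true maximum at x = 2
rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 50)
y = -1.0 * (x - 2.0) ** 2 + 5.0 + rng.normal(scale=0.5, size=x.size)
ci = fieller_max_ci(x, y)
a, b, _ = np.polyfit(x, y, 2)
est = -b / (2.0 * a)
print("point estimate", est, "Fieller 95% CI", ci)
```

When the curvature a is poorly determined (A <= 0 above), Fieller's interval becomes unbounded or the complement of an interval, which may be part of why the two methods disagree on noisy fits.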
The two generally agree, but they differ drastically when the errors are high and the model fits are poor. Is that expected?

Secondly, I sometimes find a confidence interval of about 15 for a maximum of 110, yet the overall R^2 of the fit is negative. Does that even make sense?
Thanks.