Suppose I want to fit a function to measured data by minimizing the chi^2. "It is known" that if the function has too many strongly correlated parameters, the fit will not only have very large expected statistical errors on the fitted parameters, but will often fail outright (e.g. get stuck in a local minimum more than 10 standard errors away from the "true" minimum). Here's my question: how can the "robustness" of a function to be fitted be defined and/or studied?
Example. Suppose my data are described by the function y = a*x + b*sin(x), with x in the range [-0.1, 0.1] and a, b ~ 1, and I take 6 measurements in this range. I can compute the expected correlation matrix, and the correlation between a and b is something like 0.9xxx (on this range sin(x) ≈ x, so the two terms are nearly degenerate). However, if the errors on the measured y are 1e-7, I can fit this function without any problems. If the errors on the y are about 1e-2, there is no possibility of converging, and the actual chi^2 minimum will typically lie much farther from the true parameters than the expected errors would suggest.
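To make the example reproducible, here is a minimal sketch (assuming NumPy) of one way to compute the expected correlation quoted above: the model is linear in a and b, so the parameter covariance is sigma^2 * (J^T J)^-1, where J is the design matrix with columns x and sin(x). The specific grid of 6 points and the two noise levels are just the ones from my example.

```python
import numpy as np

# 6 evenly spaced measurement points in [-0.1, 0.1] (an assumption; the
# exact placement was not specified, only the range and the count)
x = np.linspace(-0.1, 0.1, 6)

# Design matrix of y = a*x + b*sin(x); its columns are nearly parallel
# on this range, since sin(x) ~ x up to terms of order x^3/6
J = np.column_stack([x, np.sin(x)])

JtJ = J.T @ J
cov = np.linalg.inv(JtJ)  # parameter covariance, up to a factor sigma^2
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])

print("correlation(a, b) =", corr)  # magnitude very close to 1
print("condition number of J^T J =", np.linalg.cond(JtJ))

# Expected statistical error on a for the two noise levels in the example:
# tiny for sigma_y = 1e-7, enormous (>> 1, i.e. >> a itself) for 1e-2
for sigma in (1e-7, 1e-2):
    print(f"sigma_y = {sigma:g}: expected std of a =",
          sigma * np.sqrt(cov[0, 0]))
```

The condition number of J^T J is what changes the picture between the two noise levels: the near-degeneracy of the columns inflates (J^T J)^-1, and whether that inflation matters depends on sigma_y.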
How can I study, a priori, whether a function to be fitted is "robust" or not?