
When a calibration is generated from a set of standards run on an analytical instrument, should the standards be remade and reanalyzed if not all of the points fall within 20%-30% (depending on regulations) of the regression, or if the fit does not give a coefficient of determination of >= 0.99?

I have heard of points being dropped from the calibration if they do not fit well, but isn't that hacking the numbers to get a good fit?
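
For concreteness, here is a minimal sketch of the two acceptance checks being asked about (per-point % difference from the regression line, and the coefficient of determination); the example data and the 20 % threshold are placeholders, not taken from any particular regulation:

```python
import numpy as np

# Hypothetical calibration data: nominal concentrations and instrument response
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
signal = np.array([0.9, 2.1, 5.3, 9.8, 19.5, 51.0])

# Ordinary least-squares calibration line: signal = slope * conc + intercept
slope, intercept = np.polyfit(conc, signal, deg=1)
predicted = slope * conc + intercept

# Coefficient of determination (R^2) of the fit
ss_res = np.sum((signal - predicted) ** 2)
ss_tot = np.sum((signal - signal.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

# Per-point % difference of each standard from the regression line
pct_diff = np.abs(signal - predicted) / np.abs(predicted) * 100.0

print(f"R^2 = {r_squared:.4f}")
print("points within 20 %:", pct_diff <= 20.0)
```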

gung - Reinstate Monica
DifferentialPleiometry

  • Hacking would be a polite term for this practice. – Frank Harrell Jul 24 '15 at 15:23
  • One should check the performance of the analytical instrument in a case like this, particularly if the same standards were measured reliably in the past. – EdM Jul 24 '15 at 15:55

1 Answer


I don't think your question can be answered without further information. 20 - 30 % of what? Which calibration points/concentrations? For what analyte and what application (matrix!) are you calibrating?

Assuming, however, that much better calibration was possible in the past (or in other labs), I'd recommend thinking about the possible sources of the trouble.

  • As @EdM points out, your instrument may be malfunctioning.

  • Preparing standards afresh and doing a completely new calibration is obviously the thing to do if your standards have degraded or somehow become contaminated.
    Note that I'd either go for a complete set of freshly prepared calibration standards or, at the very least, recommend some other (reference) analysis of the standards you want to keep. Particularly in the case of degradation, how much can you trust the seemingly OK standards?

  • Excluding points from your calibration in order to get a "nicer" fit will at the very least lead to severely underestimating the expected variance, and thus to far too narrow confidence, prediction and tolerance intervals (see the sketch after this list).
    There is one way you may still be allowed to do this: validate the calibration with completely new, independent test samples and take the error from this validation experiment. However, that would be far more work than redoing a proper calibration with freshly prepared calibration samples.

  • Note that a relative error of 1/3 is expected at the LOD (limit of detection), and that the LOQ (limit of quantitation) is often defined as the concentration where the relative error falls below 10 %. So if you are calibrating around the LOD, such variation is actually expected behaviour (see the arithmetic note below).
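
As a sketch of the point about excluding calibration points (simulated data, arbitrary parameters chosen only for illustration): dropping the worst-fitting point typically shrinks the apparent residual standard error, and since confidence and prediction intervals scale with it, the intervals from the trimmed fit come out too narrow.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated calibration: a true straight line plus measurement noise
conc = np.linspace(1.0, 50.0, 8)
signal = 2.0 * conc + 0.5 + rng.normal(scale=2.0, size=conc.size)

def residual_se(x, y):
    """Residual standard error of a straight-line fit (n - 2 degrees of freedom)."""
    slope, intercept = np.polyfit(x, y, deg=1)
    resid = y - (slope * x + intercept)
    return np.sqrt(np.sum(resid ** 2) / (x.size - 2))

# Fit with all points, then drop the point with the largest absolute residual
se_all = residual_se(conc, signal)
slope, intercept = np.polyfit(conc, signal, deg=1)
worst = np.argmax(np.abs(signal - (slope * conc + intercept)))
keep = np.arange(conc.size) != worst

se_trimmed = residual_se(conc[keep], signal[keep])
print(f"residual SE, all points:    {se_all:.3f}")
print(f"residual SE, point dropped: {se_trimmed:.3f}")
```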
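
To see where the 1/3 in the last point comes from, a short sketch of the arithmetic under the common conventions that LOD ≈ 3σ and LOQ ≈ 10σ, with σ the standard deviation of the (blank) measurement noise:

$$\frac{\sigma}{\text{LOD}} = \frac{\sigma}{3\sigma} = \frac{1}{3} \approx 33\,\%, \qquad \frac{\sigma}{\text{LOQ}} = \frac{\sigma}{10\sigma} = 10\,\%.$$

So calibration points near the LOD scattering by roughly a third of their value are consistent with a perfectly healthy method.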

cbeleites unhappy with SX
  • 20 - 30 % of what? The predicted value of the model. – DifferentialPleiometry Aug 01 '15 at 16:19
  • My question is not about troubleshooting. I've been challenged with the claim that it is sometimes acceptable to drop points from a calibration model, but in college I was taught to always run new standards. – DifferentialPleiometry Aug 01 '15 at 16:32
  • This is Galen from the future. That was meant to be 20 - 30 % **difference** from the expected value of the model, formulated as $$\text{\% Difference} = \frac{|y_i - \hat{y}_i|}{\hat{y}_i} \cdot 100$$ for the $i$th calibration standard. – DifferentialPleiometry Apr 07 '20 at 02:17
  • While the choice of analyte and matrix is an essential consideration for a chemical analysis in general, it is unclear to me what exact relation you had in mind with respect to the statistical control limits of a calibration. – DifferentialPleiometry Apr 07 '20 at 02:31