
I have frequently seen a "rigorous" spatial isotropic semivariogram defined as:

$$ \gamma(h) = K(0) - K(h) $$

where $K$ is a positive-definite covariance function. If the nugget $\sigma$ is folded into the covariance function $K$, then it seems to me that we always have $\gamma(0) = 0$, but that $\gamma(0+\delta) > \sigma$ for all distances $\delta$ greater than zero.

However, often in practice a linear function is used to determine $\gamma$:

$$ \gamma(h) = \sigma + \frac{h}{\beta} $$
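As a concrete sketch of this behavior (the parameter values below are hypothetical, not from the question), the linear model is zero at $h = 0$ by convention but jumps to the nugget $\sigma$ at any positive distance:

```python
def gamma_linear(h, nugget=0.2, beta=10.0):
    # Linear semivariogram: gamma(0) = 0 by convention, but the value
    # jumps to the nugget immediately for any h > 0.
    return 0.0 if h == 0 else nugget + h / beta
```

For example, `gamma_linear(0.0)` is `0.0`, while `gamma_linear(1e-9)` is already just above the nugget `0.2`.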

Should I really define the nugget in terms of the covariance function, or should I make it a separate term:

$$ \gamma(h) = \sigma + K(0) - K(h) $$
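To make the difference concrete, here is a small numerical sketch comparing the two parameterizations (the exponential covariance and the parameter values are my own assumptions, not from the question):

```python
import math

SILL, NUGGET, RANGE = 1.0, 0.2, 10.0  # hypothetical parameters

def K(h):
    # Covariance with the nugget folded in: a continuous exponential
    # part plus a discontinuous nugget spike at h == 0.
    return (SILL - NUGGET) * math.exp(-h / RANGE) + (NUGGET if h == 0 else 0.0)

def gamma_form1(h):
    # Form 1: gamma(h) = K(0) - K(h), nugget inside the covariance.
    return K(0.0) - K(h)

def K_cont(h):
    # Nugget-free (continuous) part of the covariance.
    return (SILL - NUGGET) * math.exp(-h / RANGE)

def gamma_form2(h):
    # Form 2: gamma(h) = nugget + K(0) - K(h), nugget as a separate term.
    return NUGGET + K_cont(0.0) - K_cont(h)
```

Both forms agree for every $h > 0$; they differ only at the origin, where form 1 gives $\gamma(0) = 0$ and form 2 gives $\gamma(0) = \sigma$.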

Am I making a mountain out of a molehill here? Are there any reasons to prefer one form over the other?

To visualize, is it more "correct" to pick one form over the other?

[Figure: two semivariogram sketches, one with a nonzero value at zero distance and one with a zero value at zero distance]

Based on the idea of the nugget as allowing a non-zero offset at zero, I can see why the former might be preferred; but this seems to imply that the nugget is not really part of the assumed covariance function.

kjetil b halvorsen
wdkrnls
  • I don't have an answer, but I think that this question is related to mine: https://stats.stackexchange.com/questions/551359/what-is-the-difference-between-a-non-zero-nugget-and-a-noise-term-in-kriging-gpr – naught101 Nov 08 '21 at 01:02

0 Answers