Papers about long memory tend to analyze data sets whose length is in the thousands; see http://www.math.canterbury.ac.nz/~m.reale/pub/Reaetal2011.pdf for an example.

My question is to the long memory researchers and practitioners out there. What rule of thumb do you use to decide whether a data set is too small to be able to appropriately detect/estimate long memory?

[Naturally, the smaller the long-memory parameter, the more observations you will need to detect it, but I'm after a general rule of thumb rather than an exact sample-size requirement for a specific effect size.]
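To illustrate the trade-off motivating the question (this is not an answer), here is a minimal Monte Carlo sketch in Python/NumPy. It simulates ARFIMA(0, d, 0) series by truncating the MA(∞) representation and estimates d with the Geweke–Porter-Hudak (GPH) log-periodogram regression. The truncation length and the bandwidth m = n^0.5 are illustrative choices on my part, not recommendations:

```python
import numpy as np

rng = np.random.default_rng(0)

def arfima0d0(n, d, burn=1000, trunc=2000):
    """Simulate ARFIMA(0, d, 0) via a truncated MA(inf) representation.

    MA weights satisfy psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k.
    `trunc` is an arbitrary truncation point (assumption, not a recommendation).
    """
    k = np.arange(1, trunc)
    psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))
    eps = rng.standard_normal(n + burn + trunc)
    x = np.convolve(eps, psi, mode="valid")
    return x[-n:]

def gph_estimate(x, alpha=0.5):
    """GPH estimate of d: regress log periodogram on -log(4 sin^2(freq/2))
    over the first m = n**alpha Fourier frequencies (common default choice)."""
    n = len(x)
    m = int(n ** alpha)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    fx = np.fft.fft(x - x.mean())
    I = np.abs(fx[1:m + 1]) ** 2 / (2 * np.pi * n)  # periodogram ordinates
    X = -np.log(4 * np.sin(freqs / 2) ** 2)
    return np.polyfit(X, np.log(I), 1)[0]  # slope estimates d

# How spread out is the estimator at various sample sizes and effect sizes?
for n in (200, 1000, 5000):
    for d in (0.1, 0.3):
        ests = [gph_estimate(arfima0d0(n, d)) for _ in range(200)]
        print(f"n={n:5d} d={d}: mean={np.mean(ests):+.3f} sd={np.std(ests):.3f}")
```

On my runs, the sampling standard deviation of the GPH estimate at n = 200 is large enough that d = 0.1 is hard to distinguish from zero, which is exactly the small-sample problem the question asks about.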
