8

I have a data set and I have to fit this data set with a stable distribution. The problem is that the stable distributions are known analytically only in the form of the characteristic function (Fourier transform). How can I do this?

kjetil b halvorsen
emanuele
  • [These distributions](http://en.wikipedia.org/wiki/Stable_distribution) have four parameters: a location, a scale, a power law for the tail, and a skewness. Some of them may be easy to fit based on additional assumptions. For instance, should the data be nonnegative? Could they have two long tails or just one? Do you have any prior information about their location or scale? Also, do you need a good fit throughout the range or do you (perhaps) just need to characterize tail behavior? – whuber Mar 21 '12 at 18:56
  • `fitdistr` from `MASS`? `densfun` can also be a characteristic function expression. – Roman Luštrik Mar 21 '12 at 11:42
  • This function does not run in my R installation: I installed the package, but the `qstable` and `dstable` functions throw errors. What should I do? Please help me. –  May 10 '14 at 12:54

3 Answers

6

As suggested in the comments, you can use `fitdistr` from `MASS`, with the stable density `dstable` from the `stabledist` package (older versions of `fBasics` also provided it).

# Sample data
x <- rt(100,df=4)

# Density (reparametrized to remove the constraints on the
# parameters: alpha in (0,2), beta in (-1,1), gamma > 0)
library(stabledist)  # provides dstable (historically also in fBasics)
f <- function(u, a, b, c, d) {
  cat(a, b, c, d, "\n")  # Some logging (the fit is very slow)
  dstable(u, 2*exp(a)/(1+exp(a)), 2*exp(b)/(1+exp(b)) - 1, exp(c), d)
}

# Fit the distribution
library(MASS)
r <- fitdistr(x, f, list(a=1, b=0, c=log(mad(x)), d=median(x)))
r

# Graphical check: QQ-plot of fitted quantiles against the sorted data
plot(
  qstable(ppoints(100),
    2*exp(r$estimate[1])/(1+exp(r$estimate[1])),
    2*exp(r$estimate[2])/(1+exp(r$estimate[2]))-1,
    exp(r$estimate[3]),
    r$estimate[4]
  ),
  sort(x)
)
abline(0,1)
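A quick quantitative companion to the plot could look like this (a sketch; note that a Kolmogorov–Smirnov p-value is optimistic here, because the parameters were estimated from the same sample being tested):

```r
# Rough goodness-of-fit check against the fitted stable CDF
# (p-value is biased upward: the parameters come from this same sample)
ks.test(x, function(q)
  pstable(q,
    2*exp(r$estimate[1])/(1+exp(r$estimate[1])),
    2*exp(r$estimate[2])/(1+exp(r$estimate[2])) - 1,
    exp(r$estimate[3]),
    r$estimate[4]))
```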
Vincent Zoonekynd
  • One comment: we must pay attention to the parametrization, because this algorithm crashes near the boundaries of the parameter space (alpha=2 or alpha=0, and beta=1 or beta=-1). – emanuele Apr 05 '12 at 16:36
  • @Vincent Zoonekynd, could you please give an example how to statistically check the goodness of fit of estimated parameters? I have tried ks.test(modeldata,original)$p and obtained [1] 0.9999744 – Nick Jan 22 '17 at 11:28
1

One way to fit the $\alpha$ parameter is via the Nagaev transform described by Okoneshnikov. An alternative is the 'Probability of Return' method of Mantegna and Stanley, which is considerably easier.
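The probability-of-return idea can be sketched as follows (an illustrative sketch, not the authors' exact procedure, and the helper name `estimate_alpha_return` is made up): for centered stable data, the density at zero of an $n$-term partial sum scales as $n^{-1/\alpha}$, so a log–log regression of the estimated density at zero against $n$ recovers $\alpha$.

```r
# Rough sketch of the Mantegna & Stanley 'probability of return' estimator:
# P_n(0), the density of n-term partial sums at zero, scales as n^(-1/alpha).
estimate_alpha_return <- function(x, ns = 2^(0:5)) {
  x <- x - median(x)                        # crude centering
  p0 <- sapply(ns, function(n) {
    m <- n * (length(x) %/% n)              # drop the remainder
    s <- colSums(matrix(x[1:m], nrow = n))  # n-term partial sums
    d <- density(s)
    approx(d$x, d$y, xout = 0)$y            # kernel estimate of density at 0
  })
  fit <- lm(log(p0) ~ log(ns))
  unname(-1 / coef(fit)[2])                 # slope is -1/alpha
}
estimate_alpha_return(rcauchy(10000))  # estimate of alpha; truth is 1 here
```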

edit: the other 'classical' method is that of Kogon & Williams (S.M. Kogon and Douglas B. Williams, "On Characteristic Function Based Stable Distribution Parameter Estimation Techniques"); see also the Matlab implementation of K&W.

shabbychef
1

@Vincent's answer sounds good, but here is another approach: since you know the characteristic function (Fourier transform) of the distribution, compute the empirical characteristic function of the data and find the parameters that give the best fit in Fourier space.

I think this method should work just as well in theory, and in practice it would avoid a lot of numerical integration to recover the stable densities. I am not coding up a test right now, sorry. Does anyone have any insight on this?
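For concreteness, one way such a fit could look is sketched below (an untested illustration, restricted to the symmetric case $\beta = 0$, where the characteristic function reduces to $\exp(-|\gamma t|^\alpha + i\delta t)$; the function name `ecf_fit` and the grid of $t$ values are arbitrary choices):

```r
# Sketch: match the empirical characteristic function to the stable one.
# Symmetric case only (beta = 0): cf(t) = exp(-|gamma*t|^alpha + 1i*delta*t)
ecf_fit <- function(x, t = seq(0.1, 2, by = 0.1)) {
  ecf <- sapply(t, function(s) mean(exp(1i * s * x)))  # empirical cf
  loss <- function(p) {
    alpha <- 2 * plogis(p[1])            # unconstrained -> (0, 2)
    gamma <- exp(p[2]); delta <- p[3]
    cf <- exp(-abs(gamma * t)^alpha + 1i * delta * t)
    sum(Mod(ecf - cf)^2)                 # squared modulus of the
  }                                      # complex-valued difference
  p <- optim(c(1, log(mad(x)), median(x)), loss)$par
  c(alpha = 2 * plogis(p[1]), gamma = exp(p[2]), delta = p[3])
}
```

Minimizing the squared modulus of the complex difference folds the real and imaginary parts into a single criterion, which is one way to handle the complex-valued fitting concern, at the cost of weighting both parts equally.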

petrelharp
  • This is a clever idea. Note, however, that "just as well" in Fourier space is *not* going to be the same as a good fit in the original space: those are two different criteria of goodness of fit. One is also led to wonder about the potential complications of fitting *complex*-valued functions: suddenly we have a two (real) dimensional problem rather than a one-dimensional one. I'm not trying to suggest these are show-stoppers, but they are concerns that would have to be investigated by anyone working through this approach. – whuber Mar 22 '12 at 15:28