
To estimate the PDF using Shannon (maximum) entropy, I first scaled my original sample as $x'=(x-a)/(b-a)$, where $a=\min(x)$ and $b=\max(x)$, and then found $\text{PDF}(x')= \exp(-\lambda_0-\lambda_1 x'-\lambda_2 x'^2)$ by optimization subject to moment constraints. However, it is customary to plot the PDF against the original sample $x$, so I wonder how to map $\text{PDF}(x')$ back to the original variable $x$.
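For context, the fitting step described above can be sketched as follows. This is not the asker's actual code: the sample is a hypothetical stand-in, and the fit minimizes the standard convex dual of the maximum-entropy problem, $\lambda_0(\lambda_1,\lambda_2)+\lambda_1 m_1+\lambda_2 m_2$, assuming `scipy` is available.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize

# Hypothetical sample standing in for the real data
rng = np.random.default_rng(0)
x = rng.normal(5.0, 1.0, size=1000)

# Scale to [0, 1] as in the question: x' = (x - a) / (b - a)
a, b = x.min(), x.max()
xp = (x - a) / (b - a)
m1, m2 = xp.mean(), (xp**2).mean()  # moment constraints

grid = np.linspace(0.0, 1.0, 2001)

def log_z(lam):
    """log of the normalizer of exp(-lam1*x' - lam2*x'^2) on [0, 1];
    this equals lambda_0 once the density is normalized."""
    lam1, lam2 = lam
    return np.log(trapezoid(np.exp(-lam1 * grid - lam2 * grid**2), grid))

def dual(lam):
    """Convex dual objective; its minimizer matches the moment constraints."""
    return log_z(lam) + lam[0] * m1 + lam[1] * m2

res = minimize(dual, x0=np.zeros(2), method="Nelder-Mead")
lam1, lam2 = res.x
lam0 = log_z(res.x)

def pdf_scaled(t):
    """Fitted maximum-entropy density of the scaled variable x'."""
    return np.exp(-lam0 - lam1 * t - lam2 * t**2)
```

At the optimum the fitted density integrates to one and reproduces the sample moments $m_1$ and $m_2$ on $[0,1]$.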

Thank you.

user43857
  • The intuition is explained at http://stats.stackexchange.com/questions/14483 and http://stats.stackexchange.com/questions/71333. The same question (but with various changes of variables $x\to x^\prime$) appears in multiple threads found with a [search](http://stats.stackexchange.com/search?q=PDF+jacobian). A subtlety here is that some estimation procedures will be highly sensitive to your use of the max and min of the dataset to normalize it, so it is not clear how you would assess the uncertainty or goodness of fit in your estimate. – whuber Apr 17 '14 at 15:33

1 Answer


If $X$ is a continuous random variable with pdf $f(x)$ and $Y=kX+\theta,$ then the pdf of $Y$ is given by $$g(y) = {{1} \over {|k|}}f \left({{y-\theta} \over {k}} \right).$$

In your case you have $x=(b-a)x'+a,$ so the pdf of $X$ is given by $$f(x) = {{1} \over {|b-a|}}h \left( {{x-a} \over {b-a}} \right),$$ where $h$ is the pdf you found in your optimization.
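In code, the mapping might look like the sketch below. The $\lambda$ values and the range $(a,b)$ are hypothetical stand-ins for the fitted multipliers and the sample min/max; the check at the end confirms that the change of variables preserves total probability mass.

```python
import numpy as np
from scipy.integrate import trapezoid

# Hypothetical fitted multipliers and sample range (stand-ins)
lam0, lam1, lam2 = 0.5, -1.0, 2.0
a, b = 3.0, 11.0

def h(t):
    """Fitted density of the scaled variable x' on [0, 1]."""
    return np.exp(-lam0 - lam1 * t - lam2 * t**2)

def f(x):
    """Density of the original variable x on [a, b]:
    f(x) = h((x - a)/(b - a)) / (b - a)."""
    return h((x - a) / (b - a)) / (b - a)

# Change of variables preserves the integral (total mass)
tp = np.linspace(0.0, 1.0, 100_001)
xg = np.linspace(a, b, 100_001)
mass_scaled = trapezoid(h(tp), tp)
mass_original = trapezoid(f(xg), xg)
```

To plot against the real sample, evaluate `f` on a grid over $[a, b]$ rather than rescaling the plotted axis.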

soakley