
According to these two papers (paper 1, p. 6; paper 2, p. 118) on Bayesian inference,

$\beta_{jkc} \sim N(\mu_{\beta jk}, \tau_{\beta j})$

$\mu_{\beta jk} \sim N(\lambda_j, \eta_j)$

So $\beta_{jkc}$ is a random variable with a random mean? Is $\mu_{\beta jk}$ then a conditional mean?

BCLC
  • I could have sworn that I saw a longer version of this question either here on stats.SE or on math.SE (possibly on both) within the past two days. In any case, it is just one in a long series of almost identical questions posted by BCLC, each of which is followed by vigorous denials that the questions are identical. – Dilip Sarwate Sep 14 '15 at 13:19
  • @DilipSarwate I already said [the title was edited](http://math.stackexchange.com/questions/1433567/prove-s-doteq-sum-n-1-infty-p-n-infty-to-prod-n-1-infty-1-p-n#comment2920728_1433567). :| – BCLC Sep 14 '15 at 13:21
  • @Dilip's prediction about denial now confirmed. Funny to see this user switching to stats.SE when the reception of their never-ending strings of self-duplicates on maths.SE starts to deteriorate. – Did Sep 15 '15 at 12:21

1 Answer


This is an example of a hierarchical model specified through conditional distributions. $\mu_{\beta jk}$ is a latent variable, so conditioned on a given value of $\mu_{\beta jk}$, $\beta_{jkc}$ has a normal distribution.
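
Written out, the hierarchy makes the conditioning explicit:

$$p(\beta_{jkc}, \mu_{\beta jk} \mid \lambda_j, \eta_j, \tau_{\beta j}) = p(\beta_{jkc} \mid \mu_{\beta jk}, \tau_{\beta j})\, p(\mu_{\beta jk} \mid \lambda_j, \eta_j),$$

so the mean of $\beta_{jkc}$ is indeed a conditional mean: it is the mean of $\beta_{jkc}$ given a realization of $\mu_{\beta jk}$.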

You can derive the marginal distribution of $\beta_{jkc}$ given the hyperparameters $\lambda_j$ and $\eta_j$, or you can use a Gibbs sampler to do inference.
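
For this normal-normal pair the marginalization has a closed form. A quick sketch, assuming the second argument of $N(\cdot,\cdot)$ denotes a variance (some papers use precisions instead, in which case replace $\tau_{\beta j}$ and $\eta_j$ by their reciprocals): by the laws of total expectation and total variance,

$$E[\beta_{jkc}] = E\big[E[\beta_{jkc} \mid \mu_{\beta jk}]\big] = E[\mu_{\beta jk}] = \lambda_j,$$

$$\operatorname{Var}[\beta_{jkc}] = E\big[\operatorname{Var}(\beta_{jkc} \mid \mu_{\beta jk})\big] + \operatorname{Var}\big(E[\beta_{jkc} \mid \mu_{\beta jk}]\big) = \tau_{\beta j} + \eta_j,$$

and since a normal with a normally distributed mean is again normal, $\beta_{jkc} \mid \lambda_j, \eta_j \sim N(\lambda_j, \tau_{\beta j} + \eta_j)$.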

A good example of this is the [beta-binomial distribution](https://en.wikipedia.org/wiki/Beta-binomial_distribution), where you place a prior on one of the parameters. The derivation of the resulting distribution is on the linked wiki page.

In the beta-binomial case you can calculate the posterior distribution in closed form, but that is not always possible, so one often needs to use sampling techniques to do inference.
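
To make this concrete, here is a minimal sketch (all data and hyperparameter values below are made up for illustration): the beta-binomial posterior follows from the conjugate update, while the normal-normal hierarchy above could be handled with Gibbs steps that draw each latent variable from its full conditional, shown here assuming $\tau$ and $\eta$ are known variances.

```python
import numpy as np
from scipy import stats

# Beta-binomial conjugacy: p ~ Beta(a, b) prior, y | p ~ Binomial(n, p)
# gives the closed-form posterior p | y ~ Beta(a + y, b + n - y).
a, b = 2.0, 2.0   # prior hyperparameters (illustrative values)
n, y = 20, 13     # hypothetical data: 13 successes in 20 trials

posterior = stats.beta(a + y, b + n - y)
print("posterior mean of p:", posterior.mean())

# When no closed form exists, a Gibbs sampler cycles through the full
# conditionals. For the normal-normal hierarchy (tau, eta known variances),
# the full conditional of mu given beta and lambda is itself normal:
def gibbs_step_mu(beta, lam, tau, eta, rng):
    """Draw mu | beta, lambda ~ N(m, 1/prec) with
    prec = 1/tau + 1/eta and m = (beta/tau + lam/eta)/prec."""
    prec = 1.0 / tau + 1.0 / eta
    m = (beta / tau + lam / eta) / prec
    return rng.normal(m, np.sqrt(1.0 / prec))

rng = np.random.default_rng(0)
print("one Gibbs draw of mu:", gibbs_step_mu(beta=1.3, lam=0.0, tau=1.0, eta=1.0, rng=rng))
```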

Gumeo
  • '...conditioned on a given value...' --> Is the $\mu$ in the papers kind of like the p in [An Essay towards solving a Problem in the Doctrine of Chances](https://en.wikipedia.org/wiki/An_Essay_towards_solving_a_Problem_in_the_Doctrine_of_Chances)? – BCLC Sep 14 '15 at 13:24
  • Yes. You have to think about this in terms of prior and posterior distributions. You assume that $\beta$ has a normal distribution conditional on knowing $\mu$ (this is your data distribution). Then you assume that $\mu$ has a normal distribution (which is your prior). You can then find the posterior distribution of $\beta$ given values of the hyperparameters $\lambda$ and $\eta$. I think the notation that you have is either wrong or non-standard, because $\beta$ should be conditioned on $\mu$ when you specify it like this. – Gumeo Sep 14 '15 at 13:29
  • Thanks Guðmundur Einarsson. Is data distribution = posterior distribution? – BCLC Sep 14 '15 at 13:30
  • No. You start by specifying a data distribution $P(y|\theta)$, i.e. how the model behaves on the data given a set of parameters. Then you specify a prior on the parameters, $P(\theta)$. Then you use Bayes' rule and get the posterior $P(\theta|y)=\frac{P(y|\theta)\cdot P(\theta)}{P(y)}$. The posterior is the distribution of the parameters given the data $y$ that you have. – Gumeo Sep 14 '15 at 13:34
  • Oh [base rate fallacy again](http://stats.stackexchange.com/questions/166845/is-this-posterior-probability-integral-right#comment316537_166871). Thanks :)) – BCLC Sep 14 '15 at 13:39