2

I would like to know why in the below formula the prior distribution of theta is not conditioned on X (observations):

$$P(\theta|X, y)=\frac{P(y|X, \theta)P(\theta)}{P(y|X)}$$

In my understanding, the correct formula should be:

$$P(\theta | X, y) = \frac{P(y| X, \theta) P(\theta| X)}{P(y|X)}$$

But I think I am missing something.

Lerner Zhang
  • Why do you think this should be correct? – Tim Sep 09 '20 at 17:21
  • P(theta | y) = P(y | theta) P(theta) / P(y), right? We added a new condition to get P(theta | y, X). The entire formula has changed except the prior P(theta). Why does this make sense? If we condition on X, then both the likelihood and the prior should be conditioned on X. – Amir Jalilifard Sep 09 '20 at 17:23
  • Is this some sort of regression where X is used to explain Y? – TrynnaDoStat Sep 09 '20 at 17:49

2 Answers

5

This must be in the context of a regression model $P(y \mid X, \theta)$ where $X$ is considered a constant. This is usually done for reasons of inference, as explained in What is the difference between conditioning on regressors vs. treating them as fixed?.

Then, for inference on $\theta$, it does not matter whether you write the prior as $p(\theta)$ or as $p(\theta \mid X)$: since $X$ is treated as fixed and known, conditioning on it changes nothing.
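As a toy illustration of this point (the data, the standard-normal prior, and the noise level below are all made up for the sketch, not taken from the question): in a grid approximation of the posterior for a slope $\theta$ in $y = \theta x + \varepsilon$, the prior term simply never uses $X$, while the likelihood does.

```python
import numpy as np

# Sketch: grid posterior p(theta | X, y) for y = theta * x + noise,
# with X treated as fixed, known constants (not random variables).
rng = np.random.default_rng(0)
X = np.array([0.5, 1.0, 1.5, 2.0, 2.5])      # fixed design points
theta_true = 2.0
y = theta_true * X + rng.normal(0.0, 0.5, size=X.shape)

theta_grid = np.linspace(-5.0, 5.0, 2001)

# Prior p(theta): standard normal, up to a constant. Note it takes no
# X argument -- conditioning on a fixed X would not change it.
log_prior = -0.5 * theta_grid**2

# Likelihood p(y | X, theta), Gaussian noise with known sd 0.5.
resid = y[None, :] - theta_grid[:, None] * X[None, :]
log_lik = -0.5 * np.sum((resid / 0.5) ** 2, axis=1)

# Posterior p(theta | X, y), normalised numerically over the grid
# (the normaliser plays the role of p(y | X)).
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, theta_grid)

theta_map = theta_grid[np.argmax(post)]
print(theta_map)
```

The posterior mode lands near the true slope of 2; writing the prior as `p(theta | X)` would change nothing in this computation, because `log_prior` has no dependence on `X` to begin with.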

kjetil b halvorsen
3

Regressors in regression are often assumed to be fixed. In other words, they are treated as fixed real vectors, not random variables. Typically, they are used to predict or explain the random variable $Y$. Note that regressors don't have to be fixed, but they are in ordinary least squares and many of its variants.

Assuming this is a regression where $X$ are the regressors and $Y$ is the dependent variable, $P(\theta|X) = P(\theta)$ because $X$ is just some fixed value.
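The same conclusion can be spelled out with Bayes' rule, under the assumption that $X$ (if modelled as random at all) carries no prior information about $\theta$, i.e. $X \perp \theta$ a priori:

```latex
% If X is independent of \theta a priori, the conditioned prior
% collapses to the unconditioned one:
P(\theta \mid X) = \frac{P(X \mid \theta)\, P(\theta)}{P(X)}
                 = \frac{P(X)\, P(\theta)}{P(X)}
                 = P(\theta).
% Substituting into the asker's proposed formula recovers the
% textbook version:
P(\theta \mid X, y) = \frac{P(y \mid X, \theta)\, P(\theta \mid X)}{P(y \mid X)}
                    = \frac{P(y \mid X, \theta)\, P(\theta)}{P(y \mid X)}.
```

So the formula the asker proposes is not wrong; it simply reduces to the one in the question whenever the prior on $\theta$ does not depend on the design $X$.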

TrynnaDoStat