What does conditioning on a random variable mean?
For example: in p(X|Y), X and Y are the random variables, so does the conditioning on Y mean Y is fixed (or non-random)?
Conditioning on a random variable is much more subtle than conditioning on an event.
Recall that for an event $B$ with $P(B) > 0$ we define the conditional probability given $B$ by $$ P(A \mid B) = \frac{P(A \cap B)}{P(B)} $$ for every event $A$. This defines a new probability measure $P(\ \cdot\mid B)$ on the underlying probability space, and if $X$ is a random variable which is either non-negative or $P$-integrable, then we have $$ E[X \mid B] = \int X \, dP(\ \cdot\mid B) = \frac{1}{P(B)} \int X \mathbf{1}_B \, dP. $$ The intuitive interpretation is that $E[X \mid B]$ is the "best guess" for the value $X$ takes, knowing that the event $B$ actually happens. This intuition is justified by the last integral above: we integrate $X$ with respect to $P$, but only on the event $B$ (dividing by $P(B)$ is due to us concentrating all our attention on $B$ and hence re-weighting $B$ to have probability $1$).
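To make the formula concrete, here is a minimal Monte Carlo sketch (a hypothetical example with made-up quantities, not part of the original answer): $X$ is the sum of two fair dice and $B$ is the event that the first die shows a 6, so the exact value is $E[X \mid B] = 6 + 3.5 = 9.5$.

```python
import random

# A Monte Carlo sketch of E[X | B] = (1 / P(B)) * E[X 1_B].
# Hypothetical example: X = sum of two fair dice, B = {first die shows 6}.
# Exact answer: E[X | B] = 6 + 3.5 = 9.5.

random.seed(0)
n = 1_000_000
total, count = 0.0, 0
for _ in range(n):
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    if d1 == 6:               # keep only outcomes where B happens
        total += d1 + d2      # accumulate X on the event B
        count += 1

# total / n estimates E[X 1_B], and count / n estimates P(B),
# so the ratio total / count estimates E[X | B].
print(total / count)          # close to 9.5
```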
That's the easy case. To understand conditioning on a random variable, we need the more general idea of conditioning on information. A probability measure by itself gives us prior probabilities for all possible events. But probabilities that certain events happen change if we know that certain other events do or do not happen. That is, when we have information about whether certain events happen or not, we can update our probabilities for the remaining events.
Formally, suppose $\mathcal{G}$ is a $\sigma$-algebra of events. Assume that it is known whether each event in $\mathcal{G}$ happens or not. We want to define the conditional probability $P(\ \cdot\mid \mathcal{G})$ and the conditional expectation $E[\ \cdot\mid \mathcal{G}]$. The conditional probability $P(A \mid \mathcal{G})$ should reflect our updated probability of an event $A$ after knowing the information contained in $\mathcal{G}$, and $E[X \mid\mathcal{G}]$ should be our "best guess" for the value of a random variable $X$ using the information contained in $\mathcal{G}$.
(NB: Why should $\mathcal{G}$ be a $\sigma$-algebra and not a more general collection of events? Because if $\mathcal{G}$ weren't a $\sigma$-algebra but we knew whether each event in $\mathcal{G}$ happens or not, then we would know whether each event in the $\sigma$-algebra generated by $\mathcal{G}$ happens or not, so we might as well replace $\mathcal{G}$ with $\sigma(\mathcal{G})$.)
Here's where things get interesting. $P(A \mid\mathcal{G})$ is no longer just a number: it is a random variable! We define $P(A \mid\mathcal{G})$ to be any $\mathcal{G}$-measurable random variable $X$ such that $$ E[X \mathbf{1}_B] = P(A \cap B) $$ for every event $B \in \mathcal{G}$. Moreover, if $X$ and $X^\prime$ are two random variables satisfying this definition, then $X = X^\prime$ almost surely. That is pretty abstract stuff, so hopefully an example can shed some light on the abstraction.
Example. Let $(\Omega, \mathcal{F}, P)$ be a probability space, and let $B \in \mathcal{F}$ be an event with $0 < P(B) < 1$. Suppose $\mathcal{G} = \{\emptyset, B, B^c, \Omega\}$. That is, $\mathcal{G}$ is the $\sigma$-algebra containing all the information about whether $B$ happens or not. Then for any event $A \in \mathcal{F}$ we have $$ P(A \mid \mathcal{G}) = P(A \mid B) \mathbf{1}_B + P(A \mid B^c) \mathbf{1}_{B^c}. $$ That is, for an outcome $\omega \in \Omega$, we have $$ P(A \mid \mathcal{G})(\omega) = P(A \mid B) $$ if $\omega \in B$ (i.e., if $B$ happens), and $$ P(A \mid \mathcal{G})(\omega) = P(A \mid B^c) $$ if $\omega \notin B$ (i.e., if $B$ doesn't happen). It is easy to check that this random variable actually satisfies the definition of the conditional probability $P(A \mid \mathcal{G})$ defined above.
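As a quick sanity check (a hypothetical numerical example, not part of the original answer), the following sketch builds this random variable for a single fair die, with $B = \{\text{even roll}\}$ and $A = \{\text{roll} \leq 3\}$, and verifies the defining property $E[X \mathbf{1}_C] = P(A \cap C)$ for every $C \in \mathcal{G}$ using exact rational arithmetic:

```python
from fractions import Fraction as F

# Exact check of the example above for a single fair die:
# Omega = {1,...,6}, B = {even roll}, A = {roll <= 3}.  We build the
# random variable X = P(A | G) and verify E[X 1_C] = P(A n C) for
# every event C in G = {empty set, B, B^c, Omega}.

omega = set(range(1, 7))
P = {w: F(1, 6) for w in omega}            # uniform probability measure

def prob(event):
    return sum((P[w] for w in event), F(0))

B, A = {2, 4, 6}, {1, 2, 3}
Bc = omega - B
pAB = prob(A & B) / prob(B)                # P(A | B)   = 1/3
pABc = prob(A & Bc) / prob(Bc)             # P(A | B^c) = 2/3

X = {w: (pAB if w in B else pABc) for w in omega}   # the r.v. P(A | G)

for C in (set(), B, Bc, omega):            # every event in G
    assert sum((X[w] * P[w] for w in C), F(0)) == prob(A & C)
print("E[X 1_C] = P(A n C) holds for every C in G")
```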
I mentioned already that conditional probabilities aren't unique, but they are unique almost surely. It turns out that if $X$ is a non-negative or integrable random variable and $\mathcal{G}$ is a $\sigma$-algebra of events, then it is possible to choose versions of the conditional probabilities $Q(B \mid \mathcal{G}) = P(X \in B \mid \mathcal{G})$ for all Borel subsets $B$ of $\mathbb{R}$ in such a way that $Q(\ \cdot \mid \mathcal{G})(\omega)$ is a Borel probability measure on $\mathbb{R}$ for each outcome $\omega$; such a choice is called a regular conditional distribution of $X$ given $\mathcal{G}$. Given this possibility, we may define $$ E[X\mid\mathcal{G}]=\int_{\mathbb{R}} x \, Q(dx\mid\mathcal{G}), $$ which is again a random variable. It can be shown that this is the almost surely unique random variable $Y$ which is $\mathcal{G}$-measurable and satisfies $$ E[Y \mathbf{1}_A] = E[X \mathbf{1}_A] $$ for all $A \in \mathcal{G}$.
With the general definitions of conditional probability and conditional expectation above, we may easily define what it means to condition on a random variable $Y$: it means conditioning on the $\sigma$-algebra generated by $Y$: $$ \sigma(Y) = \big\{\{Y \in B\} : \text{$B$ is a Borel subset of $\mathbb{R}$}\big\}. $$ I said "easy to define," but I am aware that that doesn't mean "easy to understand." At least we can now say what an expression like $E[X \mid Y]$ means: it is a random variable that satisfies $$ E[E[X \mid Y] \mathbf{1}_A] = E[X \mathbf{1}_A] $$ for every event $A$ of the form $A = \{Y \in B\}$ for some Borel subset $B$ of $\mathbb{R}$. That's abstract! Fortunately, there are easy ways to work with $E[X \mid Y]$ if $Y$ is discrete or absolutely continuous.
Suppose $Y$ takes values in a countable set $S \subseteq \mathbb{R}$. Then it can be shown that $$ P(A \mid Y)(\omega) = P(A \mid Y = Y(\omega)) $$ for each outcome $\omega$. The right-hand side above is shorthand for the more verbose $$ P(A \mid \{Y = Y(\omega)\}), $$ where $\{Y = Y(\omega)\}$ is the event $$ \{Y = Y(\omega)\} = \{\omega^\prime : Y(\omega^\prime) = Y(\omega)\}. $$ That is, if our outcome is $\omega$, and $Y(\omega) = k$, then $$ P(A \mid Y)(\omega) = P(A \mid Y = k) = \frac{P(A \cap \{Y = k\})}{P(Y = k)}, $$ which is well defined since $P(Y = Y(\omega)) > 0$ for almost every $\omega$. Similarly, if $X$ is another random variable taking values in $S$, then we have $$ E[X \mid Y](\omega) = E[X \mid Y = Y(\omega)] = \sum_{x \in S} x P(X = x \mid Y = Y(\omega)). $$
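For instance (a hypothetical example, not from the original answer), the following sketch computes $E[X \mid Y = k]$ directly from a joint pmf, with $Y$ the first of two fair dice and $X$ their sum; the exact answer is $E[X \mid Y = k] = k + 7/2$.

```python
from fractions import Fraction as F
from itertools import product

# A sketch of the discrete formula E[X | Y = k] = sum_x x P(X = x | Y = k).
# Hypothetical example: two fair dice, Y = first die, X = sum of both.
# Exact answer: E[X | Y = k] = k + 7/2.

joint = {}                                    # joint pmf of (X, Y)
for d1, d2 in product(range(1, 7), repeat=2):
    key = (d1 + d2, d1)                       # (X, Y) for this outcome
    joint[key] = joint.get(key, F(0)) + F(1, 36)

def cond_exp(k):
    p_k = sum(p for (x, y), p in joint.items() if y == k)         # P(Y = k)
    return sum(x * p for (x, y), p in joint.items() if y == k) / p_k

assert all(cond_exp(k) == k + F(7, 2) for k in range(1, 7))
print([str(cond_exp(k)) for k in range(1, 7)])   # 9/2, 11/2, ..., 19/2
```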
Suppose now that $Y$ is absolutely continuous with density $f_Y$. Let $X$ be another absolutely continuous random variable, with density $f_X$, and let $f_{X, Y}$ be the joint density of $X$ and $Y$. Then, for every $y$ with $f_Y(y) > 0$, we define the conditional density of $X$ given $Y = y$ by $$ f_{X\mid Y}(x \mid y) = \frac{f_{X, Y}(x, y)}{f_Y(y)} = \frac{f_{X, Y}(x, y)}{\int_{\mathbb{R}} f_{X, Y}(x^\prime, y) \, dx^\prime}. $$ Now we may define a function $g : \mathbb{R} \to \mathbb{R}$ given by $$ g(y) = E[X \mid Y = y] = \int_{\mathbb{R}} x f_{X \mid Y}(x \mid y) \, dx. $$ In particular, $g(y) = E[X \mid Y = y]$ is a real number for each $y$. Using this $g$, we can show that $$ E[X \mid Y] = g(Y), $$ meaning that $$ E[X \mid Y](\omega) = g(Y(\omega)) = E[X \mid Y = Y(\omega)] $$ for each outcome $\omega$.
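As a numerical illustration (again a hypothetical example), the sketch below approximates $g(y)$ by discretizing the two integrals for a standard bivariate normal pair with correlation $\rho = 0.6$, where the known closed form is $g(y) = \rho y$:

```python
import numpy as np

# Numerical sketch of g(y) = E[X | Y = y] = integral of x f_{X|Y}(x | y) dx.
# Hypothetical example: (X, Y) standard bivariate normal with correlation
# rho = 0.6, for which the known closed form is g(y) = rho * y.

rho = 0.6
grid = np.linspace(-10.0, 10.0, 4001)      # integration grid for x
dx = grid[1] - grid[0]

def f_XY(x, y):                            # joint density of (X, Y)
    z = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    return np.exp(-z / 2) / (2 * np.pi * np.sqrt(1 - rho**2))

def g(y):
    f_y = np.sum(f_XY(grid, y)) * dx       # f_Y(y) = integral of f_{X,Y}(x, y) dx
    f_cond = f_XY(grid, y) / f_y           # conditional density f_{X|Y}( . | y)
    return np.sum(grid * f_cond) * dx      # integral of x f_{X|Y}(x | y) dx

for y in (-1.0, 0.0, 2.5):
    print(y, g(y), rho * y)                # g(y) should be close to 0.6 * y
```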
This is just scratching the surface of the theory of conditioning. For a great reference, see chapters 21 and 23 of *A Modern Approach to Probability Theory* by Fristedt and Gray.
Some Takeaways
- Conditioning on a random variable is different from conditioning on an event.
- Expressions like $P(A \mid Y)$ and $E[X \mid Y]$ are random variables.
- Expressions like $P(A \mid Y = y)$ and $E[X \mid Y = y]$ are real numbers.
Conditioning on an event (such as a particular specification of a random variable) means that this event is treated as being known to have occurred. This still allows us to specify conditioning on an event $\{ Y=y \}$ where the actual value $y$ is an algebraic variable that falls within some range.$^\dagger$ For example, we might specify the conditional density:
$$p_{X|Y}(x|y) = p(X=x | Y=y) = {y \choose x} \frac{1}{2^y} \quad \quad \quad \text{for all integers } 0 \leqslant x \leqslant y.$$
This refers to the probability density for the random variable $X$ conditional on the known event $\{ Y=y \}$, where we are free to set any $y \in \mathbb{N}$. The use of the variable $y$ in this formulation simply means that the conditional distribution has a form that allows us to substitute a range of values for this variable, so we write it as a function of the conditioning value as well as the argument value for the random variable $X$. Regardless of which particular value $y$ we choose, the resulting density is conditional on that event being treated as known, i.e., no longer random.
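For a quick illustration (a hypothetical sketch, not part of the original answer), note that this conditional density says $X \mid Y = y \sim \text{Binomial}(y, 1/2)$; the snippet below evaluates it for a few values of $y$ and checks that each conditional pmf sums to $1$, as any conditional distribution must:

```python
from math import comb

# Evaluate the conditional density above: p_{X|Y}(x | y) = C(y, x) / 2**y,
# i.e. X | Y = y ~ Binomial(y, 1/2).  For each fixed y, the probabilities
# over x = 0, ..., y must sum to 1.

def p_cond(x, y):
    return comb(y, x) / 2**y

for y in (1, 4, 10):
    pmf = [p_cond(x, y) for x in range(y + 1)]
    assert abs(sum(pmf) - 1.0) < 1e-12
    print(y, pmf)
```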
As I have stated in another answer here, it is also worth noting that many theories of probability regard all probability to be conditional on implicit information. This idea is most famously associated with the axiomatic approach of the mathematician Alfréd Rényi (see e.g., Kaminski 1984). Rényi argued that every probability measure must be interpreted as being conditional on some underlying information, and that reference to marginal probabilities was merely a reference to probability where the underlying conditions are implicit, rather than explicit.
$^\dagger$ Technically, it's worth noting that if we condition on the value of a continuous random variable (an event with probability zero), then there is an extended definition of the conditional probability. Essentially, this is just a function that satisfies the required integral statement for the marginal probability. In the present answer we will stick to discrete random variables to keep things simple.
It means that the value of the random variable $Y$ is known. For example, suppose $E(X \mid Y) = 10 + Y^2$. Then if $Y = 2$, we have $E(X \mid Y = 2) = 10 + 2^2 = 14$.