
Prove/Disprove $E[1_A | \mathscr{F_t}] = 0 \ \text{or} \ 1 \ \text{a.s.} \ \Rightarrow E[1_A | \mathscr{F_{s}}] = E[1_A | \mathscr{F_t}] \ \text{a.s.}$


Given a filtered probability space $(\Omega, \mathscr{F}, \{\mathscr{F}_n\}_{n \in \mathbb{N}}, \mathbb{P})$, let $A \in \mathscr{F}$.

Suppose $$\exists t \in \mathbb{N} \ \text{s.t.} \ E[1_A | \mathscr{F_t}] = 1 \ \text{a.s.}$$ Does it follow that $$E[1_A | \mathscr{F_{s}}] = E[1_A | \mathscr{F_t}] \ \text{a.s.} \ \forall s > t \ ?$$ What about $\forall s < t$?

What if instead $$\exists t \in \mathbb{N} \ \text{s.t.} \ E[1_A | \mathscr{F_t}] = 0 \ \text{a.s.} \ ?$$ Or what if $$E[1_A | \mathscr{F_t}] = p \ \text{a.s.} \ \text{for some} \ p \in (0,1) \ ?$$


What I tried:


If $\Bbb E[1_A|\mathscr F_t]=1$ a.s., then taking expectations gives $\Bbb E[1_A]=\Bbb P(A)=1$, which is the same as $1_A=1$ almost surely. In this case $\Bbb E[1_A|\mathscr F_s]=1$ almost surely for every $s$.

Likewise, if $\Bbb E[1_A|\mathscr F_t]=0$ a.s., then $\Bbb E[1_A]=\Bbb P(A)=0$, which is the same as $1_A=0$ almost surely. In this case $\Bbb E[1_A|\mathscr F_s]=0$ almost surely for every $s$.

If $\Bbb E[1_A|\mathscr F_t]=p$ for a constant $p\in(0,1)$, then for $s<t$ we have $\mathscr F_s\subset\mathscr F_t$, so the tower property gives

$\Bbb E[1_A|\mathscr F_s]=\Bbb E[\Bbb E[1_A|\mathscr F_t]|\mathscr F_s] = \Bbb E[p|\mathscr F_s] = p \ \text{a.s.}$ This may fail if $s>t$.

Alternatively, for the $=p$ case:

Let $F$ be a bounded $\mathscr F_t$-measurable random variable.

$$\Bbb E[1_A\cdot F]=\Bbb E[E[1_A\cdot F|\mathscr F_t]]=\Bbb E[F\cdot E[1_A|\mathscr F_t]]$$

$$=\Bbb E[p\cdot F]=p\Bbb E[F]=\Bbb E[1_A]\cdot\Bbb E[F]$$

meaning that $1_A$ and $F$ are independent. In other words, $\sigma(A)$ and $\mathscr F_t$ are independent. So $\sigma(A)$ and $\mathscr F_s$ are also independent if $s<t$ and hence $E[1_A|\mathscr F_s] = E[1_A] = p$ . This may fail if $s>t$.

I guess the idea is that a constant is both independent of $\mathscr F_s$ and $\mathscr F_s$-measurable.
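The $=p$ case can be checked on a concrete finite space. Below is a minimal sketch (the coin-flip space, the choice of $A$, and the helper `cond_exp` are my own illustrative choices, not from the question): with two independent fair flips and $t=1$, $E[1_A|\mathscr F_s]=p$ for $s\le t$, but for $s>t$ the conditional expectation is $1_A$, not $p$.

```python
from fractions import Fraction

# Two independent fair coin flips; Omega = {"HH","HT","TH","TT"}, uniform.
Omega = ["HH", "HT", "TH", "TT"]
P = {w: Fraction(1, 4) for w in Omega}

def cond_exp(A, partition):
    """E[1_A | sigma(partition)]: constant P(A & B)/P(B) on each block B."""
    out = {}
    for B in partition:
        pB = sum(P[w] for w in B)
        pAB = sum(P[w] for w in B if w in A)
        for w in B:
            out[w] = pAB / pB
    return out

A = {"HH", "TH"}                      # second flip is heads, so p = 1/2
F0 = [set(Omega)]                     # trivial sigma-algebra
F1 = [{"HH", "HT"}, {"TH", "TT"}]     # sigma(first flip): t = 1
F2 = [{w} for w in Omega]             # full information: s = 2 > t

# E[1_A | F_1] = p a.s., and E[1_A | F_0] = p as well (s < t)...
assert all(v == Fraction(1, 2) for v in cond_exp(A, F1).values())
assert all(v == Fraction(1, 2) for v in cond_exp(A, F0).values())
# ...but for s > t the conditional expectation is 1_A, not p.
assert cond_exp(A, F2) == {w: (1 if w in A else 0) for w in Omega}
```

On a finite space with a partition-generated $\sigma$-algebra, the conditional expectation of $1_A$ is just $P(A\cap B)/P(B)$ on each block $B$, which is all the helper computes.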

BCLC

1 Answer


Your argument seems to be valid, but you start off by assuming that $E[1_A | \mathscr{F_t}] = 1$. However, the question states that $E[1_A | \mathscr{F_t}] \in \{0, 1\}$, which I would take to mean that the random variable $E[1_A | \mathscr{F_t}]$ takes values in the set $\{0, 1\}$, i.e. $E[1_A | \mathscr{F_t}]=1_B$ for some $B\in\mathscr{F_t}$. The defining property of this conditional expectation is that $\int_F 1_B \, d\mathbb{P}=\int_F 1_A \, d\mathbb{P}$ for all $F\in\mathscr{F_t}$. In particular, taking $F=B$ gives $P(B)=P(A\cap B)$, from which we can conclude that $B\subset A$ (except possibly on a set of probability zero). However, we also know (as in the argument you have written) that $E[1_A]=E[E[1_A | \mathscr{F_t}]] = E[1_B]$, i.e. $P(A)=P(B)$, so the only possible conclusion is that $A=B$ (except possibly for a set of probability zero).

For $s\gt t$, $\mathscr{F_t}\subset\mathscr{F_s}$, so the tower law for conditional expectations implies that $E[1_A | \mathscr{F_t}]=E[E[1_A | \mathscr{F_t}] | \mathscr{F_s}]$. But $E[1_A | \mathscr{F_t}]=1_A$, so $E[1_A | \mathscr{F_s}]=1_A$. So all the conditional expectations for $s>t$ are equal (to $1_A$). For $s<t$, if $A\in\mathscr{F_s}$ then we will still have $E[1_A | \mathscr{F_s}]=1_A$. On the other hand, if we go back to a time where $A$ is not in $\mathscr{F_s}$, then I don't think anything can be said about $E[1_A | \mathscr{F_s}]$ in general. For a concrete example, see this paper, Figure 1. Taking $A=\{\omega_2\}\in\mathscr{F_2}\setminus\mathscr{F_1}$, for example, gives the sequence of conditional expectations $E[1_A | \mathscr{F_0}]=\frac{1}{8} 1_\Omega$, $E[1_A | \mathscr{F_1}]=\frac{1}{2}1_{\{\omega_1,\omega_2\}}$, $E[1_A | \mathscr{F_2}]=1_{\{\omega_2\}}$, $E[1_A | \mathscr{F_3}]=1_{\{\omega_2\}}$.
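Since the paper's Figure 1 isn't reproduced here, the following sketch reconstructs a filtration consistent with the quoted values; the particular blocks in `F1` and `F2` below are assumptions (only the block containing $\omega_2$ is pinned down by the stated numbers $\frac18, \frac12, 1, 1$):

```python
from fractions import Fraction

# Uniform measure on an 8-point space standing in for the paper's Figure 1.
Omega = list(range(1, 9))            # omega_1, ..., omega_8
P = {w: Fraction(1, 8) for w in Omega}

def cond_exp(A, partition):
    """E[1_A | sigma(partition)]: constant P(A & B)/P(B) on each block B."""
    out = {}
    for B in partition:
        pB = sum(P[w] for w in B)
        pAB = sum(P[w] for w in B if w in A)
        for w in B:
            out[w] = pAB / pB
    return out

A = {2}                                   # the event {omega_2}
F0 = [set(Omega)]                         # trivial sigma-algebra
F1 = [{1, 2}, {3, 4}, {5, 6}, {7, 8}]     # block containing omega_2 is {1, 2}
F2 = [{1}, {2}, {3, 4}, {5, 6}, {7, 8}]   # {omega_2} becomes its own block
F3 = [{w} for w in Omega]                 # full information

for name, F in [("F0", F0), ("F1", F1), ("F2", F2), ("F3", F3)]:
    print(name, cond_exp(A, F)[2])   # at omega_2: 1/8, 1/2, 1, 1
```

Once $A$ is measurable (from $\mathscr{F}_2$ on), the conditional expectation freezes at $1_A$, exactly as the tower-law argument predicts.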

S. Catterall
  • Thanks S. Catterall. How do you know (1) $P(B) = P(A \cap B) \Rightarrow B \subseteq A$? (2) $E[1_A | \mathscr{F}_t] = 1_A$? Also going to edit question. Sorry for any inconvenience. I'm going to use some of your insight for edit – BCLC Nov 18 '15 at 00:37
  • Let me try to summarise in natural language; a filtration corresponds to an increasingly finer subdivision of the outcome space, and the conditional expectation of event $A$ w.r.t. successive elements of the filtration ("as more information becomes available") becomes more peaked around the event (at the initial state of information $\mathscr{F}_0$ it is just the uniform distribution). The stopping time is the stochastic level set surface of the process (in the paper, the outcome variable is binary, and value $0$ was chosen). – ocramz Nov 18 '15 at 01:39
  • @ocramz and S. Catterall, done editing. How is it pls? ^-^ – BCLC Nov 18 '15 at 01:40
  • In this picture, if we are measuring event $A$, but the sample process ends up in a configuration $\omega_i$ that doesn't belong to $A$, $A$ becomes effectively "unknowable" (measure $0$). Is this description correct? Moreover, how the conditional expectations at consecutive times behave reminds me of the iterative Bayes' process, is there a connection between these concepts? @S. Catterall – ocramz Nov 18 '15 at 01:52
  • Reread answer. Still not sure how $E = 1_A$ like I asked earlier. Is $1_A$ measurable in $F_t$? A is an event, but I don't see how A is in $F_t$ – BCLC Nov 18 '15 at 15:20
  • @BCLC Thanks for editing the question, it is clearer now and maybe I misunderstood the precise nature of what you were asking. Your Case 1 and Case 2 are covered by my answer above, by taking $B=\Omega$ and $B=\emptyset$ respectively. I'm not sure if your argument $\exists C...$ is valid; I would recommend showing that $E[1_A | \mathscr{F_t}]=1_A$ and then use the tower law as in the answer above. For Case 3, as you correctly point out, we can't conclude that $E[1_A | \mathscr{F_{s}}] = E[1_A | \mathscr{F_t}]$. See the example in the paper with $t=0$ and $A=\{\omega_2\}$ (so $p=\frac{1}{8}$). – S. Catterall Nov 18 '15 at 22:12
  • @S.Catterall Oh right. I thought of a flaw in my 1C thing but forgot to edit. I'll edit later. thanks. I don't accept your thanks because I'm the one w/ the question ^-^ – BCLC Nov 18 '15 at 22:16
  • In answer to the questions in your first comment: if $P(B) = P(A \cap B)$ then, because $B$ is a disjoint union of $A\cap B$ and $A^c\cap B$ we must have $P(A^c\cap B)=0$, which means that $B\subset A$ a.s. Now, in the same way, we can use $P(A)=P(B)$ to conclude that $A=B\in \mathscr{F_t}$ a.s. – S. Catterall Nov 18 '15 at 22:30
  • Right symmetry. Thanks S. Catterall. Edited OP. How is it? – BCLC Nov 25 '15 at 13:41
  • @BCLC I've checked your edits and it looks much better now, great. I assume that when you write $\Bbb E[1_A\cdot F]$ you actually mean $\Bbb E[1_A\cdot 1_F]$ where $F\in \mathscr{F_t}$. – S. Catterall Nov 29 '15 at 12:00
  • @S.Catterall Thanks! Edited. F is supposed to be any bounded $\mathscr F_t$-measurable random variable. That works too – BCLC Nov 29 '15 at 12:06