Informally, it is not what you actually know; it is what the symbols say you should "pretend" to know and not know.
So $E(X)$ says that you should calculate the expected value of $X$ in a situation where you don't know anything about any event having happened or not having happened. $E(X\mid A)$, on the other hand, says that you should calculate the expected value of $X$ in a situation where you assume that event $A$ has happened.
Even in a temporal setting, when looking at $E(X_{t})$ we mean "expected value of $X_{t}$ in a situation where we don't know anything", while with $E[X_t \mid \mathcal{F}_{t-1}]$ we mean "expected value of $X_{t}$ in a situation where we know what $\mathcal{F}_{t-1}$ can tell us".
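As a quick numerical illustration of the "pretend to know / not know" idea, here is a toy example of my own (a single fair die roll $X$ and the event $A = \{X > 3\}$; it is not part of the setup below):

```python
# Toy illustration (hypothetical example): a single fair die roll X.
# E(X) is computed over the full sample space ("pretend to know nothing");
# E(X | A) is computed only over outcomes consistent with A ("pretend A happened").
outcomes = [1, 2, 3, 4, 5, 6]            # equally likely outcomes of X
E_X = sum(outcomes) / len(outcomes)      # unconditional expectation: 3.5
A = [x for x in outcomes if x > 3]       # the event A = {X > 3} = {4, 5, 6}
E_X_given_A = sum(A) / len(A)            # conditional expectation: 5.0
print(E_X, E_X_given_A)
```

Both quantities are plain averages; they differ only in which outcomes we "pretend" are possible.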
WORKED OUT EXAMPLE
Following the discussion in the comments, the OP describes the following situation: consider an experiment where we toss two fair coins, perhaps sequentially but independently, and we do this twice. Let $S_2$ represent the number of heads the second time we do that, and let $S_2$ be stochastically dependent on what happened in the first period. Consider the information set $\mathcal{F}_1 = \{ \{HH, HT \}, \{ TT, TH \} \}$. How will we obtain the conditional expectation function $E(S_2 \mid \mathcal{F}_1)$?
Well, we first have to determine the structure of the stochastic dependence, and this can take many different forms. Moreover, the conditional expectation function should be completely determined once we "feed" it one of the events under consideration.
So we assume that the two coins are indeed tossed sequentially in each period, and that what matters for the outcome in the second period is whether the first coin comes up heads or not. This is binary, so we can encode the information set by defining the indicator function
$$I_1 \equiv I\{ \{HH, HT \}\}$$
So this random variable takes the value $1$ if we get heads on the first coin in the first period, and zero otherwise. We now assume the following:
If we get heads on the first coin in the first period, we will get $S_2 = 2$; if we don't, we will get $S_2 = 0$. Then, we obtain
$$E(S_2 \mid \mathcal{F}_1) = 2I_1$$
which is a function, not a specific value. This should satisfy the defining property of the conditional expectation, $E[E(S_2 \mid \mathcal{F}_1)]=E(S_2)$. Does it?
We have, since under the assumed dependence $S_2$ equals $2$ or $0$ with probability $0.5$ each,
$$E(S_2) = 2\cdot 0.5 + 0\cdot 0.5 = 1$$
while
$$E[E(S_2 \mid \mathcal{F}_1)] = E(2I_1) = 2E(I_1)= 2\cdot 0.5 =1$$
It does.
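The whole calculation can also be checked by brute-force enumeration. Here is a short Python sketch of my own (the outcome encoding and function names are illustrative assumptions, not part of the original setup):

```python
# Enumerate the four equally likely first-period outcomes: HH, HT, TH, TT.
first_period = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]

def I1(outcome):
    # Indicator of the event {HH, HT}: heads on the first coin in period 1.
    return 1 if outcome[0] == "H" else 0

def cond_exp_S2(outcome):
    # The conditional expectation function E(S_2 | F_1) = 2 * I_1:
    # it yields a specific value only once "fed" an outcome.
    return 2 * I1(outcome)

# Tower property: average E(S_2 | F_1) over the equally likely outcomes.
tower = sum(cond_exp_S2(w) for w in first_period) / len(first_period)
print(tower)  # 1.0, matching E(S_2)
```

Note how `cond_exp_S2` is a function of the first-period outcome, mirroring the point that $E(S_2 \mid \mathcal{F}_1)$ is a random variable rather than a number.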
Note that we could use the usual shorthand and write $E(S_2 \mid I_1) = 2I_1$, because, given how we assumed the dependence to be, conditioning on $\mathcal{F}_1$ is equivalent to conditioning on (the sigma-algebra induced by) $I_1$.
I hope this helped. This answer may be relevant here.