I am reading Reinforcement Learning: An Introduction by Sutton and Barto, and I came across the following derivation:
$$ \begin{align} v_{\pi}(s) &= \mathbb{E}_{\pi}\left[ G_{t} | S_{t} = s \right] \\ &= \mathbb{E}_{\pi}\left[ R_{t+1} + \gamma G_{t+1} | S_{t} = s \right] & (1) \\ &= \sum_{a}\pi(a|s) \sum_{s'}\sum_{r} p(s',r|s,a)\left[r+ \gamma \mathbb{E}_{\pi}[G_{t+1} | S_{t+1}=s']\right] & (2) \\ &= \sum_{a}\pi(a|s) \sum_{s'}\sum_{r} p(s',r|s,a)\left[r+ \gamma v_{\pi}(s') \right]. \end{align} $$
I understand that
$$ \mathbb{E}_{\pi}\left[R_{t+1}|S_{t}=s\right] = \sum_{a}\pi(a|s) \sum_{s'}\sum_{r} p(s',r|s,a) \cdot r $$
but I do not understand how
$$ \begin{align} \mathbb{E}_{\pi}\left[\gamma G_{t+1} | S_{t}=s\right] = \sum_{a}\pi(a|s) \sum_{s'}\sum_{r} p(s',r|s,a) \mathbb{E}_{\pi}\left[\gamma G_{t+1} | S_{t+1}=s'\right] & (3) \end{align} $$
I have read other questions about this, such as Deriving Bellman's Equation in Reinforcement Learning, but I don't see any answers that address this step directly. They mention that the law of total expectation comes into play, but I am unable to use it to derive $(3)$.
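For reference, the form of the law of total expectation I have been trying to apply (with $X = \gamma G_{t+1}$, $Y = S_{t}$ and $Z = S_{t+1}$) is
$$ \mathbb{E}\left[X \mid Y = y\right] = \sum_{z} p(z \mid y) \, \mathbb{E}\left[X \mid Y = y, Z = z\right], $$
but I do not see how the conditioning on $S_{t} = s$ disappears on the right-hand side of $(3)$, or where the sums over $a$ and $r$ come from.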
Can you explain how the authors go from $(1)$ to $(2)$?
EDIT: To add a little more detail on how I tried to convince myself:
$$ \begin{align} \mathbb{E}_{\pi}\left[ R_{t+1} + \gamma G_{t+1} | S_{t} = s \right] &= \mathbb{E}_{\pi}\left[ R_{t+1} | S_{t} = s \right] + \mathbb{E}_{\pi}\left[ \gamma G_{t+1} | S_{t} = s \right] \\ &= \left[\sum_{a}\pi(a|s) \sum_{s'}\sum_{r} p(s',r|s,a) \cdot r\right] + \mathbb{E}_{\pi}\left[ \gamma G_{t+1} | S_{t} = s \right] \\ &= \left[\sum_{a}\pi(a|s) \sum_{s'}\sum_{r} p(s',r|s,a) \cdot r\right] + \mathbb{E}_{\pi}\left[ \gamma \mathbb{E}_{\pi}\left[G_{t+1} | S_{t+1}=s' \right] | S_{t} = s \right] \\ \end{align} $$
and the only way I can see $(2)$ following from this is if $(3)$ holds, but I am unable to convince myself that $(3)$ is true.
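To at least convince myself that I am not misreading $(3)$, I also checked it numerically with the little script below (the two-state MDP, rewards, policy and $\gamma$ in it are all made up for illustration, not taken from the book). It estimates the left-hand side of $(3)$ by Monte Carlo rollouts and compares it to the right-hand side computed from a separate Monte Carlo estimate of $\mathbb{E}_{\pi}[G_{t+1} \mid S_{t+1}=s']$:

```python
# A made-up two-state, two-action MDP, used only as a numerical sanity check of (3).
# All numbers (transitions, rewards, policy, gamma) are arbitrary, not from the book.
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9
states, actions = [0, 1], [0, 1]

# p[s][a] is a list of (next_state, reward, probability) triples, i.e. p(s', r | s, a).
p = {
    0: {0: [(0, 1.0, 0.7), (1, 0.0, 0.3)],
        1: [(1, 2.0, 0.6), (0, -1.0, 0.4)]},
    1: {0: [(0, 0.5, 0.5), (1, 1.0, 0.5)],
        1: [(1, 0.0, 0.9), (0, 3.0, 0.1)]},
}
pi = {0: [0.4, 0.6], 1: [0.5, 0.5]}  # pi(a | s)


def step(s):
    """Sample A_t ~ pi(.|s), then (S_{t+1}, R_{t+1}) ~ p(., .|s, a)."""
    a = rng.choice(actions, p=pi[s])
    outcomes = p[s][a]
    i = rng.choice(len(outcomes), p=[q for _, _, q in outcomes])
    s_next, r, _ = outcomes[i]
    return s_next, r


def sampled_return(s, horizon=100):
    """One Monte Carlo sample of the (truncated) return G_t starting from state s."""
    g, discount = 0.0, 1.0
    for _ in range(horizon):
        s, r = step(s)
        g += discount * r
        discount *= gamma
    return g


n = 5_000  # rollouts per estimate; increase for tighter agreement

# v_hat(s') ~ E_pi[ G_{t+1} | S_{t+1} = s' ], estimated by rollouts started in s'.
v_hat = {s: np.mean([sampled_return(s) for _ in range(n)]) for s in states}

for s in states:
    # LHS of (3): E_pi[ gamma * G_{t+1} | S_t = s ] -- take one step from s,
    # then accumulate the return from whatever state we land in.
    lhs = np.mean([gamma * sampled_return(step(s)[0]) for _ in range(n)])

    # RHS of (3): sum_a pi(a|s) sum_{s',r} p(s',r|s,a) * gamma * v_hat(s').
    rhs = sum(pi[s][a] * q * gamma * v_hat[s_next]
              for a in actions
              for (s_next, _r, q) in p[s][a])

    print(f"s={s}:  LHS ~ {lhs:.2f}   RHS ~ {rhs:.2f}")
```

The two columns agree up to sampling noise, so $(3)$ does seem to hold; I just cannot see how to derive it.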