In a symmetric random walk the probability of returning to the origin is $1$.
All finite paths in such a system are equally probable: a path of $n$ steps has probability $\frac{1}{2^n}$. In comes measure theory to challenge any naive extension of this to infinite paths, pointing out that the set of infinite pathways is uncountable, and dictating that the probability of return to the origin be constructed from events determined by what has occurred (or not occurred) within a finite number of steps.
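A sketch of that construction, in standard notation not spelled out in the post: each cylinder event fixing the first $n$ steps gets probability $1/2^n$, and the return event is assembled from countably many finitely determined events, so it is measurable:

$$\Pr(X_1=\varepsilon_1,\dots,X_n=\varepsilon_n)=\frac{1}{2^n},\qquad \varepsilon_i\in\{-1,+1\},$$

$$\{\text{return to }0\}\;=\;\bigcup_{n=1}^{\infty}\{S_n=0\},\qquad S_n=X_1+\dots+X_n.$$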
Regardless, the fact that the probability of return is $1$ does not mean that a path steering forever to the right (or to the left) cannot occur; it just means that such a path has probability zero.
I would like to imagine a computer so powerful that it could run an infinite symmetric random walk endlessly until it found such a path always to the right (or to the left) of zero.
To illustrate, and inspired by one of Glen_b's phenomenal posts, I let the computer run overnight doing something silly like this with $1$ million steps for each iteration, color-coding in blue the paths that did not return to the origin after a certain number of steps, and in red the ones that kept on crossing zero. Following the tails of some of the blue paths, one can easily intuit the unavoidable drop back to zero:
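The original code isn't shown; a minimal Python sketch of the same idea might look like the following (the step count and the "certain number of steps" cutoff are placeholders, much smaller than the overnight run):

```python
import numpy as np

rng = np.random.default_rng(0)

N_STEPS = 10_000  # the post used 1 million; smaller here for speed
N_WALKS = 20      # number of iterations (arbitrary for this sketch)
CUTOFF = 1_000    # hypothetical threshold for "after a certain number of steps"

def simulate_walk(n_steps):
    """One symmetric random walk: i.i.d. +/-1 steps, partial sums."""
    steps = rng.choice([-1, 1], size=n_steps)
    return np.cumsum(steps)

blue = red = 0
for _ in range(N_WALKS):
    path = simulate_walk(N_STEPS)
    # indices at which the walk sits exactly at the origin
    returns = np.flatnonzero(path == 0)
    if returns.size == 0 or returns[-1] < CUTOFF:
        blue += 1  # no return to zero past the cutoff: plot in blue
    else:
        red += 1   # still crossing zero late in the walk: plot in red

print(f"blue: {blue}, red: {red}")
```

In the actual figure each path would be plotted (e.g. with matplotlib) in its color rather than merely counted; the classification logic is the same.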
OK, I paid tribute to FC Barcelona and the Fourth of July at the same time. Now the question:
How would a computer pseudorandom number generation algorithm, with its unavoidable repeating sequences, affect the expected time to return to the origin in such an infinite simulation? And, possibly by extension, how would a generic pseudorandom algorithm impact the quest for that elusive path that stays consistently to the left (or to the right) of zero?
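To make the premise behind the question concrete, here is a toy sketch using a deliberately tiny linear congruential generator (the parameters are hypothetical, chosen only to give a full period of $16$): once the generator cycles, the $\pm 1$ step sequence, and hence every increment of the walk, repeats exactly.

```python
def tiny_lcg(seed, a=5, c=3, m=16):
    """Deliberately tiny LCG (hypothetical parameters); Hull-Dobell
    conditions hold, so the period is the full modulus m = 16."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = tiny_lcg(seed=1)
# map generator output to +/-1 steps by parity
steps = [1 if next(gen) % 2 else -1 for _ in range(64)]

# with period 16, the step pattern repeats every 16 steps
print(steps[:16] == steps[16:32] == steps[32:48])  # True
```

Over one full period the increments sum to some fixed value $d$; if $d \neq 0$ the simulated walk drifts linearly forever, and if $d = 0$ it retraces the same finite loop, so either way the truly patternless infinite path is out of reach for such a generator.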