This question brings to the surface the fact that it is usually not stressed enough how important it is to accompany the "orthogonality" assumption with the assumption $E(u)=0$ in order to get consistency. The latter is assumed, but it is rarely pointed out that, if the stated assumption is that "the regressors are orthogonal to the error term", then consistency hinges on this additional assumption as well.
Consider the simple regression model as the OP does,
$$y_i = b_0 +b_1x_i +u_i$$
without making any assumptions apart from the regularity ones (expected values exist, matrices converge, the sample is ergodic stationary). Then at the limit one gets for the OLS estimator of the beta vector $\mathbf b = (b_0,b_1)'$,
$$\text{plim} \hat {\mathbf b} -\mathbf b=[\text{Var}(x)]^{-1}\cdot \left[\begin{array}{cc} E(x^2) & -E(x) \\ -E(x) & 1 \end{array}\right] \cdot \left[\begin{array}{c} E(u) \\ E(xu) \end{array}\right]$$
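(For completeness: this comes from writing the estimator as $\hat{\mathbf b} = \mathbf b + \left(\frac{1}{n}\mathbf X'\mathbf X\right)^{-1}\frac{1}{n}\mathbf X'\mathbf u$ with $\mathbf X = [\mathbf 1 \;\; \mathbf x]$, where
$$\frac{1}{n}\mathbf X'\mathbf X \xrightarrow{p} \left[\begin{array}{cc} 1 & E(x) \\ E(x) & E(x^2) \end{array}\right], \qquad \frac{1}{n}\mathbf X'\mathbf u \xrightarrow{p} \left[\begin{array}{c} E(u) \\ E(xu) \end{array}\right]$$
and inverting the $2\times 2$ moment matrix yields the $[\text{Var}(x)]^{-1}$ factor, since its determinant is $E(x^2)-[E(x)]^2 = \text{Var}(x)$.)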
Carrying out the multiplications we get
$$\text{plim} \hat b_0 -b_0 = [\text{Var}(x)]^{-1} \cdot [E(x^2)E(u) - E(x)E(xu)]$$
$$\text{plim} \hat b_1 -b_1 = [\text{Var}(x)]^{-1} \cdot [-E(x)E(u) + E(xu)]$$
Now we can see what alternative assumptions can lead us to consistency for $b_1$.
Alternative A. A1: $x$ and $u$ are orthogonal (so $E(xu) = 0$) and A2: $E(u)=0$.
Alternative B. $x$ and $u$ are uncorrelated, so $\text{Cov}(x,u) = E(xu)-E(x)E(u) =0$, without making assumptions about $E(u)$.
Alternative C. C1: $x$ and $u$ are orthogonal (so $E(xu) = 0$) and C2: $E(x)=0$.
(Note that under $B$ or under $C$ we do not have consistency for $\hat b_0$; this is why the usual assumption made is $A$.)
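A quick numerical illustration of Alternative $B$ (a simulation sketch; the data-generating process and all names are my own choices for illustration): with $\text{Cov}(x,u)=0$ but $E(u)\neq 0$, the slope estimate converges to the true value while the intercept absorbs the error mean.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000                      # large n, to approximate the probability limit

b0, b1 = 1.0, 2.0                  # true coefficients
x = rng.normal(3.0, 1.0, size=n)   # E(x) = 3, Var(x) = 1
u = rng.normal(0.5, 1.0, size=n)   # independent of x, so Cov(x,u) = 0, but E(u) = 0.5

y = b0 + b1 * x + u

X = np.column_stack([np.ones(n), x])
b0_hat, b1_hat = np.linalg.lstsq(X, y, rcond=None)[0]

print(b1_hat)  # ~ 2.0 : the slope is consistent under Alternative B
print(b0_hat)  # ~ 1.5 : the intercept converges to b0 + E(u), as the formulas predict
```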
Let's move to the centered regression. Something the OP neglected is that at the limit sample means become expected values, i.e. constants. Then (using a tilde to denote the centered variables, $\tilde x = x-E(x)$ and $\tilde u = u-E(u)$)
$$\frac{1}{n}\sum_{i=1}^n(x_i-\bar x)(u_i-\bar u) \xrightarrow{p} E[(x-E(x))(u-E(u))] = E(\tilde x \tilde u) = E(xu) - E(x)E(u)$$
and we do not have any cross-observation products.
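To see this, expand the sample moment before passing to the limit:
$$\frac{1}{n}\sum_{i=1}^n (x_i-\bar x)(u_i-\bar u) = \frac{1}{n}\sum_{i=1}^n x_i u_i - \bar x\,\bar u$$
so only own-observation products $x_iu_i$ appear, alongside the product of the two sample means.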
Here we have
$$\text{plim} \hat b_{1,\text{centered}} -b_1 = [\text{Var}(x)]^{-1} \cdot E(\tilde x \tilde u)$$
$$\implies \text{plim} \hat b_{1,\text{centered}} -b_1 = [\text{Var}(x)]^{-1} \cdot [ E(xu)-E(x)E(u)]$$
which is the exact same result as in the uncentered regression.
So in the centered regression consistency hinges on
$$E(\tilde x \tilde u) = 0 \implies E(xu) - E(x)E(u) = 0$$
This can be seen to hold under any of $A$, $B$, or $C$ for the uncentered regression. So if we have consistency for the $b_1$ coefficient in the uncentered regression, we will have it in the centered regression too.
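Continuing the simulation sketch from above (again, purely illustrative), one can verify that the centered-regression slope is in fact numerically identical to the uncentered one in any given sample, not just in the limit:

```python
# De-meaned regression: its slope coincides with the uncentered OLS slope
# in every sample, hence it shares the same probability limit.
xc, yc = x - x.mean(), y - y.mean()
b1_centered = (xc @ yc) / (xc @ xc)
print(np.isclose(b1_centered, b1_hat))  # True
```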
PS: When stating the regression model in matrix form, it is customary to state the "orthogonality condition" as $E(\mathbf X'\mathbf u)=\mathbf 0$. Then, in most cases, the author adds that "the regressor matrix includes a constant". But then the assumption $E(\mathbf u) = \mathbf 0$ is automatically included in the stated orthogonality condition, since the first row of $\mathbf X'$ is a series of ones.
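In the two-regressor case considered here, for instance,
$$E(\mathbf X'\mathbf u)=\left[\begin{array}{c} E(u) \\ E(xu) \end{array}\right]=\left[\begin{array}{c} 0 \\ 0 \end{array}\right]$$
so this single condition bundles together both of Alternative $A$'s assumptions, $E(u)=0$ and $E(xu)=0$.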