The Problem
To re-state the problem: You have a time series $\{X_t\}_{t=1}^\infty$ and are interested in estimating the unconditional mean, $E[X_t]$.
You do not know the conditional distribution of $X_t$ (i.e. $f(X_t|X_1,...,X_{t-1})$ is unknown). However, you do know that the time series is stationary and ergodic (obviously), and that it is homoscedastic: $var(X_t)=\sigma^2\;\;\forall t \in \mathbb{N}$.
Also, for any lag $k \in \mathbb{N}$ you know the autocorrelation coefficient $\rho_k = cor(X_t,X_{t-k})$ (or at least you can approximate it well in a large enough sample). With this information, you can write the covariance matrix of the finite sample $X_1,...,X_T$:
$$
\mathbf{cov}(X_1,...,X_T)=\Sigma = \sigma^2 R
$$
where
$$
R=\begin{bmatrix}
1 & \rho_{1} & \cdots & \rho_{T-1} \\
\rho_{1} & 1 & \cdots & \rho_{T-2} \\
\vdots & \vdots & \ddots & \vdots \\
\rho_{T-1} & \rho_{T-2} & \cdots & 1
\end{bmatrix}
$$
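For concreteness, here is one way to assemble $R$ in NumPy from a vector of autocorrelations, exploiting its Toeplitz structure. The AR(1)-style correlations $\rho_k = \phi^k$ are purely illustrative; in practice you would plug in your own $\rho_k$ values:

```python
import numpy as np

T = 5
phi = 0.6                      # illustrative AR(1) parameter, so rho_k = phi**k
rho = phi ** np.arange(T)      # rho[0] = 1, rho[k] = autocorrelation at lag k

# Toeplitz structure: R[i, j] = rho_{|i - j|}
idx = np.abs(np.arange(T)[:, None] - np.arange(T)[None, :])
R = rho[idx]
```

(`scipy.linalg.toeplitz(rho)` would build the same matrix in one call.)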
Deriving The Best Linear Unbiased Estimator (BLUE)
(See the Wikipedia entry on GLS and the references cited therein for more details on what follows)
Define the vectors $\mathbf{X} = (X_1,X_2,...,X_T)$ and $\boldsymbol{\mu}=(\mu,...,\mu) \in \mathbb{R}^T$. You can consistently estimate $E[X_t]$ by minimizing the following quadratic form:
$$
\hat\mu_{W} = \arg\min_\mu\bigg\{(\mathbf{X}-\boldsymbol{\mu})'W(\mathbf{X}-\boldsymbol{\mu}) \bigg\}
$$
where $W$ is a $T\times T$ positive-definite symmetric matrix. The closed-form solution is
$$
\hat\mu_{W} = (\mathbf{1}'W\mathbf{1})^{-1}\mathbf{1}'W\mathbf{X}
$$
where $\mathbf{1}$ is a $T\times 1$ vector of ones. The familiar sample average $\frac{1}{T}\sum_{t=1}^T X_t$ is the special case obtained by choosing $W \propto I$, the identity matrix*.
*As a side note, $\hat\mu_{W}$ is invariant to the scale of $W$: $\forall\; a \in \mathbb{R}_{++}$, $\hat\mu_{W}=\hat\mu_{aW}$.
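A minimal NumPy sketch of the closed-form estimator for an arbitrary $W$ (the function name is my own), which also checks the special case $W = I$ and the scale invariance noted above:

```python
import numpy as np

def mu_hat(X, W):
    """Closed-form minimizer (1'W1)^{-1} 1'W X."""
    ones = np.ones(len(X))
    return (ones @ W @ X) / (ones @ W @ ones)

rng = np.random.default_rng(0)
X = rng.normal(size=10)
assert np.isclose(mu_hat(X, np.eye(10)), X.mean())        # W = I gives the sample average
assert np.isclose(mu_hat(X, 3.0 * np.eye(10)), X.mean())  # invariant to rescaling of W
```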
Any choice of $W$, so long as it is positive-definite and symmetric, yields a consistent estimator of $E[X_t]$. In other words, it can be shown that as $T\rightarrow \infty$, $\hat\mu_{W}\stackrel{p}{\rightarrow} E[X_t]$ for all $W \in \mathcal{S}$, where $\mathcal{S} \subset \mathbb{R}^{T\times T}$ is the set of all positive-definite symmetric matrices.
However, some choices of $W$ yield more efficient (lower-variance) estimates than others. It can be shown that the most efficient choice of $W$, resulting in the Best Linear Unbiased Estimator (BLUE) of $E[X_t]$, is $W \propto \Sigma^{-1}$. In your case, you can use:
$$
\boxed{
\hat\mu_{R^{-1}} = (\mathbf{1}'R^{-1}\mathbf{1})^{-1}\mathbf{1}'R^{-1}\mathbf{X}
}
$$
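The boxed estimator can be computed with a linear solve rather than an explicit inverse, which is cheaper and more numerically stable. A sketch (function name hypothetical):

```python
import numpy as np

def mu_blue(X, R):
    """BLUE/GLS estimate (1'R^{-1}1)^{-1} 1'R^{-1}X, via a solve instead of inverting R."""
    ones = np.ones(len(X))
    w = np.linalg.solve(R, ones)   # w = R^{-1} 1  (R symmetric positive definite)
    return (w @ X) / (w @ ones)    # since R is symmetric, 1'R^{-1}X = (R^{-1}1)'X
```

With `R = np.eye(T)` this reduces to the plain sample mean, matching the special case above.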
Variance and Other Asymptotic Properties
$\hat\mu_{R^{-1}}$ is unbiased, consistent, efficient, and (for large $T$) approximately normal:
$$
\hat\mu_{R^{-1}}\ \stackrel{approx.}{\sim}\ \mathcal{N}\!\left(E[X_t],\,\sigma^2(\mathbf{1}'R^{-1}\mathbf{1})^{-1}\right)
$$
The variance is in fact exact, not just asymptotic: because the estimator is linear in $\mathbf{X}$,
$$
var(\hat\mu_{R^{-1}}) = (\mathbf{1}'R^{-1}\mathbf{1})^{-2}\,\mathbf{1}'R^{-1}\Sigma R^{-1}\mathbf{1} = \sigma^2(\mathbf{1}'R^{-1}\mathbf{1})^{-1}
$$
Follow Up and Simulation
Because your data are bounded and integer-valued, the asymptotic approximations given above may not work well in reasonably sized finite samples.
Also, even though $\hat\mu_{R^{-1}}$ has a lower variance than
$\hat\mu_{I}=\frac{1}{T}\sum_{t=1}^T X_t$, the difference may be negligibly small. If so, you may prefer the plain sample average, since computing $\hat\mu_{R^{-1}}$ is considerably more expensive (it requires solving a $T\times T$ linear system). You can compare the two variances analytically to get an idea: $var(\hat\mu_{I}) = \sigma^2\,\mathbf{1}'R\mathbf{1}/T^2$ versus $var(\hat\mu_{R^{-1}}) = \sigma^2(\mathbf{1}'R^{-1}\mathbf{1})^{-1}$.
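As a sketch of that analytic comparison, assuming unit $\sigma^2$ and illustrative AR(1) autocorrelations $\rho_k = \phi^k$ (both assumptions are mine, not from your problem):

```python
import numpy as np

T, phi = 200, 0.6
rho = phi ** np.arange(T)
R = rho[np.abs(np.arange(T)[:, None] - np.arange(T)[None, :])]  # Toeplitz correlation matrix

ones = np.ones(T)
var_blue = 1.0 / (ones @ np.linalg.solve(R, ones))  # sigma^2 (1'R^{-1}1)^{-1}, sigma^2 = 1
var_mean = (ones @ R @ ones) / T**2                 # sigma^2 1'R1 / T^2
print(var_blue, var_mean, var_mean / var_blue)      # ratio near 1 means little is gained
```

By the Cauchy–Schwarz inequality, $T^2 = (\mathbf{1}'\mathbf{1})^2 \le (\mathbf{1}'R\mathbf{1})(\mathbf{1}'R^{-1}\mathbf{1})$, so the BLUE variance can never exceed the sample mean's.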
To assess how well the normality approximation works, you can do the following:
simulate $(X_1,...,X_T)$ and compute $\hat\mu_{R^{-1}}$ and $\hat\mu_{I}$ many times. If you are given the data $X_1,...,X_T$ and cannot simulate it directly, you can use a block bootstrap.
Plot histograms of $\hat\mu_{R^{-1}}$ and $\hat\mu_{I}$. Do they look approximately normal? Are the sample variances of the estimates close to the asymptotic approximations?
If the asymptotic approximations don't hold up well, you could either increase $T$ or use the empirical results of the simulation in place of the asymptotic ones.
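A compact Monte Carlo sketch of this check, assuming Gaussian data with correlation matrix $R$ and illustrative AR(1) autocorrelations (all parameter values here are hypothetical):

```python
import numpy as np

# Simulate many Gaussian paths with covariance R (sigma^2 = 1) and
# compare both estimators' empirical variances to the formulas above.
rng = np.random.default_rng(42)
T, phi, n_sims = 200, 0.6, 2000
rho = phi ** np.arange(T)
R = rho[np.abs(np.arange(T)[:, None] - np.arange(T)[None, :])]
ones = np.ones(T)

L = np.linalg.cholesky(R)               # R = L L', so L @ z has covariance R
X = rng.normal(size=(n_sims, T)) @ L.T  # each row is one simulated path, mean 0

w = np.linalg.solve(R, ones)            # R^{-1} 1, computed once for all paths
blue = (X @ w) / (w @ ones)             # hat mu_{R^{-1}} for every path
plain = X.mean(axis=1)                  # hat mu_I for every path

print(blue.var(), 1.0 / (w @ ones))     # empirical vs analytic BLUE variance
print(plain.var(), (ones @ R @ ones) / T**2)
```

Histograms of `blue` and `plain` then give you the visual normality check; for real data, each simulated path would be replaced by a block-bootstrap resample.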