An $n$-dimensional Bernoulli distribution can be expressed in terms of an $n \times n$ matrix $\Sigma$, which plays a role analogous to the covariance matrix of the Gaussian distribution but is not necessarily symmetric.
For example, the diagonal elements of $\Sigma$ give the marginal probabilities of single elements, $p(X_i=1) = \Sigma_{ii} = \mu_i$.
Probabilities for pairs of elements are given by the determinant of the corresponding $2 \times 2$ submatrix of $\Sigma$:
\begin{align*}
p(X_i=1, X_j=1)=\det \begin{bmatrix} \Sigma_{ii} & \Sigma_{ij} \\
\Sigma_{ji} & \Sigma_{jj} \end{bmatrix}.
\end{align*}
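As a quick numerical sanity check of these two formulas, here is a minimal Python/NumPy sketch for $n=2$; the matrix below is an illustrative choice of mine, not a value from the paper:

```python
# A minimal numerical check of the marginal and pair formulas above;
# Sigma is an illustrative choice, not taken from the paper.
import numpy as np

Sigma = np.array([[0.6, 0.3],
                  [-0.4, 0.5]])   # need not be symmetric

# Marginals are the diagonal elements: p(X_i = 1) = Sigma_ii.
print(Sigma[0, 0], Sigma[1, 1])   # 0.6 0.5

# The pair probability is the determinant of the 2x2 submatrix.
i, j = 0, 1
sub = Sigma[np.ix_([i, j], [i, j])]
print(np.linalg.det(sub))  # p(X_0=1, X_1=1) = 0.6*0.5 - 0.3*(-0.4) = 0.42
```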
Consequently, the covariance between $X_i$ and $X_j$ is given by minus the product of the two off-diagonal elements:
\begin{align*}
\mathrm{Cov}[X_i, X_j]=\mathrm{E}[(X_i-\mu_i)(X_j-\mu_j)] = p(X_i=1, X_j=1) - \mu_i \mu_j = -\Sigma_{ij} \Sigma_{ji}.
\end{align*}
Hence, the covariance alone cannot uniquely determine the off-diagonal elements of $\Sigma$: it fixes only the product $\Sigma_{ij} \Sigma_{ji}$.
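Both points can be checked numerically. The sketch below reconstructs the full bivariate pmf implied by the formulas above via inclusion–exclusion, verifies the covariance identity, and shows that rescaling the off-diagonal pair while keeping $\Sigma_{ij} \Sigma_{ji}$ fixed leaves the distribution unchanged (for $n=2$; for larger $n$ the residual freedom shows up in higher-order statistics). The numbers are again illustrative:

```python
# A sketch checking Cov[X_i, X_j] = -Sigma_ij * Sigma_ji and the
# non-uniqueness of the off-diagonals; the values are illustrative.
import numpy as np

def bivariate_pmf(Sigma):
    """Joint pmf of (X_0, X_1) implied by the marginal/pair formulas."""
    p11 = np.linalg.det(Sigma)       # p(X_0=1, X_1=1)
    p10 = Sigma[0, 0] - p11          # p(X_0=1, X_1=0), inclusion-exclusion
    p01 = Sigma[1, 1] - p11          # p(X_0=0, X_1=1)
    p00 = 1.0 - p11 - p10 - p01      # normalization
    return {(1, 1): p11, (1, 0): p10, (0, 1): p01, (0, 0): p00}

Sigma = np.array([[0.6, 0.3],
                  [-0.4, 0.5]])
pmf = bivariate_pmf(Sigma)
mu = Sigma[0, 0], Sigma[1, 1]
cov = sum(p * (x0 - mu[0]) * (x1 - mu[1]) for (x0, x1), p in pmf.items())
assert np.isclose(cov, -Sigma[0, 1] * Sigma[1, 0])    # Cov = 0.12

# Rescale the off-diagonals, keeping Sigma_01 * Sigma_10 = -0.12 fixed:
Sigma2 = Sigma.copy()
Sigma2[0, 1], Sigma2[1, 0] = 0.6, -0.2
pmf2 = bivariate_pmf(Sigma2)
assert all(np.isclose(pmf[s], pmf2[s]) for s in pmf)  # same distribution
```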
However, the model parameters for a given mean and covariance can be determined by the principle of entropy maximization.
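As a generic numerical illustration of that principle (a sketch, not the paper's closed-form construction), one can search for the maximum-entropy pmf on $\{0,1\}^3$ that matches prescribed means and covariances; the target moments below are assumed values chosen for the example:

```python
# A sketch of entropy maximization for n = 3 binary variables (SciPy);
# the target means and covariances are assumed, illustrative values.
import itertools
import numpy as np
from scipy.optimize import minimize

states = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
mu = np.array([0.6, 0.5, 0.4])                       # target means (assumed)
cov = {(0, 1): 0.05, (0, 2): -0.04, (1, 2): 0.02}    # target covariances (assumed)

def neg_entropy(p):
    q = np.clip(p, 1e-12, 1.0)       # avoid log(0) at the boundary
    return np.sum(q * np.log(q))

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0}]
for i in range(3):                   # match the means
    cons.append({"type": "eq", "fun": lambda p, i=i: p @ states[:, i] - mu[i]})
for (i, j), c in cov.items():        # match E[X_i X_j] = Cov + mu_i * mu_j
    cons.append({"type": "eq",
                 "fun": lambda p, i=i, j=j, c=c:
                     p @ (states[:, i] * states[:, j]) - (c + mu[i] * mu[j])})

res = minimize(neg_entropy, np.full(8, 1 / 8), method="SLSQP",
               bounds=[(0.0, 1.0)] * 8, constraints=cons)
print(res.success, -res.fun)         # the maximum-entropy pmf is res.x
```

The covariance identity above then converts the matched covariances into the products $\Sigma_{ij} \Sigma_{ji} = -\mathrm{Cov}[X_i, X_j]$, which is the part of $\Sigma$ that the second moments pin down.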
I think the above distribution is a canonical distribution for multivariate binary random variables, in the sense that it shares properties similar to those of the multivariate Gaussian distribution. See the following paper for further details:
T. Arai, "Multivariate binary probability distribution in the Grassmann formalism," Physical Review E 103, 062104 (2021).