Logistic regression doesn't really have an error term; instead, you can think of the response distribution (the binomial) as having its random component intrinsically 'built in' (for more, it may help to read my answer here: Difference between logit and probit models). As a result, I think it is conceptually clearer to generate data for simulations directly from a binomial whose probability parameter is the logistic transformation of the structural component of the model, rather than to treat the logistic as a sort of error term.
From there, if you want the long-run probability that $Y = 1$ to be $.5$ or $.1$, you just need your structural component to be balanced around $0$ (for $.5$) or $-2.197225$ (for $.1$). I got those values by converting the response probability to the log odds:
$$
\log\big(\text{odds}(Y=1)\big) = \log\!\left(\frac{\Pr(Y = 1)}{1-\Pr(Y = 1)}\right)
$$
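If you want to check these constants in R, base R's qlogis() (the quantile function of the logistic distribution) is exactly this log-odds transformation:

qlogis(.5) # [1] 0
qlogis(.1) # [1] -2.197225
log(.1/(1 - .1)) # [1] -2.197225 (the same thing written out)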
The most convenient way to do this is to use those values for your intercept ($\beta_0$) and set your slope ($\beta_1$) to $0$. (Alternatively, you can use any two parameter values, $\beta_0$ and $\beta_1$, that you like, so long as, given your $X$ values, the average response probability works out to, e.g., $.1$; there is a sketch of one way to do that after the results below.) Here is an example with R code:
lo2p = function(lo){   # this function will perform the logistic transformation
  odds = exp(lo)       # of the structural component of the data generating
  p = odds / (1+odds)  # process
  return(p)
}
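# (note: base R's plogis() computes this same inverse-logit transformation,
#  so plogis(lo) would give the same result as lo2p(lo))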
N = 1000 # sample size for each simulated dataset
beta0 = -2.197225 # these are the true values of the DGP
beta1 = 0
set.seed(8361) # this makes the simulation exactly reproducible
x = rnorm(N)
lo = beta0 + beta1*x
p = lo2p(lo) # these will be the parameters of the binomial response
b0h = vector(length=N) # these will store the results
b1h = vector(length=N)
y1prp = vector(length=N) # (for the proportion of y=1)
for(i in 1:1000){ # here is the simulation
  y = rbinom(n=N, size=1, prob=p)
  m = glm(y~x, family=binomial)
  b0h[i] = coef(m)[1]
  b1h[i] = coef(m)[2]
  y1prp[i] = mean(y)
}
mean(b0h) # these are the results
# [1] -2.205844
mean(b1h)
# [1] -0.0003422177
mean(y1prp)
# [1] 0.100036
hist(b0h)
hist(b1h)
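The parameter estimates are centered right on the true values, and the observed proportion of $y = 1$ is essentially $.1$, as intended.

Regarding the alternative mentioned above (a nonzero slope), here is a minimal sketch of one way to do it; this isn't part of the setup above, and solving numerically for the intercept with uniroot() is just my suggestion. Pick whatever slope you like, then find the intercept that makes the average response probability equal $.1$ for your particular x values:

beta1 = 0.5 # any nonzero slope you like
f = function(b0){ mean(plogis(b0 + beta1*x)) - .1 } # plogis() is base R's inverse logit,
b0 = uniroot(f, interval=c(-20, 20))$root # i.e., the same transformation as lo2p() above
mean(plogis(b0 + beta1*x)) # should come out at (essentially) .1
y = rbinom(n=N, size=1, prob=plogis(b0 + beta1*x)) # then generate y as before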

