Most statements in the question are somewhat incorrect:
> first propose a likelihood function that describes our problem (Binomial)
The sampling model (or the likelihood) definition is not a part of the MCMC method, it is a given. For instance,
$$L(p|x)={n \choose x} p^x (1-p)^{n-x}$$
assumes that the data $x$ is Binomial
> define a conjugate prior (Beta) and posterior distribution (Beta-Binomial)
Similarly, the prior distribution on the parameter is a given from the MCMC perspective, not something the algorithm calibrates. Furthermore, if the prior is conjugate, then MCMC is usually not necessary. When $p\sim \mathcal{Be}(a,b)$,
the posterior is also a Beta distribution, not a Beta-Binomial distribution (which is the marginal distribution of $x$):$$p|x\sim \mathcal{Be}(a+x,b+n-x)$$which can be used either analytically or numerically to compute posterior quantities. Exact and direct simulation from this posterior is manageable, hence does not require MCMC except in a toy experiment.
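To illustrate why MCMC is unnecessary in the conjugate case, here is a minimal NumPy sketch (with arbitrary illustrative values $a=b=1$, $n=20$, $x=14$) that simulates from the exact Beta posterior directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: x successes out of n Binomial trials
n, x = 20, 14
# Beta(a, b) prior on p (a flat prior here, a = b = 1)
a, b = 1.0, 1.0

# Conjugacy: the posterior is Beta(a + x, b + n - x), so we can
# simulate from it directly, with no Markov chain involved
posterior_sample = rng.beta(a + x, b + n - x, size=10_000)

# The empirical mean approximates the exact posterior mean (a + x)/(a + b + n)
print(posterior_sample.mean())
```

Any posterior quantity (mean, quantiles, credible intervals) can then be estimated from this i.i.d. sample, with none of the convergence issues attached to MCMC output.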
> define a proposal distribution (Normal) that makes random sampling (Monte Carlo part)
When the posterior distribution is defined on $(0,1)$ it is not the best possible choice to use a Normal distribution that takes values all over $\mathbb R$, even though this is not formally incorrect.
> choose to either accept or ignore this step (Metropolis-Hastings)
The term "ignore" is potentially misleading: the proposed value is rejected, but the step is not ignored, since the previous value is repeated one more time in the chain.
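A sketch of this mechanic in Python (illustrative code, targeting the toy posterior $\mathcal{Be}(a+x,b+n-x)$ with a Normal random-walk proposal, despite the caveat about its support): on rejection, the chain records the current value again, so rejected steps still contribute to the sample.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target: Beta(a + x, b + n - x) posterior, known only up to a constant
n, x, a, b = 20, 14, 1.0, 1.0

def log_post(p):
    # Log of the unnormalised Beta(a + x, b + n - x) density; -inf outside (0, 1),
    # so proposals leaving the unit interval are always rejected
    if p <= 0.0 or p >= 1.0:
        return -np.inf
    return (a + x - 1) * np.log(p) + (b + n - x - 1) * np.log(1 - p)

p_curr = 0.5
chain = [p_curr]
for _ in range(5_000):
    # Normal random-walk proposal on the real line
    p_prop = p_curr + rng.normal(scale=0.1)
    # Metropolis acceptance: the proposal is symmetric, so only the
    # ratio of target densities matters
    if np.log(rng.uniform()) < log_post(p_prop) - log_post(p_curr):
        p_curr = p_prop
    # Crucially, on rejection p_curr is unchanged and is appended again
    chain.append(p_curr)

print(np.mean(chain))  # approaches the exact posterior mean (a + x)/(a + b + n)
```

Dropping rejected steps instead of repeating the current value would bias the sample away from the stationary distribution.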
> iterate this process for thousands of time until posterior distribution converge
Actually, it is the Markov chain that converges, not the posterior distribution: the posterior is fixed by the model and the prior and does not depend on the MCMC algorithm or on the simulation step. The Markov chain converges in distribution to the posterior distribution, so a finite sample of values of this Markov chain behaves in the limit as a sample from the posterior distribution.
> How can randomly chosen parameter through proposal distribution fetched into Bayesian formula give the posterior density probability
A Markov chain either converges to a limiting distribution (positive recurrence) or not (null recurrence, transience). If the algorithm is stationary with respect to the posterior distribution, then the Markov chain does converge to this distribution and no other. It thus suffices to establish stationarity. The acceptance step in the Metropolis-Hastings algorithm is constructed precisely for the posterior distribution to be stationary (detailed balance identity).
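Concretely, writing $\pi$ for the posterior (target) density and $q$ for the proposal density, the Metropolis-Hastings acceptance probability is
$$\alpha(p,p') = \min\left\{1,\ \frac{\pi(p')\,q(p\mid p')}{\pi(p)\,q(p'\mid p)}\right\}$$
and a short check shows that the resulting transition kernel $K$ satisfies detailed balance,
$$\pi(p)\,K(p,p') = \pi(p')\,K(p',p),$$
since the $\min$ term makes both sides equal to $\min\{\pi(p)\,q(p'\mid p),\,\pi(p')\,q(p\mid p')\}$ for $p\neq p'$. Integrating both sides in $p$ yields $\int \pi(p)\,K(p,p')\,\mathrm{d}p = \pi(p')$, which is exactly stationarity of $\pi$.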