I don't think either of the comments under the question quite get at what's going on here.
My understanding of it is this (and please correct me if I am wrong): a specific value for $p$ is drawn from a known distribution, but we don't observe that $p$. Instead we observe a binomial random variable with that $p$. You want to use both the information in the known distribution for $p$ as well as in the likelihood to make some inference about $p$.
On that understanding:
A random $p$ drawn from a known distribution, plus sample information about that specific draw, is naturally handled with a Bayesian approach: the known distribution of $p$ is the prior, which you then update via the sample (specifically, the posterior is proportional to the product of the likelihood and the prior).
$$f(p|X)\,\propto\, f(X|p)\,f(p)$$
For example, if the prior were beta, the posterior would also be beta (since the beta is conjugate for a binomial likelihood).
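To make the conjugate case concrete, here is a minimal sketch in Python; the Beta$(2,2)$ prior and the data ($x=7$ successes in $n=10$ trials) are assumed purely for illustration:

    import numpy as np
    from scipy import stats

    a, b = 2.0, 2.0   # prior Beta parameters (assumed for illustration)
    n, x = 10, 7      # observed binomial data (assumed for illustration)

    # Conjugacy: a Beta(a, b) prior combined with a Binomial(n, p)
    # likelihood gives a Beta(a + x, b + n - x) posterior in closed form.
    posterior = stats.beta(a + x, b + n - x)

    print("posterior mean:", posterior.mean())
    print("95% credible interval:", posterior.interval(0.95))

No numerical work is needed here; the update is just parameter arithmetic, which is the whole appeal of a conjugate prior.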
If the prior were symmetric triangular, the posterior would be piecewise beta (with two continuous pieces).
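To see where the two pieces come from, suppose (for illustration) the symmetric triangular prior sits on $[0,1]$, so $f(p)\propto p$ for $p\le\tfrac12$ and $f(p)\propto 1-p$ for $p>\tfrac12$. With $x$ successes in $n$ trials the posterior is then proportional to

$$f(p|x)\,\propto\,\begin{cases} p^{x+1}(1-p)^{n-x}, & 0\le p\le\tfrac12,\\[4pt] p^{x}(1-p)^{n-x+1}, & \tfrac12< p\le 1,\end{cases}$$

each piece being a truncated beta kernel, and the two pieces agree at $p=\tfrac12$ (both equal $(\tfrac12)^{n+1}$ up to the same constant), so the density is continuous there.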
More generally, we could use a variety of techniques to obtain the updated information about $p$. On this simple problem - assuming a prior that didn't work nicely with the likelihood - one option would be to use numerical integration to scale the product of the prior and the likelihood into a proper density.
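As a sketch of that numerical route (the grid size and the data $x=7$, $n=10$ are again assumed for illustration, and the triangular prior is the one discussed above):

    import numpy as np
    from scipy import stats
    from scipy.integrate import trapezoid

    n, x = 10, 7                        # observed binomial data (assumed for illustration)
    grid = np.linspace(0.0, 1.0, 2001)  # fine grid over the support of p

    # Any prior density evaluated on the grid would do; as an example we
    # use the symmetric triangular prior on [0, 1] from above.
    prior = stats.triang(c=0.5).pdf(grid)

    # Unnormalized posterior: likelihood times prior, evaluated pointwise.
    unnorm = stats.binom.pmf(x, n, grid) * prior

    # Divide by the numerical integral so the result is a proper density.
    posterior = unnorm / trapezoid(unnorm, grid)

    # Posterior summaries then follow from the gridded density, e.g. the mean:
    print("posterior mean:", trapezoid(grid * posterior, grid))

The same recipe works for essentially any prior you can evaluate on a grid, which is why it's a reasonable fallback whenever conjugacy isn't available.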