As I mentioned in the comments, your likelihood makes assessing the probability of $\rho_i = 1$ difficult if not impossible. I present a framework that addresses the other two main interests.
One way to think about the overall problem is as latent-variable density estimation. The summary would then be the "predictive" distribution for $\rho_{N+1}$. (I put "predictive" in quotes since the term is typically applied to observables.)
If you observed $\rho_{1:N} = (\rho_1, \ldots, \rho_N)$ then you could apply Bayesian density estimation directly and compute the predictive distribution:
\begin{equation}
p(\rho_{N+1}|\rho_{1:N}) = \int p(\rho_{N+1}|\theta)\,p(\theta|\rho_{1:N})\,d\theta ,
\end{equation}
where
\begin{equation}
p(\theta|\rho_{1:N}) = \frac{p(\rho_{1:N}|\theta)\,p(\theta)}{p(\rho_{1:N})}
\end{equation}
and
\begin{equation}
p(\rho_{1:N}|\theta) = \prod_{i = 1}^N p(\rho_i|\theta) .
\end{equation}
The main choices are the form of $p(\rho_i|\theta)$ and the prior for $\theta$. [I'll get back to this for your case later.]
But instead of observing $\rho_{1:N}$ directly, you have the "indirect observations"
\begin{equation}
p(Y_{1:N}|\rho_{1:N}) = \prod_{i=1}^N p(Y_i|\rho_i) ,
\end{equation}
where (for notational convenience) I have written
\begin{equation}
p(Y_i|\rho_i) = L_i(\rho_i) .
\end{equation}
In this case we can average the predictive distribution over the posterior distribution of the latent variables:
\begin{equation}
p(\rho_{N+1}|Y_{1:N}) = \int p(\rho_{N+1}|\rho_{1:N})\,p(\rho_{1:N}|Y_{1:N})\,d\rho_{1:N} ,
\end{equation}
where
\begin{equation}
p(\rho_{1:N}|Y_{1:N}) = \frac{p(Y_{1:N}|\rho_{1:N})\,p(\rho_{1:N})}{p(Y_{1:N})} .
\end{equation}
Although the preceding approach is conceptually straightforward, there is a more direct approach that is simpler to implement:
\begin{equation}
p(\rho_{N+1}|Y_{1:N}) = \int p(\rho_{N+1}|\theta)\,p(\theta|Y_{1:N})\,d\theta ,
\end{equation}
where
\begin{equation}
p(\theta|Y_{1:N}) = \frac{p(Y_{1:N}|\theta)\,p(\theta)}{p(Y_{1:N})}
\end{equation}
and
\begin{equation}
p(Y_{1:N}|\theta) = \prod_{i=1}^N p(Y_i|\theta)
\end{equation}
and
\begin{equation}
p(Y_i|\theta) = \int p(Y_i|\rho_i)\,p(\rho_i|\theta)\,d\rho_i .
\end{equation}
Along the way you would obtain the posterior distribution for each of the $N$ entities:
\begin{equation}
p(\rho_i|Y_{1:N}) = \int p(\rho_i|Y_i,\theta)\,p(\theta|Y_{1:N})\,d\theta,
\end{equation}
where
\begin{equation}
p(\rho_i|Y_i,\theta) = \frac{p(Y_i|\rho_i)\,p(\rho_i|\theta)}{p(Y_i|\theta)} .
\end{equation}
As noted above, the main choices are the form of the prior for the parameter, $p(\rho_i|\theta)$, and the prior for the hyperparameter, $p(\theta)$. Here is one choice that is particularly well-suited to the unit interval. Let
\begin{equation}
p(\rho_i|\theta) = \sum_{j=1}^J \theta_j\,f_j(\rho_i)
\end{equation}
where $\theta = (\theta_1,\ldots, \theta_J)$ is a vector of non-negative weights that sum to one and
\begin{equation}
f_j(\rho_i) = \textsf{Beta}(\rho_i|j,J-j+1) .
\end{equation}
This prior is a mixture of beta distributions; it is equivalent to a Bernstein polynomial restricted to be a density.
The larger $J$ is, the more flexible the prior can be.
This prior has the property that if $\theta_j = 1/J$ for all $j$ then $p(\rho_i|\theta) = \textsf{Uniform}(\rho_i|0,1)$.
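Purely as an illustration (Python with scipy; not part of the original setup), here is a minimal sketch of how this mixture-of-betas density can be evaluated, including a numerical check that equal weights recover the uniform density:

```python
import numpy as np
from scipy.stats import beta

def mixture_density(rho, theta):
    """p(rho | theta) = sum_j theta_j * Beta(rho | j, J - j + 1), for an array rho."""
    J = len(theta)
    j = np.arange(1, J + 1)
    # Evaluate every beta component at every point of rho (shape: len(rho) x J).
    f = beta.pdf(np.asarray(rho)[:, None], j[None, :], J - j[None, :] + 1)
    return f @ np.asarray(theta)

# Equal weights theta_j = 1/J reproduce the Uniform(0, 1) density.
J = 10
grid = np.linspace(0.01, 0.99, 99)
print(np.allclose(mixture_density(grid, np.full(J, 1.0 / J)), 1.0))  # True
```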
Finally, let
\begin{equation}
p(\theta) = \textsf{Dirichlet}(\theta|\lambda) ,
\end{equation}
where $\lambda = (\lambda_1, \ldots, \lambda_J)$. Note $E[\theta] = \lambda/\lambda_0$ where $\lambda_0 = \sum_{j=1}^J \lambda_j$. Also note that
\begin{equation}
p(\rho_i) = \int p(\rho_i|\theta)\,p(\theta)\,d\theta = \sum_{j=1}^J E[\theta_j]\,f_j(\rho_i) .
\end{equation}
If (for example) $\lambda_j = 1/J$ for all $j$, then $p(\rho_i) = \textsf{Uniform}(\rho_i|0,1)$.
A Gibbs sampler is straightforward to implement as long as
\begin{equation}
L_{ij} = \int p(Y_i|\rho_i)\,f_j(\rho_i)\,d\rho_i
\end{equation}
is easy to compute for all $i$ and all $j$; with this prior, $p(Y_i|\theta) = \sum_{j=1}^J \theta_j\,L_{ij}$, so these $N \times J$ values are all the sampler needs from the likelihood. It may be possible to integrate out nuisance parameters $\phi_i$ at this stage:
\begin{equation}
L_{ij} = \iint p(Y_i|\rho_i,\phi_i)\,f_j(\rho_i)\,p(\phi_i)\,d\phi_i\,d\rho_i .
\end{equation}