I have $N$ samples and I would like to fit a parametric distribution to them, for example a beta-binomial:
\begin{equation*} X_i \sim \textit{Beta-Binom}(\alpha, \beta) \end{equation*}
If all samples came from a single distribution with a single set of parameter values, the problem would be very easy. But my data points come from different distributions: the same parametric family, but with different parameter values. In particular, each data point comes with another value, $p$, that indicates which distribution it comes from: if $p_i = p_j$, then $x_i$ and $x_j$ come from the same distribution with the same unknown parameters. So my problem looks like this:
\begin{equation} \text{Given } (X, p), \quad X_i \sim \textit{Beta-Binom}(\alpha(p_i), \beta(p_i)) \end{equation}
I am interested in the parameters $\alpha(.)$ and $\beta(.)$ as functions of $p$. If $p$ took discrete values, I could divide the data into subsets with the same $p$ value in each subset and fit the distribution to each subset separately. However, I cannot do this for two reasons: first, my $p$ is in fact continuous, and second, I don't have many data points for some values of $p$; since $\alpha(.)$ and $\beta(.)$ are smooth functions, I should be able to borrow strength from data points with nearby values of $p$.
I could potentially write a composite likelihood function and impose a certain form on the $\alpha(.)$ and $\beta(.)$ functions to solve the problem. But I would appreciate any pointers to research articles that have looked at this problem. I don't even know what this problem is called, and Google was not able to point me to the right place.
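For example, here is a rough sketch of the kind of fit I have in mind (the log-polynomial form for $\alpha(p)$ and $\beta(p)$, the polynomial degree, and the assumption that the number of trials $n_i$ is known for each observation are all just illustrative choices on my part):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

def log_alpha_beta(theta, p, degree=3):
    """Illustrative choice: log alpha(p) and log beta(p) are low-degree
    polynomials in p, so both parameters stay positive and vary smoothly."""
    basis = np.vander(p, degree + 1)            # shape (N, degree+1)
    k = degree + 1
    return basis @ theta[:k], basis @ theta[k:]

def neg_log_lik(theta, x, n_trials, p):
    """Joint negative log-likelihood: each observation is evaluated at the
    alpha(p_i), beta(p_i) implied by the shared parameter vector theta."""
    log_a, log_b = log_alpha_beta(theta, p)
    return -np.sum(betabinom.logpmf(x, n_trials, np.exp(log_a), np.exp(log_b)))

def fit(x, n_trials, p, degree=3):
    """x: observed counts, n_trials: known trials per observation,
    p: the covariate indexing which distribution each x_i came from."""
    theta0 = np.zeros(2 * (degree + 1))
    return minimize(neg_log_lik, theta0, args=(x, n_trials, p), method="L-BFGS-B")
```

This is essentially one joint likelihood over all data points, with smoothness enforced by the parametric form of $\alpha(.)$ and $\beta(.)$, which is why I suspect this problem already has a name in the literature.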