Consider $n$ independent normally distributed random variables
$$X_i\sim N(\mu_i,\sigma_i^2)$$
and denote $Y = \max\limits_{1\leq i\leq n}\{X_i\}$. We can define the probabilities, for each $1\leq i\leq n$,
$$\tilde{P}_i = \mathbb{P}(X_i=Y)$$
where each $\tilde{P}_i$ is a function of $\vec{\mu} = (\mu_1,\ldots,\mu_n)$ and $\vec{\sigma}=(\sigma_1,\ldots,\sigma_n)$. (Since the $X_i$ are continuous, ties occur with probability zero and the $\tilde{P}_i$ sum to one.) I'm interested in the inverse problem: given observed probabilities $P_1,\ldots,P_n$ summing to unity, find parameters solving
$$\min\limits_{(\vec{\mu},\vec{\sigma})\in\mathbb{R}^n\times\mathbb{R}_{>0}^n}\sum_{i=1}^n|P_i - \tilde{P}_i|^2.$$
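For concreteness, conditioning on the value of $X_i$ expresses each of these probabilities as a one-dimensional integral,
$$\tilde{P}_i = \int_{-\infty}^{\infty}\frac{1}{\sigma_i}\,\varphi\!\left(\frac{x-\mu_i}{\sigma_i}\right)\prod_{j\neq i}\Phi\!\left(\frac{x-\mu_j}{\sigma_j}\right)\,dx,$$
where $\varphi$ and $\Phi$ denote the standard normal density and distribution function.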
Does anyone have references where I can learn more about this problem, for instance whether the objective function is convex in $(\vec{\mu},\vec{\sigma})$?
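For what it's worth, here is a numerical sketch of the forward map and the least-squares fit (the names `max_probs` and `fit` are my own, not standard). Note the problem is not identifiable as stated: applying a common increasing affine map $x\mapsto ax+b$ to all the $X_i$ leaves the probabilities unchanged, so the sketch pins $\mu_1=0$ and $\sigma_1=1$ and optimizes the remaining $2n-2$ parameters, with $\log\sigma_j$ used to keep the scales positive.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import least_squares
from scipy.stats import norm

def max_probs(mu, sigma):
    """tilde_P_i = P(X_i = max_j X_j) for independent X_j ~ N(mu_j, sigma_j^2),
    computed by one-dimensional quadrature."""
    n = len(mu)
    out = np.empty(n)
    for i in range(n):
        def integrand(x, i=i):
            # density of X_i at x, times P(X_j <= x) for every other j
            p = norm.pdf(x, loc=mu[i], scale=sigma[i])
            for j in range(n):
                if j != i:
                    p *= norm.cdf(x, loc=mu[j], scale=sigma[j])
            return p
        out[i], _ = quad(integrand, -np.inf, np.inf)
    return out

def fit(P, n_restarts=5, seed=0):
    """Least-squares fit of (mu, sigma) to target probabilities P.
    Gauge fixing: mu_1 = 0, sigma_1 = 1 (probabilities are invariant under
    a common increasing affine change of variables); sigma_j > 0 is enforced
    by optimizing log sigma_j."""
    P = np.asarray(P, float)
    n = len(P)
    rng = np.random.default_rng(seed)

    def residuals(theta):
        mu = np.concatenate([[0.0], theta[: n - 1]])
        sigma = np.exp(np.concatenate([[0.0], theta[n - 1 :]]))
        return max_probs(mu, sigma) - P

    best = None
    for _ in range(n_restarts):
        res = least_squares(residuals, rng.normal(size=2 * n - 2))
        if best is None or res.cost < best.cost:
            best = res
    mu = np.concatenate([[0.0], best.x[: n - 1]])
    sigma = np.exp(np.concatenate([[0.0], best.x[n - 1 :]]))
    return mu, sigma, best.cost
```

In the two-variable case with equal variances the forward map has a closed form, $\tilde{P}_2 = \Phi\big((\mu_2-\mu_1)/\sqrt{\sigma_1^2+\sigma_2^2}\big)$, which gives a quick sanity check on the quadrature.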