Imagine you have a two-dimensional multivariate normal random variable with $\mu = [0, 0]$ and $\Sigma = \begin{bmatrix}1 & r\\r & 1\end{bmatrix}$. (Conceptually, you have two standard normal random variables with a correlation of $r$.) You take $N$ samples from this variable, so that you have an $N \times 2$ matrix: the first column contains the samples from the first dimension, and the second column contains the samples from the second dimension. (These columns, of course, have a correlation of $r$.)
Here comes the crucial part. You take the $K$ rows with the largest values in the first column. Then, from among those $K$ rows, you take the largest value in the second column.
What is the distribution of this final number? It's easy to simulate, but is it analytically solvable (or does anyone know where I could start looking)?
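For reference, here is a minimal simulation sketch of the procedure described above (the function name `simulate_top_k_max` is my own; NumPy is assumed). It draws $N$ samples from the bivariate normal, keeps the $K$ rows with the largest first coordinate, and records the largest second coordinate among them, repeated over many trials:

```python
import numpy as np

def simulate_top_k_max(N, K, r, trials=10_000, rng=None):
    """Monte Carlo draws of the quantity in question.

    For each trial: sample N points from a bivariate normal with
    mean [0, 0] and unit variances with correlation r, select the K
    points with the largest first coordinate, and return the largest
    second coordinate among those K.
    """
    rng = np.random.default_rng() if rng is None else rng
    cov = np.array([[1.0, r], [r, 1.0]])
    out = np.empty(trials)
    for t in range(trials):
        x = rng.multivariate_normal([0.0, 0.0], cov, size=N)
        top_k = x[np.argsort(x[:, 0])[-K:]]  # K rows, largest first coordinate
        out[t] = top_k[:, 1].max()           # max second coordinate among them
    return out
```

One sanity check: when $r = 0$, the selection by the first column is independent of the second column, so the result is simply the maximum of $K$ iid standard normals (e.g. for $K = 2$, the mean is $1/\sqrt{\pi} \approx 0.564$). That special case also suggests a starting point: the general problem mixes order statistics with the conditional distribution of the second coordinate given the first, $X_2 \mid X_1 = x \sim \mathcal{N}(rx,\, 1 - r^2)$.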