By definition, a k-dimensional random vector X = (X1, X2, ..., Xk) is (multivariate) normally distributed if and only if every linear combination of its components is normally distributed. This implies that each component of X is itself normally distributed.
What about the converse? What is an example of a k-dimensional random vector X where each of X1, X2, ..., Xk is normally distributed but X is not?
If there is a good example in 2-D, that would be even better, because I could plot the probability density as a surface in 3-D and visualize it to improve my intuition.
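For reference, here is a minimal sketch (assuming NumPy) of the kind of 2-D density surface I have in mind: it evaluates a standard bivariate normal density on a grid, which could then be handed to any 3-D surface plotter. The grid bounds, resolution, and correlation value are arbitrary choices for illustration.

```python
import numpy as np

def bivariate_normal_pdf(x, y, rho=0.0):
    """Density of a bivariate normal with zero means, unit variances,
    and correlation rho (the textbook closed form)."""
    norm = 1.0 / (2 * np.pi * np.sqrt(1 - rho**2))
    z = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    return norm * np.exp(-z / 2)

# Evaluate the density on a square grid covering +/- 4 standard deviations.
xs = np.linspace(-4, 4, 201)
X, Y = np.meshgrid(xs, xs)
Z = bivariate_normal_pdf(X, Y, rho=0.5)

# Sanity check: the density should numerically integrate to roughly 1.
dx = xs[1] - xs[0]
total = Z.sum() * dx * dx
print(round(total, 2))  # close to 1.0
```

A counterexample's joint density, plotted the same way, would presumably deviate visibly from this smooth elliptical bell surface even though both marginal slices still look Gaussian.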
Motivation for this question: I have been studying Professor Andrew Ng's Machine Learning course, where we model a dataset using a multivariate normal distribution to detect anomalies / outliers. So I am trying to understand when it is "OK" to model the data this way. In the univariate case, I can plot the data and see whether it "generally" follows a bell curve, but it's hard to get the same understanding of a multi-dimensional dataset.