Quoting https://en.wikipedia.org/wiki/Elastic_net_regularization : "elastic net is a regularized regression method that linearly combines the L1 and L2 penalties of the lasso and ridge methods."
L1-regularised learning corresponds to a MAP estimate with a Laplacian prior on the weights: p(w) ~ Laplace(0, b)
L2-regularised learning corresponds to a MAP estimate with a normal prior on the weights: p(w) ~ N(0, sigma^2)
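To spell out the correspondence I have in mind (just a sketch; $\lambda$ is shorthand for the resulting penalty weight, and the likelihood term is whatever loss the model uses):

$$\hat{w}_{\text{MAP}} = \arg\max_w \, p(w \mid D) = \arg\min_w \big[ -\log p(D \mid w) - \log p(w) \big]$$

With a Laplace prior $p(w) \propto e^{-|w|/b}$ the term $-\log p(w) = |w|/b + \text{const}$, i.e. an L1 penalty with $\lambda = 1/b$; with a normal prior $p(w) \propto e^{-w^2/(2\sigma^2)}$ one gets $-\log p(w) = w^2/(2\sigma^2) + \text{const}$, an L2 penalty with $\lambda = 1/(2\sigma^2)$.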
I'm struggling to recognise the prior that gives rise to the combined L1 + L2 penalty: p(w) ∝ e^(-(a|w| + bw^2)). Insight appreciated!
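(The best I can do is note that this exponent factors as $e^{-a|w|} \cdot e^{-b w^2}$, i.e. a Laplace kernel times a Gaussian kernel, but I don't recognise that product as any standard named distribution.)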