The comments address why the numerical evaluation of logsumexp, as it's known, requires special treatment, but not why the function arises in the first place.
First consider the special case $x_1 = 0, x_2 = x$. Then $\text{logsumexp} = \log(1 + e^x)$, known as the softplus function, which is a differentiable approximation to the rectifier $\max(0, x)$; the rectifier itself is not differentiable at $x = 0$.
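If you want to see this numerically, here is a minimal NumPy sketch (the name `softplus` is just my label); the rewrite $\log(1 + e^x) = \max(0, x) + \log(1 + e^{-|x|})$ is a standard numerically stable form:

```python
import numpy as np

def softplus(x):
    # Stable evaluation: log(1 + e^x) = max(0, x) + log(1 + e^(-|x|)),
    # which avoids overflow in e^x for large positive x.
    return np.maximum(0.0, x) + np.log1p(np.exp(-np.abs(x)))

xs = np.array([-10.0, -1.0, 0.0, 1.0, 1000.0])
print(softplus(xs))         # smooth everywhere, ~equal to the rectifier away from 0
print(np.maximum(0.0, xs))  # the non-differentiable rectifier max(0, x)
```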
Now consider the general version, $\text{logsumexp}(x_1, \ldots, x_n) = \log\left(\sum_{i=1}^n e^{x_i}\right)$. Because $e^{\max_i x_i} \le \sum_{i=1}^n e^{x_i} \le n\, e^{\max_i x_i}$, taking logs gives $\max(x_1, \ldots, x_n) \le \text{logsumexp}(x_1, \ldots, x_n) \le \max(x_1, \ldots, x_n) + \log(n)$. Therefore logsumexp serves as a differentiable approximation to the max of several numbers; max itself is not differentiable wherever two or more arguments tie for the maximum.
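This also connects back to the numerical issue raised in the comments: shifting by the max before exponentiating is the standard stable evaluation, and the sandwich bound above is easy to check. A rough sketch (SciPy ships the same idea as `scipy.special.logsumexp`):

```python
import numpy as np

def logsumexp(x):
    # Shift by the max so the largest term is e^0 = 1: this is the
    # "special treatment" the comments refer to, and it cannot overflow.
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

x = np.array([1000.0, 999.0, -20.0])
lse = logsumexp(x)  # the naive log(sum(exp(x))) would overflow to inf here
print(lse)                                             # ~1000.3133
print(np.max(x) <= lse <= np.max(x) + np.log(len(x)))  # True: the sandwich bound
```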
It also turns out that logsumexp is jointly convex in its arguments, although this may not be readily apparent. Moreover, it can be expressed via an epigraph formulation in terms of a product of $n$ exponential cones, which are convex. This makes logsumexp a high-level building block for formulating convex optimization problems in conic form, which in turn enables the use of specialized conic solvers that handle problems involving logsumexp more efficiently and robustly than general-purpose solvers can, provided the rest of the problem also admits a conic formulation. As an example, the CVX convex optimization package provides the function log_sum_exp for exactly this purpose.
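CVX itself is MATLAB-based, so as an illustration here is the analogous call in CVXPY, its Python counterpart, on a small made-up problem (the data and dimensions are arbitrary); CVXPY recognizes log_sum_exp and hands the exponential-cone formulation to a conic solver:

```python
import cvxpy as cp
import numpy as np

# Made-up data: minimize logsumexp of an affine expression over the simplex.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)

x = cp.Variable(3)
objective = cp.Minimize(cp.log_sum_exp(A @ x + b))
constraints = [cp.sum(x) == 1, x >= 0]
prob = cp.Problem(objective, constraints)
prob.solve()  # dispatched to a conic solver that supports the exponential cone
print(prob.value, x.value)
```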
There's a whole heck of a lot more which can be added, but this ought to get you started.