In this case NaN (not a number) is returned because the calculation of the exponential overflows in double-precision arithmetic.
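You can reproduce the failure directly in R with the naive formula $\exp(x)/(1+\exp(x))$:

```r
# The largest finite double is about 1.797e308, and exp(x) exceeds it
# once x is greater than roughly 709.78, so exp(710) evaluates to Inf.
exp(710)                   # Inf
# Inf / (1 + Inf) is Inf / Inf, which IEEE arithmetic defines as NaN.
exp(710) / (1 + exp(710))  # NaN
```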
An algebraically equivalent expression, expanded as a geometric series in $\exp(-x)$ (convergent for $x \gt 0$), is
$$\frac{\exp(x)}{1+\exp(x)} = \frac{1}{1+\exp(-x)} = 1 - \exp(-x) + \exp(-2x) - \cdots.$$
Because this is an alternating series whose terms decrease in size, the error made in dropping any term is no greater than the size of the first term dropped. Thus when $x \gt 710$, the error in returning $1$ is no greater than $\exp(-710) \approx 4.5\times 10^{-309} \approx 2^{-1024}$ relative to the true value. That is far more precise than any statistical calculation needs to be, so you're fine replacing the return value with $1$ in this situation.
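The size of that bound is easy to verify numerically; this sketch shows both the magnitude of the dropped term and that the rounded result is already exactly $1$ in double precision:

```r
# The first dropped term, exp(-710), is a subnormal double of order 1e-309.
exp(-710)            # about 4.5e-309
# Adding it to 1 rounds away entirely, so the expression returns exactly 1.
1 / (1 + exp(-710))  # 1
```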
Interestingly, R will not produce a NaN when the exponential underflows. Thus you can just choose the more reliable version of the calculation, depending on the sign of $x$, as in
f <- function(x) ifelse(x < 0, exp(x) / (1 + exp(x)), 1 / (1 + exp(-x)))
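A quick comparison at extreme arguments confirms that the branching version stays finite where the naive formula fails:

```r
# f picks whichever algebraic form avoids overflow for the sign of x.
f <- function(x) ifelse(x < 0, exp(x) / (1 + exp(x)), 1 / (1 + exp(-x)))
f(c(-1000, 0, 1000))         # 0.0 0.5 1.0 -- well defined at both extremes
exp(1000) / (1 + exp(1000))  # NaN -- the naive formula overflows
```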
This issue shows up on almost every computing platform (I have yet to see an exception), and platforms vary in how they handle overflow and underflow. Exponentials are notorious for creating these kinds of problems, but they are not alone. Therefore it's not enough just to have a solution in R: a good statistician understands the principles of computer arithmetic and knows how to use them to detect and work around the idiosyncrasies of her computing environment.