Possible Duplicate:
Why does including latitude and longitude in a GAM account for spatial autocorrelation?
I am interested in the effect of a predictor vector $X_i$ on a binary outcome $Y_i$, with corresponding spatial location $s_i$, so I've considered the spatial random effects model,
\begin{equation} \operatorname{logit}(P(Y_i=1)) = X_i \beta + Z(s_i) \end{equation}
where $Z(s_i)$ is a stationary, mean-zero random process whose covariance structure is known up to a small vector of parameters $\theta$. This model is usually fit with Bayesian sampling algorithms (MCMC), which can take a very long time to run when $n$ is large, since one must sample from the joint posterior distribution of $\beta$, $\theta$, and the $n$ random effects. To avoid that computational burden and the expert tuning MCMC requires, I've instead fit the model
\begin{equation} \operatorname{logit}(P(Y_i=1)) = X_i \beta + f(s_i) \end{equation}
where $f$ is a smooth surface over the two spatial coordinates, estimated by a 2-D spline; this model can be fit with the gam package in R in a matter of seconds. In simulation studies I have found that it is effective at removing the spatial autocorrelation from the residuals and gives reasonable estimates of $\beta$. This leaves me wondering why the approach is not more prominent in the literature. Why do practitioners so often opt for the computationally intensive sampling algorithms instead? Is there some drawback, or something systematically lacking, in the spline approach?
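To make the simulation setup concrete, here is a minimal NumPy sketch of the comparison I have in mind. It is not the R gam workflow: a grid of Gaussian bumps stands in for the 2-D spline basis, and a ridge-penalized Newton/IRLS fit stands in for the penalized-likelihood machinery; the knot grid, bandwidth, and penalty are arbitrary choices for illustration. The point is that when $X_i$ is spatially structured, omitting the spatial surface distorts $\hat\beta$, while adding a flexible basis in $s$ recovers it.

```python
# Sketch: binary outcome with a spatial confounder; compare a plain logit fit
# with one that includes a crude spatial basis (Gaussian bumps, standing in
# for a 2-D spline). All tuning choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Spatial locations s_i on the unit square and a smooth surface Z(s_i).
s = rng.uniform(0, 1, size=(n, 2))
z = np.sin(2 * np.pi * s[:, 0]) + np.cos(2 * np.pi * s[:, 1])

# Predictor correlated with location, so omitting the spatial term biases beta.
x = s[:, 0] + rng.normal(0, 0.5, n)
beta_true = 1.0
eta = beta_true * x + z
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

def fit_logit(D, y, ridge=1e-4, iters=50):
    """Newton/IRLS for logistic regression with a small ridge penalty."""
    b = np.zeros(D.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-D @ b))
        W = p * (1 - p)
        H = D.T @ (W[:, None] * D) + ridge * np.eye(D.shape[1])
        g = D.T @ (y - p) - ridge * b
        b = b + np.linalg.solve(H, g)
    return b

# Model 1: intercept + x only, ignoring space.
D0 = np.column_stack([np.ones(n), x])
b0 = fit_logit(D0, y)

# Model 2: add a spatial basis f(s) -- Gaussian bumps on a 5x5 knot grid.
knots = np.stack(
    np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5)), axis=-1
).reshape(-1, 2)
B = np.exp(-((s[:, None, :] - knots[None]) ** 2).sum(-1) / (2 * 0.15 ** 2))
D1 = np.column_stack([np.ones(n), x, B])
b1 = fit_logit(D1, y, ridge=1.0)

print("beta-hat, no spatial term :", round(b0[1], 2))
print("beta-hat, with f(s) basis :", round(b1[1], 2))
```

In a real analysis the penalty would be chosen by GCV or REML rather than fixed, which is exactly what the spline-smoothing software automates.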