One way to picture the regression function above is that your $y_j$ are approximated by scaled basis functions sitting on the $c_i$. If you plot the basis functions, you can see that they are not the same but fall off differently when $r$ (or $x$ in my case) is large. One possibility to compare them would be to consider
\begin{align}
\frac{1}{1+x^{2}} &= \exp\left(\log\left(\frac{1}{1+x^{2}}\right)\right)
= \exp\left(-\log\left(1+x^{2}\right)\right)\\
\frac{1}{\sqrt{1+x^{2}}} &= \exp\left(-\frac{1}{2}\log\left(1+x^{2}\right)\right)
\end{align}
For $x^2$ close to zero, $\log(1+x^2)\approx x^2$, which tells you that, around zero, both functions are pretty similar to the Gaussian $\exp(-x^2)$. However, because the exponent only grows like $\log(1+x^2)$ instead of $x^2$, the inverse quadratic falls off more slowly than the Gaussian. You can also see that there is only a factor of $\frac{1}{2}$ in the exponent between the inverse quadratic and the inverse multiquadric.
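If you want to see this directly, here is a minimal sketch (assuming NumPy and Matplotlib are available; the grid and the absence of a shape parameter are illustrative choices, not part of your setup) that plots all three basis functions on a log scale, where the different decay rates are easy to see:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 5, 500)

gaussian             = np.exp(-x**2)               # exp(-x^2)
inverse_quadratic    = 1.0 / (1.0 + x**2)          # exp(-log(1+x^2))
inverse_multiquadric = 1.0 / np.sqrt(1.0 + x**2)   # exp(-0.5*log(1+x^2))

plt.plot(x, gaussian, label="Gaussian")
plt.plot(x, inverse_quadratic, label="inverse quadratic")
plt.plot(x, inverse_multiquadric, label="inverse multiquadric")
plt.yscale("log")   # log scale makes the different tail behaviour obvious
plt.xlabel("r")
plt.ylabel("basis function value")
plt.legend()
plt.show()
```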
Another way of comparing the functions is to look at their Fourier transforms. The faster the transform decays, i.e. the fewer high-frequency components a function contains, the smoother it looks.
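As a rough numerical sketch of that comparison (window size, grid spacing, and the 1-cycle cutoff are all arbitrary illustration choices, and the finite window only approximates the continuous transform), you could sample each function, take an FFT, and look at how much spectral mass sits above some frequency:

```python
import numpy as np

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

kernels = {
    "Gaussian":             np.exp(-x**2),
    "inverse quadratic":    1.0 / (1.0 + x**2),
    "inverse multiquadric": 1.0 / np.sqrt(1.0 + x**2),
}

freqs = np.fft.rfftfreq(x.size, d=dx)
for name, k in kernels.items():
    spectrum = np.abs(np.fft.rfft(k)) * dx   # crude approximation of the continuous FT
    # fraction of spectral mass above 1 cycle per unit length:
    # a simple proxy for high-frequency content
    high = spectrum[freqs > 1.0].sum() / spectrum.sum()
    print(f"{name:22s} high-frequency share: {high:.4f}")
```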
In the end, you have to decide what properties fit your data best. If you don't know, you could always look at the prediction error in a cross-validation and choose the function with the lowest average error.
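A hedged sketch of that selection procedure follows: for each candidate basis function, fit the weights of $\sum_i w_i\,\phi(|x-c_i|)$ by least squares and keep the kernel with the lowest average held-out error. The toy data, the centers $c_i$, and the number of folds are all illustrative assumptions, not part of your problem.

```python
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 80))
y = np.sin(2 * x) + 0.1 * rng.standard_normal(x.size)   # toy data
centers = np.linspace(-3, 3, 15)                         # the c_i

kernels = {
    "Gaussian":             lambda r: np.exp(-r**2),
    "inverse quadratic":    lambda r: 1.0 / (1.0 + r**2),
    "inverse multiquadric": lambda r: 1.0 / np.sqrt(1.0 + r**2),
}

def design(xs, phi):
    # design matrix Phi[j, i] = phi(|x_j - c_i|)
    return phi(np.abs(xs[:, None] - centers[None, :]))

for name, phi in kernels.items():
    errors = []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(x):
        w, *_ = np.linalg.lstsq(design(x[train], phi), y[train], rcond=None)
        pred = design(x[test], phi) @ w
        errors.append(np.mean((pred - y[test])**2))
    print(f"{name:22s} CV mean squared error: {np.mean(errors):.4f}")
```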
In general, the Gaussian functions are not a bad choice because they have universal approximation properties. This means that (with enough basis functions) you can approximate any continuous function arbitrarily closely. I don't know whether this is true for the other functions as well.