Let $A$ be the matrix that R claims is computationally singular. For the purpose of this answer, let's assume that it is mathematically non-singular but close to singular (otherwise lowering the tolerance will achieve nothing). Let $\epsilon$ be the tolerance in use. For condition numbers, see https://en.wikipedia.org/wiki/Condition_number. The condition number is (a maximal bound on) the quotient between the relative error in the result and the relative error in the data. That is, it is a kind of derivative! But since a matrix can be perturbed in many different directions, this derivative will depend on the direction in which you perturb the matrix, and the effect on the relative error of the solution could be large. If you are lucky, the dependence for the directions that are relevant to you could be small! The condition number is the maximum over all these directional derivatives. The condition number given in your R output is about $\kappa = 0.5\cdot 10^{21}$.
In R all calculations are done in double precision, so the machine epsilon, eps, is about $10^{-16}$. Suppose you change the input data by about one eps (relatively); then the output could change by as much as $\text{eps}\cdot \kappa = 0.5\cdot 10^{5}$. So ask yourself whether you are comfortable with that!
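You can do this back-of-the-envelope check directly in R. A minimal sketch, assuming `A` is your near-singular matrix (the names here are placeholders, not from your output):

```r
# Relate machine epsilon to the condition number that R reports.
eps <- .Machine$double.eps   # about 2.2e-16 in double precision
kap <- kappa(A)              # estimate of the condition number of A
eps * kap                    # rough bound on the relative error amplification
```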
Or you could investigate it with your data, by simulation. First find a tolerance low enough that R will give a solution (if that is impossible, nothing can be done). Then, a few times, add some random noise to your input data, with a variance determined by the level of measurement noise in your data. Run the analysis in R with the perturbed data, and see how much the output changes. Use that to decide whether the lowered tolerance is acceptable for you.
A few simulation runs should be enough.
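A sketch of that experiment for a plain linear system, assuming `A` and `b` are your data and `noise_sd` is your assumed measurement-noise level (all three are placeholders):

```r
set.seed(42)
tol <- 1e-25                    # a tolerance low enough for solve() to succeed
x0  <- solve(A, b, tol = tol)   # baseline solution
for (i in 1:5) {                # a handful of runs suffices
  # perturb A with random noise at the assumed measurement-noise level
  Ap <- A + matrix(rnorm(length(A), sd = noise_sd), nrow = nrow(A))
  xp <- solve(Ap, b, tol = tol)
  cat("relative change:", sqrt(sum((xp - x0)^2) / sum(x0^2)), "\n")
}
```

If the printed relative changes are small compared to the accuracy you need, the lowered tolerance is probably acceptable; if they are large, no choice of tolerance will rescue the analysis.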
For the case of linear models, there is an R package on CRAN, perturb, that helps with this.
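A hedged usage sketch (function and argument names as I recall them from the package documentation; check `?perturb` before relying on this, and note that `mydata`, `y`, `x1`, `x2` are placeholders):

```r
# Re-fit a linear model under small random perturbations of selected
# predictors, then inspect the spread of the coefficient estimates.
library(perturb)
mod <- lm(y ~ x1 + x2, data = mydata)
p   <- perturb(mod, pvars = c("x1", "x2"), prange = c(0.01, 0.01))
summary(p)   # how much the coefficients move across the perturbed refits
```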