The motivation for this question is that estimating the relative error when the true value is zero leads to division by zero and hence an infinite (or undefined) result. I have seen multiple replies to this problem, but I am not satisfied with any of them, so I thought I could simply replace the zeros with $0.0\ldots1$, placing that trailing digit $1$ one place beyond the decimal position to which I round my floats, so that the change makes no practical difference. Please let me know if you think this is a bad strategy.
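To make the idea concrete, here is a minimal Python sketch of that strategy (the function name, the NumPy usage, and the exact choice of placing the extra digit one place beyond the rounding precision are illustrative assumptions on my part, not a fixed implementation):

```python
import numpy as np

def relative_error(true_vals, approx_vals, decimals=3):
    """Relative error with the zero-replacement idea described above.

    Zeros among the true values are replaced by 10**-(decimals + 1): a digit 1
    placed one place beyond the rounding precision, so it is invisible after
    rounding but keeps the denominator finite.
    """
    true_vals = np.asarray(true_vals, dtype=float)
    approx_vals = np.asarray(approx_vals, dtype=float)
    eps = 10.0 ** -(decimals + 1)                      # the appended digit 1
    safe_true = np.where(true_vals == 0.0, eps, true_vals)
    return np.abs(approx_vals - safe_true) / np.abs(safe_true)

# A true value of 0.0 no longer gives an infinite relative error:
print(relative_error([0.0, 0.25], [0.001, 0.26], decimals=3))
```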
However, I can imagine that the decimal place to which the data should be rounded depends on the data set we are dealing with. This observation seems very basic and is probably documented somewhere, but I did not find anything relevant; forgive me if I am mistaken.
For example, say that my data set is $$0.0001232, 0.0002342, 0.015652, 0.0001456, 0.0021291, 0.0009124, \ldots$$ and consists of numbers of this magnitude; then I suppose it makes sense to keep $5$ or $6$ decimal digits when rounding.
Another example: $$3.32855, 0.55732, 0.815652, 0.09456, 0.009963, 0.47877, 1.78987, \ldots$$ and numbers of this magnitude; then I suppose it makes sense to keep $2$ or $3$ decimal digits when rounding.
I am looking for a mathematical way to determine, in such cases, how many digits of rounding make the most sense. I would expect such a method to give either a general rule applied to the whole data set, or a per-value rule that treats each value according to the rest of the values in the data set.
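To illustrate the kind of rule I mean, here is a rough Python sketch that formalizes the intuition behind the two examples above (the use of the median absolute value as the "typical magnitude" and the number of significant figures kept are arbitrary choices I made for illustration, not something I am proposing as the answer):

```python
import numpy as np

def decimals_to_keep(data, sig_figs=2):
    """Pick a decimal place from the typical magnitude of the data:
    count the leading zeros after the decimal point in the median absolute
    value, then keep `sig_figs` significant digits on top of that."""
    data = np.asarray(data, dtype=float)
    scale = np.median(np.abs(data[data != 0.0]))   # typical magnitude, zeros excluded
    return int(np.ceil(-np.log10(scale))) + sig_figs - 1

print(decimals_to_keep([0.0001232, 0.0002342, 0.015652, 0.0001456, 0.0021291, 0.0009124]))  # 5
print(decimals_to_keep([3.32855, 0.55732, 0.815652, 0.09456, 0.009963, 0.47877, 1.78987]))  # 2
```

This reproduces the digit counts I guessed for the two examples, but the constants in it are still guesses, which is exactly the vagueness I would like to remove.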
For more specific advice, please note that in my actual case the values vary roughly uniformly between $-1.5$ and $1.5$, so I would say it makes sense to keep $2$ or $3$ decimals when rounding, but I don't like this vague approach and I am looking for something mathematically stricter.
Thanks in advance!