
The motivation for this question is that estimating the relative error when the true value is zero produces an infinite (or undefined) result. I have seen several replies to this problem, but I am not satisfied with any of them; so I thought I could simply replace each zero with $0.0\dots01$, appending the digit $1$ just after the decimal place at which I round the floats, so that this extra digit makes no practical difference. Please let me know if you think this is a bad strategy.
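To make the idea concrete, here is a minimal sketch of what I mean (the function name and the threshold `eps=1e-3` are my own choices, not established conventions): instead of editing the zeros in the data, one can floor the denominator of the relative error.

```python
def relative_error(true, approx, eps=1e-3):
    """Relative error with a floor on the denominator, so that a
    true value of 0 does not cause division by zero.
    eps is an arbitrary threshold; it should be chosen per data set,
    e.g. one decimal place beyond the rounding precision."""
    denom = max(abs(true), eps)
    return abs(approx - true) / denom

# When the true value is 0, the denominator becomes eps instead of 0:
print(relative_error(0.0, 0.002))  # 2.0
print(relative_error(0.5, 0.502))  # ordinary relative error, ~0.004
```

This is equivalent to the $0 \to 0.0\dots01$ replacement for the zero entries, but leaves the data itself untouched.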

However, I can imagine that, depending on the data set we are dealing with, the decimal place at which we should round the data varies. I think this observation is very basic and must be documented somewhere, but I did not find anything relevant. Forgive me if I am mistaken.

For example, say that my data set is $$0.0001232, 0.0002342, 0.015652, 0.0001456, 0.0021291, 0.0009124, \dots$$ and numbers like these; then I suppose it makes sense to keep $5$ or $6$ decimal digits when rounding.

Another example: $$3.32855, 0.55732, 0.815652, 0.09456, 0.009963, 0.47877, 1.78987, \dots$$ and numbers like these; then I suppose it makes sense to keep $2$ or $3$ decimal digits when rounding.

I am looking for a mathematical way to determine, in such cases, how many digits of rounding would make the most sense. I would expect such a method to give either a general rule for treating the whole data set, or a specific rule for treating each value according to the rest of the values in the data set.
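One candidate rule I can imagine (an assumption of mine, not an established standard): fix a number of significant digits $s$ you want to retain at the data set's "typical" magnitude, take the median of the absolute values as that typical magnitude, and keep $s - 1 - \lfloor \log_{10}(\text{median}) \rfloor$ decimal places. A sketch, applied to the two example data sets above:

```python
import math
from statistics import median

def suggested_decimals(data, sig_digits=3):
    """Heuristic (not a standard rule): keep enough decimal places to
    retain sig_digits significant digits at the data set's typical
    magnitude, measured as the median of the nonzero absolute values."""
    typical = median(abs(x) for x in data if x != 0)
    return sig_digits - 1 - math.floor(math.log10(typical))

small = [0.0001232, 0.0002342, 0.015652, 0.0001456, 0.0021291, 0.0009124]
mixed = [3.32855, 0.55732, 0.815652, 0.09456, 0.009963, 0.47877, 1.78987]

print(suggested_decimals(small))  # 6
print(suggested_decimals(mixed))  # 3
```

With $s = 3$ this reproduces the intuitive answers above ($6$ and $3$ decimal places), and for data roughly uniform in $[-1.5, 1.5]$ it also gives $3$; the median makes the rule robust to a few outlying magnitudes, whereas using the minimum would let a single tiny value inflate the precision.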

For more specific advice, please note that in my actual case the values vary roughly uniformly between $-1.5$ and $1.5$, so I would say it makes sense to keep $2$ or $3$ decimals when rounding; but I don't like this vague approach and am looking for something mathematically stricter.

Thanks in advance!

  • I wouldn't round any numbers from the data that is given to me even if some of them have different precision than the others. After doing a calculation with those numbers, such as the sample mean, then I would round the result at the end to have the same precision (or less) as the original numbers. If the original numbers don't have the same precision as each other, then use the least precision. – John L Feb 19 '21 at 15:45
  • @JohnL thanks for your reply. I agree with what you say, but it's not very relevant to my question; probably I wasn't very clear. I am just looking for a mathematical way to decide how many digits of my data are meaningful to keep, based on the values the data set consists of. For my task, if for example it suggests 2 decimal digits, I would just modify 0 -> 0.001 to compute the relative error against the original values, as you also say. I hope it makes more sense now. – athantas Feb 19 '21 at 17:26

0 Answers