Why do we round the standard deviation up when ordinary rounding would give 0? For instance, if our SD is 0.02, we round it up to 0.1, but if it is 0.06, we keep it at 0.06. Can someone please explain? Thanks

Edit: The context is uncertainty reporting in chemistry.

Jon
  • Where do you see this recommended? On the face of it, there are no grounds for rounding an SD of 0.02 to 0.1. On the contrary, as a reviewer I would be firm in requesting more decimal places. – Nick Cox Feb 17 '19 at 15:19
  • Hmm... where do you see this sort of rounding? – Siong Thye Goh Feb 17 '19 at 15:19

0 Answers