Why do we round the standard deviation up when normal rounding would make it 0? For instance, if our SD is 0.02, we round it up to 0.1, but if it is 0.06, we keep it at 0.06. Can someone please explain? Thanks
Edit: The context is uncertainties in chemistry.
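Edit 2: To make the rule I'm asking about concrete, here is a minimal Python sketch of how I understand it. The assumption (not something I've confirmed) is that the uncertainty is rounded upward, never down, at whatever decimal precision the result is reported to, so a nonzero SD never becomes 0. The function name and example values are just illustrative.

```python
from decimal import Decimal, ROUND_CEILING

def round_up_uncertainty(sd: str, quantum: str) -> Decimal:
    """Round an uncertainty upward (never down) to the given quantum,
    e.g. quantum='0.1' for one decimal place of reporting precision.

    This encodes my assumed rule: an uncertainty must never be
    understated, so we always round toward +infinity."""
    return Decimal(sd).quantize(Decimal(quantum), rounding=ROUND_CEILING)

# SD of 0.02 reported to one decimal place: plain rounding would give 0.0,
# so it gets bumped up to 0.1.
print(round_up_uncertainty("0.02", "0.1"))   # 0.1

# SD of 0.06 reported to two decimal places is already at the reporting
# precision, so it stays 0.06.
print(round_up_uncertainty("0.06", "0.01"))  # 0.06
```

If my reading is right, the two examples differ only in reporting precision, and the "round up" step only kicks in when ordinary rounding would collapse the SD to 0. Is that the actual reasoning?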