The rule you are asserting here sounds like a variation of the idea of significant figures, except that your rule makes the number of decimal places depend on the sample size. I do not see any reason why this would be sensible. If the sample size is known, and the sample values are known (up to the precision of the computing platform), then the estimator is some number that is a function of that known data, and so it is also known (up to the precision of the computing platform). In this case there is no need for any "rounding" of the actual number, since the point estimate is perfectly known. The only effect of reducing the number of decimal places is to reduce the accuracy of the representation of the real number in question, which introduces a further source of error.
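As a concrete illustration, here is a minimal Python sketch with simulated data (the normal distribution and the two-decimal cutoff are arbitrary choices of mine). Rounding a fully-known estimate to two decimal places can move it by at most 0.005, and that bound has nothing to do with the sample size:

```python
import random

random.seed(1)

# Rounding a fully-known point estimate to d decimal places can move it
# by at most 0.5 * 10**(-d); the bound does not depend on the sample size.
for n in (10, 1_000, 100_000):
    sample = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(sample) / n        # the estimate, known to full precision
    rounded = round(mean, 2)      # a 2-decimal representation of it
    print(f"n={n:>6}  mean={mean:+.10f}  "
          f"|representation error|={abs(mean - rounded):.10f}  (bound 0.005)")
```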
From a practical point of view, one certainly need not report real-valued estimates with lots of decimal places, since each successive digit is an order of magnitude less important, and long lists of decimal values are distracting to a reader. But that is a matter of presentation, not a rule of mathematical estimation --- you should generally give a truncated representation of a real estimate in a paper (i.e., give its value only up to some reasonable number of decimal places) but still treat the value at its full precision in the calculations. This is a stylistic matter, intended to help the reader grasp the information clearly, without distracting them with unnecessary decimal values.
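To make the "full precision in calculations, rounding only for reporting" point concrete, here is a small sketch (in Python, with made-up data; the choice of statistic and the two-decimal rounding are mine, purely for illustration):

```python
import statistics

# Made-up data purely for illustration.
data = [2.31, 2.47, 2.52, 2.38, 2.61, 2.44, 2.50, 2.39]

mean = statistics.mean(data)
sd = statistics.stdev(data)
se = sd / len(data) ** 0.5             # downstream quantity, full precision

# Rounding an intermediate value (here the sd) before using it
# propagates an extra error into everything computed from it.
se_from_rounded_sd = round(sd, 2) / len(data) ** 0.5

print(f"mean = {mean:.2f}, SE = {se:.4f}")   # rounded only for display
print(f"SE if sd is pre-rounded = {se_from_rounded_sd:.4f}")
```

The printed values are truncated for the reader, but every calculation behind them uses the full-precision numbers; only the pre-rounded intermediate produces a distorted result.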
Note here that Cole (2014) (the paper you link to in your question) is clear that what he is proposing is purely a set of stylistic rules for the presentation of statistical outputs. At the start of the article he observes that "...too many digits can swamp the reader, overcomplicate the story and obscure the message." Later in the article he explicitly notes that "It is important that any intermediate calculations are carried out to full precision, and that rounding is done only at the reporting stage".