Below are heuristics that point not necessarily to outright errors, but to frequent ways of using statistics sub-optimally.
Use of underpowered statistical tests. Do the authors mention a sample size (power) calculation, for example?
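As a rough illustration of the kind of calculation one might look for, here is a minimal sketch using statsmodels' power module. The effect size, significance level, and target power below are illustrative assumptions, not values from any particular study.

```python
# Sketch: sample size needed per group for a two-sample t-test,
# computed with statsmodels' power analysis tools.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed standardized effect size (Cohen's d)
    alpha=0.05,       # significance level
    power=0.8,        # desired power
    ratio=1.0,        # equal group sizes
)
print(f"Required sample size per group: {n_per_group:.1f}")
```

If a paper's groups are far smaller than what such a calculation implies for a plausible effect size, that is a warning sign.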
Data dredging, especially on “big data”. If every variable in a model is highly significant, this can be a sign that stepwise regression or a similar automated selection procedure was used, rather than variables being chosen through subject-matter reasoning.
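A quick way to convince yourself of the danger: with purely random data and enough candidate predictors, some will look “significant” by chance alone. A minimal simulation sketch (the sample size and variable count are arbitrary choices for the example):

```python
# Sketch: with 100 candidate predictors and pure noise, we still
# expect roughly 5% of them to pass p < 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_obs, n_vars = 200, 100
X = rng.normal(size=(n_obs, n_vars))  # random predictors
y = rng.normal(size=n_obs)            # outcome unrelated to X

# Test each predictor's correlation with the outcome separately.
p_values = [stats.pearsonr(X[:, j], y)[1] for j in range(n_vars)]
n_significant = sum(p < 0.05 for p in p_values)
print(f"'Significant' predictors out of {n_vars}: {n_significant}")
```

A selection procedure that searches over many such candidates and reports only the winners will make noise look like signal.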
Over-reliance on hypothesis tests. As consumers of the paper we do not know how many other tests were tried, and correction schemes for multiple comparisons have side effects of their own, such as depending on how many (acknowledged) tests were performed. Further, some fields have substantial prior knowledge; if this is not encoded into the analysis, for example through a Bayesian approach, “significant” hypothesis-test results remain too vulnerable to chance.
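To see how a correction depends on the number of acknowledged tests, here is a sketch using statsmodels' multipletests; the p-values are made up for illustration.

```python
# Sketch: the same p-value can survive or fail correction depending
# on how many tests are acknowledged. The p-values are illustrative.
from statsmodels.stats.multitest import multipletests

p_values = [0.012, 0.030, 0.041, 0.20, 0.55]

# Holm correction across all five acknowledged tests.
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print("Adjusted p-values:", p_adjusted.round(3))
print("Rejected at alpha=0.05:", reject)
```

Here none of the five tests survives correction, yet the first one, reported in isolation, would have stood uncorrected at p = 0.012. Any unacknowledged tests leave the reader unable to judge which situation applies.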
Dropping observations with missing values from the analysis (complete-case analysis), rather than using multiple imputation or a similar method, can bias the remaining sample and also reduces the power of subsequent tests.
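A sketch of the contrast using scikit-learn's IterativeImputer, a MICE-style imputer, on synthetic data. Note this shows a single imputation run; proper multiple imputation would repeat it with different random seeds and pool the results.

```python
# Sketch: complete-case analysis vs. iterative (MICE-style) imputation
# on synthetic data with values missing at random.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
X[:, 2] += X[:, 0]  # make column 2 correlated with column 0

# Knock out 30% of column 2 at random.
mask = rng.random(500) < 0.3
X_missing = X.copy()
X_missing[mask, 2] = np.nan

# Complete-case analysis: drop every row with a missing value.
complete_cases = X_missing[~np.isnan(X_missing).any(axis=1)]
print("Rows remaining after dropping:", len(complete_cases))

# Iterative imputation keeps all 500 rows by modelling each
# feature from the others.
X_imputed = IterativeImputer(random_state=0).fit_transform(X_missing)
print("Rows after imputation:", len(X_imputed))
```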
Something that is more difficult to discern is whether the authors understand the techniques they used. This can become more apparent if they give a presentation of the paper. A technique that is not sufficiently understood may well have been misused.