This question has been asked many times before, but the typical answer is usually "do a power calculation to determine the sample size."
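For context, that standard answer usually amounts to something like the following back-of-the-envelope computation. This is a minimal sketch using the normal approximation for a two-sample t-test; the effect size, alpha, and power values are illustrative assumptions of mine, not numbers taken from any particular paper:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided two-sample t-test
    (normal approximation): n = 2 * ((z_{1-a/2} + z_{power}) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" standardized effect (Cohen's d = 0.5) already needs ~63 per group:
print(sample_size_per_group(0.5))  # -> 63
```

By this logic, a two-arm comparison at a medium effect size would need well over 100 participants, which is exactly why the small samples in published experiments seem puzzling.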
What I can't grasp is that in many well-known papers the samples are remarkably small. For instance, one of the most famous papers in behavioral economics, "Cooperation and Punishment in Public Goods Experiments" (more than 3,600 citations on Google Scholar), uses 112 participants in total, divided into five sessions with groups of 4. Moreover, in some sessions participants were randomly re-matched between rounds, so the individual groups were no longer independent observations.
I believe I fundamentally misunderstand the whole concept of sample size: how can such a small sample support statistical analysis?