
I'm aware of model selection methods based on AIC and backwards/forwards selection, but I'm wondering if there is a general rule about how big your sample size $n$ should be (as an absolute minimum) given your model has $p$ parameters.

For example, suppose we have $p$ parameters all of which we would like to include (for whatever reason), how big should our sample size $n$ be in order for our estimates to have some degree of reliability?

Xiaomi
    The model selection bit isn't really related to the rest of your question. FWIW, you should be wary of those methods. To understand this more, it may help to read my answer here: [Algorithms for automatic model selection](https://stats.stackexchange.com/a/20856/7290). – gung - Reinstate Monica Oct 05 '18 at 01:54
  • I suppose I considered them related because, as I understand it, when your sample size is small AIC methods tend to be biased towards favoring saturated models. So sample size/parameters is in some way related to whether AIC methods are reliable? And thanks, will give that a read! – Xiaomi Oct 05 '18 at 02:06

1 Answer


The 'one in ten rule' is often used as a rule of thumb: it suggests roughly 10 observations for each variable being studied.

If you see a model that has substantially fewer observations per significant variable, it should set off some red flags.
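As a quick illustration, the rule of thumb above can be turned into a small helper. This is only a sketch of the heuristic (the function name and the default multiplier of 10 are illustrative, and different sources recommend different multipliers, e.g. 15 or 20):

```python
def min_sample_size(p, obs_per_parameter=10):
    """Rule-of-thumb minimum sample size for a model with p parameters,
    following the 'one in ten rule' (10 observations per parameter)."""
    return obs_per_parameter * p

# A model with 5 parameters would want at least 50 observations:
print(min_sample_size(5))  # 50
```

Note this is a floor, not a guarantee: with noisy data, correlated predictors, or small effect sizes, substantially more observations may be needed.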

Underminer