There is only one possible good reason for this: that the signal:noise ratio in your data is very high, i.e., the true $R^2$ is intrinsically high. But more likely you have used AIC to compare more than 3 possible models and you are just seeing noise. AIC is a restatement of $P$-values and as such has all the problems of $P$-value-guided stepwise variable selection. AIC just uses a better (i.e., larger) $\alpha$ cutoff than 0.05. In general, if you have more than 3 or 4 pre-specified models to compare, AIC has a low probability of selecting the "right" model.
On almost any dataset (although yours may be too small) you can check all of this by bootstrapping the entire variable selection process.
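A minimal sketch of what that bootstrap check might look like, using a simple forward selection by AIC for ordinary least squares (the `aic_linear` and `select_by_aic` helpers are hypothetical names, and the pure-noise data are simulated for illustration). The point is that the *whole* selection procedure is repeated on each resample, so you can see how unstable the chosen model is:

```python
import numpy as np

rng = np.random.default_rng(0)

def aic_linear(X, y):
    """AIC for an OLS fit with Gaussian errors (up to an additive constant)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (k + 1)  # +1 for the error variance

def select_by_aic(X, y):
    """Greedy forward selection of columns of X by AIC; intercept always kept."""
    n, p = X.shape
    chosen = []
    current = np.ones((n, 1))                 # intercept-only model
    best_aic = aic_linear(current, y)
    improved = True
    while improved:
        improved = False
        for j in set(range(p)) - set(chosen):
            a = aic_linear(np.column_stack([current, X[:, j]]), y)
            if a < best_aic:
                best_aic, best_j = a, j
                improved = True
        if improved:
            chosen.append(best_j)
            current = np.column_stack([current, X[:, best_j]])
    return tuple(sorted(chosen))

# Pure-noise example: y is unrelated to any of the 10 candidate predictors.
n, p = 100, 10
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Bootstrap the *entire* selection procedure, not just the final model.
selections = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    selections.append(select_by_aic(X[idx], y[idx]))

distinct = len(set(selections))
print(f"{distinct} distinct 'best' models across 200 bootstrap replicates")
```

If the selection were trustworthy, the same model would be chosen on nearly every resample; with noise predictors you will typically see many different "best" models, which is exactly the instability the answer warns about.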
AIC is most helpful when doing a highly structured assessment of a large group of parameters, e.g., "I have a 5-variable model and there are 7 other variables whose relevance is not established in the literature. Will I improve model performance by adding all 7?". Or "I have a linear additive model in 6 predictors. What is the value of expanding all of them into restricted cubic splines to allow for nonlinearity?".
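A sketch of the first kind of chunk comparison, again using a simple OLS AIC (the helper name and the simulated data are assumptions for illustration): two pre-specified models are compared once, with the 7 candidate variables added or dropped as a single block rather than selected one at a time:

```python
import numpy as np

rng = np.random.default_rng(1)

def aic_linear(X, y):
    """AIC for an OLS fit with Gaussian errors (up to an additive constant)."""
    n, k = X.shape
    rss = np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    return n * np.log(rss / n) + 2 * (k + 1)

n = 200
X_base = np.column_stack([np.ones(n), rng.standard_normal((n, 5))])  # 5-variable model
X_extra = rng.standard_normal((n, 7))                                # the 7 candidates
y = X_base @ np.array([1.0, 0.5, -0.5, 0.3, 0.0, 0.2]) + rng.standard_normal(n)

aic_small = aic_linear(X_base, y)
aic_big = aic_linear(np.column_stack([X_base, X_extra]), y)

# One pre-specified all-or-nothing decision: keep the 7 only if AIC improves.
print("add the 7 variables as a block:", aic_big < aic_small)
```

Because only two models are ever compared, this avoids the multiplicity problem of stepwise selection: the 7 extra parameters must jointly buy more fit than their combined AIC penalty.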