After re-reading your question, I believe you are asking about model selection among your candidate predictor variables, not about literally running all possible regressions. Fitting every possible model from a given set of predictors is subject to a high degree of data-mining bias. Because many of the sub-models are highly correlated with one another (they include almost entirely the same set of factors), you would need to adjust your t-statistics for the probability that, among the whole family of correlated models, some just randomly look successful within the particular sample you have. Adjusting for that many models implies you would need an unrealistically high t-statistic to have any confidence in the coefficients of the model you finally select.
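To see the effect concretely, here is a minimal simulation (Python with NumPy only; all names and sizes are illustrative, not anything from your setup) where every predictor is pure noise, yet the best of many single-predictor regressions still tends to look "significant" by the usual |t| > 2 rule:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_predictors = 100, 50

# Pure noise: none of the predictors has any true relationship with y.
X = rng.standard_normal((n_obs, n_predictors))
y = rng.standard_normal(n_obs)

# t-statistic for each single-predictor regression y ~ x_j (no intercept).
t_stats = []
for j in range(n_predictors):
    x = X[:, j]
    beta = x @ y / (x @ x)                 # OLS slope
    resid = y - beta * x
    se = np.sqrt(resid @ resid / (n_obs - 1) / (x @ x))
    t_stats.append(beta / se)

print(f"max |t| over {n_predictors} pure-noise models: "
      f"{max(abs(t) for t in t_stats):.2f}")
# With ~50 tries at the 5% level, the maximum typically exceeds 2
# by chance alone -- and that's before trying multi-factor subsets.
```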
Better approaches include Bayesian linear regression, where you specify the prior distribution you believe is realistic for the coefficient on each predictor, or regularized regression such as the Lasso or ridge regression, where you impose a penalty on how large or dense the set of estimated coefficients is (so the fitting procedure favors models with fewer or smaller terms, in a suitable sense).
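As a rough sketch of what both approaches look like in practice (assuming scikit-learn; the synthetic data here is purely illustrative):

```python
import numpy as np
from sklearn.linear_model import LassoCV, BayesianRidge

rng = np.random.default_rng(0)
n_obs, n_predictors = 200, 30

# Synthetic data: only the first 3 predictors actually matter.
X = rng.standard_normal((n_obs, n_predictors))
true_beta = np.zeros(n_predictors)
true_beta[:3] = [1.5, -2.0, 1.0]
y = X @ true_beta + rng.standard_normal(n_obs)

# Lasso with the penalty strength chosen by cross-validation:
# the L1 penalty shrinks most spurious coefficients exactly to zero.
lasso = LassoCV(cv=5).fit(X, y)
print("Lasso kept", np.sum(lasso.coef_ != 0), "of", n_predictors, "predictors")

# Bayesian ridge places a Gaussian shrinkage prior on the coefficients
# and learns its scale from the data, rather than testing every subset.
bayes = BayesianRidge().fit(X, y)
print("Bayesian ridge coefficients (first 5):", np.round(bayes.coef_[:5], 2))
```

The point in both cases is that shrinkage toward zero is built into a single fit, so you never pay the multiple-comparisons cost of searching over all subsets.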
If you start from one of these perspectives, there is much less risk in testing a handful of models for which you have strong prior evidence.
But in general, if you simply look at all n-choose-k subsets of factors for k = 1 through n (that is, all 2^n - 1 non-empty subsets; with n = 20 predictors, over a million candidate models), then by simple random chance some model will appear very strong without any actual forecasting efficacy. You should avoid this.