One-at-a-time regressions can suffer from omitted-variable bias. In linear regression this arises when the omitted variable is correlated both with the outcome and with the included predictors, as discussed on this page.*
So one should be highly skeptical of results from one-at-a-time analyses.
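To make that concrete, here is a minimal simulation sketch (all names and coefficients are illustrative, not taken from your data): an omitted variable z is correlated with both the predictor x and the outcome y, so regressing y on x alone gives a badly biased slope, while including z recovers the true value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                        # the omitted variable
x = 0.8 * z + rng.normal(size=n)              # x is correlated with z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)    # true slope on x is 1.0

# One-at-a-time: y on x alone; the slope absorbs part of z's effect
b_alone = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0]

# Multiple regression: y on x and z together
b_both = np.linalg.lstsq(np.column_stack([np.ones(n), x, z]), y, rcond=None)[0]

print(f"y ~ x alone: slope on x = {b_alone[1]:.2f}  (biased; ~2.0 here)")
print(f"y ~ x + z:   slope on x = {b_both[1]:.2f}  (close to the true 1.0)")
```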
Of the two methods you propose, multiple regression is the better way to go, provided that you have enough cases that you aren't overfitting. If your multiple regression returns only one coefficient significantly different from 0, that means that only one predictor is significantly associated with the outcome once the other predictors are taken into account, given the size of your data sample.
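As a hedged sketch of what that looks like in practice (statsmodels is just an assumed tooling choice, and the simulated data stand in for yours), note that each p-value in the output is conditional on the other predictors already being in the model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
latent = rng.normal(size=n)
# three mutually correlated predictors; only the first truly drives y
X = np.column_stack([latent + 0.5 * rng.normal(size=n) for _ in range(3)])
y = 1.5 * X[:, 0] + rng.normal(size=n)

# Fit all predictors jointly and inspect coefficients and p-values together
fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.summary())
```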
That does not mean, however, that the single "significant" predictor is the only (or even the most) "important" feature. In particular, you shouldn't just go ahead blindly with a model based solely on that predictor. When predictors are correlated (as they generally are), the question of which are "most important" becomes quite tricky, and the answer can depend heavily on the particular data sample at hand. This page discusses the problems with trying to automate feature selection.
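One way to see how sample-dependent the choice can be is to refit the model on bootstrap resamples and track which correlated predictor ends up with the largest absolute coefficient. In this sketch (illustrative data, with two predictors that matter equally), the "winner" flips from resample to resample:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
latent = rng.normal(size=n)
X = np.column_stack([latent + 0.7 * rng.normal(size=n) for _ in range(3)])
y = X[:, 0] + X[:, 1] + rng.normal(size=n)   # predictors 0 and 1 matter equally

wins = np.zeros(3, dtype=int)
for _ in range(1000):
    idx = rng.integers(0, n, size=n)                    # bootstrap resample
    Xb = np.column_stack([np.ones(n), X[idx]])
    beta = np.linalg.lstsq(Xb, y[idx], rcond=None)[0]
    wins[np.argmax(np.abs(beta[1:]))] += 1              # largest |slope| "wins"

print("times each predictor was 'most important':", wins)  # no stable winner
```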
There are two other approaches that you might consider.
One is LASSO, which provides a principled way to identify a set of features most useful for prediction. The coefficients of retained features are penalized toward zero, leaving them with smaller absolute values than they would have in a standard regression on those same features; that shrinkage reduces overfitting. The retained features might not be the "most important" in some theoretical sense, but they can often work well for prediction.
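A minimal sketch, assuming scikit-learn's cross-validated LASSO (the penalty strength is picked by cross-validation, and features whose coefficients are shrunk exactly to zero drop out of the model):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(size=n)  # only 2 true signals

Xs = StandardScaler().fit_transform(X)   # the penalty needs a common scale
lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)

kept = np.flatnonzero(lasso.coef_)       # nonzero coefficients = retained features
print("retained features:", kept)        # typically {0, 1}, perhaps a few extras
print("shrunken coefficients:", np.round(lasso.coef_[kept], 2))
```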
The second is boosted regression trees. That approach can allow for non-linearities and for interactions among features. Measures of feature importance are then based on the difference that omitting a feature makes to model performance. Those measures can be difficult to interpret, however, as they mix together the direct and interaction contributions of each feature. And again, the importance measure only has meaning in the context of the entire model.
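Here is a sketch of that workflow, assuming scikit-learn's gradient-boosted trees, with permutation importance standing in for the omission-based measure (shuffling a feature's values approximates removing its information; other implementations differ in detail):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
n = 500
X = rng.normal(size=(n, 4))
# a non-linearity in feature 0 and an interaction between features 1 and 2
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.5 * rng.normal(size=n)

gbm = GradientBoostingRegressor(random_state=0).fit(X, y)

# importance = drop in performance when a feature's values are shuffled;
# note it lumps a feature's direct and interaction contributions together
imp = permutation_importance(gbm, X, y, n_repeats=20, random_state=0)
for j, m in enumerate(imp.importances_mean):
    print(f"feature {j}: importance {m:.3f}")
```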
So think carefully about what you mean by "which predictor is the best." There might be no single, simple answer to that question.
*For other types of regressions, like logistic or Cox proportional-hazards regressions, omitting any predictor associated with the outcome will bias the regression coefficients for the included predictors, regardless of its correlation with the included predictors. See this page for a nice analytic proof in the case of probit regressions.
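A quick simulation sketch of this point for logistic regression (illustrative coefficients; statsmodels assumed): even though z is generated independently of x, leaving z out attenuates the coefficient on x toward zero:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 100_000
x = rng.normal(size=n)
z = rng.normal(size=n)                         # independent of x by construction
p = 1 / (1 + np.exp(-(1.0 * x + 2.0 * z)))     # true coefficient on x is 1.0
y = rng.binomial(1, p)

full = sm.Logit(y, sm.add_constant(np.column_stack([x, z]))).fit(disp=0)
omit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)

print(f"with z included: coef on x = {full.params[1]:.2f}")   # close to 1.0
print(f"with z omitted:  coef on x = {omit.params[1]:.2f}")   # clearly below 1.0
```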