First, yes, you are following a standard way to select the model by choosing the one that corresponds to $0.62$ on the $x$-axis. Second, it depends on what you mean by "important predictors", but it seems that you have simply used a criterion to select predictors. The right plots suggest that 2-3 predictors are selected for the models corresponding to the $\sim 0.62$ value.
The model chosen is based on the "one-standard-error rule of thumb". This rule can be traced back to the 1984 CART book by Breiman et al., and it says that when you do cross-validation for model selection, you should not choose the model that attains the lowest estimated expected generalization error, but rather the simplest model whose estimated error is within one standard error of that lowest estimate. It's a conservative choice with some empirical support and perhaps some qualitative theoretical support, but why the margin should be precisely one standard error is ad hoc.
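To make the rule concrete, here is a minimal sketch using glmnet's built-in cross-validation on simulated data (the data-generating setup is illustrative, not taken from the question): `lambda.min` is the penalty minimizing the CV error estimate, and `lambda.1se` is the one the one-standard-error rule picks.

```r
## Minimal sketch of the one-standard-error rule with cv.glmnet
## (simulated data; not the data from the question)
library(glmnet)

set.seed(1)
n <- 100; p <- 20
x <- matrix(rnorm(n * p), n, p)
y <- drop(x[, 1:3] %*% c(2, -1, 1)) + rnorm(n)  # only 3 truly relevant predictors

cvfit <- cv.glmnet(x, y, nfolds = 10)

cvfit$lambda.min  # penalty minimizing the estimated CV error
cvfit$lambda.1se  # largest penalty with CV error within 1 SE of that minimum
plot(cvfit)       # curve analogous to the plot in the question, both penalties marked
```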
The resulting model is primarily chosen to optimize predictive performance. The lasso penalization provides a combination of shrinkage of parameter estimates and parameter selection, and in this tradeoff, when tuned for predictive performance, the lasso is known to generally choose too many predictors. Thus don't expect that all the predictors chosen by a lasso procedure, as implemented in lars, are important. Moreover, even if they are important, they are important as predictors in the given regression setup and cannot automatically be given any causal interpretation.
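Continuing the sketch above (still simulated data), you can compare how many predictors each rule keeps. With only 3 truly relevant predictors, the error-minimizing penalty will typically retain extra noise variables, illustrating this tendency to over-select:

```r
## Count nonzero coefficients (excluding the intercept) at each penalty
nzero <- function(fit, s) sum(as.vector(coef(fit, s = s))[-1] != 0)

nzero(cvfit, "lambda.min")  # often more than the 3 true predictors
nzero(cvfit, "lambda.1se")  # the more conservative one-SE choice
```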
I would, by the way, recommend glmnet over lars. It's a faster implementation that supports a more flexible class of models and penalization functions.
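For instance (again just a sketch reusing the simulated data above; alpha = 0.5 is an arbitrary mixing choice), glmnet handles elastic-net penalties and non-Gaussian likelihoods that lars does not:

```r
## Elastic net: a mix of L1 and L2 penalties via the alpha argument
enet <- glmnet(x, y, alpha = 0.5)

## Penalized logistic regression on a hypothetical binary response
yb <- rbinom(n, 1, plogis(x[, 1]))
logit <- glmnet(x, yb, family = "binomial")
```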