Lasso is a common regression technique for variable selection and regularization. By setting up cross-validation folds and sweeping over values of $\alpha$, you can find the set of beta coefficients that predicts your outcome well without overfitting or underfitting. If Lasso has shrunk the beta coefficient of any covariate exactly to 0, you can either choose to drop that feature, since it does not contribute to the predictor, or keep it with the understanding that it is essentially uninformative.
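For instance, here is a minimal sketch of the cross-validated workflow using scikit-learn on synthetic data (an assumed setup, not part of the original discussion; note also that scikit-learn's `alpha` is the penalty weight in the Lagrangian form of the problem, not the constraint budget $\alpha$ defined below):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

# Synthetic data: 200 observations, 10 covariates, only 4 of them informative
X, y = make_regression(n_samples=200, n_features=10, n_informative=4,
                       noise=5.0, random_state=0)

# LassoCV sweeps a grid of regularization strengths over 5 cross-validation
# folds and keeps the one with the best held-out error
model = LassoCV(cv=5, random_state=0).fit(X, y)

print("selected penalty weight:", model.alpha_)
print("fitted coefficients:", model.coef_)

# Covariates whose coefficients were shrunk exactly to 0 are candidates to drop
print("zeroed-out covariates:", np.where(model.coef_ == 0)[0])
```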
To make this precise, consider a sample of $N$ observations, each with $p$ covariates and a single outcome, as in most regression problems. The objective of Lasso is to solve:
$ \min_{\beta_0, \beta} \left\{ \sum_{i=1}^N (y_i - \beta_0 - x_i^T \beta)^2 \right\} \text{ subject to } \sum_{j=1}^p |\beta_j| \leq \alpha. $
Here $\beta_0$ is the intercept (constant) coefficient, $\beta := (\beta_1, \beta_2, \ldots, \beta_p)$ is the coefficient vector, and $\alpha$ is a prespecified free parameter that determines the degree of regularization: the smaller the budget $\alpha$, the more coefficients are forced to zero.
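As an illustration, this constrained form can be solved directly with a generic convex solver. The sketch below assumes cvxpy as a dependency, with made-up data and an illustrative budget $\alpha = 4$:

```python
import cvxpy as cp
import numpy as np

# Made-up data: 100 observations, 5 covariates, two true zero coefficients
rng = np.random.default_rng(0)
N, p = 100, 5
X = rng.standard_normal((N, p))
true_beta = np.array([3.0, -2.0, 0.0, 0.0, 1.5])
y = X @ true_beta + 0.5 * rng.standard_normal(N)

alpha = 4.0                      # illustrative budget on the L1 norm of beta
beta0 = cp.Variable()            # intercept
beta = cp.Variable(p)            # coefficient vector

# Minimize the residual sum of squares subject to the L1-norm constraint
problem = cp.Problem(
    cp.Minimize(cp.sum_squares(y - beta0 - X @ beta)),
    [cp.norm1(beta) <= alpha],
)
problem.solve()

print("intercept:", beta0.value)
print("coefficients:", beta.value)
```

In practice, most libraries (including scikit-learn above) solve the equivalent penalized, or Lagrangian, form $\min_{\beta_0, \beta} \sum_{i=1}^N (y_i - \beta_0 - x_i^T \beta)^2 + \lambda \sum_{j=1}^p |\beta_j|$; for every budget $\alpha$ there is a penalty weight $\lambda$ that yields the same solution.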