The paper "Regularization Paths for Conditional Logistic Regression: The clogitL1 Package" by Reid and Tibshirani gives a lasso solution for the conditional logit model.
Instead of maximizing the conditional logistic likelihood directly, they maximize the log-likelihood minus an L1 (lasso) penalty. The penalty is a tuning parameter $\lambda$ times the L1 norm of the coefficients,
$$\lambda \sum_{j=1}^p |\beta_j|.$$
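Written out, the objective being maximized is the conditional log-likelihood, here written $\ell_c(\beta)$, minus this penalty:

$$\hat\beta(\lambda) = \arg\max_\beta \left\{ \ell_c(\beta) - \lambda \sum_{j=1}^p |\beta_j| \right\}.$$

At $\lambda = 0$ this is ordinary conditional logistic regression; as $\lambda$ grows, the penalty dominates and coefficients are pushed to zero.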
This penalty encourages some of the coefficients to be exactly 0, which means the corresponding variables can be removed from the model. Larger values of $\lambda$ remove more variables.
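The zeroing effect of the L1 penalty comes from soft-thresholding, the update at the heart of coordinate-descent lasso solvers like the one used here. A minimal sketch in plain NumPy (illustrative values, not the clogitL1 implementation):

```python
import numpy as np

def soft_threshold(z, lam):
    # Shrink each entry of z toward 0 by lam, and set it to
    # exactly 0 when |z| <= lam -- this is what produces sparsity.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Hypothetical unpenalized coefficient estimates
z = np.array([2.5, -1.2, 0.4, -0.1])

for lam in (0.0, 0.5, 1.5):
    beta = soft_threshold(z, lam)
    print(f"lambda={lam}: beta={beta}, nonzero={np.count_nonzero(beta)}")
```

As $\lambda$ increases, more coefficients hit exactly zero, which is why the regularization path traces out a sequence of progressively smaller models.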
They implement the method in an R package called clogitL1.