I've recently looked at some Kaggle notebooks where people use Lasso/Ridge for linear regression. Most of the ones I've seen don't standardize the predictors before fitting Lasso/Ridge, even though the variables are on disparate scales (e.g., differing by multiple orders of magnitude).
Here are a couple of Jupyter notebooks I've seen that use no standardization:
https://www.kaggle.com/mohaiminul101/car-price-prediction
https://www.kaggle.com/burhanykiyakoglu/predicting-house-prices/comments
Most of the notebooks I've seen lack this standardization step, and since I only look at the top-rated notebooks for popular datasets, I assumed their methodology would be more reputable. So now I'm wondering whether there's something I'm missing, or whether people are indeed being negligent/incorrect by not standardizing when using regularization.
Is there any theoretical justification or practical advantage to not performing standardization on regressors when they exist on disparate scales?
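For context on why I'd expect standardization to matter, here's a toy sketch (my own synthetic example, not taken from the linked notebooks) where two predictors have the same effect size per standard deviation but live on very different scales. Without scaling, the penalty falls almost entirely on the small-scale feature; with a `StandardScaler` in the pipeline, both are shrunk equally:

```python
# Toy demo: Lasso's L1 penalty acts on raw coefficient magnitudes,
# so feature scale determines how much each coefficient is shrunk.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)                 # unit scale
x2 = rng.normal(scale=1000.0, size=n)   # three orders of magnitude larger
X = np.column_stack([x1, x2])
# Equal effect per standard deviation: 3*sd(x1) and 0.003*sd(x2) ~ 3
y = 3.0 * x1 + 0.003 * x2 + rng.normal(size=n)

raw = Lasso(alpha=1.0).fit(X, y)
scaled = make_pipeline(StandardScaler(), Lasso(alpha=1.0)).fit(X, y)

# On raw data, x2's coefficient (~0.003) incurs almost no L1 penalty,
# while x1's coefficient (~3) is shrunk substantially.
print("raw coefficients:   ", raw.coef_)
# After standardization both per-SD coefficients are ~3 and are
# shrunk by the same amount.
print("scaled coefficients:", scaled[-1].coef_)
```

This is just my understanding of the mechanics; I'd still like to know whether there's a principled argument for skipping the scaling step.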