Has anyone ever applied ridge regression to the subset of features selected by a cross-validated lasso? In other words: take a data set with p features, run lasso with a grid search to find the optimal penalty parameter, record which features survive, and then fit a ridge regression on those surviving features only. This approach seems similar to the "relaxed lasso" suggested by Meinshausen (2007) and clarified in the CV thread.
The only result I could find in the literature on using ridge after lasso is this theoretical paper.
My intuition is that if the relaxed lasso's objective is to separate variable selection from coefficient shrinkage, then why not use ridge regression for the second pass? Ridge never sets coefficients exactly to zero, so it guarantees that no further variable selection happens in the second pass, whereas re-running lasso could drop additional features.
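
For concreteness, here is a minimal sketch of the two-pass pipeline I have in mind, using scikit-learn on synthetic data. The dataset, CV settings, and alpha grid are all illustrative assumptions on my part, not taken from either paper:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV, RidgeCV
from sklearn.model_selection import train_test_split

# Synthetic data: 50 features, only 10 of which are informative.
X, y = make_regression(n_samples=200, n_features=50, n_informative=10,
                       noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pass 1: cross-validated lasso, used purely for variable selection.
lasso = LassoCV(cv=5, random_state=0).fit(X_train, y_train)
kept = lasso.coef_ != 0  # features the lasso retained
print(f"lasso kept {kept.sum()} of {X.shape[1]} features")

# Pass 2: cross-validated ridge on the retained features only.
# Ridge shrinks coefficients but never zeros them, so no further
# selection can occur in this pass.
ridge = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X_train[:, kept], y_train)
print(f"ridge test R^2: {ridge.score(X_test[:, kept], y_test):.3f}")
```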