It seems that you have performed ridge regression and LASSO, choosing the penalty parameter values by 5 repetitions of leave-one-out CV, with either the correlation between observed and predicted values or the mean absolute error (MAE) as your evaluation measure. Because you are using penalized regressions, your data set actually isn't that small relative to the number of features, but you can do a bit better in your model building and evaluation.
First, you should consider whether either the correlation coefficient or the MAE is the best measure for choosing model parameters. I think that many would prefer the mean-squared error (MSE), as squared error is the loss that both ridge and LASSO minimize before the penalty on the coefficients is added.
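For reference, in the parameterization used in ISLR (Chapter 6), the two estimators solve

$$\hat\beta^{\text{ridge}} = \arg\min_{\beta}\; \sum_{i=1}^{n}\Bigl(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\Bigr)^{2} + \lambda \sum_{j=1}^{p} \beta_j^{2},$$

$$\hat\beta^{\text{lasso}} = \arg\min_{\beta}\; \sum_{i=1}^{n}\Bigl(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\Bigr)^{2} + \lambda \sum_{j=1}^{p} \lvert\beta_j\rvert,$$

so the first term that each method trades off against its penalty is exactly $n$ times the MSE.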
Second, once you have chosen your measure of fit, you might be better off choosing your penalty parameter values by true 10-fold cross-validation. Leave-one-out CV can be somewhat noisy, and 5 repetitions might not be enough to overcome that.
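For concreteness, here is a minimal sketch assuming Python/scikit-learn (you didn't say what software you use, so the data, penalty grid, and variable names are purely illustrative):

```python
# A minimal sketch, assuming Python/scikit-learn; the data, grid, and
# variable names here are purely illustrative.
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))            # stand-in predictors
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=100)

cv = KFold(n_splits=10, shuffle=True, random_state=0)
lambdas = np.logspace(-3, 2, 50)          # candidate penalty values

for name, model, grid in [
    ("ridge", Ridge(), {"ridge__alpha": lambdas}),
    ("lasso", Lasso(max_iter=10_000), {"lasso__alpha": lambdas}),
]:
    # Standardizing inside the pipeline keeps the scaling within each
    # CV fold, so no information leaks across folds.
    pipe = make_pipeline(StandardScaler(), model)
    search = GridSearchCV(pipe, grid, cv=cv,
                          scoring="neg_mean_squared_error")
    search.fit(X, y)
    print(name, "best penalty:", search.best_params_,
          "CV MSE:", -search.best_score_)
```

Keeping the standardization inside the pipeline matters: if you scale the whole data set before splitting, each fold's held-out points leak into the scaling and the CV estimate becomes optimistic.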
Third, what would perhaps be most valuable is to validate the model-building process itself: repeat your entire process (including the cross-validation to choose your penalty parameter values) over multiple bootstrap samples of the original data, and examine your evaluation measure for those multiple models on your original data set. That gives you (and your audience) a good estimate of how well your process would have behaved had you been able to take multiple samples from the population. Note that for LASSO you will almost certainly select different features in each of the bootstraps; you might find it informative to look at that issue directly.
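Continuing the sketch above (same illustrative `X` and `y`), a bootstrap validation loop might look like this; the replicate count and all names are assumptions, not your actual setup:

```python
# A minimal sketch of validating the whole model-building process by
# bootstrap, continuing from the sketch above (same illustrative X, y);
# the replicate count and all names are assumptions, not your setup.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

n = len(y)
rng = np.random.default_rng(1)
lambdas = np.logspace(-3, 2, 50)
boot_mse, selected = [], []

for b in range(200):                      # bootstrap replicates
    idx = rng.integers(0, n, size=n)      # resample rows with replacement
    Xb, yb = X[idx], y[idx]

    # Repeat the ENTIRE process on the bootstrap sample, including the
    # cross-validated choice of the penalty value.
    pipe = make_pipeline(StandardScaler(), Lasso(max_iter=10_000))
    search = GridSearchCV(pipe, {"lasso__alpha": lambdas},
                          cv=KFold(10, shuffle=True, random_state=b),
                          scoring="neg_mean_squared_error")
    search.fit(Xb, yb)

    # Evaluate each bootstrap-built model on the ORIGINAL data set.
    boot_mse.append(mean_squared_error(y, search.predict(X)))

    # Record which features LASSO kept in this replicate.
    coef = search.best_estimator_.named_steps["lasso"].coef_
    selected.append(np.flatnonzero(coef))

print(f"MSE on original data: {np.mean(boot_mse):.3f} "
      f"(SD {np.std(boot_mse):.3f}) over {len(boot_mse)} bootstraps")
sel_freq = np.bincount(np.concatenate(selected),
                       minlength=X.shape[1]) / len(selected)
print("selection frequency per feature:", np.round(sel_freq, 2))
```

The spread of the bootstrap MSE values tells you how much your process depends on the particular sample you drew, and the per-feature selection frequencies show directly how stable (or unstable) LASSO's feature selection is.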
This thread and this thread may also be useful. ISLR (Chapters 5 and 6), ESLII (especially Chapter 7), and Harrell's rms course notes provide more details and examples.