My question is about cross-validation when there are many more variables than observations. To fix ideas, let me restrict attention to the classification framework in very high dimension (more features than observations).
Problem: Assume that for each variable $i=1,\dots,p$ you have a measure of importance $T[i]$ that exactly measures the interest of feature $i$ for the classification problem. The problem of selecting a subset of features that optimally reduces the classification error then reduces to finding the number of features to keep (say, the $k$ features with the largest $T[i]$), as formalized below.
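Stated more formally (the notation $\hat S_k$ is mine, introduced just to fix ideas): writing $\hat S_k = \{\,i : T[i] \text{ is among the } k \text{ largest values of } T\,\}$, the task is to choose
$$\hat k = \arg\min_{k} \; \widehat{\mathrm{Err}}\big(\hat S_k\big),$$
where $\widehat{\mathrm{Err}}(\hat S_k)$ is a cross-validated estimate of the classification error of the classifier built on the features in $\hat S_k$.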
Question: What is the most efficient way to run cross-validation in this case, i.e. which cross-validation scheme should be used? My question is not about how to write the code, but about which version of cross-validation to use when choosing the number of selected features (to minimize the classification error), and about how to deal with the high dimension when doing cross-validation (hence the problem above may be seen as a 'toy problem' for discussing CV in high dimension).
Notations: $n$ is the size of the learning set and $p$ the number of features (i.e. the dimension of the feature space). By very high dimension I mean $p \gg n$ (for example $p=10000$ and $n=100$).
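To make the kind of scheme I have in mind concrete, here is a minimal sketch in Python with scikit-learn. The classifier, the candidate grid `ks`, and the stand-in importance score (a simple two-class mean difference in place of the given $T$) are placeholder choices of mine, not part of the problem statement. The one point the sketch is meant to capture is that the ranking is recomputed inside each fold, on the training part only, so the feature selection never sees the held-out data.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 100, 10000                       # p >> n, as in the example above
X = rng.standard_normal((n, p))
y = rng.integers(0, 2, size=n)

ks = [1, 2, 5, 10, 20, 50, 100]         # candidate numbers of features (arbitrary grid)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
errors = np.zeros(len(ks))

for train, test in cv.split(X, y):
    # Importance computed on the training fold ONLY
    # (absolute two-class mean difference as a stand-in for T[i])
    T = np.abs(X[train][y[train] == 0].mean(0) - X[train][y[train] == 1].mean(0))
    order = np.argsort(T)[::-1]         # features ranked by decreasing importance
    for j, k in enumerate(ks):
        top = order[:k]                 # keep the k top-ranked features
        clf = LogisticRegression(max_iter=1000).fit(X[train][:, top], y[train])
        errors[j] += np.mean(clf.predict(X[test][:, top]) != y[test])

errors /= cv.get_n_splits()             # average test error per candidate k
best_k = ks[int(np.argmin(errors))]
print(dict(zip(ks, errors.round(3))), "-> selected k:", best_k)
```

Whether this plain K-fold scheme is the right one here (as opposed to, say, a nested or repeated scheme), and how $K$ should be chosen when $p \gg n$, is exactly what I am asking.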