Here is some simple code to illustrate my point. Using GridSearchCV with cv=2, cv=20, cv=50, etc. makes no difference in the final score (48). Even if I use KFold with different values, the accuracy is still the same. Even if I use svm instead of knn, accuracy is always 49 no matter how many folds I specify. This is just a toy example, but with big, real-world data sets I face the same problem: the cv parameter in GridSearchCV never makes any difference. Am I doing something wrong, or is GridSearchCV 'broken'?
from sklearn import svm
from sklearn.neighbors import KNeighborsClassifier as knn
from sklearn import datasets
from sklearn.model_selection import GridSearchCV
from sklearn.utils import shuffle
from sklearn.model_selection import KFold
iris = datasets.load_iris()
X, y = shuffle(iris.data, iris.target, random_state=0)
X_train, y_train = X[:100], y[:100]
X_test, y_test = X[100:], y[100:]
knn_param_grid = {'n_neighbors': [5, 2]}
# svm_param_grid = {'shrinking': [True, False]}
# cv = KFold(30)
grid_search = GridSearchCV(estimator=knn(), param_grid=knn_param_grid, cv=20)
grid_search.fit(X_train, y_train)
predictions = grid_search.predict(X_test)
print(sum(predictions == y_test))
# prints 48 for every value of cv
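One way to check whether cv is doing anything at all (a diagnostic sketch, not a fix) is to look at grid_search.best_score_, the mean cross-validated score on the training data, instead of the held-out test score. The test score can stay identical across cv values whenever every cv setting selects the same best parameters, while the internal CV scores themselves usually do shift with the number of folds:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.utils import shuffle

# Same toy setup as above: shuffled iris, first 100 rows for training.
X, y = shuffle(*load_iris(return_X_y=True), random_state=0)
X_train, y_train = X[:100], y[:100]

for cv in (2, 5, 20):
    gs = GridSearchCV(
        estimator=KNeighborsClassifier(),
        param_grid={'n_neighbors': [5, 2]},
        cv=cv,
    )
    gs.fit(X_train, y_train)
    # best_score_ is the mean cross-validated accuracy of the best
    # parameter setting; it depends on the fold splits, unlike the
    # test-set score computed after refitting.
    print(cv, gs.best_params_, gs.best_score_)
```

If best_params_ comes out the same for every cv value, the refit model and therefore the test-set predictions are identical, which would explain the constant 48.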