I am using a KNN model to predict quantity sold for a highly seasonal business. I chose KNN because I thought that using nearest neighbors would capture that seasonality better than a standard regression would. For reference, I need this prediction to be fairly close to reality, not a smoothed function that ignores the fact that December volume is millions of units greater than September volume. Which brings me to my question:
As I have been searching for the optimal K value, I have found that while increasing K reduces my out-of-sample error, it gives a worse prediction of the test data than a smaller K does. By "worse" I mean that when I track the predictions next to the actual values, the predicted values are significantly off in certain periods, more so than with a smaller K that has a higher overall RMSE. My assumption is that a higher value of K effectively moves my model toward a standard regression model, smoothing the curve, if you will: with a large K, each prediction averages over neighbors that include off-season periods, pulling the seasonal peaks toward the overall mean. Is this a valid intuition, or is there something more going on here?
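To illustrate the effect I am describing, here is a minimal sketch with synthetic data (not my real sales figures) and a hand-rolled KNN regressor rather than my actual model. It shows how a large K averages in off-season neighbors and flattens the December spike:

```python
import numpy as np

# Hypothetical seasonal data: 6 years of monthly sales with a December spike.
rng = np.random.default_rng(0)
months = np.tile(np.arange(1, 13), 6)
sales = 1e6 + 5e6 * (months == 12) + rng.normal(0, 2e5, months.size)

def knn_predict(x_train, y_train, x_query, k):
    """Plain KNN regression: average the targets of the k nearest neighbors."""
    preds = []
    for xq in x_query:
        idx = np.argsort(np.abs(x_train - xq))[:k]
        preds.append(y_train[idx].mean())
    return np.array(preds)

x_query = np.arange(1, 13)
pred_small_k = knn_predict(months, sales, x_query, k=5)
pred_large_k = knn_predict(months, sales, x_query, k=60)  # k close to n

# Small K preserves the December spike; large K averages it away,
# because most of December's 60 "nearest" neighbors are other months.
print(pred_small_k[11] - pred_small_k[8])  # Dec vs Sep: large gap
print(pred_large_k[11] - pred_large_k[8])  # gap mostly smoothed out
```

In the limit where K equals the training set size, every prediction is just the global mean, which is the extreme version of the smoothing I think I am seeing.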