If I have binary-classification data, use the Euclidean metric, and already know the best number of nearest neighbors K, then I can draw circles around points in my training data based on my K value, and majority voting within each circle tells me which regions belong to class A and which to class B.
How does KNN make predictions when new data points are statistical outliers with respect to the training data, i.e., located far away from all of the original training samples? What if a test point has no nearby training neighbors, only neighbors that seem too far away to reliably infer its class membership?
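To make the situation concrete, here is a minimal sketch (assuming scikit-learn, with made-up Gaussian blobs as the training data): the test point is far from every training sample, yet KNN still votes among its K nearest neighbors, however distant they are.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Two Gaussian blobs as training data for classes A (0) and B (1)
X_train = np.vstack([rng.normal(0, 1, size=(50, 2)),
                     rng.normal(5, 1, size=(50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn.fit(X_train, y_train)

# A test point that is a clear outlier with respect to the training data
x_far = np.array([[100.0, 100.0]])

dist, idx = knn.kneighbors(x_far)            # distances to the 5 nearest training points
print("neighbor distances:", dist.round(1))  # all very large
print("prediction:", knn.predict(x_far))     # KNN still returns a class label
```

The prediction comes back regardless of how large the neighbor distances are, which is exactly what I am asking about: is that prediction meaningful, and how should such far-away points be handled?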