Yeah, Mathbabe’s got it right: People who use kNN often don’t think about these things.
For those who aren’t familiar with this technique, here’s a description from Zhi-Hua Zhou in Ensemble Methods: Foundations and Algorithms (section 1.2.5):
“The k-nearest neighbor (kNN) algorithm relies on the principle that objects similar in the input space are also similar in the output space. It is a lazy learning approach since it does not have an explicit training process, but simply stores the training set instead. For a test instance, a kNN learner identifies the k instances from the training set that are closest to the test instance. Then, for classification, the test instance will be classified to the majority class among the k instances; while for regression, the test instance will be assigned the average value of the k instances.”
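The procedure Zhou describes can be sketched in a few lines of Python (a minimal illustration using Euclidean distance, not code from the book; the function and variable names are my own):

```python
from collections import Counter
import math

def knn_predict(train, x_test, k=3, classify=True):
    """Predict for one test instance via k-nearest neighbors.
    `train` is a list of (features, target) pairs."""
    # Lazy learning: there is no training step. All the work happens
    # at prediction time, by sorting the stored instances by distance.
    neighbors = sorted(train, key=lambda pair: math.dist(pair[0], x_test))[:k]
    targets = [t for _, t in neighbors]
    if classify:
        # Classification: majority class among the k nearest instances
        return Counter(targets).most_common(1)[0][0]
    # Regression: average value of the k nearest instances
    return sum(targets) / k

# Toy example: two clusters on a line
train = [((0.0,), "a"), ((0.1,), "a"), ((0.2,), "a"),
         ((1.0,), "b"), ((1.1,), "b"), ((1.2,), "b")]
print(knn_predict(train, (0.15,), k=3))  # → "a"
```

Note that ties (in distance or in the vote) are resolved arbitrarily here, and the features are used unscaled; both are exactly the kinds of choices kNN users often don't think about.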
