K-nearest neighbors (KNN) classification estimates the conditional probability $P(Y = j \mid X = x_0)$ and assigns the class with the highest estimated probability. The test observation is plotted in the predictor space, its $k$ nearest training points are found, and the test observation is classified as the most common class among those $k$ neighbors.
The choice of the number of neighbors $k$ can change the results. In general, a higher $k$ results in more [[bias]] (a smoother, less flexible decision boundary), while a lower $k$ results in higher [[variance]]. (This is the classic [[bias-variance tradeoff]].)
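The procedure above can be sketched in plain Python (a minimal illustration, not a production implementation; the toy data and function name are made up for this example):

```python
from collections import Counter
import math

def knn_classify(train_X, train_y, x0, k):
    # Distance from the test observation x0 to every training point.
    dists = sorted((math.dist(x, x0), y) for x, y in zip(train_X, train_y))
    # Majority vote among the k nearest neighbors.
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D data: class "a" clusters near the origin, class "b" near (5, 5).
train_X = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
train_y = ["a", "a", "a", "b", "b", "b"]

print(knn_classify(train_X, train_y, (0.5, 0.5), k=3))  # "a"
print(knn_classify(train_X, train_y, (5.5, 5.5), k=3))  # "b"
```

With `k=3`, a point near the origin is surrounded by three `"a"` neighbors and is classified `"a"`; raising `k` toward the size of the training set would pull every prediction toward the overall majority class, illustrating the increased bias.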