
### K-Nearest Neighbors

> *Not to be confused with k-means clustering.*
>
> In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric classification method first developed by Evelyn Fix and Joseph Hodges in 1951 and later expanded by Thomas Cover. It is used for classification and regression. In both cases, the input consists of the k closest training examples in the data set. The output depends on whether k-NN is used for classification or regression:
>
> - In k-NN classification, the output is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, the object is simply assigned to the class of its single nearest neighbor.
> - In k-NN regression, the output is the property value for the object: the average of the values of its k nearest neighbors.
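The two variants described above can be sketched in a few lines. This is a minimal illustration on hypothetical toy data, not a production implementation: `knn_predict` and its inputs are invented for this example, and ties are broken arbitrarily.

```python
from collections import Counter
import math

def knn_predict(train, query, k, mode="classify"):
    """train: list of (feature_vector, target) pairs; query: feature vector."""
    # Find the k training examples closest to the query (Euclidean distance).
    neighbors = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    targets = [target for _, target in neighbors]
    if mode == "classify":
        # Classification: plurality vote among the k nearest neighbors.
        return Counter(targets).most_common(1)[0][0]
    # Regression: average of the k nearest neighbors' values.
    return sum(targets) / k

# Classification: two "a" points and two "b" points (toy data).
points = [([0.0, 0.0], "a"), ([0.1, 0.2], "a"),
          ([1.0, 1.0], "b"), ([0.9, 1.1], "b")]
print(knn_predict(points, [0.05, 0.1], k=3))            # → a

# Regression: predict a value by averaging the 2 nearest targets.
values = [([0.0], 1.0), ([1.0], 2.0), ([2.0], 3.0)]
print(knn_predict(values, [0.5], k=2, mode="regress"))  # → 1.5
```

With k = 1 the classifier simply returns the label of the single nearest training point, matching the special case noted in the excerpt.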

See the full entry on Wikipedia.

These are the most commonly used hashtags on social media in posts mentioning **K-Nearest Neighbors**. The top three related terms are **decisiontree**, **knn**, and **logisticregression**.

### Data Scientist

**K-Nearest Neighbors** is most commonly found in **Data Scientist** job descriptions.

Explore the role