3.3.4. K-Nearest Neighbors
K-Nearest Neighbors (KNN) is a machine learning algorithm that classifies data by taking the majority label among the closest data points (or “neighbors”). Unlike traditional algorithms, KNN is instance-based, meaning it has no training phase in which it generalizes from data. Instead, it retains the full dataset and makes each prediction by searching for the nearest data points every time it encounters a new instance. This approach can be slow and computationally intensive on larger datasets, since every prediction requires a neighbor search. KNN generally performs best on smaller, well-separated datasets.
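The neighbor-voting procedure described above can be sketched as follows (a minimal illustration with Euclidean distance and majority voting; the toy dataset and the choice of k=3 are assumptions, not drawn from the cited works):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # Distance from the query point to every stored training point
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k nearest neighbors
    nearest = np.argsort(dists)[:k]
    # Majority vote among their labels
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy, well-separated 2-D dataset (illustrative only)
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [1.0, 1.0], [1.1, 0.9], [0.9, 1.1]])
y = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X, y, np.array([0.15, 0.15])))  # -> 0
print(knn_predict(X, y, np.array([1.05, 1.00])))  # -> 1
```

Note that no model parameters are fit: the entire dataset is carried into `knn_predict`, which is why inference cost grows with the size of the training set.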
Sarma and Barma used KNN on EEG data processed with LASSO regularization to obtain subject-specific emotion features, achieving 80% accuracy for arousal and 76% for valence classification on the DEAP dataset. Anand et al. took a novel approach by combining KNN with time-varying graph signal processing for cross-subject classification of positive and negative emotions on the DREAMER dataset, with results comparable to more advanced techniques. Garima et al. applied Discrete Wavelet Transform (DWT) and General Factor Analysis for feature extraction before KNN, reaching 96.8% accuracy for valence labels on DREAMER.