3.5.1. Variational Autoencoders
Variational Autoencoders (VAEs) are a type of generative model that learns to generate new data similar to the training data by capturing its underlying patterns and distribution. VAEs use neural networks to learn a compact, lower-dimensional representation of complex data. They work by encoding input data into a “latent space”, a set of hidden or underlying factors that are not directly observable in the data but nevertheless exert a meaningful influence on it, to create a simplified representation that captures the essential features of the data; this representation is then decoded to reconstruct the original input.
VAEs take a probabilistic approach, learning a distribution over possible encodings rather than a single fixed code, which allows them to generate data with greater variety and realism. However, achieving high-quality outputs requires balancing two factors during training: how accurately the output reconstructs the input, and ensuring that the latent space captures meaningful variation.
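As a concrete illustration of this trade-off, the sketch below shows a minimal VAE in PyTorch. The encoder outputs the mean and log-variance of a Gaussian over latent codes, the decoder reconstructs the input from a sample, and the loss combines a reconstruction term with a KL-divergence term. The layer sizes, dimensions, and `beta` weighting are illustrative assumptions, not parameters of any of the models cited below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE: encodes inputs to a Gaussian latent space and decodes back.
    Dimensions are illustrative placeholders, not taken from any cited model."""
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, so gradients can
        # flow through the sampling step during training
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)
        return self.decoder(z), mu, logvar

def vae_loss(x, x_recon, mu, logvar, beta=1.0):
    # Reconstruction term: how closely the output matches the input
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # KL term: keeps the latent distribution close to a standard normal prior,
    # encouraging a smooth latent space that captures meaningful variation
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl  # beta trades off the two objectives
```

The `beta` coefficient makes the balance described above explicit: raising it prioritizes a well-structured latent space at the cost of reconstruction fidelity, while lowering it does the opposite.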
Kamble and Sengupta introduced CycleMVAE, a cycle-consistent multi-task VAE designed for EEG emotion recognition that translates features between different EEG samples, reconstructs signals, and classifies emotions along dimensions such as arousal and valence. In another approach, their EVNCERS model applies variational nonlinear chirp mode decomposition (VNCMD) with eigenvector centrality to examine EEG rhythms, achieving high accuracy in detecting arousal and dominance. Zhang et al. explored EEG rhythms through variational phase-amplitude coupling, identifying distinct coupling patterns associated with different emotional states. Song and colleagues developed the Variational Instance-Adaptive Graph (V-IAG), which combines instance-adaptive and probabilistic graphs to capture individual variations across EEG channels, demonstrating strong performance across datasets.
Bethge et al. proposed EEG2Vec, a VAE-based framework that learns emotion-specific representations from EEG data and generates synthetic EEG signals, supporting improved emotion classification. Wang et al. introduced the Multi-Modal Domain Adaptive VAE (MMDA-VAE), which uses adversarial learning to align multi-modal data, improving the consistency of emotion recognition across subjects. Zhang, Shi, and Yeh applied variational phase-amplitude coupling to examine couplings between EEG frequency bands, showing patterns tied to emotional states and demonstrating the model's potential for identifying subtle shifts in neural activity during emotion.