3.5.2. Generative Adversarial Networks
Generative Adversarial Networks (GANs) are a class of generative model that, like Variational Autoencoders (VAEs), creates new data samples from patterns learned in the original data. GANs, however, use a distinctive adversarial setup with two networks, a generator and a discriminator, that compete against each other. The generator creates synthetic data, and the discriminator evaluates how realistic the generated samples are compared to real data, providing feedback that pushes the generator to improve over time. This adversarial process can yield highly realistic synthetic data, making GANs especially effective for tasks where data is limited or costly to obtain. Unlike VAEs, which focus on encoding data into a compact latent space, GANs generate high-quality samples directly through this adversarial process, which is particularly useful for data augmentation.
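The sketch below illustrates the adversarial training loop described above: a discriminator is updated to separate real from generated samples, and a generator is updated to fool it. It is a minimal illustration only; the network sizes, learning rates, and the flattened 1-D "EEG feature" dimensionality are assumptions for demonstration and are not taken from the cited studies.

```python
# Minimal GAN training-loop sketch in PyTorch (illustrative only).
# Dimensions and hyperparameters below are assumptions, not from the text.
import torch
import torch.nn as nn

latent_dim, data_dim = 32, 128  # assumed noise size and flattened feature size

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator to distinguish real samples from fakes.
    z = torch.randn(batch_size, latent_dim)
    fake_batch = G(z).detach()  # detach so only D is updated in this step
    loss_D = bce(D(real_batch), real_labels) + bce(D(fake_batch), fake_labels)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # 2) Train the generator to make the discriminator score fakes as real.
    z = torch.randn(batch_size, latent_dim)
    loss_G = bce(D(G(z)), real_labels)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()

# Example usage with random stand-in data:
# real_batch = torch.randn(64, data_dim)
# d_loss, g_loss = train_step(real_batch)
```

In practice, the generator and discriminator would use architectures suited to the data (e.g., convolutional or recurrent layers for EEG signals); the fully connected networks here only keep the example compact.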
GANs are powerful for generating realistic synthetic data; however, they can be difficult to train and require significant computational resources. They also typically need a large amount of data to learn the underlying distribution of the original dataset. Without sufficient data, GANs may struggle to generate realistic samples and are more likely to suffer from mode collapse, where the generator fails to capture the full diversity of the data and produces only a narrow range of outputs.
Wang et al. used GANs to expand EEG data for real-time emotion recognition, helping to overcome the scarcity of high-quality EEG samples. In another study, Kucukler et al. used GANs to transform EEG data into spectrogram images for energy classification, with convolutional neural networks (CNNs) enhancing emotion-specific metrics. Qiao et al. took this further by integrating GANs with attention mechanisms, applying spatial and channel attention to strengthen weak EEG signals, which improved the accuracy of synthetic data generation on the SEED dataset. Gilakjani and Osman combined GANs with contrastive learning and Graph Neural Networks (GNNs) to address inter- and intra-subject variability in EEG signals, achieving consistent emotion classification across the DEAP and MAHNOB datasets. Here, inter-subject variability refers to differences between individuals, while intra-subject variability refers to differences within the same individual over time.