3.1. Datasets

3.1.1. Existing Datasets for Emotion Recognition

Emotion recognition research has greatly benefited from datasets that combine EEG, ECG, and other physiological signals to capture emotional responses. Many different datasets are available, each with unique characteristics, data structures, and collection methods.

3.1.2. Overview of Common EEG/ECG Datasets for Emotion Recognition

Datasets like SEED, DEAP, and MAHNOB-HCI primarily use EEG combined with other signals such as eye tracking and physiological measures, and focus on emotions induced through videos or music clips in structured lab settings. These datasets typically label emotions based on dimensions like arousal, valence, and sometimes dominance or liking, offering extensive annotations for controlled stimuli. Other datasets like DERCFF and MPED expand further by integrating cardiac features and respiration signals to capture physiological responses across diverse environments, while DENS and EEWD emphasize naturalistic and real-time user interactions, reflecting more unstructured emotional responses. THU-EP presents a more experimental approach, focusing solely on EEG signals with unique, less conventional emotional labels.
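
To make these dimensional annotations concrete, a common preprocessing step in this literature is to threshold the continuous self-assessment ratings into discrete classes for classification. The sketch below is illustrative only: the midpoint value assumes a 1-9 self-assessment scale (as in DEAP); datasets with other scales, such as DREAMER's 1-5 ratings, would need a different threshold, and the function names are my own.

```python
def binarize_rating(rating: float, midpoint: float = 5.0) -> int:
    """Map a continuous self-assessment rating (e.g., a 1-9 SAM scale)
    to a binary high/low class. The midpoint is an assumption and should
    match the dataset's rating scale (e.g., 3 for DREAMER's 1-5 scale)."""
    return int(rating > midpoint)


def quadrant_label(valence: float, arousal: float, midpoint: float = 5.0) -> str:
    """Combine binarized valence and arousal into the four quadrants
    often used as classification targets (HVHA, HVLA, LVHA, LVLA)."""
    v = "HV" if valence > midpoint else "LV"
    a = "HA" if arousal > midpoint else "LA"
    return v + a


# Example: a rating of valence=6.2, arousal=3.8 maps to "HVLA"
print(quadrant_label(6.2, 3.8))
```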

The DREAMER and AMIGOS datasets, on the other hand, include both EEG and ECG signals (with DREAMER using film clips and AMIGOS also incorporating group settings), allowing for deeper exploration of brain-heart interactions.

Not all of these datasets include EEG and ECG recorded together; however, they can still contribute valuable insights and foundational models that can be adapted or transferred to datasets like DREAMER or AMIGOS, which do include simultaneous EEG-ECG data.

Table 2.1 below provides an overview of commonly used emotion recognition datasets, comparing their primary signals, emotion labels, dataset structure, and data collection methods.

Table 2.1: Overview of Commonly Used EEG/ECG Datasets for Emotion Recognition

| Dataset | Primary Signals | Emotion Labels | Dataset Structure | Data Collection Method |
| --- | --- | --- | --- | --- |
| SEED | EEG, Eye Tracking, Facial Expression | Positive, Negative, Neutral | 62-channel EEG from 15 subjects; labeled by emotion | EEG recorded during viewing of film clips; eye tracking, expression capture |
| DEAP | EEG, Peripheral Physiological Signals | Arousal, Valence, Dominance, Liking | 32-channel EEG and physiological data from 32 subjects; 40 music video clips | EEG and peripheral data recorded while watching music videos |
| MAHNOB-HCI | EEG, Eye Gaze, Physiological Signals | Arousal, Valence | EEG, gaze, and physiological data from 30 participants; extensive labeling | EEG, eye tracking, and physiological measures recorded during emotion-inducing stimuli |
| DREAMER | EEG, ECG | Arousal, Valence, Dominance | 14 EEG channels, ECG data, 23 subjects; labeled by arousal, valence, dominance | Film clips used to elicit emotions; ECG and EEG captured simultaneously |
| AMIGOS | EEG, ECG, GSR | Arousal, Valence, Dominance, Liking | EEG, ECG, GSR from 40 participants; individual and group sessions | Recorded in lab settings; includes individual and social interactions |
| DERCFF [7] | Cardiac Frequency, Facial Features | Arousal, Valence (inferred through physiology) | Cardiac frequency and facial features recorded in multiple settings | Recorded in various settings using wearable devices |
| DENS | EEG, Physiological Signals | Naturalistic emotional responses | EEG and physiological signals; unstructured naturalistic sessions | Naturalistic data collection with EEG and physiological monitoring |
| MPED | EEG, ECG, Respiration | Arousal, Valence (based on video stimuli) | EEG, ECG, and respiration data from 30 subjects; based on emotional videos | Recorded during emotion-inducing videos in a lab environment |
| THU-EP | EEG | Experimental emotion labels | EEG data from 20 subjects; structured experimental protocol | Experimental setup using specific stimuli and EEG equipment |
| EEWD | EEG, Eye Tracking | Emotion-based user responses | EEG and eye tracking; recorded in dynamic user interaction environments | Recorded during real-time user interactions with wearable EEG and eye tracking |

3.1.3. Rationale for Using DREAMER Dataset

Datasets such as DERCFF, THU-EP, and EEWD are not publicly available. As shown in Table 2.1, other datasets such as MPED, DEAP, SEED, and AMIGOS include a wide range of signals with detailed emotional labeling, offering comprehensive insights for various emotion recognition tasks. However, accessing many of these datasets, such as DEAP and SEED, often requires institutional affiliation or supervisor endorsement. They can also be technically demanding, especially in terms of preprocessing and handling the complexity of multimodal signals.

Given my non-STEM background, choosing a simpler, more accessible dataset is a priority. This allows me to focus on the emotion recognition task without being overwhelmed by technical challenges. DREAMER stands out as an ideal choice: it has a structured format, clear emotional labeling, and straightforward access requirements. I only needed to submit a written request to the authors for access, making it practical for independent study.

The DREAMER dataset includes EEG and ECG recordings from 23 participants, captured as they responded to emotionally charged film clips. Participants also rated their emotions along the arousal, valence, and dominance dimensions. By focusing on DREAMER, this project can explore brain-heart interactions in a manageable way, allowing for effective analysis of multimodal data without excessive complexity.
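
As an illustration of how this structure can be accessed in practice, the sketch below loads the DREAMER distribution file with SciPy and pulls out one participant's EEG, ECG, and ratings. The file name and field names (Data, EEG, ECG, stimuli, ScoreValence, ScoreArousal, ScoreDominance) reflect my understanding of the .mat layout the authors distribute and should be verified against the file actually received; this is a minimal sketch, not the project's final loading code.

```python
import scipy.io as sio

# Load the DREAMER distribution file (path and field names are assumptions;
# verify against the .mat file provided by the dataset authors).
mat = sio.loadmat("DREAMER.mat", squeeze_me=True, struct_as_record=False)
dreamer = mat["DREAMER"]

subject = dreamer.Data[0]          # first of the 23 participants
eeg_trials = subject.EEG.stimuli   # per-clip EEG segments (14 channels)
ecg_trials = subject.ECG.stimuli   # per-clip ECG segments

valence = subject.ScoreValence     # self-assessment ratings per film clip
arousal = subject.ScoreArousal
dominance = subject.ScoreDominance

print(f"Clips: {len(valence)}, first EEG segment shape: {eeg_trials[0].shape}")
```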
