3.4.3. Temporal Convolutional Networks

Temporal Convolutional Networks (TCNs) offer an alternative for handling sequential data. While LSTMs and BiLSTMs process data one time step at a time, TCNs process the whole sequence in parallel using dilated convolutions: by skipping over intermediate inputs, each filter expands its "view" across longer stretches of the sequence without increasing the number of parameters. This lets the network look farther back (or forward) in the sequence and capture both short- and long-term dependencies without stepping through the data one element at a time. As a result, TCNs are faster and more efficient at capturing broad temporal patterns, although the lack of explicit step-by-step recurrence can make them less suited to tasks that depend on precise sequential order.
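To make the dilation mechanism concrete, the following is a minimal sketch of a dilated causal convolution stack, the core building block of a TCN. It is written in PyTorch; the class names, channel sizes, kernel size, and number of layers are illustrative assumptions and are not taken from any of the papers cited below.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution padded on the left only, so output at time t
    depends only on inputs at times <= t."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left padding preserves causality
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                         # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))   # pad the past, never the future
        return self.conv(x)

class TCN(nn.Module):
    """Stack of dilated causal convolutions; the dilation doubles at each
    layer, so the receptive field grows exponentially with depth while the
    parameter count grows only linearly."""
    def __init__(self, in_ch, hidden_ch, num_layers, kernel_size=3):
        super().__init__()
        layers, ch = [], in_ch
        for i in range(num_layers):
            layers += [CausalConv1d(ch, hidden_ch, kernel_size, dilation=2 ** i),
                       nn.ReLU()]
            ch = hidden_ch
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Hypothetical input: 32-channel EEG, 512 time steps. Four layers of kernel 3
# with dilations 1, 2, 4, 8 give each output a receptive field of
# 1 + 2*(1 + 2 + 4 + 8) = 31 time steps.
x = torch.randn(8, 32, 512)
y = TCN(in_ch=32, hidden_ch=64, num_layers=4)(x)
print(y.shape)  # torch.Size([8, 64, 512])
```

Because every layer sees the entire (padded) sequence at once, all time steps are computed in a single parallel pass rather than sequentially, which is the source of the speed advantage noted above.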

In EEG emotion recognition, TCNs have attracted considerable attention and experimentation. He et al. combined TCNs with Adversarial Discriminative Domain Adaptation (ADDA) to handle domain shifts in cross-subject data, achieving accuracies above 63% for valence and arousal on the DEAP and DREAMER datasets. Jia et al. introduced the Temporal Convolutional Broad Learning System (TCBLS), integrating TCNs with Broad Learning Systems to extract features from EEG signals in real time.

Sartipi et al. proposed a hybrid spatiotemporal attention network that combines TCNs with graph-smoothing techniques for better interpretability and effective transfer learning across EEG datasets. Li et al. developed a Spatio-Temporal Field (STF) model that uses TCNs to capture subtle emotional cues by extracting Rational Asymmetry of Spectral Power features. Finally, Li et al. proposed an Attention-based Spatio-Temporal Graphic LSTM (ASTG-LSTM) for EEG emotion recognition, incorporating TCNs and dynamic structured learning to represent inter-channel connections.
