# GatedxLSTM: A Multimodal Affective Computing Approach for Emotion Recognition in Conversations

## 📘 Introduction

This repository provides the training and evaluation code for the paper:

**GatedxLSTM: A Multimodal Affective Computing Approach for Emotion Recognition in Conversations.** [Paper Link]

*Figure: GatedxLSTM workflow.*

Key contributions of GatedxLSTM include:

- **CLAP-based cross-modal alignment:** incorporates Contrastive Language-Audio Pretraining for improved speech-text alignment.
- **Gated modality fusion:** a gating mechanism that emphasises emotionally salient utterances.
- **Dialogical Emotion Decoder (DED):** captures context-aware emotional transitions across conversation turns.
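To give a feel for the gated-fusion idea, here is a minimal sketch of one gated modality-fusion step. This is an illustration, not the paper's implementation: the feature dimension, weight shapes, and random inputs are all assumptions.

```python
# Illustrative gated fusion of two modality embeddings (audio and text).
# A sigmoid gate produces per-dimension weights in (0, 1), so the fused
# vector is a convex mix of the two modalities. All shapes are assumed.
import numpy as np

rng = np.random.default_rng(0)
d = 8  # shared embedding dimension (assumed for illustration)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(audio_feat, text_feat, W, b):
    """Fuse two modality embeddings with a learned per-dimension gate."""
    gate_in = np.concatenate([audio_feat, text_feat])
    g = sigmoid(gate_in @ W + b)                     # gate values in (0, 1)
    return g * audio_feat + (1.0 - g) * text_feat    # convex combination

W = rng.standard_normal((2 * d, d)) * 0.1  # gate projection (random stand-in)
b = np.zeros(d)
audio, text = rng.standard_normal(d), rng.standard_normal(d)
fused = gated_fusion(audio, text, W, b)
print(fused.shape)  # (8,)
```

Because the gate lies in (0, 1), each fused coordinate stays between the corresponding audio and text values; a gate near 1 lets the (assumed more salient) audio channel dominate that dimension.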

## 🚀 How to Run

### Step 1: Download the Dataset

Download the IEMOCAP dataset and extract it to a local directory, then install the required dependency packages.

### Step 2: Data Preprocessing

Update the dataset path in `./data/preprocess.py`, then run:

```shell
python ./data/preprocess.py
```

### Step 3: Training and Inference

To train and evaluate the model, run:

```shell
python ./Dialogical-Emotion-Decoding/main.py
```
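As a rough intuition for what a dialogical emotion decoder does, the sketch below runs Viterbi decoding over conversation turns: per-utterance emotion scores are combined with a transition matrix that favours emotional continuity. The label set, scores, and transition probabilities are illustrative assumptions, not the paper's actual DED.

```python
# Sketch of context-aware decoding over conversation turns (assumed setup):
# combine per-utterance emotion probabilities with turn-to-turn transitions
# via Viterbi decoding, so isolated noisy predictions get smoothed.
import numpy as np

labels = ["angry", "happy", "neutral", "sad"]  # assumed 4-class label set
log_emis = np.log(np.array([                   # per-utterance class scores
    [0.1, 0.6, 0.2, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.4, 0.1, 0.3, 0.2],                      # noisy "angry" spike
]))
# Transitions favour staying in the same emotion across turns.
trans = np.full((4, 4), 0.1) + np.eye(4) * 0.6
log_trans = np.log(trans / trans.sum(axis=1, keepdims=True))

def viterbi(log_emis, log_trans):
    """Return the most probable label sequence under the score model."""
    T, K = log_emis.shape
    score = log_emis[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans      # K x K candidate scores
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emis[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

print([labels[i] for i in viterbi(log_emis, log_trans)])
```

With these numbers the decoder keeps the whole dialogue "happy": the weak "angry" evidence at the last turn is outweighed by the continuity prior, which is the kind of context-aware smoothing a dialogical decoder provides.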

## 📄 Citation

This work has been accepted at ACII 2025. If you find it useful, please cite our paper:

```bibtex
@article{li2025gatedxlstm,
  title={GatedxLSTM: A Multimodal Affective Computing Approach for Emotion Recognition in Conversations},
  author={Li, Yupei and Sun, Qiyang and Murthy, Sunil Munthumoduku Krishna and Alturki, Emran and Schuller, Bj{\"o}rn W},
  journal={arXiv preprint arXiv:2503.20919},
  year={2025}
}
```
