KysonYoshi/Speech_Deepfake-Emotion_Detection_System

XLSR-Mamba Setup and Inference

This document walks you through setting up the environment, installing dependencies, and running inference with the XLSR-Mamba architecture. See the Doc for more model design details.

Setup Environment

Step 1: Create and Activate Anaconda Environment

First, create and activate a new conda environment:

```shell
conda create -n XLSR_Mamba python=3.10
conda activate XLSR_Mamba
```

Step 2: Install Dependencies

Install the required Python packages:

```shell
pip install -r requirements.txt
```

Step 3: Install Fairseq

Clone and install Fairseq from source:

```shell
git clone https://github.com/facebookresearch/fairseq.git fairseq_dir
cd fairseq_dir
pip install --editable ./
cd ..
```

Note: If the installation fails, consider temporarily downgrading pip. After Fairseq has installed successfully, upgrade pip again:

```shell
pip install --upgrade pip
```
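Concretely, the workaround might look like the following. The exact pip version bound is an assumption: older Fairseq releases commonly fail under newer pip because of invalid package metadata in their pinned dependencies, and pinning pip below 24.1 is one known remedy.

```shell
# Assumed workaround: pin pip to a pre-24.1 release, which still accepts
# the legacy version metadata that some fairseq dependencies ship with.
pip install "pip<24.1"
# Retry the editable install from inside fairseq_dir.
pip install --editable ./
# Restore the latest pip once fairseq is installed.
pip install --upgrade pip
```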

Pretrained Models

Place the downloaded pretrained models in your working directory, or point the evaluation script at their paths.

Directory Structure

Testing files can be downloaded from here: Link

Create the following directories and file structure before running inference:

```
audio/
├── real/
│   └── real_0.wav
└── fake/
    └── fake_0.wav
model/
└── model_0.pt
```
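The skeleton above can be created in one command; the wav and checkpoint filenames are the README's examples, so substitute your own files when copying them in.

```shell
# Create the directory layout expected by the evaluation script.
mkdir -p audio/real audio/fake model
# Then copy your test files and checkpoint into place, e.g.:
#   cp /path/to/real_0.wav audio/real/
#   cp /path/to/fake_0.wav audio/fake/
#   cp /path/to/model_0.pt model/
```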

Run Eval

Execute the following command to run inference:

```shell
python deepfake_eval.py
```

Ensure the environment is activated (`conda activate XLSR_Mamba`) and the dependencies are installed before running this command.
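As a quick sanity check before launching the script, the expected layout can be verified with a small stdlib-only snippet. `check_layout` is a hypothetical helper, not part of `deepfake_eval.py`; the paths it checks are taken from the Directory Structure section above.

```python
from pathlib import Path

# Paths the README's Directory Structure section says must exist.
REQUIRED = ["audio/real", "audio/fake", "model/model_0.pt"]

def check_layout(root: str = ".") -> list[str]:
    """Return the required paths that are missing under `root`."""
    return [p for p in REQUIRED if not (Path(root) / p).exists()]

if __name__ == "__main__":
    missing = check_layout()
    if missing:
        raise SystemExit(f"Missing before eval: {missing}")
    print("Layout OK; run: python deepfake_eval.py")
```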

Test Log

(screenshot of the evaluation log output)
