This document guides you through setting up the environment, installing dependencies, and running inference with the XLSR-Mamba architecture. See the paper for more model design details.
First, create and activate a new conda environment:
```shell
conda create -n XLSR_Mamba python=3.10
conda activate XLSR_Mamba
```

Install the required Python packages:

```shell
pip install -r requirements.txt
```

Clone and install Fairseq from source:
```shell
git clone https://github.com/facebookresearch/fairseq.git fairseq_dir
cd fairseq_dir
pip install --editable ./
cd ..
```

Note: If installation issues occur, consider temporarily downgrading pip. After installing Fairseq, upgrade pip again:

```shell
pip install --upgrade pip
```

- XLSR Model: Download XLSR pretrained model
- DualMamba Models: Download pretrained DualMamba models
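After the editable Fairseq install above completes, one quick way to confirm the package is importable from the new environment is the following stdlib-only check (a minimal sketch, not part of the original setup steps):

```python
import importlib.util

# find_spec returns None when a package cannot be located on the current
# interpreter's path, so this distinguishes a successful editable install
# of fairseq from a missing or broken one.
spec = importlib.util.find_spec("fairseq")
print("fairseq importable:", spec is not None)
```

If this prints `False`, re-check that the `XLSR_Mamba` environment is active and that `pip install --editable ./` succeeded without errors.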
Place the downloaded pretrained models into your working directory or specify their paths accordingly.
Testing files can be downloaded from here: Link
Create the following directories and file structure before running inference:
```
|audio
|-real
|--real_0.wav
|-fake
|--fake_0.wav
|model
|-model_0.pt
```
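If you are setting up from scratch, the layout above can be created with a short stdlib-only script (a sketch: the placeholder files it touches must be replaced with your actual audio files and model checkpoint):

```python
from pathlib import Path

# Directory layout expected before inference, mirroring the tree above.
layout = {
    "audio/real": ["real_0.wav"],
    "audio/fake": ["fake_0.wav"],
    "model": ["model_0.pt"],
}

for directory, files in layout.items():
    d = Path(directory)
    d.mkdir(parents=True, exist_ok=True)
    for name in files:
        (d / name).touch()  # empty placeholder; copy real files over these
```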
Execute the following command to run inference:
```shell
python deepfake_eval.py
```

Ensure your environment is activated (`conda activate XLSR_Mamba`) and dependencies are properly installed before running this command.
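If inference fails with import errors, a common cause is running outside the activated environment. `conda activate` sets the `CONDA_DEFAULT_ENV` variable, so a quick diagnostic (a sketch; the variable is unset when no environment is active) is:

```python
import os
import sys

# CONDA_DEFAULT_ENV is set by `conda activate`; it should read "XLSR_Mamba"
# when the environment created above is active.
active_env = os.environ.get("CONDA_DEFAULT_ENV")
print("active conda env:", active_env)
# sys.executable shows which Python interpreter will actually run the script.
print("interpreter:", sys.executable)
```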
