Lyriks is an automated lyrics video generator. It transcribes audio and creates a synchronized lyrics video using fast subtitle rendering (pysubs2 + FFmpeg) or MoviePy.
- Automatic vocal separation using Demucs
- Transcription with OpenAI Whisper and whisper-timestamped
- Fast, high-quality video rendering via ASS subtitle generation (pysubs2) and FFmpeg
- Synchronized lyrics video generation with MoviePy (legacy)
- Linux (Windows support is experimental; macOS hasn't been tested yet)
- An NVIDIA GPU (recommended for best performance; CPU is supported but slower)
- 10GB of free disk space
- Python 3.11
- ffmpeg
On Ubuntu/Debian:
sudo apt update
sudo apt install ffmpeg
On Arch Linux:
sudo pacman -S ffmpeg
For other platforms and more details, see the FFmpeg download page.
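After installing, you can confirm that ffmpeg is on your PATH with a quick sanity check (this is just a shell one-liner, not part of Lyriks itself):

```shell
# Check whether ffmpeg is available on PATH
if command -v ffmpeg >/dev/null 2>&1; then
    echo "ffmpeg found: $(command -v ffmpeg)"
else
    echo "ffmpeg not found: install it before running Lyriks"
fi
```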
It is highly recommended to use a virtual environment for isolation:
python3 -m venv .venv
source .venv/bin/activate
Then install Lyriks with pip:
pip install lyriks-video
Set your Gemini API key as an environment variable before running Lyriks:
Linux/macOS:
export GEMINI_API_KEY="your-gemini-api-key"
Windows (Command Prompt):
set GEMINI_API_KEY=your-gemini-api-key
Windows (PowerShell):
$env:GEMINI_API_KEY="your-gemini-api-key"
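On Linux/macOS, you can verify that the key is actually visible to child processes (and therefore to Lyriks) before running it:

```shell
# Verify that GEMINI_API_KEY is set in the current shell environment
if [ -n "$GEMINI_API_KEY" ]; then
    echo "GEMINI_API_KEY is set"
else
    echo "GEMINI_API_KEY is missing: export it before running Lyriks"
fi
```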
python -m lyriks generate AUDIO_FILE LYRICS_FILE [OPTIONS]
- AUDIO_FILE
  Path to the input audio file (e.g., song.mp3). Must be in a supported audio format (such as MP3 or WAV).
- LYRICS_FILE
  Path to the lyrics file (plain text), with one line per lyric segment.
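For illustration, a minimal lyrics file can be created like this (the lyric lines themselves are made up; any text works, one segment per line):

```shell
# Create a hypothetical example lyrics file: one lyric segment per line
cat > lyrics.txt <<'EOF'
Walking down the empty street
Counting every heartbeat
Chorus line repeated twice
Chorus line repeated twice
EOF
```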
You will be interactively prompted in the CLI for any options you leave unspecified.
- --output, -o
  Output video file name (without extension).
  Example: -o my_lyrics_video
- --model_size, -m
  Whisper model size used for transcription.
  Options: tiny, base, small, medium, large, turbo
- --device, -d
  Device used for Whisper model inference.
  Options: cpu, cuda
- --generator, -g
  Backend used for video generation. Options:
  - ps2: pysubs2 + FFmpeg (fast, good quality, experimental, ~60 fps)
  - mp: MoviePy (slow, low quality, legacy, ~10 fps)
  - ts: only save the transcript (for debugging)
- --background, -b
  Optional background video for the generated video (must be at least as long as the audio).
  Example: -b my_background.mp4
- --no-gemini
  Disable Gemini-based improvement of the Whisper output.
- --karaoke, -k
  Generate a karaoke-style video (music only, vocals removed).
  When this option is enabled, Lyriks automatically separates the vocals from the music using Demucs and uses the instrumental (music without vocals) as the audio track for the generated video.
python -m lyriks generate path/to/song.mp3 path/to/lyrics.txt -m small -d cuda -o output_video -b background.mp4
Note: this process can take up to 5 minutes on lower-end hardware.
- Library of procedurally generated backgrounds
- Batch processing
- Automatic upload to YouTube
- Config file for video style
- Config file generator function
This project uses:
- Demucs for vocal separation.
- OpenAI Whisper and whisper-timestamped for word-level timestamped transcription.
If you use this in your research, please cite the following:
@inproceedings{rouard2022hybrid,
title={Hybrid Transformers for Music Source Separation},
author={Rouard, Simon and Massa, Francisco and D{\'e}fossez, Alexandre},
booktitle={ICASSP 23},
year={2023}
}
@inproceedings{defossez2021hybrid,
title={Hybrid Spectrogram and Waveform Source Separation},
author={D{\'e}fossez, Alexandre},
booktitle={Proceedings of the ISMIR 2021 Workshop on Music Source Separation},
year={2021}
}
@misc{lintoai2023whispertimestamped,
title={whisper-timestamped},
author={Louradour, J{\'e}r{\^o}me},
journal={GitHub repository},
year={2023},
publisher={GitHub},
howpublished = {\url{https://github.com/linto-ai/whisper-timestamped}}
}
@article{radford2022robust,
title={Robust speech recognition via large-scale weak supervision},
author={Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
journal={arXiv preprint arXiv:2212.04356},
year={2022}
}
@article{JSSv031i07,
title={Computing and Visualizing Dynamic Time Warping Alignments in R: The dtw Package},
author={Giorgino, Toni},
journal={Journal of Statistical Software},
year={2009},
volume={31},
number={7},
doi={10.18637/jss.v031.i07}
}
This project is licensed under the GPL-3.0 License.
Contributions are welcome!
If you have suggestions, bug reports, or want to add features, please open an issue or submit a pull request.
- Fork the repository
- Create your feature branch (git checkout -b feature/my-feature)
- Commit your changes (git commit -am 'Add new feature')
- Push to the branch (git push origin feature/my-feature)
- Open a pull request
For questions, bug reports, or feedback, please open an issue on GitHub
or contact the maintainer: simon0302010 (GitHub username).