
Commit 0a496c1

Merge pull request #1507 from FunAudioLLM/dev/lyuxiang.lx
Dev/lyuxiang.lx
2 parents 11515d0 + 05bdf4c commit 0a496c1

19 files changed: +3473 −0 lines changed


README.md

Lines changed: 4 additions & 0 deletions
@@ -29,6 +29,10 @@
 ## Roadmap

+- [x] 2025/08
+
+    - [x] Thanks to the contribution from NVIDIA Yuekai Zhang, add triton trtllm runtime support
+
 - [x] 2025/07

     - [x] release cosyvoice 3.0 eval set
runtime/triton_trtllm/Dockerfile.server

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
```dockerfile
FROM nvcr.io/nvidia/tritonserver:25.06-trtllm-python-py3
LABEL maintainer="[email protected]"

RUN apt-get update && apt-get install -y cmake
# Build torchaudio from source against the container's CUDA toolchain.
RUN git clone https://github.com/pytorch/audio.git && cd audio && git checkout c670ad8 && PATH=/usr/local/cuda/bin:$PATH python3 setup.py develop
COPY ./requirements.txt /workspace/requirements.txt
RUN pip install -r /workspace/requirements.txt
WORKDIR /workspace
```

runtime/triton_trtllm/README.md

Lines changed: 91 additions & 0 deletions
@@ -0,0 +1,91 @@
## Best Practices for Serving CosyVoice with NVIDIA Triton Inference Server

Thanks to Yuekai Zhang from NVIDIA for this contribution.
### Quick Start

Launch the service directly with Docker Compose:

```sh
docker compose up
```
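To confirm the service came up, you can poll Triton's standard readiness endpoint (a minimal check, assuming the compose file maps Triton's default HTTP port 8000 to localhost):

```sh
# Returns HTTP 200 once all models are loaded and ready.
curl -sf http://localhost:8000/v2/health/ready && echo "Triton is ready"
```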
### Build the Docker Image

Build the image from scratch:

```sh
docker build . -f Dockerfile.server -t soar97/triton-cosyvoice:25.06
```
### Run a Docker Container

```sh
your_mount_dir=/mnt:/mnt
docker run -it --name "cosyvoice-server" --gpus all --net host -v $your_mount_dir --shm-size=2g soar97/triton-cosyvoice:25.06
```
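Inside the container, it is worth confirming that the GPUs are actually visible before building any engines (this assumes the NVIDIA Container Toolkit is installed on the host):

```sh
# Should list the host GPUs; if it fails, check the --gpus flag and host driver.
nvidia-smi
```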
### Understanding `run.sh`

The `run.sh` script orchestrates the entire workflow through numbered stages.

Run a subset of stages with:

```sh
bash run.sh <start_stage> <stop_stage> [service_type]
```

- `<start_stage>` – stage to start from (0-5).
- `<stop_stage>` – stage to stop after (0-5).

Stages (a sketch of the stage-gating pattern follows the list):

- **Stage 0** – Download the cosyvoice-2 0.5B model from HuggingFace.
- **Stage 1** – Convert the HuggingFace checkpoint to TensorRT-LLM format and build TensorRT engines.
- **Stage 2** – Create the Triton model repository and configure the model files (adjusts depending on whether `Decoupled=True/False` will be used later).
- **Stage 3** – Launch the Triton Inference Server.
- **Stage 4** – Run the single-utterance HTTP client.
- **Stage 5** – Run the gRPC benchmark client.
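The bounds behave the way they do in most Kaldi-style recipes: a stage runs only when its number falls within `[start_stage, stop_stage]`. A minimal sketch of that gating pattern, for illustration only (not the actual contents of `run.sh`):

```sh
#!/usr/bin/env bash
# Sketch of the stage-gating convention, not the real run.sh.
start_stage=${1:-0}
stop_stage=${2:-5}
service_type=${3:-offline}   # optional third argument

if [ "$start_stage" -le 0 ] && [ "$stop_stage" -ge 0 ]; then
  echo "Stage 0: download the cosyvoice-2 0.5B model"
fi
if [ "$start_stage" -le 1 ] && [ "$stop_stage" -ge 1 ]; then
  echo "Stage 1: convert the checkpoint and build TensorRT-LLM engines"
fi
# ... stages 2-5 follow the same guard pattern
```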
### Export Models to TensorRT-LLM and Launch the Server

Inside the Docker container, prepare the models and start the Triton server by running stages 0-3:

```sh
# Runs stages 0, 1, 2, and 3
bash run.sh 0 3
```

*Note: Stage 2 prepares the model repository differently depending on whether you intend to run with `Decoupled=False` or `Decoupled=True`. Rerun stage 2 if you switch the service type, as shown below.*
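For example, assuming the optional `service_type` argument from the usage line above is what selects the decoupled configuration, switching an already-built repository to streaming mode might look like:

```sh
# Rebuild only the model repository (stage 2) for decoupled/streaming mode,
# then relaunch the server with stage 3.
bash run.sh 2 3 streaming
```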
### Single-Utterance HTTP Client

Send a single HTTP inference request:

```sh
bash run.sh 4 4
```
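Under the hood the stage-4 client talks to Triton's standard KServe HTTP API, so a request can also be crafted by hand. The sketch below shows a hypothetical request shape; `<model_name>` and the tensor name `target_text` are placeholders, not the repository's actual configuration, so check the stage-4 client for the real request layout:

```sh
# Hypothetical request against Triton's standard /v2 infer route.
curl -s http://localhost:8000/v2/models/<model_name>/infer \
  -H "Content-Type: application/json" \
  -d '{"inputs": [{"name": "target_text", "shape": [1, 1], "datatype": "BYTES", "data": ["Hello from CosyVoice."]}]}'
```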
### Benchmark with a Dataset

Benchmark the running Triton server. Pass either `streaming` or `offline` as the third argument:

```sh
bash run.sh 5 5

# You can also customise parameters such as num_task and dataset split directly:
# python3 client_grpc.py --num-tasks 2 --huggingface-dataset yuekai/seed_tts_cosy2 --split-name test_zh --mode [streaming|offline]
```

> [!TIP]
> Only offline CosyVoice TTS is currently supported. Setting the client to `streaming` simply enables NVIDIA Triton's decoupled mode so that responses are returned as soon as they are ready.
### Benchmark Results

Decoding on a single L20 GPU with 26 prompt_audio/target_text [pairs](https://huggingface.co/datasets/yuekai/seed_tts) (≈221 s of audio). RTF is the real-time factor, i.e. synthesis time divided by the duration of the generated audio; an RTF of 0.0891 means the 221 s test set is synthesized in roughly 19.7 s.

| Mode | Note | Concurrency | Avg Latency (ms) | P50 Latency (ms) | RTF |
|------|------|-------------|------------------|------------------|-----|
| Decoupled=False | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 1 | 758.04 | 615.79 | 0.0891 |
| Decoupled=False | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 2 | 1025.93 | 901.68 | 0.0657 |
| Decoupled=False | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 4 | 1914.13 | 1783.58 | 0.0610 |
| Decoupled=True | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 1 | 659.87 | 655.63 | 0.0891 |
| Decoupled=True | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 2 | 1103.16 | 992.96 | 0.0693 |
| Decoupled=True | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 4 | 1790.91 | 1668.63 | 0.0604 |
### OpenAI-Compatible Server

To launch an OpenAI-compatible service, run:

```sh
git clone https://github.com/yuekaizhang/Triton-OpenAI-Speech.git
cd Triton-OpenAI-Speech
pip install -r requirements.txt

# After the Triton service is up, start the FastAPI bridge:
python3 tts_server.py --url http://localhost:8000 --ref_audios_dir ./ref_audios/ --port 10086 --default_sample_rate 24000

# Test with curl
bash test/test_cosyvoice.sh
```
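Once the bridge is running, any client that speaks the OpenAI audio API can point at it. A hypothetical request, assuming the bridge mirrors OpenAI's `/v1/audio/speech` route and that `voice` selects a reference audio under `--ref_audios_dir` (both are assumptions; see the Triton-OpenAI-Speech repository for the exact schema):

```sh
# Hypothetical OpenAI-style TTS request against the FastAPI bridge.
curl http://localhost:10086/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{"model": "cosyvoice", "input": "Hello from CosyVoice.", "voice": "some_ref"}' \
  --output output.wav
```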
### Acknowledgements

This section originates from the NVIDIA CISI project. We also provide other multimodal resources; see [mair-hub](https://github.com/nvidia-china-sae/mair-hub) for details.
