# Sesame CSM

This demo shows how to run inference of [Sesame CSM](https://github.com/SesameAILabs/csm) using llama.cpp / GGML.

It contains 3 components (each has its own GGUF file):
1. Backbone LLM
2. Decoder LLM
3. Mimi decoder

## Quick start

By default, all GGUF files are downloaded from the [ggml-org account on Hugging Face](https://huggingface.co/ggml-org/sesame-csm-1b-GGUF).

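If you prefer to fetch the files ahead of time instead of relying on the automatic download, here is a minimal sketch using the Hugging Face CLI (the `./models` target directory is an arbitrary choice, not something this example requires):

```sh
# download all GGUF files from the ggml-org repository into ./models
# (huggingface-cli is provided by the huggingface_hub Python package)
huggingface-cli download ggml-org/sesame-csm-1b-GGUF --local-dir ./models
```
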
14+ ``` sh
15+ # build (make sure to have LLAMA_CURL enabled)
16+ cmake -B build -DLLAMA_CURL=ON
17+ cmake --build build -j --target llama-tts-csm
18+
19+ # run it
20+ ./build/bin/llama-tts-csm -p " [0]Hi, my name is Xuan Son. I am software engineer at Hugging Face."
21+ ```
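
The run command synthesizes speech and writes it to an audio file; the exact output path is not shown here, so the filename below is an assumption. One quick way to listen to the result is ffmpeg's `ffplay`:

```sh
# play the generated audio; "output.wav" is an assumed filename, adjust to what the tool reports
ffplay -autoexit output.wav
```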

## Convert the model yourself

To get the GGUF:

```sh
python examples/tts/convert_csm_to_gguf.py

# optionally, use Q8_0 quantization
python examples/tts/convert_csm_to_gguf.py --outtype q8_0
```
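
To sanity-check the converted files, you can inspect their metadata with `gguf-dump` from the `gguf` Python package (`pip install gguf`); this is an optional step not covered by the original instructions, and it assumes the conversion produced the filenames used in the run command below:

```sh
# print GGUF metadata and tensor listings for the converted files
gguf-dump sesame-csm-backbone.gguf
gguf-dump kyutai-mimi.gguf
```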

Run the example using local files:

```sh
./build/bin/llama-tts-csm -m sesame-csm-backbone.gguf -mv kyutai-mimi.gguf -p "[0]Hello world."
# sesame-csm-decoder.gguf will automatically be loaded
# make sure to place these 2 GGUF files in the same directory
```
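
The leading `[0]` in the prompt is the speaker tag used by Sesame CSM. Other speaker indices should presumably work the same way; this is an assumption about the model's prompt convention rather than something stated above:

```sh
# same local files, but with speaker 1 instead of speaker 0 (assumed prompt convention)
./build/bin/llama-tts-csm -m sesame-csm-backbone.gguf -mv kyutai-mimi.gguf -p "[1]Hello world."
```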