This guide provides examples for running TTS inference using ExampleONNX.java.
2025.11.23 - Enhanced text preprocessing with comprehensive normalization, emoji removal, symbol replacement, and punctuation handling for improved synthesis quality.
2025.11.19 - Added --speed parameter to control speech synthesis speed (default: 1.05, recommended range: 0.9-1.5).
2025.11.19 - Added automatic text chunking for long-form inference. Long texts are split into chunks and synthesized with natural pauses.
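The preprocessing described in the 2025.11.23 entry (normalization, emoji removal, symbol replacement, punctuation handling) could look roughly like the following. This is a minimal sketch; the class name `TextNormalizer`, the exact symbol mappings, and the emoji codepoint ranges are illustrative assumptions, not the actual implementation in ExampleONNX.java.

```java
// Hypothetical sketch of TTS text preprocessing; the real pipeline may differ.
public class TextNormalizer {
    public static String normalize(String text) {
        // Drop emoji (illustrative codepoint ranges, not exhaustive).
        String cleaned = text.replaceAll("[\\x{1F000}-\\x{1FAFF}\\x{2600}-\\x{27BF}]", "");
        // Replace common symbols with spoken equivalents.
        cleaned = cleaned.replace("&", " and ").replace("%", " percent ");
        // Collapse runs of the same punctuation mark ("!!!" -> "!").
        cleaned = cleaned.replaceAll("([.!?,])\\1+", "$1");
        // Normalize whitespace.
        return cleaned.replaceAll("\\s+", " ").trim();
    }

    public static void main(String[] args) {
        // Prints: Ready? 100 percent sure! Let's go and listen.
        System.out.println(normalize("Ready?? 100% sure!!! 🎉  Let's go & listen."));
    }
}
```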
This project uses Maven for dependency management.
- Java 11 or higher
- Maven 3.6 or higher
```
mvn clean install
```

Run inference with default settings:
```
mvn exec:java
```

This will use:
- Voice style: `assets/voice_styles/M1.json`
- Text: "This morning, I took a walk in the park, and the sound of the birds and the breeze was so pleasant that I stopped for a long time just to listen."
- Output directory: `results/`
- Total steps: 5
- Number of generations: 4
Process multiple voice styles and texts at once:
```
mvn exec:java -Dexec.args="--batch --voice-style assets/voice_styles/M1.json,assets/voice_styles/F1.json --text 'The sun sets behind the mountains, painting the sky in shades of pink and orange.|The weather is beautiful and sunny outside. A gentle breeze makes the air feel fresh and pleasant.'"
```

This will:
- Generate speech for 2 different voice-text pairs
- Use male voice (M1.json) for the first text
- Use female voice (F1.json) for the second text
- Process both samples in a single batch
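Batch mode pairs each voice style with the text at the same position, so the two lists must be the same length. A hypothetical sketch of that pairing logic (the class name `BatchPairs` and this exact validation are assumptions, not the actual code in ExampleONNX.java):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of how --batch mode could zip styles with texts.
public class BatchPairs {
    public static List<String[]> pair(List<String> styles, List<String> texts) {
        if (styles.size() != texts.size()) {
            throw new IllegalArgumentException(
                "--voice-style count must match --text count in --batch mode");
        }
        List<String[]> pairs = new ArrayList<>();
        for (int i = 0; i < styles.size(); i++) {
            pairs.add(new String[] { styles.get(i), texts.get(i) });
        }
        return pairs;
    }

    public static void main(String[] args) {
        List<String[]> pairs = pair(
            List.of("assets/voice_styles/M1.json", "assets/voice_styles/F1.json"),
            List.of("First text.", "Second text."));
        for (String[] p : pairs) System.out.println(p[0] + " -> " + p[1]);
    }
}
```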
Increase denoising steps for better quality:
```
mvn exec:java -Dexec.args="--total-step 10 --voice-style assets/voice_styles/M1.json --text 'Increasing the number of denoising steps improves the output fidelity and overall quality.'"
```

This will:
- Use 10 denoising steps instead of the default 5
- Produce higher quality output at the cost of slower inference
The system automatically chunks long texts into manageable segments, synthesizes each segment separately, and concatenates them with natural pauses (0.3 seconds by default) into a single audio file. This happens by default when you don't use the --batch flag:
```
mvn exec:java -Dexec.args="--voice-style assets/voice_styles/M1.json --text 'This is a very long text that will be automatically split into multiple chunks. The system will process each chunk separately and then concatenate them together with natural pauses between segments. This ensures that even very long texts can be processed efficiently while maintaining natural speech flow and avoiding memory issues.'"
```

This will:
- Automatically split the text into chunks based on paragraph and sentence boundaries
- Synthesize each chunk separately
- Add 0.3 seconds of silence between chunks for natural pauses
- Concatenate all chunks into a single audio file
Note: Automatic text chunking is disabled when using --batch mode. In batch mode, each text is processed as-is without chunking.
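The chunking and concatenation described above could be sketched as follows. This is a minimal illustration assuming sentence-boundary splitting, a character budget per chunk, and 16-bit mono PCM audio; the class name `Chunker`, the `maxChars` limit, and the split regex are assumptions, not the actual implementation in ExampleONNX.java.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of long-form chunking and pause concatenation.
public class Chunker {
    // Split text at sentence boundaries, packing sentences into chunks
    // of at most maxChars characters (a lone long sentence is kept whole).
    public static List<String> chunk(String text, int maxChars) {
        List<String> chunks = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (String sentence : text.split("(?<=[.!?])\\s+")) {
            if (current.length() > 0
                    && current.length() + sentence.length() + 1 > maxChars) {
                chunks.add(current.toString());
                current.setLength(0);
            }
            if (current.length() > 0) current.append(' ');
            current.append(sentence);
        }
        if (current.length() > 0) chunks.add(current.toString());
        return chunks;
    }

    // Concatenate PCM chunks with pauseSec of silence between them.
    public static short[] concatWithPauses(List<short[]> audio, int sampleRate,
                                           double pauseSec) {
        int gap = (int) (sampleRate * pauseSec);
        int total = gap * Math.max(0, audio.size() - 1);
        for (short[] a : audio) total += a.length;
        short[] out = new short[total]; // zero-filled, so gaps are silence
        int pos = 0;
        for (int i = 0; i < audio.size(); i++) {
            System.arraycopy(audio.get(i), 0, out, pos, audio.get(i).length);
            pos += audio.get(i).length;
            if (i < audio.size() - 1) pos += gap;
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(chunk("One. Two. Three is a bit longer. Four!", 20));
        short[] merged = concatWithPauses(
            List.of(new short[8000], new short[8000]), 16000, 0.3);
        System.out.println(merged.length); // 8000 + 4800 gap + 8000 = 20800
    }
}
```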
Tip: If your text contains apostrophes, use escaping or run the JAR directly:
```
java -jar target/tts-example.jar --total-step 10 --text "Text with apostrophe's here"
```

To create a standalone JAR with all dependencies:
```
mvn clean package
```

Then run it directly:
```
java -jar target/tts-example.jar
```

Or with arguments:
```
java -jar target/tts-example.jar --total-step 10 --text "Your custom text here"
```

| Argument | Type | Default | Description |
|---|---|---|---|
| `--use-gpu` | flag | False | Use GPU for inference (default: CPU) |
| `--onnx-dir` | str | `assets/onnx` | Path to ONNX model directory |
| `--total-step` | int | 5 | Number of denoising steps (higher = better quality, slower) |
| `--n-test` | int | 4 | Number of times to generate each sample |
| `--voice-style` | str+ | `assets/voice_styles/M1.json` | Voice style file path(s) |
| `--text` | str+ | (long default text) | Text(s) to synthesize |
| `--save-dir` | str | `results` | Output directory |
| `--batch` | flag | False | Enable batch mode (multiple text-style pairs; disables automatic chunking) |
- Batch Processing: When using `--batch`, the number of `--voice-style` files must match the number of `--text` entries
- Automatic Chunking: Without `--batch`, long texts are automatically split and concatenated with 0.3s pauses
- Quality vs Speed: Higher `--total-step` values produce better quality but take longer
- GPU Support: GPU mode is not supported yet
- Voice Styles: Uses pre-extracted voice style JSON files for fast inference