v0.2.0
New
- Simple HTTP API example (#530) by @briancaffey
- Command-line streaming inference example (#512) by @ZaymeShaw
- Batch Vocos decoding (generate matrix by longest token, fill the rest with 0) (6e18575) by @fumiama
- Adaptation to new VQ Encoder (9f0b7a0) by @fumiama
- ZeroShot support (b4da237) by @fumiama
- Initial support for vLLM (8e6184e) by @ylzz1997
- Added `--source` and `--custom_path` parameters to command-line examples (#669) by @weedge
- Added `inplace` parameter to `Speaker` class for fine-tuning (#679) by @ain-soph
- Added `experimental` parameter to `Chat.load` function (#682) by @ain-soph
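The batch Vocos decoding item above builds a matrix sized by the longest token sequence and fills the remainder with zeros. A minimal sketch of that padding scheme (illustrative only; function name and plain-list representation are assumptions, not ChatTTS's actual code):

```python
def pad_batch(token_seqs, pad_value=0):
    """Pad every sequence in the batch to the longest length with pad_value.

    token_seqs: list of token-id sequences of varying lengths.
    Returns a rectangular list-of-lists suitable for stacking into a tensor.
    """
    longest = max(len(seq) for seq in token_seqs)
    return [list(seq) + [pad_value] * (longest - len(seq)) for seq in token_seqs]
```

Padding to the longest member lets the whole batch be decoded in one call instead of per-sequence loops.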
Fixed
- Intermittent glitches in streaming inference audio (7ee5426) by @fumiama
- Different accent per generation even under the same parameters in WebUI (3edd47c) by @fumiama
- `normalizer` changed tag format, causing the model to read out the tag (c6bae90) by @fumiama
- Replaced inaccessible GitCode mirror (c06f1d4) by @fumiama
- Error in handling repetition penalty idx (#738) by @niuzheng168
Optimized
- Completely removed `pretrain_models` dictionary (77c7e20) by @fumiama
- Made `tokenizer` a standalone class (77c7e20) by @fumiama
- `normalizer` will now remove all unsupported characters to avoid inference errors (0f47a87) by @fumiama
- Removed `config` folder; settings are now embedded directly into the code for easier changes (27331c3) by @fumiama
- Removed extra whitespace at the end of streaming inference (#564) by @Ox0400
- Added `manual_seed` parameter, which directly provides a `generator` to `multinomial`, avoiding impact on the torch environment (e675a59) by @fumiama
- `tokenizer` loading switched from `.pt` to the built-in `from_pretrained` method to eliminate potential malicious code loading (80b24e6) by @fumiama
- Made `speaker` a standalone class, placing `spk_stat`-related content within it; its values are written directly into the settings class due to its small size (f3dcd97) by @fumiama
- `Chat.load` now sets `compile=False` by default (7e33889) by @fumiama
- Switched GPT to `safetensor` model (8a503fd) by @fumiama
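The `manual_seed` change above follows a common pattern: rather than reseeding the global RNG (which perturbs every other torch sampling call), a dedicated generator object is passed to the sampling function. A minimal stdlib sketch of the same idea, with `random.Random` standing in for `torch.Generator`/`torch.multinomial` (the helper name is hypothetical):

```python
import random

def sample_index(weights, seed):
    """Sample one index proportionally to weights, reproducibly.

    Uses a dedicated generator seeded locally, so the global
    `random` module state is left completely untouched.
    """
    gen = random.Random(seed)
    return gen.choices(range(len(weights)), weights=weights, k=1)[0]
```

The same seed always yields the same draw, and code elsewhere that depends on the global RNG sequence is unaffected.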
Dependencies
- Changed code license to open-source AGPL3.0 (9f402ba)