Commit 18fb919 (1 parent: 0dea48e)

fix(i18n): missing translations in zh and ja

2 files changed (+8, -8 lines)

i18n/locale/ja_JP.json

Lines changed: 7 additions & 7 deletions
@@ -1,9 +1,9 @@
{
-"### Model comparison\n> You can get model ID (long) from `View model information` below.\n\nCalculate a similarity between two models.": "### Model comparison\n> You can get model ID (long) from `View model information` below.\n\nCalculate a similarity between two models.",
+"### Model comparison\n> You can get model ID (long) from `View model information` below.\n\nCalculate a similarity between two models.": "### モデル比べ\n> モデルID(長)は下の`モデル情報を表示`に得ることが出来ます。\n\n両モデルの推論相似度を比べることが出来ます。",
"### Model extraction\n> Enter the path of the large file model under the 'logs' folder.\n\nThis is useful if you want to stop training halfway and manually extract and save a small model file, or if you want to test an intermediate model.": "### モデル抽出\n> ログフォルダー内の大モデルのパスを入力\n\nモデルを半分まで学習し、小モデルを保存しなかった場合、又は中間モデルをテストしたい場合に適用されます。",
-"### Model fusion\nCan be used to test timbre fusion.": "### Model fusion\nCan be used to test timbre fusion.",
+"### Model fusion\nCan be used to test timbre fusion.": "### モデルマージ\n音源のマージテストに使用できます",
"### Modify model information\n> Only supported for small model files extracted from the 'weights' folder.": "### モデル情報の修正\n> `weights`フォルダから抽出された小モデルのみ対応",
-"### Step 1. Fill in the experimental configuration.\nExperimental data is stored in the 'logs' folder, with each experiment having a separate folder. Manually enter the experiment name path, which contains the experimental configuration, logs, and trained model files.": "### Step 1. Fill in the experimental configuration.\nExperimental data is stored in the 'logs' folder, with each experiment having a separate folder. Manually enter the experiment name path, which contains the experimental configuration, logs, and trained model files.",
+"### Step 1. Fill in the experimental configuration.\nExperimental data is stored in the 'logs' folder, with each experiment having a separate folder. Manually enter the experiment name path, which contains the experimental configuration, logs, and trained model files.": "### 第一歩 実験設定入力\n実験データはlogsフォルダーに、実験名別のフォルダで保存されたため、その実験名をご自分で決定する必要があります。実験設定、ログ、学習されたモデルファイルなどがそのフォルダに含まれています。",
"### Step 2. Audio processing. \n#### 1. Slicing.\nAutomatically traverse all files in the training folder that can be decoded into audio and perform slice normalization. Generates 2 wav folders in the experiment directory. Currently, only single-singer/speaker training is supported.": "### 第二歩 音声処理\n#### 1. 音声切分\n学習フォルダー内のすべての音声ファイルを自動的に探し出し、切分と正規化を行い、2つのwavフォルダーを実験ディレクトリに生成します。現在は単人モデルの学習のみを支援しています。",
"### Step 3. Start training.\nFill in the training settings and start training the model and index.": "### 第三歩 学習開始\n学習設定を入力して、モデルと索引の学習を開始します。",
"### View model information\n> Only supported for small model files extracted from the 'weights' folder.": "### モデル情報を表示\n> `weights`フォルダから抽出された小さなのみ対応",
@@ -19,7 +19,7 @@
"Batch processing for vocal accompaniment separation using the UVR5 model.<br>Example of a valid folder path format: D:\\path\\to\\input\\folder (copy it from the file manager address bar).<br>The model is divided into three categories:<br>1. Preserve vocals: Choose this option for audio without harmonies. It preserves vocals better than HP5. It includes two built-in models: HP2 and HP3. HP3 may slightly leak accompaniment but preserves vocals slightly better than HP2.<br>2. Preserve main vocals only: Choose this option for audio with harmonies. It may weaken the main vocals. It includes one built-in model: HP5.<br>3. De-reverb and de-delay models (by FoxJoy):<br>  (1) MDX-Net: The best choice for stereo reverb removal but cannot remove mono reverb;<br>&emsp;(234) DeEcho: Removes delay effects. Aggressive mode removes more thoroughly than Normal mode. DeReverb additionally removes reverb and can remove mono reverb, but not very effectively for heavily reverberated high-frequency content.<br>De-reverb/de-delay notes:<br>1. The processing time for the DeEcho-DeReverb model is approximately twice as long as the other two DeEcho models.<br>2. The MDX-Net-Dereverb model is quite slow.<br>3. The recommended cleanest configuration is to apply MDX-Net first and then DeEcho-Aggressive.": "UVR5モデルを使用したボーカル伴奏の分離バッチ処理。<br>有効なフォルダーパスフォーマットの例: D:\\path\\to\\input\\folder (エクスプローラーのアドレスバーからコピーします)。<br>モデルは三つのカテゴリに分かれています:<br>1. ボーカルを保持: ハーモニーのないオーディオに対してこれを選択します。HP5よりもボーカルをより良く保持します。HP2とHP3の二つの内蔵モデルが含まれています。HP3は伴奏をわずかに漏らす可能性がありますが、HP2よりもわずかにボーカルをより良く保持します。<br>2. 主なボーカルのみを保持: ハーモニーのあるオーディオに対してこれを選択します。主なボーカルを弱める可能性があります。HP5の一つの内蔵モデルが含まれています。<br>3. ディリバーブとディレイモデル (by FoxJoy):<br>  (1) MDX-Net: ステレオリバーブの除去に最適な選択肢ですが、モノリバーブは除去できません;<br>&emsp;(234) DeEcho: ディレイ効果を除去します。AggressiveモードはNormalモードよりも徹底的に除去します。DeReverbはさらにリバーブを除去し、モノリバーブを除去することができますが、高周波のリバーブが強い内容に対しては非常に効果的ではありません。<br>ディリバーブ/ディレイに関する注意点:<br>1. DeEcho-DeReverbモデルの処理時間は、他の二つのDeEchoモデルの約二倍です。<br>2. MDX-Net-Dereverbモデルは非常に遅いです。<br>3. 推奨される最もクリーンな設定は、最初にMDX-Netを適用し、その後にDeEcho-Aggressiveを適用することです。",
"Batch size per GPU": "GPUごとのバッチサイズ",
"Cache all training sets to GPU memory. Caching small datasets (less than 10 minutes) can speed up training, but caching large datasets will consume a lot of GPU memory and may not provide much speed improvement": "すべての学習データをメモリにキャッシュするかどうか。10分以下の小さなデータはキャッシュして学習を高速化できますが、大きなデータをキャッシュするとメモリが破裂し、あまり速度が上がりません。",
-"Calculate": "计算",
+"Calculate": "計算",
"Choose sample rate of the device": "デバイスサンプリング率を使用",
"Choose sample rate of the model": "モデルサンプリング率を使用",
"Convert": "変換",
@@ -50,7 +50,7 @@
"Hidden": "無表示",
"ID of model A (long)": "AモデルID(長)",
"ID of model B (long)": "BモデルID(長)",
-"ID(long)": "ID(long)",
+"ID(long)": "ID(長)",
"ID(short)": "ID(短)",
"If >=3: apply median filtering to the harvested pitch results. The value represents the filter radius and can reduce breathiness.": ">=3 次に、harvestピッチの認識結果に対してメディアンフィルタを使用します。値はフィルター半径で、ミュートを減衰させるために使用します。",
"Inference time (ms)": "推論時間(ms)",
@@ -59,7 +59,7 @@
"Input device": "入力デバイス",
"Input noise reduction": "入力騒音低減",
"Input voice monitor": "入力返聴",
-"Link index to outside folder": "链接索引到外部",
+"Link index to outside folder": "索引を外部フォルダへリンク",
"Load model": "モデルをロード",
"Load pre-trained base model D path": "事前学習済みのDモデルのパス",
"Load pre-trained base model G path": "事前学習済みのGモデルのパス",
@@ -121,7 +121,7 @@
"Select the pitch extraction algorithm ('pm': faster extraction but lower-quality speech; 'harvest': better bass but extremely slow; 'crepe': better quality but GPU intensive), 'rmvpe': best quality, and little GPU requirement": "ピッチ抽出アルゴリズムの選択、歌声はpmで高速化でき、harvestは低音が良いが信じられないほど遅く、crepeは良く動くがGPUを喰います",
"Select the pitch extraction algorithm: when extracting singing, you can use 'pm' to speed up. For high-quality speech with fast performance, but worse CPU usage, you can use 'dio'. 'harvest' results in better quality but is slower. 'rmvpe' has the best results and consumes less CPU/GPU": "ピッチ抽出アルゴリズムの選択:歌声はpmで高速化でき、入力した音声が高音質でCPUが貧弱な場合はdioで高速化でき、harvestの方が良いが遅く、rmvpeがベストだがCPU/GPUを若干食います。",
"Similarity": "相似度",
-"Similarity (from 0 to 1)": "相似度(0到1)",
+"Similarity (from 0 to 1)": "相似度(0~1)",
"Single inference": "一度推論",
"Specify output folder": "出力フォルダを指定してください",
"Specify the output folder for accompaniment": "マスター以外の出力音声フォルダーを指定する",

i18n/locale/zh_CN.json

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
{
-"### Model comparison\n> You can get model ID (long) from `View model information` below.\n\nCalculate a similarity between two models.": "### Model comparison\n> You can get model ID (long) from `View model information` below.\n\nCalculate a similarity between two models.",
+"### Model comparison\n> You can get model ID (long) from `View model information` below.\n\nCalculate a similarity between two models.": "### 模型比较\n> 模型ID(长)请于下方`查看模型信息`中获得\n\n可用于比较两模型推理相似度",
"### Model extraction\n> Enter the path of the large file model under the 'logs' folder.\n\nThis is useful if you want to stop training halfway and manually extract and save a small model file, or if you want to test an intermediate model.": "### 模型提取\n> 输入logs文件夹下大文件模型路径\n\n适用于训一半不想训了模型没有自动提取保存小文件模型, 或者想测试中间模型的情况",
"### Model fusion\nCan be used to test timbre fusion.": "### 模型融合\n可用于测试音色融合",
"### Modify model information\n> Only supported for small model files extracted from the 'weights' folder.": "### 修改模型信息\n> 仅支持weights文件夹下提取的小模型文件",
