| 8 | 8 | msgstr "" | 
| 9 | 9 | "Project-Id-Version: Xinference \n" | 
| 10 | 10 | "Report-Msgid-Bugs-To: \n" | 
| 11 |  | -"POT-Creation-Date: 2025-04-28 18:35+0800\n" | 
|  | 11 | +"POT-Creation-Date: 2025-10-28 13:54+0800\n" | 
| 12 | 12 | "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" | 
| 13 | 13 | "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n" | 
| 14 | 14 | "Language: zh_CN\n" | 
| @@ -206,61 +206,123 @@ msgstr "" | 
| 206 | 206 | " 。具体信息请参考 :ref:`这里 <about_model_engine>` 。" | 
| 207 | 207 | 
 | 
| 208 | 208 | #: ../../source/getting_started/troubleshooting.rst:112 | 
| 209 |  | -msgid "" | 
| 210 |  | -"Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is " | 
| 211 |  | -"incompatible with libgomp-a34b3233.so.1 library." | 
| 212 |  | -msgstr "" | 
| 213 |  | -"错误:mkl-service + Intel(R) MKL:MKL_THREADING_LAYER=INTEL 与 libgomp-a34b3233.so.1 库不兼容。" | 
|  | 209 | +msgid "Resolving MKL Threading Layer Conflicts" | 
|  | 210 | +msgstr "解决 MKL 线程层冲突" | 
| 214 | 211 | 
 | 
| 215 | 212 | #: ../../source/getting_started/troubleshooting.rst:114 | 
| 216 | 213 | msgid "" | 
| 217 |  | -"When start Xinference server and you hit the error \"ValueError: Model " | 
| 218 |  | -"architectures ['Qwen2ForCausalLM'] failed to be inspected. Please check " | 
| 219 |  | -"the logs for more details. \"" | 
|  | 214 | +"When starting the Xinference server, you may encounter the error: " | 
|  | 215 | +"``ValueError: Model architectures ['Qwen2ForCausalLM'] failed to be " | 
|  | 216 | +"inspected. Please check the logs for more details.``" | 
| 220 | 217 | msgstr "" | 
| 221 |  | -"在启动 Xinference 服务器时,如果遇到错误:“ValueError: Model architectures " | 
| 222 |  | -"['Qwen2ForCausalLM'] failed to be inspected. Please check the logs for more details.”" | 
|  | 218 | +"在启动 Xinference 服务器时,如果遇到错误:``ValueError: Model " | 
|  | 219 | +"architectures ['Qwen2ForCausalLM'] failed to be inspected. . Please check the logs for more details.``" | 
| 223 | 220 | 
 | 
| 224 | 221 | #: ../../source/getting_started/troubleshooting.rst:116 | 
|  | 222 | +msgid "The underlying cause shown in the logs is:" | 
|  | 223 | +msgstr "日志中显示的根本原因是:" | 
|  | 224 | + | 
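The literal block that follows in the ``.rst`` source is not extracted into the ``.po`` file; the previous revision of this entry quoted the log message verbatim, reproduced here for context:

```
Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is
incompatible with libgomp-a34b3233.so.1 library. Try to import numpy
first or set the threading layer accordingly. Set
MKL_SERVICE_FORCE_INTEL to force it.
```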
|  | 225 | +#: ../../source/getting_started/troubleshooting.rst:123 | 
| 225 | 226 | msgid "" | 
| 226 |  | -"The logs shows the error, ``\"Error: mkl-service + Intel(R) MKL: " | 
| 227 |  | -"MKL_THREADING_LAYER=INTEL is incompatible with libgomp-a34b3233.so.1 " | 
| 228 |  | -"library. Try to import numpy first or set the threading layer " | 
| 229 |  | -"accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.\"``" | 
|  | 227 | +"This typically occurs when NumPy was installed via conda. Conda's NumPy " | 
|  | 228 | +"is built with Intel MKL optimizations, which conflicts with the GNU " | 
|  | 229 | +"OpenMP library (libgomp) already loaded in your environment." | 
|  | 230 | +msgstr "" | 
|  | 231 | +"这通常是因为你的 NumPy 是通过 conda 安装的,而 conda 的 NumPy 是使用 " | 
|  | 232 | +"Intel MKL 优化构建的,这导致它与环境中已加载的 GNU OpenMP 库(libgomp)" | 
|  | 233 | +"产生冲突。" | 
|  | 234 | + | 
|  | 235 | +#: ../../source/getting_started/troubleshooting.rst:126 | 
|  | 236 | +msgid "Solution 1: Override the Threading Layer" | 
|  | 237 | +msgstr "解决方案 1:覆盖线程层" | 
|  | 238 | + | 
|  | 239 | +#: ../../source/getting_started/troubleshooting.rst:128 | 
|  | 240 | +msgid "Force Intel's Math Kernel Library to use GNU's OpenMP implementation:" | 
| 230 | 241 | msgstr "" | 
| 231 |  | -"日志中显示错误:Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL " | 
| 232 |  | -"is incompatible with libgomp-a34b3233.so.1 library. Try to import numpy first " | 
| 233 |  | -"or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it." | 
|  | 242 | +"设置 MKL_THREADING_LAYER=GNU 可以强制 Intel 数学核心库(MKL)使用 GNU 的 " | 
|  | 243 | +"OpenMP 实现:" | 
|  | 244 | + | 
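The command itself lives in the untranslated literal block of the ``.rst`` source; a minimal sketch, assuming a Unix shell and the standard ``xinference-local`` entry point:

```bash
# Force Intel MKL to use GNU's OpenMP implementation for this process
export MKL_THREADING_LAYER=GNU

# Then start the server as usual (host/port values are illustrative)
xinference-local --host 0.0.0.0 --port 9997
```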
|  | 245 | +#: ../../source/getting_started/troubleshooting.rst:135 | 
|  | 246 | +msgid "Solution 2: Reinstall NumPy with pip" | 
|  | 247 | +msgstr "解决方案 2:使用 pip 重新安装 NumPy" | 
| 234 | 248 | 
 | 
|  | 249 | +#: ../../source/getting_started/troubleshooting.rst:137 | 
|  | 250 | +msgid "Uninstall conda's NumPy and reinstall using pip:" | 
|  | 251 | +msgstr "卸载 conda 安装的 NumPy,然后使用 pip 重新安装:" | 
| 235 | 252 | 
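As above, the exact commands sit in the ``.rst`` literal block; one plausible sequence, assuming NumPy was installed by conda:

```bash
# Drop only conda's MKL-linked NumPy (--force skips dependent packages)
conda remove --force numpy

# Reinstall from PyPI; the wheel is built against OpenBLAS, not MKL
pip install numpy
```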
 | 
| 236 |  | -#: ../../source/getting_started/troubleshooting.rst:118 | 
|  | 253 | +#: ../../source/getting_started/troubleshooting.rst:146 | 
|  | 254 | +msgid "Related Note: vLLM and PyTorch" | 
|  | 255 | +msgstr "相关说明:vLLM 与 PyTorch" | 
|  | 256 | + | 
|  | 257 | +#: ../../source/getting_started/troubleshooting.rst:148 | 
| 237 | 258 | msgid "" | 
| 238 |  | -"This is mostly because your NumPy is installed by conda and conda's Numpy" | 
| 239 |  | -" is built with Intel MKL optimizations, which is causing a conflict with " | 
| 240 |  | -"the GNU OpenMP library (libgomp) that's already loaded in the " | 
| 241 |  | -"environment." | 
|  | 259 | +"If you're using vLLM, avoid installing PyTorch with conda. Refer to the " | 
|  | 260 | +"official vLLM installation guide for GPU-specific instructions: " | 
|  | 261 | +"https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html" | 
| 242 | 262 | msgstr "" | 
| 243 |  | -"这通常是因为你的 NumPy 是通过 conda 安装的,而 conda 的 NumPy 是使用 Intel MKL 优化构建的," | 
| 244 |  | -"这导致它与环境中已加载的 GNU OpenMP 库(libgomp)产生冲突。" | 
|  | 263 | +"如果你在使用 vLLM,请避免通过 conda 安装 PyTorch。有关特定 GPU 的安装说明," | 
|  | 264 | +"请参阅 vLLM 官方安装指南:" | 
|  | 265 | +"https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html" | 
|  | 266 | + | 
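For context, the linked vLLM guide installs everything through pip; a minimal sketch for a CUDA environment:

```bash
# Installing vLLM from PyPI pulls in a compatible PyTorch wheel,
# so PyTorch should not be installed separately via conda
pip install vllm
```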
|  | 267 | +#: ../../source/getting_started/troubleshooting.rst:151 | 
|  | 268 | +msgid "Configuring PyPI Mirrors to Speed Up Package Installation" | 
|  | 269 | +msgstr "配置 PyPI 镜像以加快软件包安装速度" | 
| 245 | 270 | 
 | 
| 246 |  | -#: ../../source/getting_started/troubleshooting.rst:124 | 
|  | 271 | +#: ../../source/getting_started/troubleshooting.rst:153 | 
| 247 | 272 | msgid "" | 
| 248 |  | -"Setting ``MKL_THREADING_LAYER=GNU`` forces Intel's Math Kernel Library to" | 
| 249 |  | -" use GNU's OpenMP implementation instead of Intel's own implementation." | 
|  | 273 | +"If you're in Mainland China, using a PyPI mirror can significantly speed " | 
|  | 274 | +"up package installation. Here are some commonly used mirrors:" | 
| 250 | 275 | msgstr "" | 
| 251 |  | -"设置 MKL_THREADING_LAYER=GNU 可以强制 Intel 数学核心库(MKL)使用 GNU 的 OpenMP 实现,而不是使用 Intel 自己的实现。" | 
|  | 276 | +"如果你在中国大陆,使用 PyPI 镜像可以显著加快软件包的安装速度。" | 
|  | 277 | +"以下是一些常用的镜像源:" | 
| 252 | 278 | 
 | 
| 253 |  | -#: ../../source/getting_started/troubleshooting.rst:126 | 
| 254 |  | -msgid "Or you can uninstall conda's numpy and reinstall with pip." | 
|  | 279 | +#: ../../source/getting_started/troubleshooting.rst:155 | 
|  | 280 | +msgid "Tsinghua University: ``https://pypi.tuna.tsinghua.edu.cn/simple``" | 
|  | 281 | +msgstr "清华大学镜像:``https://pypi.tuna.tsinghua.edu.cn/simple``" | 
|  | 282 | + | 
|  | 283 | +#: ../../source/getting_started/troubleshooting.rst:156 | 
|  | 284 | +msgid "Alibaba Cloud: ``https://mirrors.aliyun.com/pypi/simple/``" | 
|  | 285 | +msgstr "阿里云镜像:``https://mirrors.aliyun.com/pypi/simple/``" | 
|  | 286 | + | 
|  | 287 | +#: ../../source/getting_started/troubleshooting.rst:157 | 
|  | 288 | +msgid "Tencent Cloud: ``https://mirrors.cloud.tencent.com/pypi/simple``" | 
|  | 289 | +msgstr "腾讯云镜像:``https://mirrors.cloud.tencent.com/pypi/simple``" | 
|  | 290 | + | 
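A usage sketch for pointing pip at one of these mirrors via ``-i``/``--index-url`` (the Tsinghua mirror here is just an example):

```bash
# Install Xinference through a PyPI mirror
pip install xinference -i https://pypi.tuna.tsinghua.edu.cn/simple
```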
|  | 291 | +#: ../../source/getting_started/troubleshooting.rst:159 | 
|  | 292 | +msgid "" | 
|  | 293 | +"However, be aware that some packages may not be available on certain " | 
|  | 294 | +"mirrors. For example, if you're installing ``xinference[audio]`` using " | 
|  | 295 | +"only the Aliyun mirror, the installation may fail." | 
| 255 | 296 | msgstr "" | 
| 256 |  | -"或者你也可以卸载 conda 安装的 numpy,然后使用 pip 重新安装。" | 
|  | 297 | +"但请注意,某些镜像源上可能缺少部分软件包。" | 
|  | 298 | +"例如,如果你仅使用阿里云镜像安装 ``xinference[audio]``,安装可能会失败。" | 
| 257 | 299 | 
 | 
| 258 |  | -#: ../../source/getting_started/troubleshooting.rst:128 | 
|  | 300 | +#: ../../source/getting_started/troubleshooting.rst:161 | 
|  | 301 | +msgid "" | 
|  | 302 | +"This happens because ``num2words``, a dependency used by ``MeloTTS``, is " | 
|  | 303 | +"not available on the Aliyun mirror. As a result, ``pip install " | 
|  | 304 | +"xinference[audio]`` will resolve to older versions like " | 
|  | 305 | +"``xinference==1.2.0`` and ``xoscar==0.8.0`` (as of Oct 27, 2025)." | 
|  | 306 | +msgstr "" | 
|  | 307 | +"这是因为 ``MeloTTS`` 所依赖的 ``num2words`` 软件包在阿里云镜像上不可用。" | 
|  | 308 | +"因此,在执行 ``pip install xinference[audio]`` 时," | 
|  | 309 | +"可能会回退安装旧版本,如 ``xinference==1.2.0`` 和 ``xoscar==0.8.0`` (截至 2025 年 10 月 27 日)。" | 
|  | 310 | + | 
|  | 311 | +#: ../../source/getting_started/troubleshooting.rst:163 | 
|  | 312 | +msgid "" | 
|  | 313 | +"These older versions are incompatible and will produce the error: " | 
|  | 314 | +"``MainActorPool.append_sub_pool() got an unexpected keyword argument " | 
|  | 315 | +"'start_method'``" | 
|  | 316 | +msgstr "" | 
|  | 317 | +"这些旧版本不兼容,会导致以下错误:" | 
|  | 318 | +"``MainActorPool.append_sub_pool() got an unexpected keyword argument 'start_method'``" | 
|  | 319 | + | 
|  | 320 | +#: ../../source/getting_started/troubleshooting.rst:174 | 
| 259 | 321 | msgid "" | 
| 260 |  | -"On a related subject, if you use vllm, do not install pytorch with conda," | 
| 261 |  | -" check " | 
| 262 |  | -"https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html for " | 
| 263 |  | -"detailed information." | 
|  | 322 | +"To avoid this issue when installing the xinference audio package, use " | 
|  | 323 | +"multiple mirrors:" | 
| 264 | 324 | msgstr "" | 
| 265 |  | -"相关地,如果你使用 vllm,不要通过 conda 安装 pytorch,详细信息请参考:https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html 。" | 
|  | 325 | +"为避免在安装 xinference 音频包时出现此问题," | 
|  | 326 | +"建议同时使用多个镜像源:" | 
|  | 327 | + | 
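The concrete command is in the ``.rst`` literal block; a sketch using ``--extra-index-url`` so that packages missing from the primary mirror (such as ``num2words``) can still be resolved from a second one:

```bash
# Primary index: Aliyun; fallback index: Tsinghua, assumed to carry num2words
pip install "xinference[audio]" \
    -i https://mirrors.aliyun.com/pypi/simple/ \
    --extra-index-url https://pypi.tuna.tsinghua.edu.cn/simple
```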
| 266 | 328 | 
 | 