
Commit a1fadd2

qiulang and qinxuye authored
DOC: Add PyPI mirror configuration guide for audio package installation (#4177)
Co-authored-by: qinxuye <[email protected]>
1 parent 720b94c commit a1fadd2

2 files changed: 168 additions & 48 deletions


doc/source/getting_started/troubleshooting.rst

Lines changed: 68 additions & 10 deletions
@@ -108,21 +108,79 @@ Missing ``model_engine`` parameter when launching LLM models
 Since version ``v0.11.0``, launching LLM models requires an additional ``model_engine`` parameter.
 For specific information, please refer to :ref:`here <about_model_engine>`.
 
-Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp-a34b3233.so.1 library.
-================================================================================================================
+Resolving MKL Threading Layer Conflicts
+========================================
 
-When start Xinference server and you hit the error "ValueError: Model architectures ['Qwen2ForCausalLM'] failed to be inspected. Please check the logs for more details. "
+When starting the Xinference server, you may encounter the error: ``ValueError: Model architectures ['Qwen2ForCausalLM'] failed to be inspected. Please check the logs for more details.``
 
-The logs shows the error, ``"Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp-a34b3233.so.1 library. Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it."``
-
-This is mostly because your NumPy is installed by conda and conda's Numpy is built with Intel MKL optimizations, which is causing a conflict with the GNU OpenMP library (libgomp) that's already loaded in the environment.
+The underlying cause shown in the logs is:
 
 .. code-block:: text
 
-    MKL_THREADING_LAYER=GNU xinference-local
+    Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp-a34b3233.so.1 library.
+    Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.
+
+This typically occurs when NumPy was installed via conda. Conda's NumPy is built with Intel MKL optimizations, which conflicts with the GNU OpenMP library (libgomp) already loaded in your environment.
+
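+One way to confirm the cause (standard conda and NumPy commands; assumes a conda environment):
+
+.. code-block:: bash
+
+    # Show which channel NumPy was installed from (a conda build points at this issue)
+    conda list numpy
+    # Show whether NumPy links against MKL
+    python -c "import numpy; numpy.show_config()"
+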
+Solution 1: Override the Threading Layer
+-----------------------------------------
+
+Force Intel's Math Kernel Library to use GNU's OpenMP implementation:
+
+.. code-block:: bash
+
+    MKL_THREADING_LAYER=GNU xinference-local
+
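+To persist this workaround across sessions, you can export the variable in your shell profile (a minimal sketch, assuming bash):
+
+.. code-block:: bash
+
+    # Make the threading-layer override permanent for future shells
+    echo 'export MKL_THREADING_LAYER=GNU' >> ~/.bashrc
+    source ~/.bashrc
+    xinference-local
+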
+Solution 2: Reinstall NumPy with pip
+-------------------------------------
+
+Uninstall conda's NumPy and reinstall using pip:
+
+.. code-block:: bash
+
+    pip uninstall -y numpy && pip install numpy
+    # Or simply force a reinstall in one step
+    pip install --force-reinstall numpy
+
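+Afterwards, verify that Python now picks up pip's NumPy (standard pip and Python commands):
+
+.. code-block:: bash
+
+    # 'Location' should point at pip's site-packages rather than a conda package
+    pip show numpy
+    python -c "import numpy; print(numpy.__version__, numpy.__file__)"
+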
+Related Note: vLLM and PyTorch
+-------------------------------
+
+If you're using vLLM, avoid installing PyTorch with conda. Refer to the official vLLM installation guide for GPU-specific instructions: https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html
+
+Configuring PyPI Mirrors to Speed Up Package Installation
+==========================================================
+
+If you're in Mainland China, using a PyPI mirror can significantly speed up package installation. Here are some commonly used mirrors (see the usage example after the list):
+
+- Tsinghua University: ``https://pypi.tuna.tsinghua.edu.cn/simple``
+- Alibaba Cloud: ``https://mirrors.aliyun.com/pypi/simple/``
+- Tencent Cloud: ``https://mirrors.cloud.tencent.com/pypi/simple``
+
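+For a one-off install, pip can be pointed at a mirror directly, or the choice can be persisted in pip's configuration (standard pip commands; the Tsinghua mirror is used here as an example):
+
+.. code-block:: bash
+
+    # Use a mirror for a single install
+    pip install -i https://pypi.tuna.tsinghua.edu.cn/simple xinference
+    # Or set it globally for pip
+    pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
+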
+However, be aware that some packages may not be available on certain mirrors. For example, if you're installing ``xinference[audio]`` using only the Aliyun mirror, the installation may fail.
+
+This happens because ``num2words``, a dependency used by ``MeloTTS``, is not available on the Aliyun mirror. As a result, ``pip install xinference[audio]`` will resolve to older versions like ``xinference==1.2.0`` and ``xoscar==0.8.0`` (as of Oct 27, 2025).
+
+These older versions are incompatible and will produce the error: ``MainActorPool.append_sub_pool() got an unexpected keyword argument 'start_method'``
+
+.. code-block:: bash
+
+    curl -s https://mirrors.aliyun.com/pypi/simple/num2words/ | grep -i "num2words"
+    # Returns nothing on the Aliyun mirror, but the package is available on the Tsinghua and Tencent mirrors.
+    # uv pip install "xinference[audio]" will then install the following packages (as of Oct 27, 2025):
+    + x-transformers==2.10.2
+    + xinference==1.2.0
+    + xoscar==0.8.0
+
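+If you suspect you have hit this, check which versions were actually resolved (standard pip commands):
+
+.. code-block:: bash
+
+    # Seeing xinference==1.2.0 together with xoscar==0.8.0 indicates the mirror fallback described above
+    pip list | grep -Ei "^(xinference|xoscar)"
+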
+To avoid this issue when installing the xinference audio package, use multiple mirrors:
 
-Setting ``MKL_THREADING_LAYER=GNU`` forces Intel's Math Kernel Library to use GNU's OpenMP implementation instead of Intel's own implementation.
+.. code-block:: bash
 
-Or you can uninstall conda's numpy and reinstall with pip.
+    uv pip install "xinference[audio]" --index-url https://mirrors.aliyun.com/pypi/simple --extra-index-url https://pypi.tuna.tsinghua.edu.cn/simple
 
-On a related subject, if you use vllm, do not install pytorch with conda, check https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html for detailed information.
+    # Optional: set this globally in your uv config
+    # (uv.toml takes top-level keys; the [tool.uv] table belongs only in pyproject.toml)
+    mkdir -p ~/.config/uv
+    cat >> ~/.config/uv/uv.toml << EOF
+    index-url = "https://mirrors.aliyun.com/pypi/simple"
+    extra-index-url = ["https://pypi.tuna.tsinghua.edu.cn/simple"]
+    EOF
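+
+If you use pip rather than uv, the same multi-mirror approach works with pip's standard flags and configuration (a sketch using standard pip commands):
+
+.. code-block:: bash
+
+    pip install "xinference[audio]" --index-url https://mirrors.aliyun.com/pypi/simple --extra-index-url https://pypi.tuna.tsinghua.edu.cn/simple
+    # Or persist both mirrors in pip's configuration
+    pip config set global.index-url https://mirrors.aliyun.com/pypi/simple
+    pip config set global.extra-index-url https://pypi.tuna.tsinghua.edu.cn/simple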

doc/source/locale/zh_CN/LC_MESSAGES/getting_started/troubleshooting.po

Lines changed: 100 additions & 38 deletions
@@ -8,7 +8,7 @@ msgid ""
 msgstr ""
 "Project-Id-Version: Xinference \n"
 "Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2025-04-28 18:35+0800\n"
+"POT-Creation-Date: 2025-10-28 13:54+0800\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language: zh_CN\n"
@@ -206,61 +206,123 @@ msgstr ""
 " 。具体信息请参考 :ref:`这里 <about_model_engine>` 。"
 
 #: ../../source/getting_started/troubleshooting.rst:112
-msgid ""
-"Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is "
-"incompatible with libgomp-a34b3233.so.1 library."
-msgstr ""
-"错误:mkl-service + Intel(R) MKL:MKL_THREADING_LAYER=INTEL 与 libgomp-a34b3233.so.1 库不兼容。"
+msgid "Resolving MKL Threading Layer Conflicts"
+msgstr "解决 MKL 线程层冲突"
 
 #: ../../source/getting_started/troubleshooting.rst:114
 msgid ""
-"When start Xinference server and you hit the error \"ValueError: Model "
-"architectures ['Qwen2ForCausalLM'] failed to be inspected. Please check "
-"the logs for more details. \""
+"When starting the Xinference server, you may encounter the error: "
+"``ValueError: Model architectures ['Qwen2ForCausalLM'] failed to be "
+"inspected. Please check the logs for more details.``"
 msgstr ""
-"在启动 Xinference 服务器时,如果遇到错误:ValueError: Model architectures "
-"['Qwen2ForCausalLM'] failed to be inspected. Please check the logs for more details."
+"在启动 Xinference 服务器时,如果遇到错误:``ValueError: Model "
+"architectures ['Qwen2ForCausalLM'] failed to be inspected. Please check the logs for more details.``"
 
 #: ../../source/getting_started/troubleshooting.rst:116
+msgid "The underlying cause shown in the logs is:"
+msgstr "日志中显示的根本原因是:"
+
+#: ../../source/getting_started/troubleshooting.rst:123
 msgid ""
-"The logs shows the error, ``\"Error: mkl-service + Intel(R) MKL: "
-"MKL_THREADING_LAYER=INTEL is incompatible with libgomp-a34b3233.so.1 "
-"library. Try to import numpy first or set the threading layer "
-"accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.\"``"
+"This typically occurs when NumPy was installed via conda. Conda's NumPy "
+"is built with Intel MKL optimizations, which conflicts with the GNU "
+"OpenMP library (libgomp) already loaded in your environment."
+msgstr ""
+"这通常是因为你的 NumPy 是通过 conda 安装的,而 conda 的 NumPy 是使用 "
+"Intel MKL 优化构建的,这导致它与环境中已加载的 GNU OpenMP 库(libgomp)"
+"产生冲突。"
+
+#: ../../source/getting_started/troubleshooting.rst:126
+msgid "Solution 1: Override the Threading Layer"
+msgstr "解决方案 1:覆盖线程层"
+
+#: ../../source/getting_started/troubleshooting.rst:128
+msgid "Force Intel's Math Kernel Library to use GNU's OpenMP implementation:"
 msgstr ""
-"日志中显示错误:Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL "
-"is incompatible with libgomp-a34b3233.so.1 library. Try to import numpy first "
-"or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it."
+"设置 MKL_THREADING_LAYER=GNU 可以强制 Intel 数学核心库(MKL)使用 GNU 的 "
+"OpenMP 实现:"
+
+#: ../../source/getting_started/troubleshooting.rst:135
+msgid "Solution 2: Reinstall NumPy with pip"
+msgstr "解决方案 2:使用 pip 重新安装 NumPy"
 
+#: ../../source/getting_started/troubleshooting.rst:137
+msgid "Uninstall conda's NumPy and reinstall using pip:"
+msgstr "卸载 conda 安装的 NumPy,然后使用 pip 重新安装:"
 
-#: ../../source/getting_started/troubleshooting.rst:118
+#: ../../source/getting_started/troubleshooting.rst:146
+msgid "Related Note: vLLM and PyTorch"
+msgstr "相关说明:vLLM 与 PyTorch"
+
+#: ../../source/getting_started/troubleshooting.rst:148
 msgid ""
-"This is mostly because your NumPy is installed by conda and conda's Numpy"
-" is built with Intel MKL optimizations, which is causing a conflict with "
-"the GNU OpenMP library (libgomp) that's already loaded in the "
-"environment."
+"If you're using vLLM, avoid installing PyTorch with conda. Refer to the "
+"official vLLM installation guide for GPU-specific instructions: "
+"https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html"
 msgstr ""
-"这通常是因为你的 NumPy 是通过 conda 安装的,而 conda 的 NumPy 是使用 Intel MKL 优化构建的,"
-"这导致它与环境中已加载的 GNU OpenMP 库(libgomp)产生冲突。"
+"如果你在使用 vLLM,请避免通过 conda 安装 PyTorch。有关特定 GPU 的安装说明,"
+"请参阅 vLLM 官方安装指南:"
+"https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html"
+
+#: ../../source/getting_started/troubleshooting.rst:151
+msgid "Configuring PyPI Mirrors to Speed Up Package Installation"
+msgstr "配置 PyPI 镜像以加快软件包安装速度"
 
-#: ../../source/getting_started/troubleshooting.rst:124
+#: ../../source/getting_started/troubleshooting.rst:153
 msgid ""
-"Setting ``MKL_THREADING_LAYER=GNU`` forces Intel's Math Kernel Library to"
-" use GNU's OpenMP implementation instead of Intel's own implementation."
+"If you're in Mainland China, using a PyPI mirror can significantly speed "
+"up package installation. Here are some commonly used mirrors:"
 msgstr ""
-"设置 MKL_THREADING_LAYER=GNU 可以强制 Intel 数学核心库(MKL)使用 GNU 的 OpenMP 实现,而不是使用 Intel 自己的实现。"
+"如果你在中国大陆,使用 PyPI 镜像可以显著加快软件包的安装速度。"
+"以下是一些常用的镜像源:"
 
-#: ../../source/getting_started/troubleshooting.rst:126
-msgid "Or you can uninstall conda's numpy and reinstall with pip."
+#: ../../source/getting_started/troubleshooting.rst:155
+msgid "Tsinghua University: ``https://pypi.tuna.tsinghua.edu.cn/simple``"
+msgstr "清华大学镜像:``https://pypi.tuna.tsinghua.edu.cn/simple``"
+
+#: ../../source/getting_started/troubleshooting.rst:156
+msgid "Alibaba Cloud: ``https://mirrors.aliyun.com/pypi/simple/``"
+msgstr "阿里云镜像:``https://mirrors.aliyun.com/pypi/simple/``"
+
+#: ../../source/getting_started/troubleshooting.rst:157
+msgid "Tencent Cloud: ``https://mirrors.cloud.tencent.com/pypi/simple``"
+msgstr "腾讯云镜像:``https://mirrors.cloud.tencent.com/pypi/simple``"
+
+#: ../../source/getting_started/troubleshooting.rst:159
+msgid ""
+"However, be aware that some packages may not be available on certain "
+"mirrors. For example, if you're installing ``xinference[audio]`` using "
+"only the Aliyun mirror, the installation may fail."
 msgstr ""
-"或者你也可以卸载 conda 安装的 numpy,然后使用 pip 重新安装。"
+"但请注意,某些镜像源上可能缺少部分软件包。"
+"例如,如果你仅使用阿里云镜像安装 ``xinference[audio]``,安装可能会失败。"
 
-#: ../../source/getting_started/troubleshooting.rst:128
+#: ../../source/getting_started/troubleshooting.rst:161
+msgid ""
+"This happens because ``num2words``, a dependency used by ``MeloTTS``, is "
+"not available on the Aliyun mirror. As a result, ``pip install "
+"xinference[audio]`` will resolve to older versions like "
+"``xinference==1.2.0`` and ``xoscar==0.8.0`` (as of Oct 27, 2025)."
+msgstr ""
+"这是因为 ``MeloTTS`` 所依赖的 ``num2words`` 软件包在阿里云镜像上不可用。"
+"因此,在执行 ``pip install xinference[audio]`` 时,"
+"可能会回退安装旧版本,如 ``xinference==1.2.0`` 和 ``xoscar==0.8.0`` (截至 2025 年 10 月 27 日)。"
+
+#: ../../source/getting_started/troubleshooting.rst:163
+msgid ""
+"These older versions are incompatible and will produce the error: "
+"``MainActorPool.append_sub_pool() got an unexpected keyword argument "
+"'start_method'``"
+msgstr ""
+"这些旧版本不兼容,会导致以下错误:"
+"``MainActorPool.append_sub_pool() got an unexpected keyword argument 'start_method'``"
+
+#: ../../source/getting_started/troubleshooting.rst:174
 msgid ""
-"On a related subject, if you use vllm, do not install pytorch with conda,"
-" check "
-"https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html for "
-"detailed information."
+"To avoid this issue when installing the xinference audio package, use "
+"multiple mirrors:"
 msgstr ""
-"相关地,如果你使用 vllm,不要通过 conda 安装 pytorch,详细信息请参考:https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html 。"
+"为避免在安装 xinference 音频包时出现此问题,"
+"建议同时使用多个镜像源:"
 