Description
I am currently trying to deploy FastDeploy on a P800. Once the framework is running, deploying the oss 120B model fails with the error below. Is this an environment / dependency installation problem, or is the model itself not supported?
```
ValueError: Unsupported model source: please choose one of ['MODELSOPE', 'AISTUDIO', 'HUGGINGFACE']
```

Launch command and full log:

```
root@zteserver:/Work# export FD_MODEL_SOURCE=MODELSOPE && python -m fastdeploy.entrypoints.openai.api_server \
    --model unsloth/gpt-oss-20b-BF16 \
    --port 8180 \
    --metrics-port 8181 \
    --engine-worker-queue-port 8182 \
    --max-num-seq 32
XCCL /usr/local/lib/python3.10/dist-packages/paddle/base/../libs/libbkcl.so loaded
/usr/local/lib/python3.10/dist-packages/paddle/utils/cpp_extension/extension_utils.py:718: UserWarning: No ccache found. Please be aware that recompiling all source files may be required. You can download and install ccache from: https://github.com/ccache/ccache/blob/master/doc/INSTALL.md
warnings.warn(warning_message)
INFO 2025-12-19 07:58:07,750 1616 api_server.py[line:86] Number of api-server workers: 1.
Downloading Model from https://www.modelscope.cn to directory: /root/unsloth/gpt-oss-20b-BF16
Downloading [configuration.json]: 100%
Downloading [generation_config.json]: 100%
Downloading [chat_template.json]: 100%
Downloading [model-00003-of-00009.safetensors]: 100%
Downloading [model-00005-of-00009.safetensors]: 100%
Downloading [model.safetensors.index.json]: 100%
Downloading [README.md]: 100%
Downloading [special_tokens_map.json]: 100%
Downloading [tokenizer_config.json]: 100%
Downloading [model-00002-of-00009.safetensors]: 100%
Downloading [model-00006-of-00009.safetensors]: 100%
Downloading [model-00009-of-00009.safetensors]: 100%
Downloading [model-00007-of-00009.safetensors]: 100%
Downloading [model-00004-of-00009.safetensors]: 100%
Downloading [model-00008-of-00009.safetensors]: 100%
Processing 18 items: 100%
[2025-12-19 08:06:07,656] INFO - Download model 'unsloth/gpt-oss-20b-BF16' successfully.
[2025-12-19 08:06:07,658] INFO - Using download source: huggingface
[2025-12-19 08:06:07,658] WARNING - You are using a model of type gpt_oss to instantiate a model of type . This is not supported for all configurations of models and can yield errors. In the future, this condition will fail. Register concerns at https://github.com/pypa/setuptools/issues/new?template=distutils_deprecation.yml
[2025-12-19 08:06:07,659] WARNING - import noaux.tc Failed! None of PyTorch, TensorFlow >= 2.0, Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
[2025-12-19 08:06:08,661] INFO - Using download source: huggingface
[2025-12-19 08:06:08,661] INFO - Loading configuration file /root/unsloth/gpt-oss-20b-BF16/generation_config.json
[2025-12-19 08:06:08,670] INFO - Using download source: huggingface
INFO 2025-12-19 08:06:10,300 1616 engine.py[line:144] Waiting for worker processes to be ready...
ERROR 2025-12-19 08:06:18,319 1616 engine.py[line:153] Failed to launch worker processes, check log/workerlog.* for more details.
ERROR 2025-12-19 08:06:28,117 1616 engine.py[line:424] Error extracting sub services: [Errno 3] No such process, Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/fastdeploy/engine/engine.py", line 421, in _exit_sub_services
pgid = os.getpgid(self.worker_proc.pid)
ProcessLookupError: [Errno 3] No such process
root@zteserver:/Work# /usr/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 2 leaked shared_memory objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d leaked shared_memory objects to clean up at shutdown' % len(self._shared_memory))
```
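
For reference, the relaunch I would try next is sketched below. This is only a sketch under an assumption: the export above sets `FD_MODEL_SOURCE=MODELSOPE`, while the download in the log comes from https://www.modelscope.cn, so I assume the intended value is `MODELSCOPE` (the exact accepted spelling should be confirmed against the FastDeploy version in use). All other flags are copied unchanged from the failing command.

```bash
# Sketch of a relaunch; "MODELSCOPE" is an assumed value (the command above
# used "MODELSOPE"), and every other flag is taken verbatim from the failing run.
export FD_MODEL_SOURCE=MODELSCOPE
python -m fastdeploy.entrypoints.openai.api_server \
    --model unsloth/gpt-oss-20b-BF16 \
    --port 8180 \
    --metrics-port 8181 \
    --engine-worker-queue-port 8182 \
    --max-num-seq 32
```

Either way, `log/workerlog.*` (as suggested by the engine.py error above) should contain the actual worker-side failure.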