Replies: 19 comments 30 replies
-
Could I ask whether the red-light status for the vLLM framework on NPU (in the CLI) refers to accuracy? I've already adapted an older MinerU version that calls do_parse with the vlm-vllm-engine backend; will that affect accuracy?
-
How can I get MinerU to run across multiple Ascend 910B NPU cards?
-
Sorry to bother, but is there any viable way to run this on 910A devices?
-
Can 310P3 devices be supported?
-
Is the Atlas 300V Pro supported? Are there any launch parameters to watch out for?
-
Is the Hygon K100 supported?
-
Is vlm-transformers supported? Dual-card vLLM didn't work for me, so to use two cards I switched the backend to vlm-transformers, but parsing still hits the BFloat16 problem even though I added --dtype float16 to the command.

```
root@ununtu:/workspace# mineru-gradio --server-name 0.0.0.0 --server-port 7860 --enforce-eager
onnxruntime cpuid_info warning: Unknown CPU vendor. cpuinfo_vendor value: 15
```
-
Are there any performance optimizations on the Ascend platform? @Yikun
-
The base image for the PPU build can't be downloaded. What could be the reason?

```dockerfile
FROM crpi-vofi3w62lkohhxsp.cn-shanghai.personal.cr.aliyuncs.com/opendatalab-mineru/ppu:ppu-pytorch2.6.0-ubuntu24.04-cuda12.6-vllm0.8.5-py312
```
-
services:
-
Docker on Ascend 910B: the same model is deployed twice, one instance in a notebook and one as an AI service, parsing the same document. Calling OCR from the notebook via a Python command works correctly, but when a Python program on the AI server calls it, some paragraphs in the result come back as exclamation marks and a bit of garbled text. What could cause this? The command is of the form `mineru -p input.pdf -o output_dir`, using the vlm-http-client mode. On NVIDIA everything is normal; on Ascend, the command-line call from the notebook is normal while the Python-program call produces abnormal paragraphs with exclamation marks and other mojibake. Notebook invocation: MinerU version 2.6.5, VLM service enabled, official image.
-
Deploying MinerU in A3 NPU mode to parse documents, the Docker logs report that NNAL/ATB is missing. What's going on? I updated the Dockerfile following the tutorial at https://opendatalab.github.io/MinerU/zh/usage/acceleration_cards/Ascend/; the additions are for caching, changed to the following:

```dockerfile
FROM quay.m.daocloud.io/ascend/vllm-ascend:v0.11.0-a3
ENV PIP_INDEX_URL=https://mirrors.aliyun.com/pypi/simple
RUN apt-get update && RUN python3 -m pip install -U pip -i $PIP_INDEX_URL &&
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec "$@"", "--"]
```

The run command is below; I want to parse on the NPU and run it via Gradio:

```bash
docker run -d --name mineru --privileged --ipc=host --restart unless-stopped \
  -p 7860:7860 \
  --device=/dev/davinci6 --device=/dev/davinci7 \
  --device=/dev/davinci_manager --device=/dev/devmm_svm --device=/dev/hisi_hdc \
  -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
  -v /var/log/npu/:/usr/slog \
  -v /data:/data -v /output:/output \
  -v /newcache/modelscope:/root/.cache/modelscope \
  -v /newcache/hf:/root/.cache/huggingface \
  -v /newcache/mineru.json:/etc/trae-mineru.json:ro \
  -e ASCEND_RT_VISIBLE_DEVICES=6,7 \
  -e MINERU_MODEL_SOURCE=local \
  -e MINERU_DEVICE_MODE=npu \
  -e MINERU_TOOLS_CONFIG_JSON=/etc/trae-mineru.json \
  -e LD_LIBRARY_PATH=/usr/local/Ascend/driver/lib64:$LD_LIBRARY_PATH \
  -e TORCH_DEVICE_BACKEND_AUTOLOAD=0 \
  mineru:npu-vllm-a3 \
  mineru-gradio --server-name 0.0.0.0 --server-port 7860
```

What could be causing this? Is there a problem with my modified Dockerfile?
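As an aside, and independent of the NNAL/ATB question itself, the Dockerfile quoted in the question has two shell-level syntax problems: `RUN apt-get update && RUN ...` chains the `RUN` keyword itself into the shell command (and ends with a dangling `&&`), and the exec-form `ENTRYPOINT` nests unescaped double quotes. A minimal corrected sketch, assuming the same base image and mirror:

```dockerfile
# Sketch only: reuses the base image and pip mirror from the question,
# fixing the RUN chaining and the ENTRYPOINT quoting.
FROM quay.m.daocloud.io/ascend/vllm-ascend:v0.11.0-a3

ENV PIP_INDEX_URL=https://mirrors.aliyun.com/pypi/simple

# One RUN per layer; "&&" joins shell commands, never a second RUN keyword.
RUN apt-get update && \
    python3 -m pip install -U pip -i "$PIP_INDEX_URL"

# Exec form is a JSON array, so inner quotes must be escaped; the trailing
# "--" makes the container's command line arrive as "$@" inside bash -c.
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]
```

Whether this resolves the missing NNAL/ATB libraries is a separate question about the base image contents; the sketch only removes the syntax errors.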
-
Hello developers, I'd like to ask: I'm currently running dual Cambricon MLU370-X8 cards, while the latest official support targets the MLU590-M9D. Does the current version support the MLU370, and if not, is there a plan to adapt the MLU370 next?
-
I've deployed MinerU on a 310P. It apparently doesn't support data parallelism: the second request raises an error and the service crashes. So I wrote my own FastAPI service (skeleton below, with bodies omitted as posted), but it is extremely slow, twenty to thirty seconds per page of images. Where is the problem?

```python
import asyncio
import os
import time
from contextlib import asynccontextmanager
from pathlib import Path
from typing import Optional

from fastapi import FastAPI, UploadFile, File, Form, HTTPException
from loguru import logger

# ============ Logging setup ============
LOG_DIR = Path(os.getenv("MINERU_LOG_DIR", "/mnt/share/lzr/code/CV/MinerU/claude_test/logs"))
logger.remove()

# ============ MinerU components ============
from mineru.cli.common import convert_pdf_bytes_to_bytes_by_pypdfium2

# ============ Configuration ============
VLM_SERVER_URL = os.getenv("MINERU_VLM_SERVER_URL", "http://127.0.0.1:30000")

# File type definitions
PDF_SUFFIXES = ["pdf"]

# Global semaphore
_request_semaphore: Optional[asyncio.Semaphore] = None

def convert_to_pdf_bytes(file_bytes: bytes, suffix: str) -> bytes: ...
def get_timestamp() -> str: ...
def format_duration(seconds: float) -> str: ...

@asynccontextmanager
...

app = FastAPI(...)

async def process_single_file(...): ...

@app.post("/parse")
...

@app.post("/parse/batch")
...

@app.get("/health")
...

if __name__ == "__main__":
    ...
```
-
We've added support for several domestic accelerator platforms, including the following:
You can click the links to jump to the documentation page with each platform's detailed deployment guide.
Adapting domestic hardware is complex and challenging work. We have done our best to ensure compatibility, functional completeness, and runtime stability, but individual scenarios may still show stability issues, compatibility differences, or accuracy-alignment deviations. Before use, please consult the "traffic light" status indicators on the documentation pages and choose a platform and use case that fit your needs.
If you run into a problem the documentation doesn't cover, feel free to report it in this thread; that will help other users locate and resolve issues quickly.
📌 Before posting, please note:
1. Browse the existing replies first to check whether the same question has already been discussed;
2. If it is a new issue, post it as a separate comment so it can be tracked and referenced later.
Thanks for your understanding and support!