</div>
# Changelog
- 2025/11/26 2.6.5 Release
  - Added support for a new backend, `vlm-lmdeploy-engine`. Its usage is similar to `vlm-vllm-engine`/`vlm-vllm-async-engine`, but it uses LMDeploy as the inference engine and, unlike vLLM, also supports native inference acceleration on Windows.
- 2025/11/04 2.6.4 Release
  - Added a timeout configuration for PDF image rendering (default: 300 seconds), configurable via the environment variable `MINERU_PDF_RENDER_TIMEOUT`, to prevent abnormal PDF files from blocking the rendering process for long periods.
  - Added CPU thread count configuration options for ONNX models (default: the system CPU core count), configurable via the environment variables `MINERU_INTRA_OP_NUM_THREADS` and `MINERU_INTER_OP_NUM_THREADS`, to reduce CPU resource contention in high-concurrency scenarios.
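The two 2.6.4 options above are plain environment variables, so they can be set in the shell before launching MinerU. The values below (a 120-second timeout, 4 intra-op / 1 inter-op threads) and the commented `mineru` invocation are illustrative, not defaults:

```shell
# Cap PDF rendering at 120 seconds instead of the 300-second default.
export MINERU_PDF_RENDER_TIMEOUT=120
# Pin the ONNX Runtime thread pools to limit CPU contention when
# several workers run concurrently on the same host.
export MINERU_INTRA_OP_NUM_THREADS=4
export MINERU_INTER_OP_NUM_THREADS=1
# mineru -p input.pdf -o output/   # then launch MinerU as usual
```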
<sup>2</sup> Linux supports only distributions released in 2019 or later.

<sup>3</sup> MLX requires macOS 13.5 or later; version 14.0 or higher is recommended.

<sup>4</sup> Windows vLLM support is available via WSL2 (Windows Subsystem for Linux).

<sup>5</sup> On Windows, LMDeploy can only use the `turbomind` backend, which is slightly slower than the `pytorch` backend. If performance is critical, it is recommended to run it via WSL2.

<sup>6</sup> Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.

<sup>7</sup> Windows + LMDeploy only supports Python versions 3.10–3.12, as the critical dependency `ray` does not yet support Python 3.13 on Windows.
### Install MinerU
```
uv pip install -e .[core]
```
> [!TIP]
> `mineru[core]` includes all core features except `vLLM`/`LMDeploy` acceleration, is compatible with Windows / Linux / macOS systems, and is suitable for most users.
> If you need `vLLM`/`LMDeploy` acceleration for VLM model inference, or want to install a lightweight client on edge devices, please refer to the [Extension Modules Installation Guide](https://opendatalab.github.io/MinerU/quick_start/extension_modules/).
```dockerfile
# Base image containing the vLLM inference environment, requiring amd64(x86-64) CPU + metax GPU.
FROM cr.metax-tech.com/public-ai-release/maca/vllm:maca.ai3.1.0.7-torch2.6-py310-ubuntu22.04-amd64
# Base image containing the LMDeploy inference environment, requiring amd64(x86-64) CPU + metax GPU.
# FROM crpi-vofi3w62lkohhxsp.cn-shanghai.personal.cr.aliyuncs.com/opendatalab-mineru/maca:maca.ai3.1.0.7-torch2.6-py310-ubuntu22.04-lmdeploy0.10.2-amd64

# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
    apt-get install -y \
        fonts-noto-core \
        fonts-noto-cjk \
        fontconfig \
        libgl1 && \
    fc-cache -fv && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# mod torchvision to be compatible with torch 2.6
RUN sed -i '3s/^Version: 0.15.1+metax3\.1\.0\.4$/Version: 0.21.0+metax3.1.0.4/' /opt/conda/lib/python3.10/site-packages/torchvision-0.15.1+metax3.1.0.4.dist-info/METADATA && \
```
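The final `RUN` (shown truncated) rewrites torchvision's installed package metadata in place so dependency resolvers see version 0.21.0 rather than 0.15.1. The effect of that `sed` expression can be checked in isolation against a mock METADATA file; the `/tmp/tv-demo` path is purely for this demo:

```shell
# Build a minimal three-line METADATA file carrying the old version string.
mkdir -p /tmp/tv-demo
printf 'Metadata-Version: 2.1\nName: torchvision\nVersion: 0.15.1+metax3.1.0.4\n' > /tmp/tv-demo/METADATA
# Apply the same substitution the Dockerfile uses: rewrite the Version on line 3.
sed -i '3s/^Version: 0.15.1+metax3\.1\.0\.4$/Version: 0.21.0+metax3.1.0.4/' /tmp/tv-demo/METADATA
grep '^Version:' /tmp/tv-demo/METADATA   # → Version: 0.21.0+metax3.1.0.4
```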