@@ -7,13 +7,14 @@ and handling responses from the MinerU Vision-Language Model.
 
 ## About Backends
 
-We provides 4 different backends(deployment modes):
+We provide 6 different backends (deployment modes):
 
 1. **http-client**: An HTTP client for interacting with the OpenAI-compatible model server.
 2. **transformers**: A backend for using HuggingFace Transformers models. (slow but simple to install)
 3. **mlx-engine**: A backend for using Apple Silicon devices with macOS.
-4. **vllm-engine**: A backend for using the VLLM synchronous batching engine.
-5. **vllm-async-engine**: A backend for using the VLLM asynchronous engine. (requires async programming)
+4. **lmdeploy-engine**: A backend for using the LMDeploy engine.
+5. **vllm-engine**: A backend for using the VLLM synchronous batching engine.
+6. **vllm-async-engine**: A backend for using the VLLM asynchronous engine. (requires async programming)
 
 ## About Output Format
 
@@ -67,6 +68,12 @@ For `mlx-engine` backend, install the package with the `mlx` extra:
 pip install -U "mineru-vl-utils[mlx]"
 ```
 
+For `lmdeploy-engine` backend, install the package with the `lmdeploy` extra:
+
+```bash
+pip install -U "mineru-vl-utils[lmdeploy]"
+```
+
 > [!NOTE]
 > For using the `http-client` backend, you still need to have another
 > `vllm` (or other LLM deployment tool) environment to serve the model as an HTTP server.
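For reference, such a server can be started with vLLM's built-in OpenAI-compatible server. This is a minimal sketch, not part of the original README; the host, port, and exact flags are assumptions and may differ across `vllm` versions:

```shell
# Serve the MinerU VL model over an OpenAI-compatible HTTP API.
# Illustrative invocation; adjust flags for your vllm version and hardware.
vllm serve opendatalab/MinerU2.5-2509-1.2B --host 127.0.0.1 --port 8000
```

The `http-client` backend would then be pointed at the resulting server address.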
@@ -176,6 +183,54 @@ extracted_blocks = client.two_step_extract(image)
 print(extracted_blocks)
 ```
 
+### `lmdeploy-engine` Example
+
+For the default inference engine (currently `turbomind`):
+
+```python
+from lmdeploy.serve.vl_async_engine import VLAsyncEngine
+from mineru_vl_utils import MinerUClient
+from PIL import Image
+
+if __name__ == "__main__":
+    lmdeploy_engine = VLAsyncEngine("opendatalab/MinerU2.5-2509-1.2B")
+
+    client = MinerUClient(
+        backend="lmdeploy-engine",
+        lmdeploy_engine=lmdeploy_engine,
+    )
+
+    image = Image.open("/path/to/the/test/image.png")
+    extracted_blocks = client.two_step_extract(image)
+    print(extracted_blocks)
+```
+
+For the PyTorch inference engine and the `ascend` accelerator:
+
+```python
+from lmdeploy import PytorchEngineConfig
+from lmdeploy.serve.vl_async_engine import VLAsyncEngine
+from mineru_vl_utils import MinerUClient
+from PIL import Image
+
+if __name__ == "__main__":
+    lmdeploy_engine = VLAsyncEngine(
+        "opendatalab/MinerU2.5-2509-1.2B",
+        backend="pytorch",
+        backend_config=PytorchEngineConfig(
+            device_type="ascend",
+        ),
+    )
+
+    client = MinerUClient(
+        backend="lmdeploy-engine",
+        lmdeploy_engine=lmdeploy_engine,
+    )
+
+    image = Image.open("/path/to/the/test/image.png")
+    extracted_blocks = client.two_step_extract(image)
+    print(extracted_blocks)
+```
 
 ### `vllm-engine` Example
 