PaddleOCRVL llama-cpp-server error #17815

@RainLoo

Description

🔎 Search before asking

  • I have searched the PaddleOCR Docs and found no similar bug report.
  • I have searched the PaddleOCR Issues and found no similar bug report.
  • I have searched the PaddleOCR Discussions and found no similar bug report.

🐛 Bug (Problem Description)

The official documentation states that llama-cpp-server is supported, but calling it from Python raises an error:

Invalid backend for the VL recognition module: llama-cpp-server. Supported values are ['native', 'vllm-server', 'sglang-server', 'fastdeploy-server', 'mlx-vlm-server'].

🏃‍♂️ Environment (Runtime Environment)

Name: paddleocr
Version: 3.4.0

🌰 Minimal Reproducible Example

```python
from paddleocr import PaddleOCRVL

pipeline = PaddleOCRVL(
    vl_rec_backend="llama-cpp-server",
    vl_rec_server_url="http://127.0.0.1:8111/v1",
    vl_rec_api_model_name="PaddlePaddle/PaddleOCR-VL-1.5-GGUF",
)
```
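As a stopgap until the backend is accepted, the requested value can be validated up front against the list from the error message, so the mismatch surfaces before any server connection is attempted. A minimal sketch: `check_vl_rec_backend` is a hypothetical helper, and the supported list below is copied verbatim from the version 3.4.0 error message, so it may differ in other releases.

```python
# Backends accepted by PaddleOCRVL as of paddleocr 3.4.0, copied from the
# error message above (hypothetical helper; the list may change per release).
SUPPORTED_VL_REC_BACKENDS = [
    "native",
    "vllm-server",
    "sglang-server",
    "fastdeploy-server",
    "mlx-vlm-server",
]

def check_vl_rec_backend(backend: str) -> str:
    """Raise early with a clear message if the backend is not supported."""
    if backend not in SUPPORTED_VL_REC_BACKENDS:
        raise ValueError(
            f"Invalid backend for the VL recognition module: {backend}. "
            f"Supported values are {SUPPORTED_VL_REC_BACKENDS}."
        )
    return backend
```

With this check, passing "llama-cpp-server" fails immediately with the same message the pipeline produces, which confirms the value is rejected by the installed version rather than by the server.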
