Inference speed on CPU
#15692
Is it normal that I get really slow inference speed on CPU?

```python
from paddleocr import PaddleOCR
from time import perf_counter


def create_model():
    return PaddleOCR(
        use_doc_orientation_classify=False,
        use_doc_unwarping=False,
        use_textline_orientation=False,
        ocr_version="PP-OCRv3",
        device="cpu",
        cpu_threads=4,
        # text_det_limit_side_len=960,
        text_det_limit_type="max",
        enable_mkldnn=True,
        lang="fr",
    )


for _ in range(8):
    model = create_model()
    t = perf_counter()
    model.predict(input="tests/data/valid/large.png")
    print(perf_counter() - t)
```

But when I run it locally I get < 0.1 s. Does anyone have any idea?

Environment:

- Python version: 3.10
- `paddleocr==3.0.1`
- `paddlepaddle==3.0.0`
Dockerfile:

```dockerfile
FROM python:3.10-bullseye AS paddle_model

COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update -qy && \
    apt-get install -qy --no-install-recommends \
        ffmpeg \
        libsm6 \
        libxext6 \
        libgl1 \
        libglib2.0-0 \
        poppler-utils \
        libopenblas-openmp-dev \
        liblapack-dev \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
```
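A side note on the benchmark itself: because `create_model()` is called inside the loop, every timed `predict()` runs on a freshly built pipeline, so each measurement may include one-off setup cost on top of pure inference. Below is a minimal sketch that reuses a single pipeline and separates a warm-up call from the timed runs; it assumes nothing beyond the parameters and image path already shown above.

```python
from time import perf_counter

from paddleocr import PaddleOCR


def create_model():
    # Same configuration as in the snippet above.
    return PaddleOCR(
        use_doc_orientation_classify=False,
        use_doc_unwarping=False,
        use_textline_orientation=False,
        ocr_version="PP-OCRv3",
        device="cpu",
        cpu_threads=4,
        text_det_limit_type="max",
        enable_mkldnn=True,
        lang="fr",
    )


IMAGE = "tests/data/valid/large.png"  # path taken from the original snippet

model = create_model()

# Warm-up: the first call on a fresh pipeline may include one-off setup costs.
model.predict(input=IMAGE)

# Steady-state timing on the same, reused pipeline.
for i in range(8):
    t = perf_counter()
    model.predict(input=IMAGE)
    print(f"run {i}: {perf_counter() - t:.3f} s")
```

Comparing those steady-state numbers inside and outside Docker should make it clearer whether the gap comes from the container environment or from per-call setup.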
Answered by mitchou10 on Jun 12, 2025
Replies: 1 comment
I resolved it by only using values of `cpu_threads` where `cpu_threads % 2 == 0` (an even number of threads):

```python
from paddleocr import PaddleOCR
from time import perf_counter


def create_model():
    return PaddleOCR(
        use_doc_orientation_classify=False,
        use_doc_unwarping=False,
        use_textline_orientation=False,
        ocr_version="PP-OCRv5",
        device="cpu",
        cpu_threads=4,
        text_det_limit_type="max",
        enable_mkldnn=True,
    )
```
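If the even-thread-count observation holds for your setup, one option is to derive `cpu_threads` from the machine's core count and round it down to an even value. The `even_cpu_threads` helper in this sketch is hypothetical (not part of PaddleOCR); only the constructor arguments are the ones shown above.

```python
import os

from paddleocr import PaddleOCR


def even_cpu_threads(max_threads: int | None = None) -> int:
    """Pick an even number of CPU threads, per the workaround above."""
    n = os.cpu_count() or 2
    if max_threads is not None:
        n = min(n, max_threads)
    # Round down to the nearest even value, but never below 2.
    return max(2, n - (n % 2))


model = PaddleOCR(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    use_textline_orientation=False,
    ocr_version="PP-OCRv5",
    device="cpu",
    cpu_threads=even_cpu_threads(max_threads=4),
    text_det_limit_type="max",
    enable_mkldnn=True,
)
```

Keep in mind that inside a container `os.cpu_count()` reports the host's CPU count rather than any CPU limit applied to the container, which is why the sketch allows an explicit cap.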