I have already tried every possibility and I cannot get this to work.
Can someone help me?
I trained a model using rec_svtrnet.yml on Paddle 2.6.2 (on version 3.0.0 only training works; inference requires the model to be registered, which is not the case for SVTR).
Here is the pre/post-processing section of the exported inference.yml:
PreProcess:
  transform_ops:
    - DecodeImage:
        channel_first: false
        img_mode: BGR
    - CTCLabelEncode: null
    - SVTRRecResizeImg:
        image_shape:
          - 3
          - 64
          - 256
        padding: false
    - KeepKeys:
        keep_keys:
          - image
          - label
          - length
PostProcess:
  name: CTCLabelDecode
  character_dict:
    - A
    - B
    - C
    - D
    - E
    - F
    - G
    - H
    - I
    - J
    - K
    - L
    - M
    - N
    - O
    - P
    - Q
    - R
    - S
    - T
    - U
    - V
    - W
    - X
    - Y
    - Z
    - '0'
    - '1'
    - '2'
    - '3'
    - '4'
    - '5'
    - '6'
    - '7'
    - '8'
    - '9'
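For reference, the SVTRRecResizeImg step above amounts (as far as I can tell) to resizing to 64x256 without padding, scaling pixel values to [-1, 1], and moving channels first. A minimal numpy sketch of that transform — my own reconstruction, not PaddleOCR's exact code, with nearest-neighbour resize standing in for cv2 interpolation — might look like:

```python
import numpy as np

def svtr_resize_norm(img: np.ndarray, image_shape=(3, 64, 256)) -> np.ndarray:
    """Approximate SVTRRecResizeImg: resize to (H, W) without padding,
    scale from [0, 255] to [-1, 1], and transpose HWC -> CHW.

    `img` is an HWC uint8 BGR image, matching the DecodeImage step above.
    Nearest-neighbour indexing replaces cv2.resize so this sketch needs
    only numpy; PaddleOCR itself uses cv2 interpolation here.
    """
    _, out_h, out_w = image_shape
    in_h, in_w = img.shape[:2]
    # Nearest-neighbour source indices for the target grid.
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    resized = img[rows[:, None], cols[None, :]]
    # HWC -> CHW, then normalize to [-1, 1].
    chw = resized.transpose(2, 0, 1).astype("float32") / 255.0
    return (chw - 0.5) / 0.5

# Example: a dummy 32x100 BGR image.
dummy = np.random.randint(0, 256, (32, 100, 3), dtype=np.uint8)
out = svtr_resize_norm(dummy)
print(out.shape)  # (3, 64, 256)
```

If the inference side resizes or normalizes differently (e.g. pads instead of stretching, or uses a different value range), the model will see inputs it was never trained on, which matches the symptom described below.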
Well, I trained my model using this script:
[train.py]
[rec_svtrnet.yml]
After training (which reached acc: 1.0), I exported my model with this script (export.sh):
export FLAGS_enable_pir_api=0
python3 /PaddleOCR/tools/export_model.py \
    -c /config/rec_svtrnet.yml \
    -o Global.pretrained_model=/output/rec/train1/best_accuracy \
       Global.save_inference_dir=/output/inference

It generated the files listed below:
[inference.yml]
Then I ran inference with the following bash script:

python /PaddleOCR/tools/infer/predict_rec.py \
    --image_dir="/data/teste" \
    --rec_model_dir="/output/inference/" \
    --rec_char_dict_path="/config/placas_dict.txt" \
    --rec_image_shape="3, 64, 256"

and the result is text completely different from the ground-truth text.
I cannot understand why.
It seems that the pipeline and/or pre/post-processing running at inference differs from the one running during training.
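One mismatch worth ruling out is between the character_dict baked into the exported PostProcess and the file passed via --rec_char_dict_path: with CTC decoding, any difference in content or order shifts every label index and yields plausible-looking but wrong text. A small self-contained check (the file names placas_dict.txt and inference.yml are the ones from the commands above; the data here is inlined for illustration) could be:

```python
def compare_dicts(train_chars, infer_chars):
    """Compare the character list from the exported PostProcess config with
    the lines of the dict file passed to predict_rec.py. Returns a list of
    human-readable problems; empty means the dicts agree.
    """
    problems = []
    if len(train_chars) != len(infer_chars):
        problems.append(f"length differs: {len(train_chars)} vs {len(infer_chars)}")
    for i, (a, b) in enumerate(zip(train_chars, infer_chars)):
        if a != b:
            problems.append(f"index {i}: {a!r} != {b!r}")
    return problems

# The list from the PostProcess section above: A-Z then 0-9.
train_chars = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + [str(d) for d in range(10)]
# Simulate a dict file that lists digits first -- every CTC index shifts.
shuffled = [str(d) for d in range(10)] + [chr(c) for c in range(ord("A"), ord("Z") + 1)]

print(compare_dicts(train_chars, train_chars))  # [] -> dicts agree
print(len(compare_dicts(train_chars, shuffled)) > 0)  # True -> mismatch detected
```

To run this against the real files, load one list from the character_dict entries in inference.yml and the other with open("/config/placas_dict.txt").read().splitlines().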