A text recognition model trained in a Python environment fails when the model files are loaded in the C++ source deployment; an exception related to Paddle_inference is raised #12358
Replies: 7 comments
-
Hello, please provide more information to help locate the problem: which model you used, what exactly the added model files are, what exception was thrown, and so on. You can check your steps against this document: https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.7/deploy/cpp_infer/readme_ch.md
-
We'd suggest trying the FastDeploy (FD) C++ deployment directly.
-
```cpp
// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.

#include <gflags/gflags.h>
// common args
// layout model related
// ocr forward related
#include <include/args.h>

using namespace PaddleOCR;

void check_params() { /* ... */ }

void ocr(std::vector<cv::String> &cv_all_img_names) {
  if (FLAGS_benchmark) { /* ... */ }
  std::vector<cv::Mat> img_list;
  // ...
  std::vector<std::vector<OCRPredictResult>> ocr_results = /* ... */;
  // ...
  Utility::print_result(ocr_results[i], res_str);
  // ...
}

void structure(std::vector<cv::String> &cv_all_img_names) {
  if (FLAGS_benchmark) { /* ... */ }
  for (int i = 0; i < cv_all_img_names.size(); i++) {
    std::vector<StructurePredictResult> structure_results = engine.structure(/* ... */);
    for (int j = 0; j < structure_results.size(); j++) {
      if (structure_results[j].type == "table") {
        // ...
      } else {
        // ...
      }
    }
  }
}

int main(int argc, char **argv) {
  if (!Utility::PathExists(FLAGS_image_dir)) { /* ... */ }
  std::vector<cv::String> cv_all_img_names;
  if (!Utility::PathExists(FLAGS_output)) { /* ... */ }
  // ...
}
```
-
Using Python it works: the model was trained with the method provided in PaddleOCR 2.6 and verified to be fine. That was on the Windows GPU version. The actual deployment, however, uses the C++ source build on the Windows CPU version.
-
Is the error coming from the GPU version or the CPU version? If it's the CPU version, it may be caused by instability in the CPU build.
-
The GPU was only used for training the model; the actual deployment runs on the CPU version. The problem occurs on the CPU side.
-
Same issue. For paddleocr release_2.6, downgrading paddle_inference to version 2.2.2 resolves it; for release_2.7, use the 2.3 version of paddle_inference.
-
Please provide the following information to quickly locate the problem:
System Environment: Windows 11
Version: PaddleOCR 2.6. The model trained in Python cannot be called from the C++ source deployment; an exception is raised.
Judging from the console output in the method below, the exception occurs right after `std::cout << "predictor_ start:" << input_names.size() << std::endl;`:
```cpp
void CRNNRecognizer::Run(std::vector<cv::Mat> img_list,
                         std::vector<std::string> &rec_texts,
                         std::vector<float> &rec_text_scores,
                         std::vector<double> &times) {
  std::chrono::duration<float> preprocess_diff =
      std::chrono::steady_clock::now() - std::chrono::steady_clock::now();
  std::chrono::duration<float> inference_diff =
      std::chrono::steady_clock::now() - std::chrono::steady_clock::now();
  std::chrono::duration<float> postprocess_diff =
      std::chrono::steady_clock::now() - std::chrono::steady_clock::now();

  int img_num = img_list.size();
  std::vector<float> width_list;
  for (int i = 0; i < img_num; i++) {
    width_list.push_back(float(img_list[i].cols) / img_list[i].rows);
  }
  std::cout << "indices" << std::endl;
  std::vector<int> indices = Utility::argsort(width_list);
  std::cout << "indices end:" << indices.size() << std::endl;

  for (int beg_img_no = 0; beg_img_no < img_num;
       beg_img_no += this->rec_batch_num_) {
    auto preprocess_start = std::chrono::steady_clock::now();
    int end_img_no = std::min(img_num, beg_img_no + this->rec_batch_num_);
    int batch_num = end_img_no - beg_img_no;
    int imgH = this->rec_image_shape_[1];
    int imgW = this->rec_image_shape_[2];
    float max_wh_ratio = imgW * 1.0 / imgH;
    for (int ino = beg_img_no; ino < end_img_no; ino++) {
      int h = img_list[indices[ino]].rows;
      int w = img_list[indices[ino]].cols;
      float wh_ratio = w * 1.0 / h;
      max_wh_ratio = std::max(max_wh_ratio, wh_ratio);
    }

    int batch_width = imgW;
    std::vector<cv::Mat> norm_img_batch;
    for (int ino = beg_img_no; ino < end_img_no; ino++) {
      cv::Mat srcimg;
      img_list[indices[ino]].copyTo(srcimg);
      cv::Mat resize_img;
      std::cout << "resize_op_ start" << std::endl;
      this->resize_op_.Run(srcimg, resize_img, max_wh_ratio,
                           this->use_tensorrt_, this->rec_image_shape_);
      std::cout << "resize_op_ end" << std::endl;
      std::cout << "normalize_op_ start" << std::endl;
      this->normalize_op_.Run(&resize_img, this->mean_, this->scale_,
                              this->is_scale_);
      std::cout << "normalize_op_ end" << std::endl;
      norm_img_batch.push_back(resize_img);
      batch_width = std::max(resize_img.cols, batch_width);
    }

    std::vector<float> input(batch_num * 3 * imgH * batch_width, 0.0f);
    std::cout << "permute_op_ start" << std::endl;
    this->permute_op_.Run(norm_img_batch, input.data());
    std::cout << "permute_op_ end" << std::endl;
    auto preprocess_end = std::chrono::steady_clock::now();
    preprocess_diff += preprocess_end - preprocess_start;

    // Inference.
    auto input_names = this->predictor_->GetInputNames();
    std::cout << "permute_op_ start" << input_names.size() << std::endl;
    auto input_t = this->predictor_->GetInputHandle(input_names[0]);
    input_t->Reshape({batch_num, 3, imgH, batch_width});
    auto inference_start = std::chrono::steady_clock::now();
    input_t->CopyFromCpu(input.data());
    std::cout << "predictor_ start:" << input_names.size() << std::endl;
    this->predictor_->Run();
    std::cout << "predictor_ end:" << input_names.size() << std::endl;

    std::vector<float> predict_batch;
    auto output_names = this->predictor_->GetOutputNames();
    auto output_t = this->predictor_->GetOutputHandle(output_names[0]);
    auto predict_shape = output_t->shape();
    int out_num = std::accumulate(predict_shape.begin(), predict_shape.end(), 1,
                                  std::multiplies<int>());
    predict_batch.resize(out_num);
    // predict_batch is the result of the last FC with softmax
    output_t->CopyToCpu(predict_batch.data());
    auto inference_end = std::chrono::steady_clock::now();
    inference_diff += inference_end - inference_start;

    // ctc decode
    auto postprocess_start = std::chrono::steady_clock::now();
    for (int m = 0; m < predict_shape[0]; m++) {
      std::string str_res;
      int argmax_idx;
      int last_index = 0;
      float score = 0.f;
      int count = 0;
      float max_value = 0.0f;
      // (decode body omitted in this paste)
    }
    auto postprocess_end = std::chrono::steady_clock::now();
    postprocess_diff += postprocess_end - postprocess_start;
  }

  times.push_back(double(preprocess_diff.count() * 1000));
  times.push_back(double(inference_diff.count() * 1000));
  times.push_back(double(postprocess_diff.count() * 1000));
}
```