Hello,
I am new to this project. I am working on creating a large-scale open-source image dataset with OCR annotations, and I am wondering whether PaddlePaddle can generate bounding boxes for multiple types of text using a single model. Given an image containing both handwritten and printed text, I would like to get bounding boxes for both, and then recognize the text within those boxes using several different models. Ideally, detection would cover all available languages and text types: printed documents, natural scenes, handwriting, handwritten math, LaTeX equations, and so on.
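
For context, the pipeline I have in mind looks roughly like the sketch below. It assumes PaddleOCR's 2.x Python API for detection-only inference; the `classify_region()` routing step and the per-type recognizers are placeholders I made up, not existing PaddleOCR components.

```python
# Minimal sketch of the intended pipeline, assuming PaddleOCR's 2.x Python API
# (pip install paddlepaddle paddleocr opencv-python). classify_region() and the
# per-type recognizers are hypothetical placeholders, not PaddleOCR features.
import cv2
import numpy as np
from paddleocr import PaddleOCR

# One detector instance; recognition is skipped at call time (rec=False).
detector = PaddleOCR(use_angle_cls=False, lang="en")

def detect_regions(image_path):
    """Return 4-point polygons for every detected text region."""
    result = detector.ocr(image_path, det=True, rec=False, cls=False)
    # In recent PaddleOCR releases the result is nested one level per image.
    return result[0] if result else []

def crop_region(image, polygon):
    """Axis-aligned crop around a detected polygon (ignores skew for brevity)."""
    x, y, w, h = cv2.boundingRect(np.array(polygon, dtype=np.int32))
    return image[y:y + h, x:x + w]

def classify_region(crop):
    """Hypothetical router: decide printed / handwritten / math / ... per crop."""
    return "printed"

if __name__ == "__main__":
    path = "page.jpg"
    image = cv2.imread(path)
    for polygon in detect_regions(path):
        crop = crop_region(image, polygon)
        kind = classify_region(crop)
        # A type-specific recognizer would run on `crop` here.
        print(kind, polygon)
```

So the core question is whether one detection model can reliably produce the boxes for all of these text types, with the recognition step swapped out per region.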
Best,
Chris