Replies: 2 comments 4 replies
-
Hi, I'm just a student, but I can offer some feedback to think about and point you towards a few resources.
-
Hey @Dr-Zorg, this is right up our alley at EvoLearns. We’ve built similar pipelines for scanned documents, combining OCR, layout parsing, and sketch segmentation. We can help build a lightweight prototype that:
- Processes scanned clinic cards locally (no patient data stored)
- Extracts structured fields (diagnosis, treatment, etc.)
- Segments lesion sketches into separate image files
- Outputs everything in JSON or CSV for downstream use

If you can share a few redacted samples, we’d be happy to put together a proof of concept. Let’s connect; happy to support this important work.
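The JSON/CSV output step described above is straightforward with the standard library. A minimal sketch, assuming extracted records arrive as dictionaries (the field names and values here are hypothetical placeholders, not the actual card schema):

```python
import csv
import io
import json

# Hypothetical records as they might come out of the extraction stage.
records = [
    {"patient_id": "A001", "diagnosis": "MB leprosy", "treatment": "MDT 12 months"},
    {"patient_id": "A002", "diagnosis": "PB leprosy", "treatment": "MDT 6 months"},
]

# JSON for downstream applications (mapping, follow-up tooling).
json_blob = json.dumps(records, indent=2)

# CSV for spreadsheets and simple registries.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["patient_id", "diagnosis", "treatment"])
writer.writeheader()
writer.writerows(records)
csv_blob = buf.getvalue()
```

Either blob can then be written to disk or handed to the next stage, keeping all processing local.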
-
We are trying to create an automated way to capture all data from leprosy (Hansen's disease) patient clinic cards (images). This would allow easier patient notification, mapping, and treatment follow-up in the diverse developing settings where leprosy is still endemic.
The forms are a complex combination of structured and unstructured data, including some hand drawings of skin lesions.
The form itself is standardized, and the details just get filled in.
I am more from a medical background, so the AI learning curve is going slowly (but growing...).
My guess is that this should be very possible with initial document formatting and cleanup, proper bounding boxes and labeling, OCR, and maybe a vision-language model (VLM) for specific areas or more complex tables.
My initial goal is to demonstrate that this is possible with a working prototype, after which further development and testing are possible.

Looking for technical partners to get involved or point me in the right direction.