Are image segmentations necessary for training? #14

@precondition

I'm interested in leveraging features learned on Japanese manga for transfer learning on the problem of detecting individual characters in a dataset of handwritten Japanese. While looking through the train_*.py scripts, I saw that you provide a train_mask_dir argument. Since one of the outputs of comic-text-detector is an image mask, it makes sense that the model was trained on image segmentation annotations, but I'm only interested in the text block detection module. How can I take a model pretrained on comics and then fine-tune it with my dataset?
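To make the question concrete, here is the kind of workflow I have in mind, sketched in PyTorch. All names here (TextDetector, seg_head, det_head, the checkpoint path) are hypothetical placeholders standing in for whatever the repo actually uses, not its real API:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the model: a shared backbone with a
# text-block detection head and a segmentation (mask) head.
class TextDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 16)   # placeholder feature extractor
        self.det_head = nn.Linear(16, 4)   # text-block boxes
        self.seg_head = nn.Linear(16, 1)   # pixel mask output (not needed)

    def forward(self, x):
        feats = torch.relu(self.backbone(x))
        return self.det_head(feats)

# 1. Load the pretrained weights. Here a fresh state_dict stands in for
#    torch.load("comic_pretrained.pt") since the path is hypothetical.
state = TextDetector().state_dict()

# 2. Keep only backbone + detection weights and drop the mask head,
#    since my fine-tuning dataset has no segmentation annotations.
filtered = {k: v for k, v in state.items() if not k.startswith("seg_head.")}

# 3. Load into a fresh model; strict=False tolerates the missing mask-head
#    keys, which stay randomly initialized (and unused).
model = TextDetector()
missing, unexpected = model.load_state_dict(filtered, strict=False)

# 4. Freeze the backbone and fine-tune just the detection head.
for p in model.backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

Is something along these lines possible with the released checkpoints, or does the training code assume mask annotations are always present?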

Related to this, the README mentions that “All models were trained on around 13 thousand anime & comic style images, 1/3 from Manga109-s”. Does this mean that the entirety of Manga109-s was used during training and made up a third of the overall training set, or that you took only one third of Manga109-s and used that subsample in your training data? I ask because Manga109-s does not provide image segmentation annotations. Did you use only the bounding box annotations, or did you make use of the Manga109 segmentation annotations produced in the paper Unconstrained Text Detection in Manga?
