
Concerns Regarding Training Duration of Vectorization Method #1

@IoanaVoica20

Description


Hello!
I have been experimenting with training a vectorization method using the provided parameters in the 'marked-clean.yaml' configuration file. However, I've noticed that the loss seems to remain constant.

Additionally, I'm unsure how the loss is aggregated across the 8 GPUs on my school's server. It's unclear whether the loss displayed during training is the sum of the losses from all GPUs or their mean.

Could you share the expected training duration for this vectorization method, and how many epochs it takes?

Thanks in advance for your help!
