CUDA out of memory #15

@Terry10086

Description

Thank you for your impressive work! However, I trained it on an NVIDIA RTX 3090 with the batch size reduced to 2 in womask_pet.conf, and it still runs out of memory. Is anything wrong with my configuration parameters? How much memory does the model need with the default batch size of 2048?

Exception has occurred: RuntimeError
CUDA out of memory. Tried to allocate 1.50 GiB (GPU 0; 23.70 GiB total capacity; 20.47 GiB already allocated; 587.56 MiB free; 21.03 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
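Not an official fix from the repository, but the allocator hint at the end of the traceback can be tried as a first step. This is a sketch: `max_split_size_mb:128` is an assumed value, and the setting only helps when the failure is due to fragmentation (reserved memory much larger than allocated memory), not total usage.

```python
import os

# PYTORCH_CUDA_ALLOC_CONF is read when the CUDA caching allocator starts,
# so set it before importing torch (or export it in the shell before
# launching training). 128 MiB is an illustrative value, not one
# recommended by the NeuS authors.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

Equivalently, `export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` before running the training script. If memory usage is genuinely above 24 GiB, this will not help and the batch size (or network/sampling size in the config) has to come down instead.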
