I'm aware that previous issues have been raised about `RuntimeError: CUDA out of memory`, but is there a way to avoid this error by letting the code run longer while using less memory? Can `PYTORCH_CUDA_ALLOC_CONF` be used with DiscoArt? If so, how?
```
GPU 0; 15.90 GiB total capacity; 9.86 GiB already allocated; 1.41 GiB free; 13.59 GiB reserved in total by PyTorch
```
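For context, here is a minimal sketch of how `PYTORCH_CUDA_ALLOC_CONF` can be set in general (this is a standard PyTorch environment variable, not a DiscoArt-specific API). It is read when PyTorch first initializes the CUDA caching allocator, so it must be set before `torch` (and therefore `discoart`) is imported; the `128` MiB split size below is only an illustrative value:

```python
import os

# Must be set BEFORE torch initializes the CUDA allocator,
# i.e. before the first `import torch` / `import discoart`.
# max_split_size_mb limits the largest block the caching allocator
# will split, which can reduce fragmentation-related OOMs.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch              # import only after setting the variable
# from discoart import create
```

Whether this helps depends on whether the OOM is caused by fragmentation (lots of reserved-but-unallocated memory, as in the stats above) rather than the model simply not fitting in 16 GiB.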
- DiscoArt Version: 0.12.0