I am encountering out-of-memory (OOM) errors during GRPO training on H100 (80 GB) GPUs.
My understanding is that the model is small enough that it does not need to be explicitly sharded, so a full replica should fit on each of the 4 GPUs. However, OOM still occurs during GRPO training. Is there something I might be missing in the configuration or setup?
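To make the question concrete, below is a minimal sketch of the kind of memory-related knobs I am asking about. It assumes TRL's `GRPOTrainer`/`GRPOConfig`; the model id, dataset, reward function, and parameter values are placeholders for illustration, not my actual setup.

```python
# Minimal sketch of a memory-conscious GRPO setup, assuming TRL's GRPOTrainer/GRPOConfig.
# Model id, dataset, reward function, and values below are placeholders.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Tiny placeholder dataset with the "prompt" column that GRPOTrainer expects.
train_dataset = Dataset.from_dict({"prompt": ["Write a haiku about GPUs."] * 32})

def reward_len(completions, **kwargs):
    # Placeholder reward: prefer shorter completions.
    return [-float(len(c)) for c in completions]

training_args = GRPOConfig(
    output_dir="grpo-oom-debug",
    per_device_train_batch_size=2,   # small per-GPU batch; must be compatible with num_generations
    gradient_accumulation_steps=8,   # recover effective batch size without extra activation memory
    gradient_checkpointing=True,     # trade compute for activation memory
    bf16=True,
    num_generations=2,               # fewer completions per prompt -> smaller generation buffers
    max_prompt_length=512,
    max_completion_length=256,       # long completions are a common OOM source in GRPO
    # use_vllm=True,                 # optionally offload generation to vLLM if it is installed
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # placeholder model id
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```

If it matters, I would launch something like this across the 4 GPUs with `accelerate launch --num_processes 4`; I am not sure whether an additional DeepSpeed ZeRO or FSDP configuration for optimizer/reference-model states is expected to be necessary on top of that.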
Could you clarify the minimum GPU requirements for GRPO training (e.g., number of GPUs and memory per GPU)? Any guidance or suggestions would be greatly appreciated.