https://github.com/cuda-mode/ring-attention/blob/65f904c812eae6e0c75185c49c8eab5717c25343/ring-llama/modeling_llama.py#L669
While it is acceptable that ring flash attn may lag behind flash attn in terms of speed, I am wondering whether ring flash attn should at least outperform flash attn in terms of memory use?
Yet it has been observed that ring flash attn can underperform flash attn in both aspects (zhuzilin/ring-flash-attention#23), which does not seem reasonable.
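
For reference, here is a rough back-of-envelope sketch of the per-device activation memory I would expect (the shapes and helper names below are my own assumptions for illustration, not taken from the repo): since ring attention shards the sequence across ranks, each rank should only hold its local Q/K/V shard plus one in-flight remote K/V block during the ring pass, so peak memory per device should be roughly `1/world_size` of what single-device flash attn needs for the full sequence.

```python
# Back-of-envelope estimate (assumed sizes, batch size 1, fp16/bf16 = 2 bytes).

def qkv_bytes(seq_len, n_heads, head_dim, dtype_bytes=2):
    """Bytes for the Q, K and V tensors of one attention layer."""
    return 3 * seq_len * n_heads * head_dim * dtype_bytes

def ring_attn_peak_bytes(seq_len, world_size, n_heads, head_dim, dtype_bytes=2):
    """Local Q/K/V shard plus one remote K/V block buffered while it rotates around the ring."""
    shard_len = seq_len // world_size
    local = qkv_bytes(shard_len, n_heads, head_dim, dtype_bytes)
    remote_kv = 2 * shard_len * n_heads * head_dim * dtype_bytes
    return local + remote_kv

if __name__ == "__main__":
    seq_len, n_heads, head_dim, world_size = 32768, 32, 128, 8
    full = qkv_bytes(seq_len, n_heads, head_dim)
    ring = ring_attn_peak_bytes(seq_len, world_size, n_heads, head_dim)
    print(f"flash attn (full sequence): {full / 2**20:.1f} MiB per device")
    print(f"ring attn (sharded)       : {ring / 2**20:.1f} MiB per device")
```

Under this estimate the ring attn per-device footprint should be clearly smaller, which is why the measurements in the linked issue look surprising to me.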