Code of Conduct
Search before asking
Describe the feature
Currently, the overlapping decompression mechanism places no limit on the memory it allocates for staging buffers, which can cause Spark jobs to run out of memory.
In the first phase, we can introduce a limit on the number of staging buffers. Subsequently, we can align with Spark's memory consumer model to request and release memory precisely.
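The first phase could be sketched as a counting semaphore that bounds how many staging buffers are in flight at once. This is only an illustrative sketch, not the actual implementation: the class name `StagingBufferLimiter`, the buffer-allocation method, and the timeout behavior are all assumptions for the example; the real change would hook into the project's existing decompression path (and, in the second phase, into Spark's `MemoryConsumer` accounting).

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: cap the number of in-flight staging buffers used by
// overlapping decompression with a counting semaphore, so total staging
// memory is bounded by maxBuffers * bufferSize.
public class StagingBufferLimiter {
    private final Semaphore permits;

    public StagingBufferLimiter(int maxBuffers) {
        this.permits = new Semaphore(maxBuffers);
    }

    // Block (up to timeoutMs) until a staging-buffer slot is free, then
    // allocate the buffer. Throws if the cap keeps us waiting too long.
    public byte[] acquireBuffer(int bufferSize, long timeoutMs) throws InterruptedException {
        if (!permits.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS)) {
            throw new IllegalStateException("timed out waiting for a staging buffer slot");
        }
        return new byte[bufferSize];
    }

    // Release the slot once decompression into the buffer has finished.
    public void releaseBuffer() {
        permits.release();
    }

    public static void main(String[] args) throws InterruptedException {
        StagingBufferLimiter limiter = new StagingBufferLimiter(2);
        limiter.acquireBuffer(4096, 100);
        limiter.acquireBuffer(4096, 100);

        boolean thirdAcquired;
        try {
            // Cap is 2, so a third acquisition should time out.
            limiter.acquireBuffer(4096, 50);
            thirdAcquired = true;
        } catch (IllegalStateException e) {
            thirdAcquired = false;
        }

        // Releasing a slot makes acquisition possible again.
        limiter.releaseBuffer();
        byte[] buf = limiter.acquireBuffer(4096, 100);

        System.out.println("cap enforced: " + !thirdAcquired
                + ", reacquired: " + (buf.length == 4096));
    }
}
```

In the second phase, this fixed cap would be replaced by per-task accounting: requesting memory from Spark's task memory manager before allocating a buffer and freeing it on release, so decompression memory competes fairly with other consumers instead of relying on a static limit.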
Motivation
No response
Describe the solution
No response
Additional context
No response
Are you willing to submit a PR?