
Conversation

@DN6 (Collaborator) commented Mar 18, 2025

What does this PR do?

Add a low_cpu_mem_usage option to group offloading so that pinning of CPU memory happens only when a group is onloaded, rather than upfront. CPU RAM usage should then be similar to sequential offloading.
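
A minimal usage sketch, assuming the `apply_group_offloading` entry point in `diffusers.hooks` and the Flux checkpoint path shown (the exact parameter names are illustrative, not copied from this PR's diff):

```python
import torch
from diffusers import FluxTransformer2DModel
from diffusers.hooks import apply_group_offloading

# Assumed API surface; `low_cpu_mem_usage` is the option this PR adds.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
)
apply_group_offloading(
    transformer,
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="leaf_level",
    use_stream=True,
    low_cpu_mem_usage=True,  # pin tensors only while their group is being onloaded
)
```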

Benchmarked by running 5 forward passes through the Flux Transformer. I am observing that this approach increases inference time much more significantly with block-level offloading than with leaf-level offloading. I haven't dug in much deeper, but I suspect that pinning and moving a large group of tensors to the GPU at once is slower than pinning and moving tensors individually with prefetching?

Results:

| Config        | offload_type | low_cpu_mem_usage | Runtime (s) | GPU VRAM (GB) | CPU RAM (GB) |
|---------------|--------------|-------------------|-------------|---------------|--------------|
| Baseline      | block_level  | -                 | 9.75        | 10.59         | 35.30        |
| Baseline      | leaf_level   | -                 | 9.48        | 0.53          | 35.11        |
| Group Offload | block_level  | True              | 38.07       | 10.59         | 31.40        |
| Group Offload | leaf_level   | True              | 13.72       | 0.53          | 22.80        |
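
For context, a rough sketch of the kind of measurement loop behind numbers like these; `transformer` is assumed to be the offloaded module and `inputs` a dict of dummy tensors matching its forward signature (this is not the exact script used for the table):

```python
import time
import psutil
import torch

torch.cuda.reset_peak_memory_stats()
start = time.time()
for _ in range(5):
    with torch.no_grad():
        transformer(**inputs)
torch.cuda.synchronize()

print(f"Runtime:  {time.time() - start:.2f} s")
print(f"GPU VRAM: {torch.cuda.max_memory_allocated() / 1024 ** 3:.2f} GB")
print(f"CPU RAM:  {psutil.Process().memory_info().rss / 1024 ** 3:.2f} GB")
```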

Fixes # (issue)

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@DN6 requested a review from a-r-r-o-w on March 18, 2025 16:57
@a-r-r-o-w (Contributor) left a comment

Awesome, changes look good! Maybe we could refactor to something like this (not necessary, as it only reduces one level of indentation):

from contextlib import nullcontext  # needed for nullcontext()

context = nullcontext() if self.stream is None else torch.cuda.stream(self.stream)
pinned_context = nullcontext() if self.stream is None else self._pinned_memory_tensors()

with context, pinned_context as pinned_memory:
    ...

> I haven't dug in much deeper, but I suspect that pinning and moving a large group of tensors to the GPU at once is slower than pinning and moving tensors individually with prefetching?

I don't know the exact reason either, but my understanding is: pinning tensors requires allocating new (page-locked) memory on the CPU and introduces a synchronization each time it is done. What we want is to pay this overhead cost upfront (which is what happens when low_cpu_mem_usage=False) so that weight transfers can overlap with computation.
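
As a toy illustration of that trade-off in plain PyTorch (not the hook code in this PR): pinning is a blocking host-side cost, but once tensors are pinned, copies issued with `non_blocking=True` on a side stream can overlap with compute.

```python
import torch

cpu_weights = [torch.randn(1024, 1024) for _ in range(8)]

# Upfront pinning: each pin_memory() allocates page-locked host memory and blocks.
pinned = [t.pin_memory() for t in cpu_weights]

# Afterwards, transfers can run on a separate stream and overlap with computation.
transfer_stream = torch.cuda.Stream()
gpu_weights = []
with torch.cuda.stream(transfer_stream):
    for t in pinned:
        gpu_weights.append(t.to("cuda", non_blocking=True))  # async only from pinned memory
torch.cuda.current_stream().wait_stream(transfer_stream)
```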

In this case, it seems like leaf_level with low_cpu_mem_usage=True is able to hide some of this pinning and sync cost behind the computation because it operates at a more granular level, and not with as many tensors at once as block_level with low_cpu_mem_usage=True. With the latter, it looks like we are basically doing a large number of syncs alongside every computation step (the computation finishes very quickly, and this cycle of slow pinning + fast computation repeats). Maybe we'll have to profile this and look at traces to see what's really going on.
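
A minimal sketch of that kind of trace capture, using a toy module as a stand-in for the offloaded transformer:

```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(4096, 4096).cuda()  # stand-in for the offloaded transformer
x = torch.randn(64, 4096, device="cuda")

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    with torch.no_grad():
        model(x)

# Open in chrome://tracing or Perfetto to see whether pinning/sync stalls dominate.
prof.export_chrome_trace("group_offload_trace.json")
```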

@a-r-r-o-w (Contributor) commented

Btw, even with the current overhead in the block_level low_cpu_mem_usage=True case, we should still ship this because of the benefit in the leaf_level case, and investigate more later. We could also add a test for numerical correctness between the low_cpu_mem_usage=False and low_cpu_mem_usage=True cases.
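
A hypothetical shape for such a test; the `make_offloaded_transformer` factory and the tolerance are illustrative, not the test that was actually added:

```python
import torch

def test_low_cpu_mem_usage_matches(make_offloaded_transformer, inputs):
    # Same module, same inputs; only the low_cpu_mem_usage flag differs.
    out_ref = make_offloaded_transformer(low_cpu_mem_usage=False)(**inputs)
    out_new = make_offloaded_transformer(low_cpu_mem_usage=True)(**inputs)
    assert torch.allclose(out_ref, out_new, atol=1e-5)
```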

@DN6 merged commit 2c1ed50 into main on Mar 20, 2025
29 of 32 checks passed
@a-r-r-o-w deleted the group-offload-ram branch May 10, 2025 12:46