
Conversation

@a-r-r-o-w (Contributor) commented on Jul 10, 2025

All thanks to @Cyrilvallez's PR: huggingface/transformers#36380

The accelerate PR is required because we currently end up calling clear_device_cache once per sharded checkpoint file, inside the loading loop, which is expensive. Without that PR, you'll see no speedup.
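In pattern form (a simplified sketch; load_shard is an illustrative stand-in, clear_device_cache is the accelerate helper named above):

from accelerate.utils import clear_device_cache

def load_sharded_checkpoint(model, shard_files, load_shard):
    # Before this PR: clear_device_cache() ran inside this loop, paying an
    # expensive cache flush once per sharded file.
    for shard_file in shard_files:
        load_shard(model, shard_file)
    # After: clear once, after every shard has been loaded.
    clear_device_cache()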

Another small optimization is using non_blocking copies everywhere and synchronizing only once, just before returning control to the user. This is slightly faster.
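In pattern form, that looks roughly like this (a minimal sketch, not the actual loading code):

import torch

def move_to_device(model: torch.nn.Module, device: torch.device) -> None:
    # Queue every host-to-device copy asynchronously instead of blocking on
    # each parameter individually.
    for param in model.parameters():
        param.data = param.data.to(device, non_blocking=True)
    # Synchronize exactly once, just before returning control to the caller,
    # so all queued copies are guaranteed to have completed.
    if device.type == "cuda":
        torch.cuda.synchronize(device)

The script below benchmarks the end-to-end effect: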

import time

t_ini = time.time()

import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

print(f"import time: {time.time() - t_ini:.3f}s")

model_id = "black-forest-labs/FLUX.1-dev"

# Initialize the CUDA context up front so it doesn't pollute the load timing.
t0 = time.time()
torch.cuda.synchronize()
print(f"CUDA sync time: {time.time() - t0:.3f}s")

# Time the sharded transformer load with device_map="cuda".
print("starting model load")
t1 = time.time()
transformer = FluxTransformer2DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16, device_map="cuda"
)
torch.cuda.synchronize()  # ensure all async copies have landed before stopping the clock
t2 = time.time()
print(f"transformer load time: {t2 - t1:.3f}s")

pipe = FluxPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.bfloat16)
pipe.text_encoder.to("cuda")
pipe.text_encoder_2.to("cuda")
pipe.vae.to("cuda")

prompt = "A cat holding a sign that says hello world"
image = pipe(prompt, num_inference_steps=28, guidance_scale=4.0).images[0]
image.save("flux.png")

Sister PR in accelerate required to obtain speedup: huggingface/accelerate#3674

Transformer load time:

  • On main: 16.765s
  • On this branch: 4.521s


@sayakpaul (Member) left a comment:


Thanks a lot for quickly getting this up 🔥

My comments are mostly minor, the major one being adding hf_quantizer to the allocator function.

Additionally, for a potentially better user experience, it would be helpful to rethink the to() method of DiffusionPipeline. I mean the following.

Currently, from what I understand, we have to first initialize the denoiser with device_map and then the rest of the components separately. If a user calls .to() on a DiffusionPipeline, we could consider using device_map="cuda"-style dispatch to move the model-level components to CUDA. I don't immediately see a downside to it.
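A rough sketch of the idea (hypothetical, not an actual diffusers API change; pipe.components is the existing property mapping component names to objects):

import torch

def pipeline_to_cuda(pipe):
    device = torch.device("cuda")
    # Dispatch every model-level component the way device_map="cuda" does at
    # load time: queue non-blocking moves, then synchronize once.
    for name, component in pipe.components.items():
        if isinstance(component, torch.nn.Module):
            component.to(device, non_blocking=True)
    torch.cuda.synchronize(device)
    return pipe

Whether this reuses the device_map dispatch machinery or plain non-blocking moves would be an implementation detail.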

Diff context:

return parsed_parameters


def _find_mismatched_keys(
A Member left an inline comment:

Taken out of here:

def _find_mismatched_keys(

Diff context:

if device_type is None:
    device_type = get_device()
device_mod = getattr(torch, device_type, torch.cuda)
device_mod.synchronize()
A Member left an inline comment:

I guess all different backends ought to have this method. Just flagging.

@a-r-r-o-w (Contributor, Author) replied on Jul 10, 2025:

AFAIK, synchronize should be available on all devices. Only the empty_cache function required a special check, because it would fail if the device was cpu.
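For reference, the kind of guard being described (a sketch mirroring the comment, not necessarily the exact code in this PR):

import torch

def empty_device_cache(device_type: str) -> None:
    # cpu has no device cache to flush, and torch.cpu exposes no empty_cache,
    # so calling it unconditionally would fail; skip it instead.
    if device_type == "cpu":
        return
    device_mod = getattr(torch, device_type, torch.cuda)
    if hasattr(device_mod, "empty_cache"):
        device_mod.empty_cache()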

@sayakpaul (Member) left a comment:

Ship

@SunMarc (Member) left a comment:

Thanks for this! Just a nit.

@a-r-r-o-w merged commit c903527 into main on Jul 11, 2025 (32 checks passed).
@a-r-r-o-w deleted the speedup-model-loading branch on Jul 11, 2025 at 16:13.
tolgacangoz pushed a commit to tolgacangoz/diffusers that referenced this pull request Jul 17, 2025
* update

* update

* update

* pin accelerate version

* add comment explanations

* update docstring

* make style

* non_blocking does not matter for dtype cast

* _empty_cache -> clear_cache

* update

* Update src/diffusers/models/model_loading_utils.py

Co-authored-by: Marc Sun <[email protected]>

* Update src/diffusers/models/model_loading_utils.py

---------

Co-authored-by: Marc Sun <[email protected]>
tolgacangoz pushed a commit to tolgacangoz/diffusers that referenced this pull request on Jul 18, 2025 (same commit message as above).