
Conversation

Contributor

@hlky hlky commented Mar 12, 2025

What does this PR do?

https://github.com/huggingface/diffusers/actions/runs/13800894621/job/38602946014#step:7:1904

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@hlky hlky changed the title Don't use torch_dtype when quantization_config is set Don't override torch_dtype and don't use when quantization_config is set Mar 12, 2025
Contributor

CyberVy commented Mar 12, 2025

@hlky Hi!

This PR is helpful for me. Sometimes I forget to set `torch_dtype` when loading a model, which can cause issues because the loader defaults to `torch.float32`. 👍
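
For illustration, a minimal sketch of the difference being described (the model ID is a placeholder, not from this PR):

```python
import torch
from diffusers import DiffusionPipeline

# Without torch_dtype, the weights are loaded in the default torch.float32,
# which roughly doubles memory use compared to half precision.
pipe_fp32 = DiffusionPipeline.from_pretrained("some-org/some-model")  # placeholder model ID

# Passing torch_dtype explicitly loads the weights in the requested precision.
pipe_fp16 = DiffusionPipeline.from_pretrained(
    "some-org/some-model", torch_dtype=torch.float16
)
```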

f"Passed `torch_dtype` {torch_dtype} is not a `torch.dtype`. Defaulting to `torch.float32`."
)

if quantization_config is not None and torch_dtype is not None:
Collaborator


In this case, if we were trying to set the dtype of the model, e.g. `FluxTransformer(quantization_config=BnBConfig, torch_dtype=torch.bfloat16)`, wouldn't the dtype be overwritten and end up set to the BnB default `float16`?

Contributor Author


Yes, it looks like it. I think it's ok without this part.
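
For context, a minimal sketch of the scenario discussed in this thread, i.e. passing both `quantization_config` and `torch_dtype` when loading a quantized model (the model ID and config values are illustrative, not taken from the PR):

```python
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# Both quantization_config and torch_dtype are passed here; the question above is
# whether torch_dtype should be left alone in this case rather than overridden
# by a quantization default.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # example checkpoint, not specific to this PR
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```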

@DN6 DN6 merged commit a7d53a5 into huggingface:main Mar 21, 2025
26 of 29 checks passed
