Don't override torch_dtype and don't use when quantization_config is set
#11039
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@hlky Hi! This PR is helpful for me; sometimes I forget to set `torch_dtype`.
| f"Passed `torch_dtype` {torch_dtype} is not a `torch.dtype`. Defaulting to `torch.float32`." | ||
| ) | ||
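For context, a minimal sketch of when that warning would fire; the pipeline class and repo id below are illustrative assumptions, not part of this PR:

```python
import torch
from diffusers import DiffusionPipeline

# Illustrative only: "float16" here is a string, not a torch.dtype, so
# validation like the quoted snippet would log the warning and fall back
# to torch.float32 rather than raising.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # placeholder repo id
    torch_dtype="float16",
)
```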
```python
if quantization_config is not None and torch_dtype is not None:
```
In this case, if we were trying to set the dtype of the model, e.g. `FluxTransformer(quantization_config=BnBConfig, torch_dtype=torch.bfloat16)`, wouldn't the dtype be overwritten and then set to the BnB default `float16`?
Yes, it looks like it. I think it's OK without this part.
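To make the case above concrete, a hedged sketch of the scenario being discussed; the checkpoint and exact kwargs are illustrative, and `bnb_4bit_compute_dtype` is how the compute dtype would normally be pinned instead of relying on `torch_dtype`:

```python
import torch
from diffusers import FluxTransformer2DModel, BitsAndBytesConfig

# The scenario from the question above: both a quantization config and an
# explicit torch_dtype are passed. Without care, torch.bfloat16 could be
# clobbered by the bitsandbytes default (float16). Pinning
# bnb_4bit_compute_dtype makes the intent explicit. Illustrative only.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # placeholder; any Flux checkpoint
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```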
What does this PR do?
https://github.com/huggingface/diffusers/actions/runs/13800894621/job/38602946014#step:7:1904
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.