AudioLDM2 Fixes #11244
Conversation
```python
device_str = device_type
if gpu_id or torch_device.index:
    device_str = f"{device_str}:{gpu_id or torch_device.index}"
device = torch.device(device_str)
```
Note: this is only a partial fix for `enable_model_cpu_offload`. The issue may have been known already, since this path is not exercised by the fast tests.
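For context, a minimal standalone sketch of how the device-resolution logic in the diff behaves; the `resolve_device` wrapper and its defaults are hypothetical, only the four lines from the diff above are real:

```python
import torch

def resolve_device(device="cuda", gpu_id=None):
    # Hypothetical wrapper mirroring the inputs that
    # enable_model_cpu_offload works with.
    torch_device = torch.device(device)
    device_type = torch_device.type
    device_str = device_type
    # Prefer an explicit gpu_id; otherwise fall back to any index
    # embedded in the device argument (e.g. "cuda:1").
    if gpu_id or torch_device.index:
        device_str = f"{device_str}:{gpu_id or torch_device.index}"
    return torch.device(device_str)

print(resolve_device("cuda"))      # cuda
print(resolve_device("cuda:1"))    # cuda:1
print(resolve_device("cuda", 2))   # cuda:2
```

One quirk worth noting: `gpu_id or torch_device.index` treats `gpu_id=0` as falsy, so `resolve_device("cuda", 0)` yields `cuda` rather than `cuda:0`; in practice both resolve to the same default device.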
| @unittest.skip("Not supported yet due to CLAPModel.") | ||
| def test_sequential_offload_forward_pass_twice(self): | ||
| pass | ||
|
|
||
| @unittest.skip("Not supported yet, the second forward has mixed devices and `vocoder` is not offloaded.") | ||
| def test_cpu_offload_forward_pass_twice(self): | ||
| pass | ||
|
|
||
| @unittest.skip("Not supported yet. `vocoder` is not offloaded.") | ||
| def test_model_cpu_offload_forward_pass(self): | ||
| pass |
As above, these may have been known issues; skips have been added for now.
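For reference, a hedged sketch of how the `vocoder` offload behaviour described in the skip reasons could be checked manually; the checkpoint name and call arguments follow the public AudioLDM2 docs, but this is illustrative rather than the actual test:

```python
import torch
from diffusers import AudioLDM2Pipeline

pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()

# Run one forward pass, then inspect where each component lives.
# The skip reasons above suggest `vocoder` stays on the accelerator
# instead of being offloaded back to CPU like the other models.
_ = pipe("a dog barking", num_inference_steps=2, audio_length_in_s=2.0)
for name, component in pipe.components.items():
    if isinstance(component, torch.nn.Module):
        print(name, next(component.parameters()).device)
```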
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thanks! Okay for me to investigate the skipped tests in a later PR.
To be clear
What does this PR do?
Fixes recent test failures (https://github.com/huggingface/diffusers/actions/runs/14350349538/job/40227817766?pr=11238#step:6:33917) caused by the `transformers` deprecation of `GenerationMixin` on `PreTrainedModel`. We have to use `GPT2LMHeadModel` instead. This does create a warning from the expected-type checker, since the pretrained checkpoints have `GPT2Model`:

`Expected types for language_model: (<class 'transformers.models.gpt2.modeling_gpt2.GPT2LMHeadModel'>,), got <class 'transformers.models.gpt2.modeling_gpt2.GPT2Model'>.`

Unconfirmed, but this likely increases VRAM usage as a result of `output_hidden_states=True`; that can be fixed in the future after huggingface/transformers#33705 lands / is extended to other models (a minimal sketch of the class swap follows below).

Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
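To illustrate the `GenerationMixin` point from the description, a minimal sketch of the class swap; the tiny config is arbitrary, and this stands in for the pipeline's actual `language_model` loading rather than reproducing it:

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# GPT2LMHeadModel explicitly includes GenerationMixin, so .generate()
# keeps working after the transformers deprecation; a bare GPT2Model
# no longer provides it, which is what broke the tests.
config = GPT2Config(vocab_size=100, n_embd=32, n_layer=2, n_head=2)
language_model = GPT2LMHeadModel(config)

input_ids = torch.randint(0, 100, (1, 8))
# output_hidden_states=True retains every layer's hidden states, which
# is the suspected source of the extra VRAM usage noted above.
outputs = language_model(input_ids, output_hidden_states=True)
print(len(outputs.hidden_states))  # prints 3: embeddings + one per layer
```

Loading a checkpoint that was saved as a `GPT2Model` into this class is what triggers the expected-type warning quoted in the description.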