Replies: 2 comments
-
Do you have a minimal reproducer available? Otherwise, we cannot really help debug this. It should include:
-
@BenjaminBossan thanks for the support. It's working functionally for single-GPU runs; however, we have decided not to pursue this format right now and may revisit it in the future, so I'm closing this.
-
Hi Team:
I am wondering whether anyone has looked at doing FSDP + QLoRA for models that were quantized using llm-compressor and are in the compressed-tensors format. HF Transformers does support loading such models via the CompressedLinear module for compressed linear layers; however, it does not seem to work out of the box with FSDP + QLoRA.
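For context, a quick way to see whether a loaded checkpoint actually uses CompressedLinear layers is to walk the module tree and count linear-layer types. This is a hedged sketch: the helper below is generic PyTorch, and the commented-out loading step assumes a compressed-tensors checkpoint ID, which is a placeholder, not a specific model.

```python
# Sketch: inspect which linear layers in a loaded model are CompressedLinear
# (provided by the compressed-tensors package) versus plain nn.Linear.
import torch.nn as nn

def count_linear_types(model: nn.Module) -> dict:
    """Count modules whose class name contains 'Linear', keyed by class name."""
    counts: dict = {}
    for module in model.modules():
        name = type(module).__name__
        if "Linear" in name:
            counts[name] = counts.get(name, 0) + 1
    return counts

# With a real compressed-tensors checkpoint one would do, roughly:
#   from transformers import AutoModelForCausalLM
#   model = AutoModelForCausalLM.from_pretrained("<compressed-tensors model id>")
#   count_linear_types(model)
# If the compressed-tensors integration kicked in, CompressedLinear should
# appear in the counts; if only nn.Linear shows up, the weights were
# decompressed (or the quantization config was not picked up).
```

On an ordinary (non-quantized) model the counts will only contain plain `Linear` entries, which makes this a cheap sanity check before debugging the FSDP side.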