Decompose after export in export_llama #15951
Summary
`unwrap_tensor_subclass` was not unwrapping nested LoRA linears. This meant qdata/scale/zero stayed bundled together in the tensor subclass and were only separated when decompositions ran inside `to_edge_transform_and_lower`. That happens after nodes are tagged, so the scales were never tagged and remained in the PTE file after the rest of the weights were moved to a PTD file.
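For context, a minimal sketch of the nesting involved (`LoraLinear` here is a hypothetical stand-in for the real module, not code from this repo):

```python
import torch.nn as nn

# Hypothetical LoRA wrapper: the quantized weight lives on the *inner*
# nn.Linear, one module level down. Per the summary above, this nesting
# is what unwrap_tensor_subclass failed to unwrap before export.
class LoraLinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base  # base.weight may be a torchao tensor subclass
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)

    def forward(self, x):
        return self.base(x) + self.lora_b(self.lora_a(x))
```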
It's recommended to move away from `unwrap_tensor_subclass` and rely on export + decompositions instead. This PR adds a decomposition step after exporting in export_llama and removes some uses of `unwrap_tensor_subclass`; a sketch of the new flow is below.

TODO: remove all remaining uses of `unwrap_tensor_subclass`.
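A minimal sketch of the export-then-decompose flow, assuming a torchao-quantized model (the model and config here are illustrative, and torchao config names vary across versions):

```python
import torch
from torch import nn
from torchao.quantization import quantize_, Int8WeightOnlyConfig

model = nn.Sequential(nn.Linear(16, 16)).eval()
quantize_(model, Int8WeightOnlyConfig())  # weights become tensor subclasses

# Export first, then decompose -- no unwrap_tensor_subclass(model) call.
# run_decompositions() flattens subclass weights into their inner tensors
# (qdata, scale, ...), so the tagging passes that later run inside
# to_edge_transform_and_lower see them as ordinary constants.
ep = torch.export.export(model, (torch.randn(1, 16),))
ep = ep.run_decompositions()
```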
Test plan

TODO: add a test for this with nn.Linear/LoraLinear, verifying that qdata and scale (zero is not stored) all end up in the PTD file after quantization, and that no weights remain in the PTE file.
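Continuing the sketch above, the assertion half of that test might look like this (the constant key names are assumptions; the PTE/PTD check itself would go through export_llama's existing lowering and serialization path):

```python
# After decomposition, the flattened subclass weights should appear as
# separate named constants in the ExportedProgram.
flat_constants = {**ep.state_dict, **ep.constants}
assert any("scale" in name.lower() for name in flat_constants), (
    f"expected a standalone scale constant, got {list(flat_constants)}"
)

# Remaining check: lower with external-constant tagging, serialize, and
# verify the PTD file holds qdata/scale while the PTE file holds no weights.
```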