Commit b64b1af
Update on "Save foundation weights separately"
This diff:
1. Introduces SerializationConfig to llm_config. Currently, this lets the user save the foundation weights in a separate file; this is mainly useful for the LoRA case.
2. Adds a pass to tag foundation (non-LoRA) weights. This runs at the top level (export_llama_lib). The tags are preserved through run_decompositions and other passes, and do not affect functionality (see the sketch after this list).
3. Tags are read when placing constants into the named_data_store.
4. Tagged weights are serialized to a separate file.
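
As a rough illustration of steps 1 and 2, here is a minimal Python sketch. The field name foundation_weights_file, the tag key foundation_tag, and the name-based LoRA heuristic are all assumptions for illustration, not the actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

import torch


@dataclass
class SerializationConfig:
    # If set, foundation (non-LoRA) weights are serialized to this separate
    # file instead of being bundled with the LoRA weights.
    foundation_weights_file: Optional[str] = None


def tag_foundation_weights(gm: torch.fx.GraphModule, tag: str) -> None:
    # Walk the unlifted module's graph; constants show up as get_attr nodes
    # there, so no ExportedProgram / is_param_node lookup is needed.
    for node in gm.graph.nodes:
        if node.op != "get_attr":
            continue
        if "lora" in str(node.target).lower():
            continue  # LoRA adapter weight: leave untagged.
        # Store the tag under meta['custom'] so run_decompositions keeps it.
        node.meta.setdefault("custom", {})["foundation_tag"] = tag
```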
Notes
1. Adding tags under node.meta['custom']['blah'] means they will not be discarded by run_decompositions; the sketch after these notes relies on this.
2. Adding tags to the lifted model (ep.graph_module) would require the EP to check is_param_node for XNNPACK constants. Instead, we add tags to the unlifted model (ep.module()), so we do not need to go through a re-export to get the EP.
3. This is not an issue for this diff, as llama doesn't have any higher-order ops. Adding tags to models with higher-order ops is problematic due to nested submodules.
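
A hypothetical sketch of the read side (steps 3 and 4): because the tag survives run_decompositions, it can key which file a constant lands in at serialization time. The add_named_data helper and its external_tag parameter are assumptions here, not a confirmed API:

```python
def place_constant(named_data_store, name: str, data: bytes, node) -> None:
    # Read the tag written by the tagging pass, if any.
    tag = node.meta.get("custom", {}).get("foundation_tag")
    if tag is not None:
        # Tagged foundation weight: serialize into the separate file.
        named_data_store.add_named_data(name, data, external_tag=tag)
    else:
        # Untagged (e.g. LoRA) weight: stays in the default location.
        named_data_store.add_named_data(name, data)
```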
Differential Revision: [D79181064](https://our.internmc.facebook.com/intern/diff/D79181064/)
[ghstack-poisoned]