Allow Hydra CLI to override yaml config (pytorch#11926)
Adds a custom workaround that allows combining `--config` with Hydra CLI overrides. Something like this becomes possible:
```bash
python -m extension.llm.export.export_llm \
  --config llama_xnnpack.yaml \
  export.max_seq_length=1024 \
  backend.xnnpack.extended_ops=True
```
Note that if a config file is specified and you want to specify a CLI arg that is not in the config, you need to prepend with a `+`. You can read more about this in the Hydra [docs](https://hydra.cc/docs/advanced/override_grammar/basic/).
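The merge this workaround performs can be sketched in plain Python. This is an illustrative approximation of Hydra's dotlist-override behavior, not the actual implementation; `apply_overrides` and its value parsing are hypothetical:

```python
import copy

def apply_overrides(config: dict, overrides: list[str]) -> dict:
    """Apply Hydra-style dotlist overrides (key.path=value) to a nested dict.

    A '+' prefix adds a key absent from the config; without it, overriding
    a missing key raises, loosely mirroring Hydra's override grammar.
    """
    result = copy.deepcopy(config)
    for item in overrides:
        key, _, raw = item.partition("=")
        adding = key.startswith("+")
        key = key.lstrip("+")
        parts = key.split(".")
        node = result
        for p in parts[:-1]:
            # '+' creates missing intermediate nodes; otherwise they must exist
            node = node.setdefault(p, {}) if adding else node[p]
        leaf = parts[-1]
        if not adding and leaf not in node:
            raise KeyError(f"unknown key {key!r}; prepend '+' to add it")
        # naive value parsing: bool, then int, falling back to string
        if raw in ("True", "False", "true", "false"):
            value = raw.lower() == "true"
        else:
            try:
                value = int(raw)
            except ValueError:
                value = raw
        node[leaf] = value
    return result

cfg = {"export": {"max_seq_length": 512}, "backend": {"xnnpack": {}}}
out = apply_overrides(cfg, ["export.max_seq_length=1024",
                            "+backend.xnnpack.extended_ops=True"])
print(out["export"]["max_seq_length"])  # prints 1024
```

In the real tool, Hydra performs this merge itself via its override grammar; the sketch only shows why overriding an existing key and adding a new one (with `+`) are treated differently.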
````diff
-### Export with CoreML backend (iOS optimization)
-
-```bash
-python -m extension.llm.export.export_llm \
-  base.model_class=llama3 \
-  model.use_kv_cache=true \
-  export.max_seq_length=128 \
-  backend.coreml.enabled=true \
-  backend.coreml.compute_units=ALL \
-  quantization.pt2e_quantize=coreml_c4w \
-  debug.verbose=true
-```
+
+## Example Commands
+
+Please refer to the docs for some of our example supported models ([Llama](https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md), [Qwen3](https://github.com/pytorch/executorch/tree/main/examples/models/qwen3/README.md), [Phi-4-mini](https://github.com/pytorch/executorch/tree/main/examples/models/phi_4_mini/README.md)).

 ## Configuration Options

@@ -134,4 +102,4 @@ For a complete reference of all available configuration options, see the [LlmCon
````