Commit 4ddf049
Fixed bug for 16a4w ptq (#12167)
Summary: Currently, running the script
executorch/examples/models/llama/export_llama.py with the flag --ptq
16a4w performs 16a16w quantization instead of 16a4w; this diff fixes
that. This may be related to some GitHub issues.
Differential Revision: D776714681
parent: 1466826
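For context, a minimal sketch of the class of bug the summary describes, assuming the --ptq string was dispatched to a quantization config through a branch that returned the 16a16w settings for the 16a4w case. All names below (QuantConfig, get_ptq_config) are hypothetical illustrations, not executorch's actual API, and the real one-line fix may differ:

```python
from dataclasses import dataclass


@dataclass
class QuantConfig:
    act_bits: int     # activation bit width (the "16a" part)
    weight_bits: int  # weight bit width (the "4w" / "16w" part)


def get_ptq_config(ptq: str) -> QuantConfig:
    if ptq == "8a4w":
        return QuantConfig(act_bits=8, weight_bits=4)
    if ptq == "16a4w":
        # The buggy version returned QuantConfig(16, 16) here (a copy of the
        # 16a16w branch), so --ptq 16a4w silently did 16a16w quantization.
        return QuantConfig(act_bits=16, weight_bits=4)  # fixed: 4-bit weights
    if ptq == "16a16w":
        return QuantConfig(act_bits=16, weight_bits=16)
    raise ValueError(f"unknown --ptq value: {ptq}")


assert get_ptq_config("16a4w").weight_bits == 4
```

A one-character or one-line slip in a mapping like this matches the commit's +1/−1 diff shape.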
1 file changed: 1 addition, 1 deletion
[Diff body not preserved by the page extraction; the change replaces a single line at line 195 of the modified file, with lines 192–198 shown as unchanged context.]