Commit facb8b5
convert : fix autoawq gemma (ggml-org#6704)
* fix autoawq quantized gemma model convert error

Using autoawq to quantize a Gemma model produces a lm_head.weight tensor in model-00001-of-00002.safetensors. convert-hf-to-gguf.py cannot map lm_head.weight, so the conversion fails. Skipping this tensor while loading prevents the error.
* change code to full string match and print necessary message

Change the check to a full string match and print a short message informing users that lm_head.weight has been skipped.
---------
Co-authored-by: Zheng.Deng <[email protected]>

1 parent 532c173
1 file changed: +6 −0
[Diff: 6 lines added after original line 2460, at new lines 2461–2466; the diff body was not captured.]
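Since the added lines themselves are not reproduced above, here is a minimal sketch of what the skip plausibly looks like, based only on the commit message (a full string match on lm_head.weight plus a short notice to the user). The helper name and the (name, tensor) iterator shape are assumptions for illustration, not the commit's verbatim code:

```python
# Sketch (assumption: tensors arrive as an iterable of (name, tensor)
# pairs, as in convert-hf-to-gguf.py's tensor-iteration loop).
def iter_tensors_skipping_lm_head(tensors):
    for name, data in tensors:
        # autoawq-quantized Gemma checkpoints include a lm_head.weight
        # tensor that llama.cpp does not use and the converter cannot map.
        # Full string match, and a short notice so users know it was skipped.
        if name == "lm_head.weight":
            print(f"Skipping tensor {name!r} so that conversion can end normally.")
            continue
        yield name, data
```

Matching the full string, rather than a substring, avoids accidentally skipping other tensors whose names merely contain lm_head.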