printf(" --leave-output-tensor: Will leave output.weight un(re)quantized. Increases model size but may also increase quality, especially when requantizing\n");
printf(" --pure: Disable k-quant mixtures and quantize all tensors to the same type\n");
printf(" --imatrix file_name: use data in file_name as importance matrix for quant optimizations\n");
printf(" --ignore-imatrix-rules: ignore built-in rules for mandatory imatrix for certain quantization types\n"); // [kawrakow]
printf(" --include-weights tensor_name: use importance matrix for this/these tensor(s)\n");
printf("  --exclude-weights tensor_name: do not use importance matrix for this/these tensor(s)\n");
printf(" --output-tensor-type ggml_type: use this ggml_type for the output.weight tensor\n");
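Taken together, these lines print the tool's CLI help. A typical invocation combining the imatrix-related flags might look like the sketch below; the binary name, file names, tensor pattern, and quantization type are illustrative assumptions, not taken from the source:

```shell
# Sketch only: binary name, paths, and tensor name are placeholders.
./llama-quantize \
    --imatrix imatrix.dat \
    --exclude-weights token_embd.weight \
    --leave-output-tensor \
    model-f16.gguf model-q4_k_m.gguf q4_k_m
```

Here `--leave-output-tensor` keeps `output.weight` unquantized (larger file, potentially higher quality), while `--exclude-weights` quantizes the named tensor(s) without consulting the importance matrix.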