docs/backend/OPENCL.md
| Adreno 830 (Snapdragon 8 Elite) | Support |
| Adreno X85 (Snapdragon X Elite) | Support |
> A6x GPUs with a recent driver and compiler are supported; they are usually found in IoT platforms.
> However, A6x GPUs in phones are likely not supported due to outdated drivers and compilers.

## DataType Supports
| DataType | Status |
|----------|--------|
You can refer to the general [llama-quantize tool](/tools/quantize/README.md) for steps to convert a model in Hugging Face safetensor format to GGUF with quantization.
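
As a sketch of that conversion step (the input directory and output filename below are placeholders, not names from this guide):

```sh
# Convert a Hugging Face safetensor model to GGUF at FP16;
# quantization is then applied in a separate llama-quantize step.
python convert_hf_to_gguf.py path/to/hf-model --outfile model-f16.gguf --outtype f16
```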
Currently we support `Q4_0` quantization and have optimized for it. To achieve best performance on Adreno GPU, add `--pure` to `llama-quantize` (i.e., make all weights in `Q4_0`). For example,
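
A sketch of such an invocation (the filenames are placeholders, and the `llama-quantize` binary path depends on your build directory):

```sh
# Pure Q4_0: quantize all weights to Q4_0 for best Adreno performance.
./llama-quantize --pure model-f16.gguf model-Q4_0.gguf Q4_0
```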

OpenAI gpt-oss models are MoE models in `MXFP4`. The quantized model will be in `MXFP4_MOE`.
For this quantization, there is no need to specify `--pure`.
For gpt-oss-20b model, you can directly [download](https://huggingface.co/ggml-org/gpt-oss-20b-GGUF) the quantized GGUF file in `MXFP4_MOE` from Hugging Face.
Although it is possible to quantize the gpt-oss-20b model in pure `Q4_0` (all weights in `Q4_0`), it is not recommended, since `MXFP4` has been optimized for MoE while `Q4_0` is not. In addition, accuracy is expected to degrade with such pure `Q4_0` quantization.
Hence, using the default `MXFP4_MOE` quantization (see the link above) is recommended for this model.
> Note that the `Q4_0` model found [here](https://huggingface.co/unsloth/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q4_0.gguf) is a mixture of `Q4_0`, `Q8_0` and `MXFP4` and gives better performance than `MXFP4_MOE` quantization.
## CMake Options
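A minimal configure-and-build sketch, assuming a host Linux build with Ninja (`GGML_OPENCL=ON` is the flag that enables this backend; other details of a cross-compile setup, such as an Android NDK toolchain file, are omitted here):

```sh
mkdir build && cd build
cmake .. -G Ninja -DGGML_OPENCL=ON
ninja
```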
## Known Issues
- Flash attention does not always improve performance.
- Currently OpenCL backend works on A6xx GPUs with recent drivers and compilers (usually found in IoT platforms).
  However, it does not work on A6xx GPUs found in phones with old drivers and compilers.