
Commit 4435600

Merge remote-tracking branch 'origin/master' into Mamba2SSD
* origin/master: (21 commits)
  vulkan: Fix GGML_VULKAN_CHECK_RESULTS to better handle fusion (ggml-org#16919)
  examples(gguf): GGUF example outputs (ggml-org#17025)
  mtmd: allow QwenVL to process larger image by default (ggml-org#17020)
  server : do not default to multiple slots with speculative decoding (ggml-org#17017)
  mtmd: improve struct initialization (ggml-org#16981)
  docs: Clarify the endpoint that webui uses (ggml-org#17001)
  model : add openPangu-Embedded (ggml-org#16941)
  ggml webgpu: minor set rows optimization (ggml-org#16810)
  sync : ggml
  ggml : fix conv2d_dw SVE path (ggml/1380)
  CUDA: update ops.md (ggml-org#17005)
  opencl: update doc (ggml-org#17011)
  refactor: replace sprintf with snprintf for safer string handling in dump functions (ggml-org#16913)
  vulkan: remove the need for the dryrun (ggml-org#16826)
  server : do context shift only while generating (ggml-org#17000)
  readme : update hot topics (ggml-org#17002)
  ggml-cpu : bicubic interpolation (ggml-org#16891)
  ci : apply model label to models (ggml-org#16994)
  chore : fix models indent after refactor (ggml-org#16992)
  Fix garbled output with REPACK at high thread counts (ggml-org#16956)
  ...
2 parents 6733bda + a44d771 commit 4435600


69 files changed, +7044 −5086 lines

.github/labeler.yml

Lines changed: 4 additions & 0 deletions
@@ -76,6 +76,10 @@ ggml:
   - changed-files:
     - any-glob-to-any-file:
       - ggml/**
+model:
+  - changed-files:
+    - any-glob-to-any-file:
+      - src/models/**
 nix:
   - changed-files:
     - any-glob-to-any-file:

README.md

Lines changed: 3 additions & 4 deletions
@@ -17,14 +17,13 @@ LLM inference in C/C++
 
 ## Hot topics
 
-- **[guide : running gpt-oss with llama.cpp](https://github.com/ggml-org/llama.cpp/discussions/15396)**
-- **[[FEEDBACK] Better packaging for llama.cpp to support downstream consumers 🤗](https://github.com/ggml-org/llama.cpp/discussions/15313)**
+- **[guide : using the new WebUI of llama.cpp](https://github.com/ggml-org/llama.cpp/discussions/16938)**
+- [guide : running gpt-oss with llama.cpp](https://github.com/ggml-org/llama.cpp/discussions/15396)
+- [[FEEDBACK] Better packaging for llama.cpp to support downstream consumers 🤗](https://github.com/ggml-org/llama.cpp/discussions/15313)
 - Support for the `gpt-oss` model with native MXFP4 format has been added | [PR](https://github.com/ggml-org/llama.cpp/pull/15091) | [Collaboration with NVIDIA](https://blogs.nvidia.com/blog/rtx-ai-garage-openai-oss) | [Comment](https://github.com/ggml-org/llama.cpp/discussions/15095)
-- Hot PRs: [All](https://github.com/ggml-org/llama.cpp/pulls?q=is%3Apr+label%3Ahot+) | [Open](https://github.com/ggml-org/llama.cpp/pulls?q=is%3Apr+label%3Ahot+is%3Aopen)
 - Multimodal support arrived in `llama-server`: [#12898](https://github.com/ggml-org/llama.cpp/pull/12898) | [documentation](./docs/multimodal.md)
 - VS Code extension for FIM completions: https://github.com/ggml-org/llama.vscode
 - Vim/Neovim plugin for FIM completions: https://github.com/ggml-org/llama.vim
-- Introducing GGUF-my-LoRA https://github.com/ggml-org/llama.cpp/discussions/10123
 - Hugging Face Inference Endpoints now support GGUF out of the box! https://github.com/ggml-org/llama.cpp/discussions/9669
 - Hugging Face GGUF editor: [discussion](https://github.com/ggml-org/llama.cpp/discussions/9268) | [tool](https://huggingface.co/spaces/CISCai/gguf-editor)
 

common/common.h

Lines changed: 4 additions & 0 deletions
@@ -507,6 +507,10 @@ struct common_params {
     // return false from callback to abort model loading or true to continue
     llama_progress_callback load_progress_callback = NULL;
     void * load_progress_callback_user_data = NULL;
+
+    bool has_speculative() const {
+        return !speculative.model.path.empty() || !speculative.model.hf_repo.empty();
+    }
 };
 
 // call once at the start of a program if it uses libcommon
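
The new `has_speculative()` helper lets callers check whether a draft model was configured before choosing defaults (this pairs with the "server : do not default to multiple slots with speculative decoding" change pulled in by this merge). The sketch below is only an illustration of that check against a simplified stand-in struct; it is not the actual `common_params` layout or the server's slot logic, and the multi-slot default of 4 is an arbitrary value for the example.

```cpp
// Minimal sketch, assuming a flattened stand-in for common_params.
#include <cstdio>
#include <string>

struct draft_model_params {
    std::string path;     // local GGUF path of the draft model, if any
    std::string hf_repo;  // Hugging Face repo of the draft model, if any
};

struct params_sketch {
    draft_model_params speculative;
    int n_parallel = -1;  // -1 stands for "not set by the user"

    bool has_speculative() const {
        return !speculative.path.empty() || !speculative.hf_repo.empty();
    }
};

int main() {
    params_sketch params;
    params.speculative.path = "draft-model.gguf";  // hypothetical example value

    // If the user did not set a slot count, fall back to a single slot when
    // speculative decoding is configured, otherwise keep a multi-slot default.
    const int n_slots = (params.n_parallel > 0) ? params.n_parallel
                                                : (params.has_speculative() ? 1 : 4);
    std::printf("slots: %d\n", n_slots);
    return 0;
}
```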

convert_hf_to_gguf.py

Lines changed: 36 additions & 0 deletions
@@ -7187,6 +7187,42 @@ def modify_tensors(self, data_torch: Tensor, name: str, bid: int | None):
         return super().modify_tensors(data_torch, name, bid)
 
 
+@ModelBase.register("PanguEmbeddedForCausalLM")
+class PanguEmbeddedModel(TextModel):
+    model_arch = gguf.MODEL_ARCH.PANGU_EMBED
+
+    def set_vocab(self):
+        self._set_vocab_sentencepiece()
+
+        tokenizer_config_file = self.dir_model / 'tokenizer_config.json'
+        if tokenizer_config_file.is_file():
+            with open(tokenizer_config_file, "r", encoding="utf-8") as f:
+                tokenizer_config_json = json.load(f)
+                if "add_prefix_space" in tokenizer_config_json:
+                    self.gguf_writer.add_add_space_prefix(tokenizer_config_json["add_prefix_space"])
+
+    def set_gguf_parameters(self):
+        super().set_gguf_parameters()
+        hparams = self.hparams
+        self.gguf_writer.add_vocab_size(hparams["vocab_size"])
+
+        # PanguEmbedded's hparam loaded from config.json without head_dim
+        if (rope_dim := hparams.get("head_dim")) is None:
+            rope_dim = hparams["hidden_size"] // hparams["num_attention_heads"]
+        self.gguf_writer.add_rope_dimension_count(rope_dim)
+
+        if hparams.get("head_dim") is None:
+            self.gguf_writer.add_key_length(rope_dim)
+            self.gguf_writer.add_value_length(rope_dim)
+
+    def modify_tensors(self, data_torch: Tensor, name: str, bid: int | None) -> Iterable[tuple[str, Tensor]]:
+        if name == "lm_head.weight":
+            if self.hparams.get("tie_word_embeddings", False):
+                logger.info("Skipping tied output layer 'lm_head.weight'")
+                return []
+        return [(self.map_tensor_name(name), data_torch)]
+
+
 @ModelBase.register("Dots1ForCausalLM")
 class Dots1Model(Qwen2MoeModel):
     model_arch = gguf.MODEL_ARCH.DOTS1
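
The `set_gguf_parameters` override above falls back to `hidden_size // num_attention_heads` when the checkpoint's config.json has no `head_dim`. A small sketch of that fallback with made-up hyperparameter values, just to make the arithmetic concrete (the function name and numbers are hypothetical, not from an actual openPangu config):

```cpp
#include <cstdio>

// Use head_dim when present, otherwise derive it from hidden_size / n_heads,
// mirroring the converter's rope dimension fallback shown above.
int rope_dim_count(int head_dim, int hidden_size, int num_attention_heads) {
    // head_dim <= 0 stands in for "missing from config.json"
    return head_dim > 0 ? head_dim : hidden_size / num_attention_heads;
}

int main() {
    // Hypothetical values: hidden_size = 4096, 32 heads, head_dim missing.
    std::printf("rope_dim = %d\n", rope_dim_count(0, 4096, 32));  // prints 128
    return 0;
}
```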

docs/backend/OPENCL.md

Lines changed: 25 additions & 3 deletions
@@ -39,25 +39,41 @@ The llama.cpp OpenCL backend is designed to enable llama.cpp on **Qualcomm Adren
 | Adreno 830 (Snapdragon 8 Elite) | Support |
 | Adreno X85 (Snapdragon X Elite) | Support |
 
+> A6x GPUs with a recent driver and compiler are supported; they are usually found in IoT platforms.
+However, A6x GPUs in phones are likely not supported due to the outdated driver and compiler.
+
 ## DataType Supports
 
 | DataType | Status |
 |:----------------------:|:--------------------------:|
 | Q4_0 | Support |
 | Q6_K | Support, but not optimized |
+| Q8_0 | Support |
+| MXFP4 | Support |
 
 ## Model Preparation
 
-You can refer to the general [*Prepare and Quantize*](README.md#prepare-and-quantize) guide for model prepration.
+You can refer to the general [llama-quantize tool](/tools/quantize/README.md) for steps to convert a model in Hugging Face safetensor format to GGUF with quantization.
 
-Currently we support `Q4_0` quantization and have optimize for it. To achieve best performance on Adreno GPU, add `--pure` to `llama-quantize`. For example,
+Currently we support `Q4_0` quantization and have optimized for it. To achieve best performance on Adreno GPU, add `--pure` to `llama-quantize` (i.e., make all weights in `Q4_0`). For example,
 
 ```sh
 ./llama-quantize --pure ggml-model-qwen2.5-3b-f16.gguf ggml-model-qwen-3b-Q4_0.gguf Q4_0
 ```
 
 Since `Q6_K` is also supported, `Q4_0` quantization without `--pure` will also work. However, the performance will be worse compared to pure `Q4_0` quantization.
 
+### `MXFP4` MoE Models
+
+OpenAI gpt-oss models are MoE models in `MXFP4`. The quantized model will be in `MXFP4_MOE`, a mixture of `MXFP4` and `Q8_0`.
+For this quantization, there is no need to specify `--pure`.
+For gpt-oss-20b model, you can directly [download](https://huggingface.co/ggml-org/gpt-oss-20b-GGUF) the quantized GGUF file in `MXFP4_MOE` from Hugging Face.
+
+Although it is possible to quantize gpt-oss-20b model in pure `Q4_0` (all weights in `Q4_0`), it is not recommended since `MXFP4` has been optimized for MoE while `Q4_0` is not. In addition, accuracy should degrade with such pure `Q4_0` quantization.
+Hence, using the default `MXFP4_MOE` quantization (see the link above) is recommended for this model.
+
+> Note that the `Q4_0` model found [here](https://huggingface.co/unsloth/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q4_0.gguf) is a mixture of `Q4_0`, `Q8_0` and `MXFP4` and gives better performance than `MXFP4_MOE` quantization.
+
 ## CMake Options
 
 The OpenCL backend has the following CMake options that control the behavior of the backend.
@@ -146,10 +162,13 @@ A Snapdragon X Elite device with Windows 11 Arm64 is used. Make sure the followi
 * Ninja
 * Visual Studio 2022
 * Powershell 7
+* Python
 
 Visual Studio provides necessary headers and libraries although it is not directly used for building.
 Alternatively, Visual Studio Build Tools can be installed instead of the full Visual Studio.
 
+> Note that building using Visual Studio's cl compiler is not supported. Clang must be used. Clang depends on libraries provided by Visual Studio to work. Therefore, Visual Studio must be installed. Alternatively, Visual Studio Build Tools can be installed instead of the full Visual Studio.
+
 Powershell 7 is used for the following commands.
 If an older version of Powershell is used, these commands may not work as they are.
 
@@ -201,9 +220,12 @@ ninja
 
 ## Known Issues
 
-- Currently OpenCL backend does not work on Adreno 6xx GPUs.
+- Flash attention does not always improve performance.
+- Currently OpenCL backend works on A6xx GPUs with recent drivers and compilers (usually found in IoT platforms).
+However, it does not work on A6xx GPUs found in phones with old drivers and compilers.
 
 ## TODO
 
 - Optimization for Q6_K
 - Support and optimization for Q4_K
+- Improve flash attention

docs/ops.md

Lines changed: 5 additions & 5 deletions
@@ -22,11 +22,11 @@ Legend:
 | ARANGE ||||||||||
 | ARGMAX ||||||||||
 | ARGSORT ||||||||||
-| CEIL |||| ||||||
+| CEIL |||| 🟡 ||||||
 | CLAMP ||||| 🟡 | 🟡 || 🟡 ||
 | CONCAT |||| 🟡 || 🟡 | 🟡 |||
 | CONT || 🟡 |||| 🟡 | 🟡 | 🟡 ||
-| CONV_2D |||| ||||||
+| CONV_2D |||| 🟡 ||||||
 | CONV_2D_DW ||||||||||
 | CONV_3D ||||||||||
 | CONV_TRANSPOSE_1D ||||||||||
@@ -42,7 +42,7 @@ Legend:
 | ELU |||| 🟡 | 🟡 || 🟡 |||
 | EXP |||| 🟡 | 🟡 || 🟡 |||
 | FLASH_ATTN_EXT || 🟡 || 🟡 | 🟡 ||| 🟡 ||
-| FLOOR |||| ||||||
+| FLOOR |||| 🟡 ||||||
 | GATED_LINEAR_ATTN ||||||||||
 | GEGLU ||||| 🟡 ||| 🟡 ||
 | GEGLU_ERF ||||| 🟡 ||| 🟡 ||
@@ -84,7 +84,7 @@ Legend:
 | ROLL ||||||||||
 | ROPE || 🟡 ||||||||
 | ROPE_BACK ||||||||||
-| ROUND |||| ||||||
+| ROUND |||| 🟡 ||||||
 | RWKV_WKV6 ||||||||||
 | RWKV_WKV7 ||||||||||
 | SCALE || 🟡 ||||||||
@@ -111,6 +111,6 @@ Legend:
 | TANH |||| 🟡 | 🟡 || 🟡 | 🟡 ||
 | TIMESTEP_EMBEDDING ||||||||||
 | TOPK_MOE ||||||||||
-| TRUNC |||| ||||||
+| TRUNC |||| 🟡 ||||||
 | UPSCALE || 🟡 ||| 🟡 || 🟡 |||
 | XIELU ||||||||||
