Commit 875ab5a

Commit message: cont
Parent: 523fbd7

File tree: 1 file changed (+0 additions, −2 deletions)

1 file changed

+0
-2
lines changed

README.md

Lines changed: 0 additions & 2 deletions

@@ -19,9 +19,7 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 - [Hot PRs](https://github.com/ggml-org/llama.cpp/pulls?q=is%3Apr+label%3Ahot+)
 - Multimodal support arrived in `llama-server`: [#12898](https://github.com/ggml-org/llama.cpp/pull/12898) | [documentation](./docs/multimodal.md)
-- A new binary `llama-mtmd-cli` is introduced to replace `llava-cli`, `minicpmv-cli`, `gemma3-cli` ([#13012](https://github.com/ggml-org/llama.cpp/pull/13012)) and `qwen2vl-cli` ([#13141](https://github.com/ggml-org/llama.cpp/pull/13141)), `libllava` will be deprecated
 - VS Code extension for FIM completions: https://github.com/ggml-org/llama.vscode
-- Universal [tool call support](./docs/function-calling.md) in `llama-server` https://github.com/ggml-org/llama.cpp/pull/9639
 - Vim/Neovim plugin for FIM completions: https://github.com/ggml-org/llama.vim
 - Introducing GGUF-my-LoRA https://github.com/ggml-org/llama.cpp/discussions/10123
 - Hugging Face Inference Endpoints now support GGUF out of the box! https://github.com/ggml-org/llama.cpp/discussions/9669

0 commit comments
