
Commit 7c8bcc1

Add docs for llama_chat_apply_template (ggml-org#5645)
* add docs for llama_chat_apply_template
* fix typo
1 parent: 7fe4678

File tree

2 files changed: +2 additions, -1 deletion


examples/server/README.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -41,6 +41,7 @@ see https://github.com/ggerganov/llama.cpp/issues/1437
 - `--grp-attn-w`: Set the group attention width to extend context size through self-extend(default: 512), used together with group attention factor `--grp-attn-n`
 - `-n, --n-predict`: Set the maximum tokens to predict (default: -1)
 - `--slots-endpoint-disable`: To disable slots state monitoring endpoint. Slots state may contain user data, prompts included.
+- `--chat-template JINJA_TEMPLATE`: Set custom jinja chat template. This parameter accepts a string, not a file name (default: template taken from model's metadata). We only support [some pre-defined templates](https://github.com/ggerganov/llama.cpp/wiki/Templates-supported-by-llama_chat_apply_template)
 
 ## Build
 
```
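For illustration, a hypothetical invocation of the new flag: the template is passed inline as a string (not a file path), and it should resemble one of the pre-defined templates linked above, since only those are recognized. The model path `./model.gguf` here is an assumption, not part of the commit:

```shell
# Hypothetical example: start the server with a ChatML-style Jinja template
# passed inline via --chat-template (a string argument, not a file name).
./server -m ./model.gguf \
    --chat-template "{% for message in messages %}<|im_start|>{{ message.role }}\n{{ message.content }}<|im_end|>\n{% endfor %}"
```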

llama.h

Lines changed: 1 addition & 1 deletion
```diff
@@ -708,7 +708,7 @@ extern "C" {
 
     /// Apply chat template. Inspired by hf apply_chat_template() on python.
     /// Both "model" and "custom_template" are optional, but at least one is required. "custom_template" has higher precedence than "model"
-    /// NOTE: This function only support some known jinja templates. It is not a jinja parser.
+    /// NOTE: This function does not use a jinja parser. It only support a pre-defined list of template. See more: https://github.com/ggerganov/llama.cpp/wiki/Templates-supported-by-llama_chat_apply_template
     /// @param tmpl A Jinja template to use for this chat. If this is nullptr, the model’s default chat template will be used instead.
     /// @param chat Pointer to a list of multiple llama_chat_message
     /// @param n_msg Number of llama_chat_message in this chat
```
