@engelmi engelmi commented Feb 19, 2025

Relates to: #11178

Added a --chat-template-file CLI option to llama-run. If specified, the file is read and its content is passed to common_chat_templates_from_model, overriding the model's built-in chat template.
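A minimal sketch of the file-reading side of such an option, using a hypothetical `read_chat_template_file` helper (illustrative only; the actual llama-run implementation may differ):

```cpp
#include <fstream>
#include <optional>
#include <sstream>
#include <string>

// Hypothetical helper (not the actual llama-run code): read an entire
// chat-template file into a string so it can be passed on to chat
// template initialization in place of the model's built-in template.
static std::optional<std::string> read_chat_template_file(const std::string & path) {
    std::ifstream file(path);
    if (!file) {
        // File missing or unreadable: caller can fall back to the model's template.
        return std::nullopt;
    }
    std::ostringstream ss;
    ss << file.rdbuf();  // slurp the whole file, preserving newlines
    return ss.str();
}
```

Returning `std::nullopt` rather than an empty string lets the caller distinguish "no file given / unreadable" from "file present but empty".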

This also enables running the granite-code model from ollama:

# using a Jinja chat template file
# (when no prefix such as hf:// is given, llama-run pulls from ollama)
$ llama-run  --chat-template-file ./chat.tmpl granite-code
> write code

Here is a code snippet in Python:

"""
def f(x):
    return x**2
"""

# without a jinja chat template file
$ llama-run granite-code
> write code
failed to apply the chat template
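For illustration, a chat.tmpl file could contain a minimal Jinja template along these lines (an illustrative sketch, not the actual Granite template):

```jinja
{%- for message in messages -%}
<|{{ message.role }}|>
{{ message.content }}
{%- endfor -%}
<|assistant|>
```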



Signed-off-by: Michael Engel <[email protected]>
@engelmi engelmi closed this Feb 19, 2025
@engelmi engelmi deleted the added-chat-template-file-option branch February 19, 2025 16:52