Conversation

pwilkin
Collaborator

@pwilkin commented Sep 2, 2025

There are a couple of Jinja tester websites out there, but their biggest flaw is the lack of debugging information (notably the line on which the template has an error), which makes them poorly suited for testing the Jinja templates used by models.

This folder adds a small Python script (requiring only jinja2 and PySide6 as prerequisites) that launches a simple app for testing Jinja templates. If the template contains an error, the offending line number is printed.
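
For reference, the line number comes straight from jinja2's own error reporting; a minimal sketch of that mechanism (illustrative only, not the actual jinja-tester.py code):

```python
# Minimal sketch: surface the offending line of a broken Jinja template.
# Illustrative only; jinja-tester.py wraps the same idea in a PySide6 GUI.
import jinja2

def check_template(source: str) -> None:
    try:
        jinja2.Environment().parse(source)  # raises TemplateSyntaxError if invalid
    except jinja2.TemplateSyntaxError as e:
        # e.lineno is the 1-based line of the error inside the template
        print(f"Syntax error on line {e.lineno}: {e.message}")
    else:
        print("Template parsed OK")

check_template("{% for m in messages %}\n{{ m.role }\n{% endfor %}")
```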

@github-actions bot added the script (Script related) and python (python script changes) labels Sep 2, 2025
@pwilkin marked this pull request as draft September 2, 2025 20:44
@CISC
Collaborator

CISC commented Sep 3, 2025

There are a couple of Jinja tester websites out there, but their biggest flaw is the lack of debugging information (notably the line on which the template has an error), which makes them poorly suited for testing the Jinja templates used by models.

Good call, just added it to Chat Template Editor, thanks!

@ggerganov
Member

Good call, just added it to Chat Template Editor, thanks!

Cool!

@pwilkin marked this pull request as ready for review September 3, 2025 20:54
@pwilkin
Collaborator Author

pwilkin commented Sep 3, 2025

@CISC Alright, fixed the Python code, should be good to go.

Collaborator

@CISC left a comment


It would be nice to be able to load a jinja file.
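
For reference, one minimal way to wire that up with PySide6's standard file dialog (illustrative only; the widget and function names are placeholders, not the PR's actual code):

```python
# Illustrative sketch: load a Jinja template file into the editor widget.
# Widget/parent names are placeholders, not the PR's actual implementation.
from PySide6.QtWidgets import QFileDialog, QPlainTextEdit

def load_template_into(editor: QPlainTextEdit, parent=None) -> None:
    path, _ = QFileDialog.getOpenFileName(
        parent,
        "Open Jinja template",
        "",
        "Jinja templates (*.jinja *.j2 *.txt);;All files (*)",
    )
    if path:  # an empty string means the dialog was cancelled
        with open(path, "r", encoding="utf-8") as f:
            editor.setPlainText(f.read())
```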

@pwilkin
Collaborator Author

pwilkin commented Sep 4, 2025

@CISC Alright, added:

  • template loading support
  • a formatter (that was the second thing I was missing from all the online tools 😛)
  • support for command-line launches
  • the missing extensions you mentioned, plus one more for raise_exception (see the sketch below)
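
For illustration, raise_exception can be exposed to templates roughly like this; it is shown here as a plain environment global rather than a full jinja2 extension, and the names are placeholders rather than the script's exact wiring:

```python
# Illustrative sketch: let chat templates call raise_exception("...") to abort
# rendering with a readable message. Not the script's exact implementation.
import jinja2
from jinja2.exceptions import TemplateError

# loopcontrols is just an example of an extra extension being enabled
env = jinja2.Environment(extensions=["jinja2.ext.loopcontrols"])

def _raise_exception(message: str):
    raise TemplateError(message)

env.globals["raise_exception"] = _raise_exception

tmpl = env.from_string("{{ raise_exception('unsupported role: ' + role) }}")
try:
    tmpl.render(role="tool")
except TemplateError as e:
    print(f"Template raised: {e}")
```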

I removed the tests from the final code, but I had two tests (sketched below) that checked, on the DeepSeek template, whether:

  • the format function was idempotent (i.e. format(format(x)) = format(x))
  • the format function preserved output (i.e. out(format(x), json) = out(x, json))
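
Roughly, the checks amounted to the following; format_template and render_chat stand in for the script's formatter and rendering helpers (placeholder names, not its actual API):

```python
# Sketch of the two removed checks; format_template and render_chat are
# placeholder names for the script's formatter and rendering helpers.
def check_formatter(template_src, format_template, render_chat, messages):
    formatted = format_template(template_src)

    # 1) Idempotence: re-formatting an already formatted template is a no-op.
    assert format_template(formatted) == formatted

    # 2) Output preservation: formatting must not change what the template renders.
    assert render_chat(formatted, messages) == render_chat(template_src, messages)
```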

Co-authored-by: Sigbjørn Skjæret <[email protected]>
Co-authored-by: Sigbjørn Skjæret <[email protected]>
@CISC merged commit 9e2b1e8 into ggml-org:master Sep 4, 2025
6 checks passed
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Sep 5, 2025
…g-model-disabled-agent-prefill

* origin/master: (84 commits)
CUDA: fastdiv, launch bounds for mmvq + q8_1 quant (ggml-org#15802)
tests : add --list-ops and --show-coverage options (ggml-org#15745)
gguf: gguf_writer refactor (ggml-org#15691)
kv-cache : fix SWA checks + disable cacheless iSWA (ggml-org#15811)
model-conversion : add --embeddings flag to modelcard.template [no ci] (ggml-org#15801)
chat : fixed crash when Hermes 2 <tool_call> had a newline before it (ggml-org#15639)
chat : nemotron thinking & toolcalling support (ggml-org#15676)
scripts : add Jinja tester PySide6 simple app (ggml-org#15756)
llama : add support for EmbeddingGemma 300m (ggml-org#15798)
metal : Add template specialization for mul_mm_id w/ ne20 == 10 (ggml-org#15799)
llama : set n_outputs to 1 to avoid 0 outputs mean-pooling (ggml-org#15791)
CANN: Refactor ND to NZ workspace to be per-device (ggml-org#15763)
server: add exceed_context_size_error type (ggml-org#15780)
Document the new max GPU layers default in help (ggml-org#15771)
ggml: add ops for WAN video model (cuda && cpu) (ggml-org#15669)
CANN: Fix precision issue on 310I DUO multi-devices (ggml-org#15784)
opencl: add hs=40 to FA (ggml-org#15758)
CANN: fix acl_rstd allocation size in ggml_cann_rms_norm (ggml-org#15760)
vulkan: fix mmv subgroup16 selection (ggml-org#15775)
vulkan: don't use std::string in load_shaders, to improve compile time (ggml-org#15724)
...
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Sep 5, 2025
…upport

* origin/master:
Thinking model disabled assistant prefill (ggml-org#15404)
Implement --log-colors with always/never/auto (ggml-org#15792)
CUDA: fastdiv, launch bounds for mmvq + q8_1 quant (ggml-org#15802)
tests : add --list-ops and --show-coverage options (ggml-org#15745)
gguf: gguf_writer refactor (ggml-org#15691)
kv-cache : fix SWA checks + disable cacheless iSWA (ggml-org#15811)
model-conversion : add --embeddings flag to modelcard.template [no ci] (ggml-org#15801)
chat : fixed crash when Hermes 2 <tool_call> had a newline before it (ggml-org#15639)
chat : nemotron thinking & toolcalling support (ggml-org#15676)
scripts : add Jinja tester PySide6 simple app (ggml-org#15756)
llama : add support for EmbeddingGemma 300m (ggml-org#15798)
walidbr pushed a commit to walidbr/llama.cpp that referenced this pull request Sep 7, 2025
* feat: add Jinja tester PySide6 simple app

* Linter fixes

* Pylint fixes

* Whitespace

* Add commandline support; add formatter; add extensions

* Remove testing actions

* Silence flake8 warnings for commandline mode

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <[email protected]>

* Fix trailing whitespace/newline logic

* Update scripts/jinja/jinja-tester.py

Co-authored-by: Sigbjørn Skjæret <[email protected]>

* Update scripts/jinja/jinja-tester.py

Co-authored-by: Sigbjørn Skjæret <[email protected]>

---------

Co-authored-by: Sigbjørn Skjæret <[email protected]>