
Commit e03901d

committed
Benchmark template rendering times
A concern is that rendering the template this way could be much slower than plain string concatenation, even assuming Jinja2 makes extensive use of caching. We thus write a comparative benchmark between rendering Jinja2 templates and simply concatenating strings.
1 parent cf21ce6 commit e03901d
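The comparison described in the commit message can be sketched with stdlib tools alone; here `string.Template` stands in for a Jinja2 template (an assumption for portability, since Jinja2 itself is not exercised), and the absolute timings are illustrative only:

```python
import timeit
from string import Template  # stdlib stand-in for a Jinja2 template (assumption)

# Template-based rendering: substitute variables into a template object.
tpl = Template("${var0}${var1} test")

def render_template(var0: str, var1: str) -> str:
    return tpl.substitute(var0=var0, var1=var1)

# Plain string concatenation: the baseline the commit benchmarks against.
def render_concat(var0: str, var1: str) -> str:
    return var0 + f"{var1} test"

# Both strategies must produce identical prompts.
assert render_template("A", "B") == render_concat("A", "B") == "AB test"

t_tpl = timeit.timeit(lambda: render_template("A", "B"), number=10_000)
t_cat = timeit.timeit(lambda: render_concat("A", "B"), number=10_000)
print(f"template: {t_tpl:.4f}s  concat: {t_cat:.4f}s")
```

The commit's actual benchmark uses pytest-benchmark against the `prompts` library instead; this sketch only illustrates the question being asked.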

3 files changed: 48 additions, 1 deletion
docs/reference/template.md (5 additions, 0 deletions)

@@ -25,6 +25,11 @@ allows to easily compose complex prompts.
 Prompt functions are opinionated when it comes to prompt rendering. These opinions are meant to avoid common prompting errors, but can have unintended consequences if you are doing something unusual. We advise to always print the prompt before using it. You can also [read the
 reference](#formatting-conventions) section if you want to know more.
 
+
+!!! note "Performance"
+
+    Prompt templates introduce some overhead compared to standard Python functions, although the rendering time is still very reasonable. In the unlikely scenario where template rendering is a bottleneck, you can replace templates with functions that use standard string manipulation.
+
 ## Your first prompt
 
 The following snippet showcases a very simple prompt. The variables between
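The performance note added to the docs suggests falling back to standard string manipulation when rendering speed matters. A minimal sketch of that replacement, using a hypothetical two-variable prompt of my own invention (not from the repository):

```python
# Hypothetical template "{{context}}\n\nQuestion: {{query}}" rewritten as a
# standard Python function using an f-string, per the note's advice.
def render_prompt(query: str, context: str) -> str:
    return f"{context}\n\nQuestion: {query}"

print(render_prompt("Why benchmark?", "Some context."))
```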

pyproject.toml (1 addition, 1 deletion)

@@ -20,7 +20,7 @@ packages = ["prompts"]
 write_to = "prompts/_version.py"
 
 [project.optional-dependencies]
-test = ["pre-commit", "pytest"]
+test = ["pre-commit", "pytest", "pytest-benchmark"]
 docs = [
     "mkdocs",
     "mkdocs-material",

tests/test_templates.py (42 additions, 0 deletions)

@@ -1,3 +1,6 @@
+import random
+import string
+
 import pytest
 
 import prompts

@@ -192,3 +195,42 @@ def simple_prompt_name(query: str):
     assert simple_prompt("test") == "test"
     assert simple_prompt["gpt2"]("test") == "test"
     assert simple_prompt["provider/name"]("test") == "name: test"
+
+
+def test_benchmark_template_render(benchmark):
+
+    @prompts.template
+    def test_tpl(var0, var1):
+        prompt = var0
+        return prompt + """{{var1}} test"""
+
+    def setup():
+        """We generate random strings to make sure we don't hit any potential cache."""
+        length = 10
+        var0 = "".join(
+            random.choice(string.ascii_uppercase + string.digits) for _ in range(length)
+        )
+        var1 = "".join(
+            random.choice(string.ascii_uppercase + string.digits) for _ in range(length)
+        )
+        return (var0, var1), {}
+
+    benchmark.pedantic(test_tpl, setup=setup, rounds=500)
+
+
+def test_benchmark_template_function(benchmark):
+
+    def test_tpl(var0, var1):
+        return var0 + f"{var1} test"
+
+    def setup():
+        length = 10
+        var0 = "".join(
+            random.choice(string.ascii_uppercase + string.digits) for _ in range(length)
+        )
+        var1 = "".join(
+            random.choice(string.ascii_uppercase + string.digits) for _ in range(length)
+        )
+        return (var0, var1), {}
+
+    benchmark.pedantic(test_tpl, setup=setup, rounds=500)
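For readers unfamiliar with pytest-benchmark, `benchmark.pedantic` calls `setup` before each round to produce the `(args, kwargs)` pair the target is invoked with, keeping setup time out of the measurement. A rough stand-in illustrating that contract (an assumption about the observable behavior, not pytest-benchmark's actual implementation):

```python
import random
import string
import time

def run_pedantic(func, setup, rounds):
    """Rough stand-in for benchmark.pedantic: time `rounds` calls of `func`,
    rebuilding its arguments with `setup` each round (setup time excluded)."""
    timings = []
    for _ in range(rounds):
        args, kwargs = setup()  # fresh random arguments, defeating caches
        start = time.perf_counter()
        func(*args, **kwargs)
        timings.append(time.perf_counter() - start)
    return min(timings)

def setup():
    # Mirrors the tests above: two random 10-character strings per round.
    length = 10
    make = lambda: "".join(
        random.choice(string.ascii_uppercase + string.digits) for _ in range(length)
    )
    return (make(), make()), {}

best = run_pedantic(lambda var0, var1: var0 + f"{var1} test", setup, rounds=50)
print(f"best round: {best:.2e}s")
```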
