modules/components/pages/processors/ollama_chat.adoc (+23 lines: 23 additions, 0 deletions)
@@ -35,6 +35,7 @@ ollama_chat:
     max_tokens: 0 # No default (optional)
     temperature: 0 # No default (optional)
     save_prompt_metadata: false
+    history: "" # No default (optional)
     tools: [] # No default (required)
     runner:
       context_size: 0 # No default (optional)
@@ -67,6 +68,7 @@ ollama_chat:
     frequency_penalty: 0 # No default (optional)
     stop: [] # No default (optional)
     save_prompt_metadata: false
+    history: "" # No default (optional)
     max_tool_calls: 3
     tools: [] # No default (required)
     runner:
@@ -247,6 +249,26 @@ Set to `true` to save the prompt value to a metadata field (`@prompt`) on the co
 
 *Default*: `false`
 
+=== `history`
+
+Include historical messages in a chat request. You must use a Bloblang query to create an array of objects in the form `[{"role": "", "content": ""}]`, where:
+
+- `role` is the sender of the original message: `system`, `user`, `assistant`, or `tool`.
+- `content` is the text of the original message.
+
+*Type*: `string`
+
+*Default*: `""`
+
+```yml
+# Examples
+
+history: [{"role": "user", "content": "My favorite color is blue"}, {"role": "assistant", "content": "Nice"}]
+```
+
+If the `prompt` is set to `"What is my favorite color?"`, the specified `model` responds with `blue`.
+
 === `max_tool_calls`
 
 The maximum number of sequential calls you can make to external tools to retrieve additional information to answer a prompt.
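To show how the new field slots into a full processor config, here is a minimal sketch, not taken from the diff: the model name is illustrative, and only `prompt` and `history` are confirmed by the documentation above.

```yml
# Sketch (assumption, not part of the diff): an ollama_chat processor
# seeded with a literal two-turn history. The model name is illustrative.
pipeline:
  processors:
    - ollama_chat:
        model: llama3.1
        prompt: "What is my favorite color?"
        history: '[{"role": "user", "content": "My favorite color is blue"}, {"role": "assistant", "content": "Nice"}]'
```

With that history supplied, the processor has the context it needs to answer the follow-up question, as the field documentation describes.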
@@ -449,6 +471,7 @@ output:
 ```
 --
+
 Use a series of processors to make calls to external tools::
@@ -156,6 +158,28 @@ The system prompt to submit along with the user prompt. This field supports xref
 
 *Type*: `string`
 
+=== `history`
+
+Include messages from a prior conversation. You must use a Bloblang query to create an array of objects in the form `[{"role": "user", "content": "<text>"}, {"role": "assistant", "content": "<text>"}]`, where:
+
+- `role` is the sender of the original message: `system`, `user`, or `assistant`.
+- `content` is the text of the original message.
+
+For more information, see <<Examples, Examples>>.
+
+*Type*: `string`
+
+*Default*: `""`
+
+```yml
+# Examples
+
+history: [{"role": "user", "content": "My favorite color is blue"}, {"role": "assistant", "content": "Nice"}]
+```
+
+If the `prompt` is set to `"What is my favorite color?"`, the specified `model` responds with `blue`.
+
 === `image`
 
 An optional image to submit along with the prompt. The result of the Bloblang mapping must be a byte array.
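Because `history` expects a Bloblang-built array, one way to assemble it dynamically is with a `mapping` processor earlier in the pipeline. The following is a sketch under assumptions: the `chat_history` metadata field name is hypothetical, not something the diff defines.

```yml
# Sketch (assumption, not from the diff): append the current user message
# to a running history kept in a hypothetical `chat_history` metadata field.
pipeline:
  processors:
    - mapping: |
        # Parse any existing history (fall back to an empty array),
        # then append the incoming message as a "user" turn.
        meta chat_history = meta("chat_history").parse_json().catch([]).append({
          "role": "user",
          "content": content().string()
        })
```

A later chat processor could then reference this metadata field when building its `history` value.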
@@ -660,6 +684,58 @@ output:
       codec: lines
 ```
 --
+
+Generate chat history::
++
+--
+In this configuration, a pipeline executes a number of processors, including a cache, to generate and send chat history to a GPT-4o model.
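The diff cuts off before the example configuration itself. As a sketch only, under assumptions, one plausible shape for the cache step it describes is below; the resource and key names are hypothetical, not recovered from the diff.

```yml
# Sketch (assumption): fetch prior turns for a session from a cache
# resource before the chat processor runs. All names are illustrative.
pipeline:
  processors:
    - cache:
        resource: chat_history    # hypothetical cache resource
        operator: get
        key: '${! meta("session_id") }'
```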