Commit 4f5a00f

DOC-1241 Add history fields to chat completion processors (#225)
1 parent 6a7942d commit 4f5a00f

2 files changed: +99, -0 lines

modules/components/pages/processors/ollama_chat.adoc

Lines changed: 23 additions & 0 deletions
@@ -35,6 +35,7 @@ ollama_chat:
   max_tokens: 0 # No default (optional)
   temperature: 0 # No default (optional)
   save_prompt_metadata: false
+  history: "" # No default (optional)
   tools: [] # No default (required)
   runner:
     context_size: 0 # No default (optional)
@@ -67,6 +68,7 @@ ollama_chat:
   frequency_penalty: 0 # No default (optional)
   stop: [] # No default (optional)
   save_prompt_metadata: false
+  history: "" # No default (optional)
   max_tool_calls: 3
   tools: [] # No default (required)
   runner:
@@ -247,6 +249,26 @@ Set to `true` to save the prompt value to a metadata field (`@prompt`) on the co
 *Default*: `false`
 
 
+=== `history`
+
+Include historical messages in a chat request. You must use a Bloblang query to create an array of objects in the form of `[{"role": "", "content": ""}]` where:
+
+- `role` is the sender of the original messages, either `system`, `user`, `assistant`, or `tool`.
+- `content` is the text of the original messages.
+
+*Type*: `string`
+
+*Default*: `""`
+
+```yml
+# Examples
+
+history: [{"role": "user", "content": "My favorite color is blue"}, {"role": "assistant", "content": "Nice"}]
+
+```
+If the `prompt` is set to `"What is my favorite color?"`, the specified `model` responds with `blue`.
+
+
 === `max_tool_calls`
 
 The maximum number of sequential calls you can make to external tools to retrieve additional information to answer a prompt.
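
The example above sets `history` to a literal array; in practice the field usually holds a Bloblang mapping computed from structured input. A minimal sketch, assuming a hypothetical input document with a `question` string and a `thread` array carrying `sender` and `text` keys (those field names and the `llama3.1` model are illustrative, not part of the processor's contract):

```yml
ollama_chat:
  model: llama3.1
  prompt: "${!this.question}"
  # thread, sender, and text are assumed input fields; reshape each prior
  # message into the {"role", "content"} form the history field expects.
  history: 'root = this.thread.map_each(m -> {"role": m.sender, "content": m.text})'
```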
@@ -449,6 +471,7 @@ output:
 ```
 --
 
+
 Use a series of processors to make calls to external tools::
 +
 --

modules/components/pages/processors/openai_chat_completion.adoc

Lines changed: 76 additions & 0 deletions
@@ -31,6 +31,7 @@ openai_chat_completion:
   model: gpt-4o # No default (required)
   prompt: "" # No default (optional)
   system_prompt: "" # No default (optional)
+  history: "" # No default (optional)
   image: 'root = this.image.decode("base64") # decode base64 encoded image' # No default (optional)
   max_tokens: 0 # No default (optional)
   temperature: 0 # No default (optional)
@@ -56,6 +57,7 @@ openai_chat_completion:
   model: gpt-4o # No default (required)
   prompt: "" # No default (optional)
   system_prompt: "" # No default (optional)
+  history: "" # No default (optional)
   image: 'root = this.image.decode("base64") # decode base64 encoded image' # No default (optional)
   max_tokens: 0 # No default (optional)
   temperature: 0 # No default (optional)
@@ -156,6 +158,28 @@ The system prompt to submit along with the user prompt. This field supports xref
 
 *Type*: `string`
 
+=== `history`
+
+Include messages from a prior conversation. You must use a Bloblang query to create an array of objects in the form of `[{"role": "user", "content": "<text>"}, {"role": "assistant", "content": "<text>"}]` where:
+
+- `role` is the sender of the original messages, either `system`, `user`, or `assistant`.
+- `content` is the text of the original messages.
+
+For more information, see <<Examples, Examples>>.
+
+*Type*: `string`
+
+*Default*: `""`
+
+```yml
+# Examples
+
+history: [{"role": "user", "content": "My favorite color is blue"}, {"role": "assistant", "content": "Nice"}]
+
+```
+If the `prompt` is set to `"What is my favorite color?"`, the specified `model` responds with `blue`.
+
+
 === `image`
 
 An optional image to submit along with the prompt. The result of the Bloblang mapping must be a byte array.
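
As with the Ollama processor, the history value is typically computed rather than written out literally. One hedged sketch, assuming the incoming message already carries a `prior_messages` array of `{"role", "content"}` objects, which prepends a fixed system message using `concat` (the same array method the commit's full example relies on):

```yml
openai_chat_completion:
  model: gpt-4o
  api_key: "${OPENAI_API_KEY}"
  prompt: "${!this.question}"
  # prior_messages and question are hypothetical input fields; the system
  # message is sent ahead of the replayed conversation turns.
  history: 'root = [{"role": "system", "content": "Answer briefly."}].concat(this.prior_messages)'
```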
@@ -660,6 +684,58 @@ output:
     codec: lines
 ```
 
+--
+Generate chat history::
++
+--
+In this configuration, a pipeline executes a number of processors, including a cache, to generate and send chat history to a GPT-4o model.
+
+```yaml
+input:
+  stdin:
+    scanner:
+      lines: {}
+pipeline:
+  processors:
+    - mapping: |
+        root.prompt = content().string()
+    - branch:
+        processors:
+          - cache:
+              resource: mem
+              operator: get
+              key: history
+          - catch:
+              - mapping: 'root = []'
+        result_map: 'root.history = this'
+    - branch:
+        processors:
+          - openai_chat_completion:
+              model: gpt-4o
+              api_key: "${OPENAI_API_KEY}"
+              prompt: "${!this.prompt}"
+              history: 'root = this.history'
+        result_map: 'root.response = content().string()'
+    - mutation: |
+        root.history = this.history.concat([
+          {"role": "user", "content": this.prompt},
+          {"role": "assistant", "content": this.response},
+        ])
+    - cache:
+        resource: mem
+        operator: set
+        key: history
+        value: '${!this.history}'
+    - mapping: |
+        root = this.response
+output:
+  stdout:
+    codec: lines
+
+cache_resources:
+  - label: mem
+    memory: {}
+```
 --
 
 Make calls to external tools::
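
A note on the chat-history example above: the `memory` cache is process-local, so the accumulated conversation is lost on restart, and every stdin line shares the single `history` key. A hedged variation for persistence swaps only the cache resource for Redis (the `url` value is an assumption for local testing; the rest of the pipeline is unchanged):

```yaml
cache_resources:
  - label: mem
    redis:
      url: redis://localhost:6379
```

To try the example locally, save the configuration to a file, export `OPENAI_API_KEY`, and run it with the Redpanda Connect CLI, for example `rpk connect run config.yaml`.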
