
Commit e7e5a9d

Update docs
1 parent d45a397 commit e7e5a9d

1 file changed

docs/guides/ORCHESTRATION_CHAT_COMPLETION.md
Lines changed: 13 additions & 17 deletions
@@ -20,7 +20,7 @@
 
 This guide provides examples of how to use the Orchestration service in SAP AI Core for chat completion tasks using the SAP AI SDK for Java.
 
-## Prerequisites
+# Prerequisites
 
 Before using the AI Core module, ensure that you have met all the general requirements outlined in the [README.md](../../README.md#general-requirements).
 Additionally, include the necessary Maven dependency in your project.
@@ -86,7 +86,7 @@ var config = new OrchestrationModuleConfig()
 
 Please also refer to [our sample code](../../sample-code/spring-app/src/main/java/com/sap/ai/sdk/app/controllers/OrchestrationController.java) for this and all following code examples.
 
-### Chat Completion
+## Chat Completion
 
 Use the Orchestration service to generate a response to a user message:
 
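The code for this example is unchanged and therefore collapsed in the diff. As a rough orientation, a minimal call could look like the sketch below; the `GPT_4O` constant and the `withLlmConfig(...)` setter are assumptions, while `OrchestrationClient`, `OrchestrationPrompt`, `chatCompletion(...)`, and `getContent()` appear in the surrounding diff context.

```java
// Minimal sketch of a plain chat completion call.
// The model constant GPT_4O and withLlmConfig(...) are assumptions used for illustration.
var client = new OrchestrationClient();
var config = new OrchestrationModuleConfig()
    .withLlmConfig(OrchestrationAiModel.GPT_4O);

var prompt = new OrchestrationPrompt("Hello world! Why is this phrase so famous?");
var result = client.chatCompletion(prompt, config);

String messageResult = result.getContent(); // content of the first choice of the LLM response
```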
@@ -101,7 +101,7 @@ String messageResult = result.getContent();
 In this example, the Orchestration service generates a response to the user message "Hello world! Why is this phrase so famous?".
 The LLM response is available as the first choice under the `result.getOrchestrationResult()` object.
 
-### Chat completion with Templates
+## Chat completion with Templates
 
 Use a prepared template and execute requests by passing only the input parameters:
 
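The template example itself is not part of this diff. A hedged sketch of such a call is shown below; `Message.user(...)`, `TemplateConfig`, `withTemplateConfig(...)`, and the map-based `OrchestrationPrompt` constructor are assumed names, while `chatCompletion(prompt, configWithTemplate)` is taken from the diff context.

```java
// Hedged sketch: TemplateConfig, withTemplateConfig(...) and Message.user(...) are assumed names.
var template = Message.user("Reply with 'The orchestration service is working!' in {{?language}}");
var configWithTemplate =
    config.withTemplateConfig(TemplateConfig.create().withTemplate(List.of(template)));

var inputParams = Map.of("language", "German");   // fills the {{?language}} placeholder
var prompt = new OrchestrationPrompt(inputParams); // assumed constructor taking only input parameters
var result = client.chatCompletion(prompt, configWithTemplate);
```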
@@ -118,7 +118,7 @@ var result = client.chatCompletion(prompt, configWithTemplate);
 
 In this case, the template is defined with the placeholder `{{?language}}`, which is replaced by the value `German` in the input parameters.
 
-### Message history
+## Message history
 
 Include a message history to maintain context in the conversation:
 
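The history example is collapsed in the diff apart from its last two lines. A sketch of a follow-up question that carries the previous turns along might look as follows; the `Message.user(...)` and `Message.assistant(...)` factories are assumptions, while `new OrchestrationPrompt(message).messageHistory(messagesHistory)` comes from the diff context.

```java
// Sketch of a follow-up question with prior conversation turns.
// Message.user(...) / Message.assistant(...) are assumed factory methods.
var messagesHistory = List.of(
    Message.user("What is the capital of France?"),
    Message.assistant("The capital of France is Paris."));
var message = Message.user("What is the typical food there?");

var prompt = new OrchestrationPrompt(message).messageHistory(messagesHistory);
var result = new OrchestrationClient().chatCompletion(prompt, config);
```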
@@ -135,7 +135,7 @@ var prompt = new OrchestrationPrompt(message).messageHistory(messagesHistory);
 var result = new OrchestrationClient().chatCompletion(prompt, config);
 ```
 
-### Chat completion filter
+## Chat completion filter
 
 Apply content filtering to the chat completion:
 
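Only the tail of the filter example appears in the diff. A hedged sketch of a strict filter configuration is given below; `AzureContentFilter` and `AzureFilterThreshold` are assumed names, while `withInputFiltering(...)` and `withOutputFiltering(...)` are visible in the diff context.

```java
// Hedged sketch: AzureContentFilter and AzureFilterThreshold are assumed names.
var filterStrict = new AzureContentFilter()
    .hate(AzureFilterThreshold.ALLOW_SAFE)
    .selfHarm(AzureFilterThreshold.ALLOW_SAFE)
    .sexual(AzureFilterThreshold.ALLOW_SAFE)
    .violence(AzureFilterThreshold.ALLOW_SAFE);

var configWithFilter = config.withInputFiltering(filterStrict).withOutputFiltering(filterStrict);
var result = new OrchestrationClient().chatCompletion(prompt, configWithFilter);
```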
@@ -166,7 +166,7 @@ var configWithFilter = config.withInputFiltering(filterStrict).withOutputFilteri
 var result =
     new OrchestrationClient().chatCompletion(prompt, configWithFilter);
 ```
-#### Behavior of Input and Output Filters
+### Behavior of Input and Output Filters
 
 - **Input Filter**:
   If the input message violates the filter policy, a `400 (Bad Request)` response will be received during the `chatCompletion` call.
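As a rough illustration of the input-filter behavior described above, a caller might guard the request as in the sketch below. The exception type `OrchestrationClientException` is an assumption; the diff only states that a 400 (Bad Request) is returned when the input filter triggers.

```java
// Hedged sketch: the concrete exception type is an assumption, not confirmed by this diff.
try {
  var result = new OrchestrationClient().chatCompletion(prompt, configWithFilter);
  System.out.println(result.getContent());
} catch (OrchestrationClientException e) {
  // Reached when the input message violates the filter policy (HTTP 400 Bad Request).
  System.err.println("Request was blocked by the input filter: " + e.getMessage());
}
```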
@@ -179,7 +179,7 @@ var result =
 
 You will find [some examples](../../sample-code/spring-app/src/main/java/com/sap/ai/sdk/app/controllers/OrchestrationController.java) in our Spring Boot application demonstrating response handling with filters.
 
-### Data masking
+## Data masking
 
 Use the data masking module to anonymize personal information in the input:
 
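The masking example itself lies outside this diff. A hedged sketch of an anonymizing configuration is shown below; `DpiMasking`, `DPIEntities`, and `withMaskingConfig(...)` are assumed names used for illustration.

```java
// Hedged sketch: DpiMasking, DPIEntities and withMaskingConfig(...) are assumed names.
var maskingConfig = DpiMasking.anonymization().withEntities(DPIEntities.PERSON, DPIEntities.EMAIL);
var configWithMasking = config.withMaskingConfig(maskingConfig);

var prompt = new OrchestrationPrompt("Please write a short note to John Doe (john.doe@example.com).");
var result = new OrchestrationClient().chatCompletion(prompt, configWithMasking);
// Personal information is replaced by placeholders before the LLM call and stays masked in the output.
```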
@@ -202,7 +202,7 @@ var result =
 
 In this example, the input will be masked before the call to the LLM and will remain masked in the output.
 
-### Grounding
+## Grounding
 
 Use the grounding module to provide additional context to the AI model.
 
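The grounding configuration is not included in this diff. Purely as an illustration of the idea, a request could look roughly like the sketch below; `Grounding.create()`, `createGroundingPrompt(...)`, and `withGrounding(...)` are hypothetical names and the real API may differ.

```java
// Hypothetical sketch only: Grounding.create(), createGroundingPrompt(...) and
// withGrounding(...) are assumed names; see the guide's full example for the actual API.
var grounding = Grounding.create();
var prompt = grounding.createGroundingPrompt("What does the knowledge base say about SAP AI Core?");
var configWithGrounding = config.withGrounding(grounding); // supplies the grounding input variables
var result = new OrchestrationClient().chatCompletion(prompt, configWithGrounding);
```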
@@ -233,11 +233,11 @@ Use the grounding module to provide additional context to the AI model.
 
 In this example, the AI model is provided with additional context in the form of grounding information. Note that it is necessary to provide the grounding input via one or more input variables.
 
-### Stream chat completion
+## Stream chat completion
 
 It's possible to pass a stream of chat completion delta elements, e.g. from the application backend to the frontend in real-time.
 
-#### Asynchronous Streaming
+### Asynchronous Streaming
 
 This is a blocking example for streaming and printing directly to the console:
 
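The body of the streaming example is collapsed except for its closing lines. A sketch of the blocking console loop it describes might look as follows; only `streamChatCompletion(prompt, config)` and the try-with-resources shape are confirmed by the diff, the surrounding client and prompt setup is assumed.

```java
// Sketch of the blocking streaming loop; client and prompt setup are assumptions.
var client = new OrchestrationClient();
var prompt = new OrchestrationPrompt("Can you give me the first 100 numbers of the Fibonacci sequence?");

try (Stream<String> stream = client.streamChatCompletion(prompt, config)) {
  stream.forEach(deltaString -> {
    System.out.print(deltaString); // print each delta chunk as soon as it arrives
    System.out.flush();
  });
}
```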
@@ -254,12 +254,10 @@ try (Stream<String> stream = client.streamChatCompletion(prompt, config)) {
 }
 ```
 
-#### Spring Boot example
-
 Please find [an example in our Spring Boot application](../../sample-code/spring-app/src/main/java/com/sap/ai/sdk/app/controllers/OrchestrationController.java).
 It shows the usage of Spring Boot's `ResponseBodyEmitter` to stream the chat completion delta messages to the frontend in real-time.
 
-### Set model parameters
+## Set model parameters
 
 Change your LLM configuration to add model parameters:
 
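Only the tail of the model-parameter example appears in this diff. A hedged sketch of the full configuration is shown below; the `GPT_4O` constant and the `withParams(...)` helper are assumptions, while `OrchestrationAiModel customGPT4O =` and `.withVersion("2024-05-13")` are taken from the diff context.

```java
// Hedged sketch: GPT_4O and withParams(...) are assumptions;
// .withVersion("2024-05-13") is visible in the diff above.
OrchestrationAiModel customGPT4O =
    OrchestrationAiModel.GPT_4O
        .withParams(Map.of("max_tokens", 50, "temperature", 0.1))
        .withVersion("2024-05-13");

var configWithCustomModel = config.withLlmConfig(customGPT4O);
```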
@@ -273,7 +271,7 @@ OrchestrationAiModel customGPT4O =
         .withVersion("2024-05-13");
 ```
 
-### Spring AI Integration
+## Spring AI Integration
 
 The Orchestration client is integrated into Spring AI classes:
 
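The setup lines of the Spring AI example are outside this diff. As a sketch, the integration might be wired roughly as below; the class names `OrchestrationChatModel` and `OrchestrationChatOptions` are assumptions, while the `Prompt` and `ChatResponse` lines come from the diff context and Spring AI's own `ChatModel` contract.

```java
// Hedged sketch: OrchestrationChatModel and OrchestrationChatOptions are assumed names
// for the Spring AI bridge; Prompt, ChatResponse and call(...) belong to Spring AI itself.
ChatModel client = new OrchestrationChatModel();
OrchestrationChatOptions opts = new OrchestrationChatOptions(config);

Prompt prompt = new Prompt("What is the capital of France?", opts);
ChatResponse response = client.call(prompt);
```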
@@ -287,11 +285,9 @@ Prompt prompt = new Prompt("What is the capital of France?", opts);
 ChatResponse response = client.call(prompt);
 ```
 
-#### Spring Boot example
-
 Please find [an example in our Spring AI application](../../sample-code/spring-ai-app/src/main/java/com/sap/ai/sdk/app/controllers/OrchestrationController.java).
 
-### Using a Configuration from AI Launchpad
+## Using a Configuration from AI Launchpad
 
 If you have created a configuration in AI Launchpad, you can copy or download it as JSON and use it directly in your code:
 
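The code for this section is not included in the diff. As a purely illustrative sketch, the downloaded JSON could be loaded from the project resources and handed to the client roughly as below; the file name and the method `executeRequestFromJsonModuleConfig(...)` are assumptions and may differ from the actual API.

```java
// Hypothetical sketch: file name and executeRequestFromJsonModuleConfig(...) are assumptions.
// Handle the IOException from Files.readString(...) as appropriate in your application.
String configJson = Files.readString(Path.of("src/main/resources/orchestrationConfig.json"));

var prompt = new OrchestrationPrompt("Hello world! Why is this phrase so famous?");
var result = new OrchestrationClient().executeRequestFromJsonModuleConfig(prompt, configJson);
```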