The current release of AI Server supports a number of different modalities, including:

- **Trim Video** - Trim a video to a specific length

### Managed File Storage

- Blob Storage - isolated and restricted by API Key

## AI Server API Examples

To simplify integrations with AI Server, each API Request can be called with 3 different call styles to better support different use-cases and integration patterns.

### Synchronous Open AI Chat Example

The **Synchronous API** is the simplest API, ideal for small workloads where the Response is returned in the same Request:

```csharp
// Configure the typed client with AI Server's Base URL and API Key
var client = new JsonApiClient(baseUrl);
client.BearerToken = apiKey;

var api = await client.ApiAsync(new OpenAiChatCompletion {
    Model = "mixtral:8x22b",
    Messages = [
        new() {
            Role = "user",
            Content = "What's the capital of France?"
        }
    ],
    MaxTokens = 50
});

var answer = api.Response.Choices[0].Message.Content;
```
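
Before using the Response, the returned `ApiResult` can also be checked for errors. A minimal sketch, assuming the standard `Succeeded` and `Error` properties of ServiceStack's `ApiResult`:

```csharp
// Fail fast if the chat request didn't succeed
if (!api.Succeeded)
    throw new Exception(api.Error?.Message ?? "OpenAiChatCompletion failed");

// Otherwise the typed Response is ready to use
Console.WriteLine(api.Response.Choices[0].Message.Content);
```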

### Queued Open AI Chat Example

The **Queued API** immediately returns a reference to the queued background job executing the AI Request:

```csharp
var api = await client.ApiAsync(new QueueOpenAiChatCompletion
{
    Request = new()
    {
        Model = "gpt-4-turbo",
        Messages = new List<OpenAiMessage>
        {
            new() { Role = "system", Content = "You are a helpful AI assistant." },
            new() { Role = "user", Content = "How do LLMs work?" }
        },
        MaxTokens = 50
    }
});

// Capture references for checking the job's status and fetching its result
var uniqueRefId = api.Response.RefId;
var jobId = api.Response.Id;
```

These can be used to poll for the API Response of any Job by calling `GetJobStatus`, then once its state is `Completed` call `GetOpenAiChat` to get the `OpenAiChatResponse`:

```csharp
var apiStatus = await client.ApiAsync(new GetJobStatus
{
    JobId = jobId,
    RefId = uniqueRefId,
});

if (apiStatus.Response.JobState == BackgroundJobState.Completed)
{
    var api = await client.ApiAsync(new GetOpenAiChat
    {
        Id = jobId,
        RefId = uniqueRefId,
    });
}
```
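
Since the job may still be queued or running on the first check, a simple polling loop is the typical pattern. A minimal sketch, assuming the `Failed` and `Cancelled` states of `BackgroundJobState`; the 1 second interval and 30 attempt cap are illustrative choices, not part of the API:

```csharp
// Poll the job's status until it completes, fails, or we give up
for (var attempt = 0; attempt < 30; attempt++)
{
    var status = await client.ApiAsync(new GetJobStatus { JobId = jobId, RefId = uniqueRefId });
    var state = status.Response?.JobState;

    if (state == BackgroundJobState.Completed)
    {
        // The completed job's OpenAiChatResponse can now be fetched
        var chat = await client.ApiAsync(new GetOpenAiChat { Id = jobId, RefId = uniqueRefId });
        break;
    }
    if (state is BackgroundJobState.Failed or BackgroundJobState.Cancelled)
        throw new Exception($"Job {jobId} ended in state {state}");

    await Task.Delay(1000); // wait before checking again
}
```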

### Open AI Chat with Callback Example

The Queued API also accepts a **Reply to Web Callback** for a more reliable push-based App integration where responses are posted back to a custom URL Endpoint:

```csharp
var api = await client.ApiAsync(new QueueOpenAiChatCompletion
{
    //...
    ReplyTo = "https://example.org/api/OpenAiChatResponseCallback?MyId=1"
});
```

Your callback can include any additional metadata to assist your App in correlating the response with the initiating request. It just needs to contain the properties of the `OpenAiChatResponse` you're interested in, along with any metadata added to the callback URL, e.g:

```csharp
public class OpenAiChatResponseCallback : OpenAiChatResponse, IReturnVoid
{
    public int MyId { get; set; }
}

public void Any(OpenAiChatResponseCallback request)
{
    // Handle OpenAiChatResponse callback
}
```
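
For context, this handler would live in an ordinary ServiceStack Service exposed at the callback URL. A minimal sketch, where the `AiCallbackService` class name and correlation logic are illustrative:

```csharp
// Hypothetical Service hosting the callback endpoint at /api/OpenAiChatResponseCallback
public class AiCallbackService : Service
{
    public void Any(OpenAiChatResponseCallback request)
    {
        // MyId was appended to the ReplyTo URL to correlate with the initiating request
        var answer = request.Choices?[0].Message.Content;
        // ... store or process the answer, keyed by request.MyId
    }
}
```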

## Feedback

Feel free to reach us at [ServiceStack/Discuss](https://github.com/ServiceStack/Discuss/discussions)
