Commit 42eefbc

Merge pull request #286352 from kevinguo-ed/kevin/open-ai

Added group chat tutorial with Open AI

4 files changed: +102 −0 lines changed
articles/azure-signalr/TOC.yml

Lines changed: 2 additions & 0 deletions

```diff
@@ -31,6 +31,8 @@
     href: signalr-quickstart-rest-api.md
 - name: Tutorials
   items:
+  - name: Build a group chat app with OpenAI
+    href: signalr-tutorial-group-chat-with-openai.md
   - name: Build a serverless real-time app with authentication
     href: signalr-tutorial-authenticate-azure-functions.md
   - name: Build a Blazor Server chat app
```
Two image files changed (129 KB and 42.2 KB).
Lines changed: 100 additions & 0 deletions

@@ -0,0 +1,100 @@
---
title: Build an AI-powered group chat with Azure SignalR and OpenAI Completion API
author: kevinguo-ed
description: A tutorial explaining how Azure SignalR and OpenAI Completion API are used together to build an AI-powered group chat
ms.author: kevinguo
ms.topic: tutorial
ms.date: 09/09/2024
uid: tutorials/ai-powered-group-chat
---
# Build an AI-powered group chat with Azure SignalR and OpenAI Completion API

Integrating AI into applications is rapidly becoming a must-have for developers who want to help their users be more creative, be more productive, and achieve their health goals. AI-powered features, such as intelligent chatbots, personalized recommendations, and contextual responses, add significant value to modern apps. However, most of the AI-powered apps that have emerged since ChatGPT captured our imagination involve a single user talking to a single AI assistant. As developers grow more comfortable with the capabilities of AI, they're exploring AI-powered apps in a team context, asking: "What value can AI add to a team of collaborators?"

This tutorial guides you through building a real-time group chat application. Alongside the human collaborators in a chat, there's an AI assistant that has access to the chat history and can be invited to help by any collaborator who starts a message with `@gpt`. The finished app looks like this:

:::image type="content" source="./media/signalr-tutorial-group-chat-with-openai/ai-powered-group-chat.png" alt-text="Screenshot of user interface for the AI-powered group chat.":::

We use OpenAI to generate intelligent, context-aware responses and SignalR to deliver those responses to users in a group. You can find the complete code [in this repo](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/AIStreaming).
## Dependencies

You can use either Azure OpenAI or OpenAI for this project. Make sure to update the `endpoint` and `key` in `appsettings.json`. `OpenAIExtensions` reads the configuration when the app starts, and these values are required to authenticate with and use either service.
# [OpenAI](#tab/open-ai)

To build this application, you need the following:

* ASP.NET Core: To create the web application and host the SignalR hub
* [SignalR](https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client): For real-time communication between clients and the server
* [Azure SignalR](./signalr-overview.md): For managing SignalR connections at scale
* [OpenAI client](https://www.nuget.org/packages/OpenAI/2.0.0-beta.10): To interact with OpenAI's API for generating AI responses

# [Azure OpenAI](#tab/azure-open-ai)

To build this application, you need the following:

* ASP.NET Core: To create the web application and host the SignalR hub
* [SignalR](https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client): For real-time communication between clients and the server
* [Azure SignalR](./signalr-overview.md): For managing SignalR connections at scale
* [Azure OpenAI client](https://www.nuget.org/packages/Azure.AI.OpenAI/2.0.0-beta.3): To interact with Azure OpenAI Service for generating AI responses

---
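The exact configuration schema is defined by the sample's `OpenAIExtensions`, so treat the section and key names below (`OpenAI`, `Endpoint`, `Key`, `Model`) as an illustrative assumption rather than the sample's actual shape. A minimal `appsettings.json` sketch might look like:

```json
{
  // Hypothetical section and key names; check the sample's OpenAIExtensions
  // for the names it actually binds to.
  "OpenAI": {
    "Endpoint": "https://<your-resource>.openai.azure.com",
    "Key": "<your-api-key>",
    "Model": "gpt-4o"
  }
}
```

Note that ASP.NET Core's JSON configuration provider tolerates comments in `appsettings.json`, so the annotations above are safe to keep or drop.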
## Implementation

In this section, we walk through the key parts of the code that integrate SignalR with OpenAI to create an AI-enhanced group chat experience.

### Data flow

:::image type="content" source="./media/signalr-tutorial-group-chat-with-openai/sequence-diagram-ai-powered-group-chat.png" alt-text="Sequence diagram of the AI-powered group chat.":::
### SignalR hub integration

The `GroupChatHub` class manages user connections, message broadcasting, and AI interactions. When a user sends a message starting with `@gpt`, the hub forwards it to OpenAI, which generates a response. The AI's response is streamed back to the group in real time.

```csharp
var chatClient = _openAI.GetChatClient(_options.Model);
await foreach (var completion in chatClient.CompleteChatStreamingAsync(messagesIncludeHistory))
{
    // ...
    // Buffer the AI's response and send it to the group in chunks
    await Clients.Group(groupName).SendAsync("newMessageWithId", "ChatGPT", id, totalCompletion.ToString());
    // ...
}
```
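The hub's decision to involve the assistant hinges on the `@gpt` prefix. A minimal, hypothetical helper for that check (the name `TryParseAiPrompt` and the exact trimming behavior are illustrative, not taken from the sample) might look like:

```csharp
using System;

// Returns true when the message is addressed to the assistant, and outputs
// the prompt with the "@gpt" prefix stripped. Name and trimming behavior
// are illustrative assumptions, not code from the sample.
static bool TryParseAiPrompt(string message, out string prompt)
{
    const string trigger = "@gpt";
    if (message.StartsWith(trigger, StringComparison.OrdinalIgnoreCase))
    {
        prompt = message.Substring(trigger.Length).TrimStart();
        return true;
    }

    prompt = message;
    return false;
}

Console.WriteLine(TryParseAiPrompt("@gpt summarize the plan", out var prompt) ? prompt : "(not for AI)");
```

A hub method could call a helper like this first, broadcast the raw message to the group either way, and only invoke OpenAI when the helper returns true.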
### Maintain context with history

Every request to [OpenAI's Chat Completions API](https://platform.openai.com/docs/guides/chat-completions) is stateless: OpenAI doesn't store past interactions. In a chat application, what a user or an assistant has said matters for generating a contextually relevant response. To preserve that context, we include the chat history in every request to the Completions API.

The `GroupHistoryStore` class manages chat history for each group. It stores messages posted by both users and AI assistants, ensuring that the conversation context is preserved across interactions. This context is crucial for generating coherent AI responses.
```csharp
// Store a message generated by the AI assistant in memory
public void UpdateGroupHistoryForAssistant(string groupName, string message)
{
    var chatMessages = _store.GetOrAdd(groupName, _ => InitiateChatMessages());
    chatMessages.Add(new AssistantChatMessage(message));
}
```

```csharp
// Store a message generated by a user in memory
_history.GetOrAddGroupHistory(groupName, userName, message);
```
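The idea behind `GroupHistoryStore` can be sketched in a few lines, simplified to plain strings instead of the SDK's chat message types (the `store` variable and function names here are illustrative, not the sample's):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Simplified per-group history store. ConcurrentDictionary.GetOrAdd lazily
// creates one list per group, mirroring the _store.GetOrAdd call above.
var store = new ConcurrentDictionary<string, List<string>>();

void AddToHistory(string groupName, string message) =>
    store.GetOrAdd(groupName, _ => new List<string>()).Add(message);

IReadOnlyList<string> GetHistory(string groupName) =>
    store.TryGetValue(groupName, out var messages) ? messages : Array.Empty<string>();

AddToHistory("team-a", "user: What's our deadline?");
AddToHistory("team-a", "assistant: The deadline is Friday.");
Console.WriteLine(GetHistory("team-a").Count); // prints 2
```

A concurrent dictionary is a reasonable fit here because hub methods for different connections can run in parallel, so two users in the same group may append history at the same time.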
### Stream AI responses

The `CompleteChatStreamingAsync()` method streams responses from OpenAI incrementally, which allows the application to send partial responses to the client as they're generated.

The code uses a `StringBuilder` to accumulate the AI's response. It tracks how much of the buffered content has already been sent and pushes an update to the clients whenever the unsent portion exceeds a certain threshold (for example, 20 characters). This approach lets users see the AI's response as it forms, mimicking a human-like typing effect.

```csharp
totalCompletion.Append(content);
if (totalCompletion.Length - lastSentTokenLength > 20)
{
    await Clients.Group(groupName).SendAsync("newMessageWithId", "ChatGPT", id, totalCompletion.ToString());
    lastSentTokenLength = totalCompletion.Length;
}
```
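To see how the threshold check behaves over a whole stream, here's a small self-contained simulation of the buffering logic. It's an illustrative helper, not code from the sample, and the trailing flush is an assumption about how the remainder of the message gets delivered:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Accumulates streamed chunks in a StringBuilder and "sends" a snapshot of
// the full text whenever more than `threshold` unsent characters have piled
// up, plus a final flush so the client receives the complete message.
List<string> SimulateStreaming(IEnumerable<string> chunks, int threshold = 20)
{
    var totalCompletion = new StringBuilder();
    var sentSnapshots = new List<string>();
    int lastSentTokenLength = 0;

    foreach (var content in chunks)
    {
        totalCompletion.Append(content);
        if (totalCompletion.Length - lastSentTokenLength > threshold)
        {
            sentSnapshots.Add(totalCompletion.ToString());
            lastSentTokenLength = totalCompletion.Length;
        }
    }

    // Final flush for any remainder below the threshold.
    if (totalCompletion.Length > lastSentTokenLength)
    {
        sentSnapshots.Add(totalCompletion.ToString());
    }

    return sentSnapshots;
}

var updates = SimulateStreaming(new[] { "Hello", ", wor", "ld! H", "ow ca", "n I h", "elp?" });
Console.WriteLine(updates.Count); // prints 2
```

Each update carries the full text so far rather than a delta, which matches the hub sending `totalCompletion.ToString()` under a stable message `id`: the client simply replaces the message content on every `newMessageWithId` event.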
94+
95+
## Explore further
96+
97+
This project opens up exciting possibilities for further enhancement:
98+
- **Advanced AI features**: Use other OpenAI capabilities like sentiment analysis, translation, or summarization.
99+
- **Incorporating multiple AI agents**: You can introduce multiple AI agents with distinct roles or expertise areas within the same chat. For example, one agent might focus on text generation and the other provides image or audio generation. This interaction can create a richer and more dynamic user experience where different AI agents interact seamlessly with users and each other.
100+
- **Share chat history between server instances**: Implement a database layer to persist chat history across sessions, allowing conversations to resume even after a disconnect. Beyond SQL or NO SQL based solutions, you can also explore using a caching service like Redis. It can significantly improve performance by storing frequently accessed data, such as chat history or AI responses, in memory. This reduces latency and offloads database operations, leading to faster response times, particularly in high-traffic scenarios.
