---
title: Building AI-Powered Group Chat with SignalR and OpenAI
author: kevinguo-ed
description: ...
ms.author: ...
ms.date: 08/27/2024
---

## Overview

Integrating AI into applications is rapidly becoming a must-have for developers who want to help their users be more creative, work more productively, and achieve their health goals. AI-powered features such as intelligent chatbots, personalized recommendations, and contextual responses add significant value to modern apps. The AI-powered apps that have appeared since ChatGPT captured our imagination are primarily conversations between one user and one AI assistant. As developers grow more comfortable with the capabilities of AI, they are exploring AI-powered apps in a team context and asking, "What value can AI add to a team of collaborators?"

This tutorial guides you through building a real-time group chat application. Alongside the human collaborators in a chat, there is an AI assistant that has access to the chat history and can be invited to help by any collaborator who starts a message with `@gpt`. The finished app looks like this.

{{Chat interface missing...}}

We use OpenAI to generate intelligent, context-aware responses and SignalR to deliver those responses to the users in a group. You can find the complete code [in this repo](https://github.com/microsoft/SignalR-Samples-AI/tree/main/AIStreaming).

## Dependencies

You can use either Azure OpenAI or OpenAI for this project. Make sure to update the `endpoint` and `key` in `appsettings.json`. `OpenAIExtensions` reads this configuration when the app starts, and the values are required to authenticate with and use either service.

# [OpenAI](#tab/open-ai)
To build this application, you will need the following:
* ASP.NET Core: To create the web application and host the SignalR hub.
* [SignalR](https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client): For real-time communication between clients and the server.
* [OpenAI Client](https://www.nuget.org/packages/OpenAI/2.0.0-beta.10): To interact with OpenAI's API for generating AI responses.

# [Azure OpenAI](#tab/azure-open-ai)
To build this application, you will need the following:
* ASP.NET Core: To create the web application and host the SignalR hub.
* [SignalR](https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client): For real-time communication between clients and the server.
* [Azure OpenAI Client](https://www.nuget.org/packages/Azure.AI.OpenAI/2.0.0-beta.3): The `Azure.AI.OpenAI` library, to interact with Azure OpenAI for generating AI responses.
---
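
To make the configuration step concrete, here is a minimal sketch of how the client registration in `Program.cs` could look. The `OpenAI` section name, the `Endpoint`/`Key` keys, and the `/groupChat` hub route are illustrative assumptions, not necessarily the exact names `OpenAIExtensions` uses in the sample repo.

```csharp
// Program.cs (sketch) -- configuration keys and hub route below are assumptions.
using System.ClientModel;
using Azure.AI.OpenAI;
using OpenAI;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSignalR();

// Read endpoint/key from an assumed "OpenAI" section of appsettings.json.
builder.Services.AddSingleton<OpenAIClient>(_ =>
{
    var section = builder.Configuration.GetSection("OpenAI");
    var endpoint = section["Endpoint"]; // set only when using Azure OpenAI
    var key = section["Key"];

    return string.IsNullOrEmpty(endpoint)
        ? new OpenAIClient(key)                                                 // OpenAI
        : new AzureOpenAIClient(new Uri(endpoint), new ApiKeyCredential(key));  // Azure OpenAI
});

var app = builder.Build();
app.MapHub<GroupChatHub>("/groupChat"); // hub route is an assumption
app.Run();
```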

## Implementation

In this section, we'll walk through the key parts of the code that integrate SignalR with OpenAI to create an AI-enhanced group chat experience.

### SignalR Hub integration

The `GroupChatHub` class manages user connections, message broadcasting, and AI interactions. When a user sends a message starting with `@gpt`, the hub forwards it to OpenAI, which generates a response. The AI's response is streamed back to the group in real time.
```csharp
var chatClient = _openAI.GetChatClient(_options.Model);
await foreach (var completion in chatClient.CompleteChatStreamingAsync(messagesIncludeHistory))
{
    // ...
    // Buffering and sending the AI's response in chunks
    await Clients.Group(groupName).SendAsync("newMessageWithId", "ChatGPT", id, totalCompletion.ToString());
    // ...
}
```
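
For orientation, here is a condensed sketch of how the surrounding hub method might be shaped. The method name, the `_history` store calls, and the way the group name reaches the hub are assumptions for illustration; the sample repo is the authoritative version.

```csharp
// Sketch of the hub method that receives chat messages (assumed shape, not the sample's exact code).
public async Task Chat(string groupName, string userName, string message)
{
    // Broadcast the user's message to the group and record it in the shared history.
    await Clients.Group(groupName).SendAsync("newMessage", userName, message);
    _history.UpdateGroupHistoryForUser(groupName, userName, message); // assumed signature

    // Only involve the AI assistant when the message is addressed to it.
    if (!message.StartsWith("@gpt", StringComparison.OrdinalIgnoreCase))
    {
        return;
    }

    var messagesIncludeHistory = _history.GetOrAddGroupHistory(groupName); // assumed accessor
    var chatClient = _openAI.GetChatClient(_options.Model);
    var id = Guid.NewGuid().ToString();        // one message id shared by all streamed chunks
    var totalCompletion = new StringBuilder();

    await foreach (var completion in chatClient.CompleteChatStreamingAsync(messagesIncludeHistory))
    {
        // Accumulate streamed tokens and flush them to the group in chunks
        // (see the "Streaming AI responses" section below).
    }

    _history.UpdateGroupHistoryForAssistant(groupName, totalCompletion.ToString());
}
```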

The chat history is managed by `GroupHistoryStore`, which stores messages for context. It includes both user and assistant messages to maintain conversation continuity.
```csharp
public void UpdateGroupHistoryForAssistant(string groupName, string message) { ... }
```

### Streaming AI responses

The `CompleteChatStreamingAsync()` method streams responses from OpenAI incrementally, which allows the application to send partial responses to the client as they are generated.

The code uses a `StringBuilder` to accumulate the AI's response. It checks the length of the buffered content and sends it to the clients when it exceeds a certain threshold (e.g., 20 characters). This approach ensures that users see the AI's response as it forms, mimicking a human-like typing effect.
```csharp
totalCompletion.Append(content);
if (totalCompletion.Length - lastSentTokenLength > 20)
{
    await Clients.Group(groupName).SendAsync("newMessageWithId", "ChatGPT", id, totalCompletion.ToString());
    lastSentTokenLength = totalCompletion.Length;
}
```
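
Because the loop only flushes when the buffer grows by more than 20 characters, the tail of the response can still be unsent when the stream ends. A sketch of the full loop with a final flush might look like the following; iterating `ContentUpdate` and reading `.Text` follows the OpenAI .NET library's streaming update type, and the rest is illustrative.

```csharp
var totalCompletion = new StringBuilder();
var lastSentTokenLength = 0;

await foreach (var completion in chatClient.CompleteChatStreamingAsync(messagesIncludeHistory))
{
    foreach (var content in completion.ContentUpdate)
    {
        totalCompletion.Append(content.Text);

        // Flush roughly every 20 characters so clients see the reply as it forms.
        if (totalCompletion.Length - lastSentTokenLength > 20)
        {
            await Clients.Group(groupName).SendAsync("newMessageWithId", "ChatGPT", id, totalCompletion.ToString());
            lastSentTokenLength = totalCompletion.Length;
        }
    }
}

// Send whatever is left in the buffer once the stream completes.
if (totalCompletion.Length > lastSentTokenLength)
{
    await Clients.Group(groupName).SendAsync("newMessageWithId", "ChatGPT", id, totalCompletion.ToString());
}
```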

### Maintaining context with history

The `GroupHistoryStore` class manages the chat history for each group. It stores both user and AI messages, ensuring that the conversation context is preserved across interactions. This context is crucial for generating coherent AI responses.
```csharp
public void UpdateGroupHistoryForAssistant(string groupName, string message)
{
    var chatMessages = _store.GetOrAdd(groupName, _ => InitiateChatMessages());
    chatMessages.Add(new AssistantChatMessage(message));
}
```
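
The `GetOrAdd` call suggests the store keeps one message list per group in a concurrent dictionary. A minimal sketch of what the surrounding class could look like follows; the field names, the system prompt, and the user-message counterpart are assumptions for illustration.

```csharp
// Sketch of GroupHistoryStore (assumed shape, not the sample's exact code).
using System.Collections.Concurrent;
using OpenAI.Chat;

public class GroupHistoryStore
{
    // One shared message list per group, safe to access from concurrent hub invocations.
    private readonly ConcurrentDictionary<string, List<ChatMessage>> _store = new();

    public IReadOnlyList<ChatMessage> GetOrAddGroupHistory(string groupName) =>
        _store.GetOrAdd(groupName, _ => InitiateChatMessages());

    public void UpdateGroupHistoryForUser(string groupName, string userName, string message)
    {
        var chatMessages = _store.GetOrAdd(groupName, _ => InitiateChatMessages());
        chatMessages.Add(new UserChatMessage($"{userName}: {message}"));
    }

    public void UpdateGroupHistoryForAssistant(string groupName, string message)
    {
        var chatMessages = _store.GetOrAdd(groupName, _ => InitiateChatMessages());
        chatMessages.Add(new AssistantChatMessage(message));
    }

    private static List<ChatMessage> InitiateChatMessages() =>
        // The system prompt below is illustrative; adjust it to your assistant's role.
        new() { new SystemChatMessage("You are a helpful assistant in a group chat.") };
}
```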

### Explore further

This project opens up exciting possibilities for further enhancement:
1. **Advanced AI features**: Leverage other OpenAI capabilities like sentiment analysis, translation, or summarization.
2. **Incorporating multiple AI agents**: You can introduce multiple AI agents with distinct roles or areas of expertise within the same chat. For example, one agent might focus on text generation while another provides image or audio generation. This can create a richer and more dynamic user experience where different AI agents interact seamlessly with users and with each other.
3. **Sharing chat history between server instances**: Implement a database layer to persist chat history across sessions, allowing conversations to resume even after a disconnect. Beyond SQL or NoSQL solutions, you can also explore a caching service like Redis. It can significantly improve performance by storing frequently accessed data, such as chat history or AI responses, in memory. This reduces latency and offloads database operations, leading to faster response times, particularly in high-traffic scenarios.
4. **Leveraging Azure SignalR Service**: [Azure SignalR Service](https://learn.microsoft.com/azure/azure-signalr/signalr-overview) provides scalable and reliable real-time messaging for your application. By offloading the SignalR backplane to Azure, you can easily scale out the chat application to support thousands of concurrent users across multiple servers. Azure SignalR Service also simplifies management and provides built-in features like automatic reconnection. Switching to it is a small code change, as shown in the sketch after this list.
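
As a rough idea of how small that change is, the following sketch wires up Azure SignalR Service; it assumes the `Microsoft.Azure.SignalR` package is installed and a connection string is available in configuration.

```csharp
// Program.cs (sketch): route SignalR traffic through Azure SignalR Service.
// Requires the Microsoft.Azure.SignalR NuGet package; the connection string is
// read by default from "Azure:SignalR:ConnectionString" in configuration.
builder.Services.AddSignalR().AddAzureSignalR();
```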
