2 changes: 2 additions & 0 deletions aspnetcore/toc.yml
@@ -874,6 +874,8 @@ items:
displayName: signalr
- name: Tutorials
items:
- name: AI-powered group chat
uid: tutorials/ai-powered-group-chat
- name: SignalR with JavaScript
uid: tutorials/signalr
- name: SignalR with TypeScript
110 changes: 110 additions & 0 deletions aspnetcore/tutorials/ai-powered-group-chat/ai-powered-group-chat.md
@@ -0,0 +1,110 @@
---
title: Build an AI-Powered Group Chat with SignalR and OpenAI
author: kevinguo-ed
description: A tutorial explaining how SignalR and OpenAI are used together to build an AI-powered group chat
ms.author: kevinguo
ms.date: 08/27/2024
uid: tutorials/ai-powered-group-chat
---

## Overview

Integrating AI into applications is quickly becoming a must-have for developers who want to help their users be more creative, be more productive, and achieve their goals. AI-powered features, such as intelligent chatbots, personalized recommendations, and contextual responses, add significant value to modern apps. Most of the AI-powered apps that have appeared since ChatGPT captured our imagination are conversations between one user and one AI assistant. As developers get more comfortable with what AI can do, they're exploring AI in a team context and asking: what value can AI add to a group of collaborators?

This tutorial guides you through building a real-time group chat application in which a group of human collaborators shares the chat with an AI assistant. The assistant has access to the chat history and can be invited to help by any collaborator who starts a message with `@gpt`. The finished app looks like this:

:::image type="content" source="./ai-powered-group-chat.jpg" alt-text="user interface for the AI-powered group chat":::

The app uses OpenAI to generate intelligent, context-aware responses and SignalR to deliver those responses to the users in a group. You can find the complete code [in this repo](https://github.com/microsoft/SignalR-Samples-AI/tree/main/AIStreaming).

## Dependencies

You can use either Azure OpenAI or OpenAI for this project. Update the `endpoint` and `key` values in `appsettings.json`. `OpenAIExtensions` reads the configuration when the app starts, and both values are required to authenticate with and use either service.
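
For reference, the relevant section of `appsettings.json` might look roughly like the following. The section and key names here are illustrative; check the sample's `appsettings.json` and `OpenAIExtensions` for the exact shape it expects:

```json
{
  "OpenAI": {
    "Endpoint": "https://<your-resource>.openai.azure.com/",
    "Key": "<your-api-key>",
    "Model": "gpt-4o"
  }
}
```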

# [OpenAI](#tab/open-ai)
To build this application, you will need the following:
* ASP.NET Core: To create the web application and host the SignalR hub.
* [SignalR](https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client): For real-time communication between clients and the server.
* [OpenAI Client](https://www.nuget.org/packages/OpenAI/2.0.0-beta.10): To interact with OpenAI's API for generating AI responses.

# [Azure OpenAI](#tab/azure-open-ai)
To build this application, you will need the following:
* ASP.NET Core: To create the web application and host the SignalR hub.
* [SignalR](https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client): For real-time communication between clients and the server.
* [Azure OpenAI](https://www.nuget.org/packages/Azure.AI.OpenAI/2.0.0-beta.3): Azure.AI.OpenAI
---
Reviewer comment on lines +23 to +34:

I believe the added space helps with readability...and not that it really matters, but I've found myself trying to avoid tabs with H1's, you're free to use any heading value so long as the #tab bookmark/syntax is in place. Also, removed versions from the packages as stable releases are now available. See microsoft/SignalR-Samples-AI#5, where I just did a PR to upgrade these in the source.

Suggested change

Original:

# [OpenAI](#tab/open-ai)
To build this application, you will need the following:
* ASP.NET Core: To create the web application and host the SignalR hub.
* [SignalR](https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client): For real-time communication between clients and the server.
* [OpenAI Client](https://www.nuget.org/packages/OpenAI/2.0.0-beta.10): To interact with OpenAI's API for generating AI responses.
# [Azure OpenAI](#tab/azure-open-ai)
To build this application, you will need the following:
* ASP.NET Core: To create the web application and host the SignalR hub.
* [SignalR](https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client): For real-time communication between clients and the server.
* [Azure OpenAI](https://www.nuget.org/packages/Azure.AI.OpenAI/2.0.0-beta.3): Azure.AI.OpenAI
---

Suggested:

### [OpenAI](#tab/open-ai)

To build this application, you will need the following:

* ASP.NET Core: To create the web application and host the SignalR hub.
* [SignalR](https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client): For real-time communication between clients and the server.
* [OpenAI Client](https://www.nuget.org/packages/OpenAI): To interact with OpenAI's API for generating AI responses.

### [Azure OpenAI](#tab/azure-open-ai)

To build this application, you will need the following:

* ASP.NET Core: To create the web application and host the SignalR hub.
* [SignalR](https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client): For real-time communication between clients and the server.
* [Azure OpenAI](https://www.nuget.org/packages/Azure.AI.OpenAI): `Azure.AI.OpenAI`

---


## Implementation

In this section, we'll walk through the key parts of the code that integrate SignalR with OpenAI to create an AI-enhanced group chat experience.

### Data flow

:::image type="content" source="./sequence-diagram-ai-powered-group-chat.png" alt-text="sequence diagram for the AI-powered group chat":::
Reviewer comment:

Just an idea, not something that you need to do, but since we still don't have native support from the Learn platform for Mermaid diagrams, I've been using mermaid.live and its share feature, which lets you encode and share diagrams. You can build the diagram out, create the share link, and embed it as a comment in the Markdown (which hides it from the end user), and the content developer can use it as a source of truth for future edits, re-exporting it as an image to update the image itself.

For example:

<!--

https://mermaid.live/edit#pako:eNp1kk1P4zAQhv_KaK6botLQFnxAyhbt7mFRUQIXlMsQD6lFYmf9wVfV_752UyQQ4NPY7-N3xjPeYmMko0DH_wLrhi8UtZb6WkNcFLzRob9jO-4Hsl41aiDtoQJyUDUblqH7Si-S_lv5P-EOisYroz8zZWKuybYcYx7MZ2KdiLWNaZy35I2FX0F_47ZKbPEaLMPKuN44uPhZH7gKJufnP6AQcG1V27J143kxnpcCVhtuHhyY4EelM2aAS7IP0jwdTPYXJpPnUlQNabiP1dyUfw9OrOUHy7WAK3KO3TtmnaRJqqJkH6x2wI_UBUrvAcsudP5jXatoEmtVzjuQ5GkUJVNswSN5hgIz7Nn2pGQc4jbpNfoN91yjiKGMD6ix1rvIpWlWL7pB4W3gDK0J7QbFPXUu7sIQE7yN_w1hqWLPL8cvsv8pGcZeo9jiM4rl8uh0tjidneXzfDmbH88zfEGRnxxN48rzs3yaLxYn08Uuw1djounx_vbtPk4Jdv8Bo5TJ1A

-->

Check out this link.


### SignalR Hub integration

The `GroupChatHub` class manages user connections, message broadcasting, and AI interactions. When a user sends a message starting with `@gpt`, the hub forwards it to OpenAI, which generates a response. The AI's response is streamed back to the group in real time.
```csharp
var chatClient = _openAI.GetChatClient(_options.Model);

// Stream the completion; the request contains the group's chat history plus the new message.
await foreach (var completion in
    chatClient.CompleteChatStreamingAsync(messagesInludeHistory))
{
    // ...
    // Buffer the AI's response and send it to the group in chunks.
    await Clients.Group(groupName).SendAsync(
        "newMessageWithId",
        "ChatGPT",
        id,
        totalCompletion.ToString());
    // ...
}
```
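
The hub method that receives user messages isn't reproduced in full here. Conceptually it stores the message, broadcasts it to the group, and calls OpenAI only when the message is addressed to the assistant. The following is a minimal sketch; the method name, parameters, and client event name are illustrative assumptions rather than the sample's actual identifiers:

```csharp
// Illustrative sketch only: names here are assumptions, not the sample's actual API.
public async Task Chat(string groupName, string userName, string message)
{
    // Record the user's message so it becomes part of the group's history.
    _history.GetOrAddGroupHistory(groupName, userName, message);

    // Broadcast the user's message to everyone in the group.
    await Clients.Group(groupName).SendAsync("newMessage", userName, message);

    // Involve the AI assistant only when the message is addressed to it.
    if (message.StartsWith("@gpt", StringComparison.OrdinalIgnoreCase))
    {
        var chatClient = _openAI.GetChatClient(_options.Model);
        // ... build the request from the group's history and stream the response,
        // as shown in the snippet above.
    }
}
```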

### Maintain context with history

Every request to [OpenAI's Chat Completions API](https://platform.openai.com/docs/guides/chat-completions) is stateless; OpenAI doesn't store past interactions. In a chat app, what a user or an assistant has already said matters for generating a contextually relevant response. To maintain that context, include the chat history in every request to the Completions API.

The `GroupHistoryStore` class manages chat history for each group. It stores messages posted by both the users and AI assistants, ensuring that the conversation context is preserved across interactions. This context is crucial for generating coherent AI responses.

```csharp
// Store a message generated by the AI assistant in memory.
public void UpdateGroupHistoryForAssistant(string groupName, string message)
{
    var chatMessages = _store.GetOrAdd(groupName, _ => InitiateChatMessages());
    chatMessages.Add(new AssistantChatMessage(message));
}
```

```csharp
// Store a message posted by a user in memory.
_history.GetOrAddGroupHistory(groupName, userName, message);
```
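
The snippet above references `InitiateChatMessages`, which seeds the history for a new group. A minimal sketch of what that seeding could look like, with an illustrative system prompt rather than the sample's actual prompt:

```csharp
// Sketch: seed a new group's history with a system message so the assistant
// knows its role. The prompt text is illustrative only.
private List<ChatMessage> InitiateChatMessages()
{
    return new List<ChatMessage>
    {
        new SystemChatMessage(
            "You are a friendly assistant in a group chat. " +
            "Reply when a message addressed to you starts with '@gpt'.")
    };
}
```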

### Stream AI responses

The `CompleteChatStreamingAsync()` method streams responses from OpenAI incrementally, which allows the app to send partial responses to the client as they are generated.

The code uses a `StringBuilder` to accumulate the AI's response. It checks the length of the buffered content and sends it to the clients when it exceeds a certain threshold (e.g., 20 characters). This approach ensures that users see the AI’s response as it forms, mimicking a human-like typing effect.

Suggested change

Original:

The code uses a `StringBuilder` to accumulate the AI's response. It checks the length of the buffered content and sends it to the clients when it exceeds a certain threshold (e.g., 20 characters). This approach ensures that users see the AI’s response as it forms, mimicking a human-like typing effect.

Suggested:

The code uses a <xref:System.Text.StringBuilder> to accumulate the AI's response. It checks the length of the buffered content and sends it to the clients when it exceeds a certain threshold (e.g., 20 characters). This approach ensures that users see the AI’s response as it forms, mimicking a human-like typing effect.

```csharp
totalCompletion.Append(content);

if (totalCompletion.Length - lastSentTokenLength > 20)
{
    await Clients.Group(groupName).SendAsync(
        "newMessageWithId",
        "ChatGPT",
        id,
        totalCompletion.ToString());

    lastSentTokenLength = totalCompletion.Length;
}
```

Reviewer comment from @BrennanConroy (Mar 21, 2025): nit: This is something I don't like about bing chat, it sends the entire message every update. Really it should just append the new content on the client side, saves bandwidth and memory usage on the server.
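
When the stream ends, the buffer usually still holds a final chunk shorter than the threshold, so one last send is needed after the loop. A minimal sketch, reusing the variables from the snippet above:

```csharp
// Sketch: flush any remaining buffered text once streaming completes,
// so clients receive the full response.
if (totalCompletion.Length > lastSentTokenLength)
{
    await Clients.Group(groupName).SendAsync(
        "newMessageWithId",
        "ChatGPT",
        id,
        totalCompletion.ToString());
}
```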

## Explore further

This project opens up exciting possibilities for further enhancement:
1. **Advanced AI features**: Leverage other OpenAI capabilities like sentiment analysis, translation, or summarization.
1. **Incorporating multiple AI agents**: Introduce multiple AI agents with distinct roles or areas of expertise within the same chat. For example, one agent might focus on text generation while another provides image or audio generation. This can create a richer and more dynamic user experience where different AI agents interact seamlessly with users and with each other.
1. **Share chat history between server instances**: Implement a database layer to persist chat history across sessions, allowing conversations to resume even after a disconnect. Beyond SQL or NoSQL solutions, you can also explore a caching service like Redis. It can significantly improve performance by storing frequently accessed data, such as chat history or AI responses, in memory. This reduces latency and offloads database operations, leading to faster response times, particularly in high-traffic scenarios.
1. **Leveraging Azure SignalR Service**: [Azure SignalR Service](/azure/azure-signalr/signalr-overview) provides scalable and reliable real-time messaging for your application. By offloading the SignalR backplane to Azure, you can scale out the chat application easily to support thousands of concurrent users across multiple servers. Azure SignalR also simplifies management and provides built-in features like automatic reconnections.