41 changes: 41 additions & 0 deletions README.md
@@ -15,6 +15,7 @@ It is generated from our [OpenAPI specification](https://github.com/openai/opena
- [Namespace organization](#namespace-organization)
- [Using the async API](#using-the-async-api)
- [Using the `OpenAIClient` class](#using-the-openaiclient-class)
- [How to use dependency injection](#how-to-use-dependency-injection)
- [How to use chat completions with streaming](#how-to-use-chat-completions-with-streaming)
- [How to use chat completions with tools and function calling](#how-to-use-chat-completions-with-tools-and-function-calling)
- [How to use chat completions with structured outputs](#how-to-use-chat-completions-with-structured-outputs)
@@ -138,6 +139,46 @@ AudioClient ttsClient = client.GetAudioClient("tts-1");
AudioClient whisperClient = client.GetAudioClient("whisper-1");
```

## How to use dependency injection

The OpenAI clients are **thread-safe**, so they can be registered as **singletons** in ASP.NET Core's dependency injection container. A single shared instance maximizes HTTP connection reuse and avoids the overhead of constructing a new client per request.

Register the `OpenAIClient` as a singleton in your `Program.cs`:

```csharp
builder.Services.AddSingleton<OpenAIClient>(serviceProvider =>
{
    string apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")
        ?? throw new InvalidOperationException("The OPENAI_API_KEY environment variable is not set.");

    return new OpenAIClient(apiKey);
});
```

Then inject and use the client in your controllers or services:

```csharp
[ApiController]
[Route("api/[controller]")]
public class ChatController : ControllerBase
{
    private readonly OpenAIClient _openAIClient;

    public ChatController(OpenAIClient openAIClient)
    {
        _openAIClient = openAIClient;
    }

    [HttpPost("complete")]
    public async Task<IActionResult> CompleteChat([FromBody] string message)
    {
        ChatClient chatClient = _openAIClient.GetChatClient("gpt-4o");
        ChatCompletion completion = await chatClient.CompleteChatAsync(message);

        return Ok(new { response = completion.Content[0].Text });
    }
}
```
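If a service only needs chat completions, you can also register a model-specific `ChatClient` as its own singleton, resolved from the shared `OpenAIClient` (the model name here is an example; pick whichever model your application uses):

```csharp
// Sketch: expose a ChatClient for one model as a singleton, reusing the
// OpenAIClient registered above instead of constructing a second client.
builder.Services.AddSingleton<ChatClient>(serviceProvider =>
    serviceProvider.GetRequiredService<OpenAIClient>().GetChatClient("gpt-4o"));
```

Consumers can then take a `ChatClient` constructor parameter directly, which keeps controllers free of model-selection logic.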

## How to use chat completions with streaming

When you request a chat completion, the default behavior is for the server to generate it in its entirety before sending it back in a single response. Consequently, long chat completions can require waiting for several seconds before hearing back from the server. To mitigate this, the OpenAI REST API supports the ability to stream partial results back as they are being generated, allowing you to start processing the beginning of the completion before it is finished.
73 changes: 73 additions & 0 deletions examples/aspnet-core/Program.cs
@@ -0,0 +1,73 @@
using System.ClientModel;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;
using OpenAI.Chat;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

builder.Services.AddSingleton<ChatClient>(serviceProvider => new ChatClient(
    builder.Configuration["OpenAI:Model"],
    new ApiKeyCredential(builder.Configuration["OpenAI:ApiKey"]
        ?? Environment.GetEnvironmentVariable("OPENAI_API_KEY")
        ?? throw new InvalidOperationException("OpenAI API key not found"))));

// ChatHttpHandler is resolved once from the root provider below, so register it
// as a singleton; a scoped registration would fail root-scope validation.
builder.Services.AddSingleton<ChatHttpHandler>();

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

var chatHandler = app.Services.GetRequiredService<ChatHttpHandler>();

app.MapPost("/chat/complete", chatHandler.HandleChatRequest);

app.Run();

// Handles chat completion requests using the injected singleton ChatClient.
public class ChatHttpHandler
{
    private readonly ChatClient _client;
    private readonly ILogger<ChatHttpHandler> _logger;

    public ChatHttpHandler(ChatClient client, ILogger<ChatHttpHandler> logger)
    {
        _client = client;
        _logger = logger;
    }

    public async Task<ChatResponse> HandleChatRequest(ChatRequest request)
    {
        _logger.LogInformation("Handling chat request: {Message}", request.Message);
        var completion = await _client.CompleteChatAsync(request.Message);
        return new ChatResponse(completion.Value.Content[0].Text);
    }
}

public class ChatRequest
{
    public string Message { get; set; }

    public ChatRequest(string message)
    {
        Message = message;
    }
}

public class ChatResponse
{
    public string Response { get; set; }

    public ChatResponse(string response)
    {
        Response = response;
    }
}
125 changes: 125 additions & 0 deletions examples/aspnet-core/README.md
@@ -0,0 +1,125 @@
# OpenAI ASP.NET Core Example

This example demonstrates how to use the OpenAI .NET client library with ASP.NET Core's dependency injection container, registering a `ChatClient` as a singleton for optimal performance and resource usage.

## Features

- **Singleton Registration**: ChatClient registered as singleton in DI container
- **Thread-Safe**: Demonstrates concurrent usage for chat completion endpoints
- **Configurable Model**: Model selection via configuration (appsettings.json)
- **Modern ASP.NET Core**: Uses minimal APIs with async/await patterns

## Prerequisites

- .NET 8.0 or later
- OpenAI API key

## Setup

1. **Set your OpenAI API key** using one of these methods:

**Environment Variable (Recommended):**

```powershell
$env:OPENAI_API_KEY = "your-api-key-here"
```

**Configuration (appsettings.json):**

```json
{
  "OpenAI": {
    "Model": "gpt-4o-mini",
    "ApiKey": "your-api-key-here"
  }
}
```

2. **Install dependencies:**

```powershell
dotnet restore
```

3. **Run the application:**

```powershell
dotnet run
```

## API Endpoints

### Chat Completion

- **POST** `/chat/complete`
- **Request Body:**

```json
{
  "message": "Hello, how are you?"
}
```

- **Response:**

```json
{
  "response": "I'm doing well, thank you for asking! How can I help you today?"
}
```

## Testing with PowerShell

**Chat Completion:**

```powershell
Invoke-RestMethod -Uri "https://localhost:5000/chat/complete" `
-Method POST `
-ContentType "application/json" `
-Body '{"message": "What is the capital of France?"}'
```
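An equivalent request with `curl` (the port depends on your launch profile, and `-k` skips certificate validation for the local development certificate):

```shell
# Post a chat message to the local example app and print the JSON response.
curl -k -X POST "https://localhost:5000/chat/complete" \
  -H "Content-Type: application/json" \
  -d '{"message": "What is the capital of France?"}'
```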

## Key Implementation Details

### Singleton Registration

```csharp
builder.Services.AddSingleton<ChatClient>(serviceProvider => new ChatClient(
    builder.Configuration["OpenAI:Model"],
    new ApiKeyCredential(builder.Configuration["OpenAI:ApiKey"]
        ?? Environment.GetEnvironmentVariable("OPENAI_API_KEY")
        ?? throw new InvalidOperationException("OpenAI API key not found"))));
```

### Dependency Injection Usage

```csharp
app.MapPost("/chat/complete", async (ChatRequest request, ChatClient client) =>
{
    var completion = await client.CompleteChatAsync(request.Message);

    return new ChatResponse(completion.Value.Content[0].Text);
});
```

## Why Singleton?

- **Thread-Safe**: `ChatClient` is thread-safe and can handle concurrent requests
- **Resource Efficient**: Reuses HTTP connections and avoids creating multiple instances
- **Performance**: Reduces object allocation overhead
- **Stateless**: Clients don't maintain per-request state

## Swagger UI

When running in development mode, you can access the Swagger UI at:

- `https://localhost:7071/swagger`

This provides an interactive interface to test the API endpoints.

## Additional Resources

- [Tutorial: Create a minimal API with ASP.NET Core](https://learn.microsoft.com/aspnet/core/tutorials/min-web-api)
- [.NET dependency injection](https://learn.microsoft.com/dotnet/core/extensions/dependency-injection)
- [Logging in C# and .NET](https://learn.microsoft.com/dotnet/core/extensions/logging)
8 changes: 8 additions & 0 deletions examples/aspnet-core/appsettings.Development.json
@@ -0,0 +1,8 @@
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  }
}
14 changes: 14 additions & 0 deletions examples/aspnet-core/appsettings.json
@@ -0,0 +1,14 @@
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "OpenAI": {
    "Model": "gpt-4.1-mini",
    "ApiKey": ""
  }
}
15 changes: 15 additions & 0 deletions examples/aspnet-core/aspnet-core.csproj
@@ -0,0 +1,15 @@
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <RootNamespace>ASP.NET_Core</RootNamespace>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="OpenAI" Version="2.2.0" />
    <PackageReference Include="Swashbuckle.AspNetCore" Version="6.6.2" />
  </ItemGroup>

</Project>