21 changes: 21 additions & 0 deletions .devcontainer/devcontainer.json
@@ -0,0 +1,21 @@
{
"name": "C# (.NET)",
"image": "mcr.microsoft.com/devcontainers/dotnet:latest"

// Features to add to the dev container. More info: https://containers.dev/features.
// "features": {},

// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [5000, 5001],
// "portsAttributes": {
// "5001": {
// "protocol": "https"
// }
// }

// Use 'postCreateCommand' to run commands after the container is created.
// "postCreateCommand": "dotnet restore",

// Configure tool-specific properties.
// "customizations": {},
}
2 changes: 1 addition & 1 deletion docs/ai/azure-ai-for-dotnet-developers.md
@@ -1,7 +1,7 @@
---
title: Develop .NET apps that use Azure AI services
description: This article provides an organized list of resources about Azure AI scenarios for .NET developers, including documentation and code samples.
ms.date: 05/17/2024
ms.date: 12/19/2024
ms.topic: overview
ms.custom: devx-track-dotnet, devx-track-dotnet-ai
---
4 changes: 2 additions & 2 deletions docs/ai/conceptual/embeddings.md
@@ -3,7 +3,7 @@ title: "How Embeddings Extend Your AI Model's Reach"
description: "Learn how embeddings extend the limits and capabilities of AI models in .NET."
author: catbutler
ms.topic: concept-article #Don't change.
ms.date: 05/14/2024
ms.date: 12/19/2024

#customer intent: As a .NET developer, I want to understand how embeddings extend LLM limits and capabilities in .NET so that I have more semantic context and better outcomes for my AI apps.

@@ -34,7 +34,7 @@ Use embeddings to help a model understand the meaning and context of text, and t

Use audio embeddings to process audio files or inputs in your app.

For example, [Speech service](/azure/ai-services/speech-service/) supports a range of audio embeddings, including [speech to text](/azure/ai-services/speech-service/speech-to-text) and [text to speech](/azure/ai-services/speech-service/text-to-speech). You can process audio in real-time or in batches.
For example, [Azure AI Speech](/azure/ai-services/speech-service/) supports a range of audio embeddings, including [speech to text](/azure/ai-services/speech-service/speech-to-text) and [text to speech](/azure/ai-services/speech-service/text-to-speech). You can process audio in real-time or in batches.

### Turn text into images or images into text

2 changes: 1 addition & 1 deletion docs/ai/conceptual/understanding-openai-functions.md
@@ -3,7 +3,7 @@ title: "Understanding OpenAI Function Calling"
description: "Understand how function calling enables you to integrate external tools with your OpenAI application."
author: haywoodsloan
ms.topic: concept-article
ms.date: 05/14/2024
ms.date: 12/19/2024

#customer intent: As a .NET developer, I want to understand OpenAI function calling so that I can integrate external tools with AI completions in my .NET project.

10 changes: 3 additions & 7 deletions docs/ai/conceptual/understanding-tokens.md
@@ -3,15 +3,15 @@ title: "Understanding tokens"
description: "Understand how large language models (LLMs) use tokens to analyze semantic relationships and generate natural language outputs"
author: haywoodsloan
ms.topic: concept-article
ms.date: 05/14/2024
ms.date: 12/19/2024

#customer intent: As a .NET developer, I want to understand how large language models (LLMs) use tokens so I can add semantic analysis and text generation capabilities to my .NET projects.

---

# Understand tokens

Tokens are words, character sets, or combinations of words and punctuation that are used by large language models (LLMs) to decompose text into. Tokenization is the first step in training. The LLM analyzes the semantic relationships between tokens, such as how commonly they're used together or whether they're used in similar contexts. After training, the LLM uses those patterns and relationships to generate a sequence of output tokens based on the input sequence.
Tokens are words, character sets, or combinations of words and punctuation that are generated by large language models (LLMs) when they decompose text. Tokenization is the first step in training. The LLM analyzes the semantic relationships between tokens, such as how commonly they're used together or whether they're used in similar contexts. After training, the LLM uses those patterns and relationships to generate a sequence of output tokens based on the input sequence.

## Turning text into tokens

@@ -89,11 +89,7 @@ Output generation is an iterative operation. The model appends the predicted tok

### Token limits

LLMs have limitations regarding the maximum number of tokens that can be used as input or generated as output. This limitation often causes the input and output tokens to be combined into a maximum context window.

For example, GPT-4 supports up to 8,192 tokens of context. The combined size of the input and output tokens can't exceed 8,192.

Taken together, a model's token limit and tokenization method determine the maximum length of text that can be provided as input or generated as output.
LLMs have limitations regarding the maximum number of tokens that can be used as input or generated as output. This limitation often causes the input and output tokens to be combined into a maximum context window. Taken together, a model's token limit and tokenization method determine the maximum length of text that can be provided as input or generated as output.
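
The following sketch shows one way to apply that limit in code: it counts prompt tokens and checks whether the prompt plus a reserved output budget fits within an assumed context window. The `Microsoft.ML.Tokenizers` package, the model name, and the window size are assumptions for illustration, not values taken from this article.

```csharp
// Minimal sketch: count tokens and check them against a context window.
// Assumes the Microsoft.ML.Tokenizers package; the model name and the
// 8,192-token window below are illustrative values.
using Microsoft.ML.Tokenizers;

Tokenizer tokenizer = TiktokenTokenizer.CreateForModel("gpt-4");

string prompt = "I heard a dog bark loudly at a cat";
int promptTokens = tokenizer.CountTokens(prompt);

const int contextWindow = 8192;   // assumed model limit
const int maxOutputTokens = 500;  // tokens reserved for the response

Console.WriteLine($"Prompt uses {promptTokens} tokens.");
Console.WriteLine(promptTokens + maxOutputTokens <= contextWindow
    ? "Prompt and response fit in the context window."
    : "Prompt is too long for the requested output size.");
```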

For example, consider a model that has a maximum context window of 100 tokens. The model processes our example sentences as input text:

@@ -1,7 +1,7 @@
---
title: Scale Azure OpenAI for .NET chat sample using RAG
description: Learn how to add load balancing to your application to extend the chat app beyond the Azure OpenAI token and model quota limits.
ms.date: 05/16/2024
ms.date: 12/19/2024
ms.topic: get-started
ms.custom: devx-track-dotnet, devx-track-dotnet-ai
# CustomerIntent: As a .NET developer new to Azure OpenAI, I want to scale my Azure OpenAI capacity to avoid rate limit errors with Azure Container Apps.
2 changes: 1 addition & 1 deletion docs/ai/get-started-app-chat-template.md
@@ -1,7 +1,7 @@
---
title: Get started with the chat using your own data sample for .NET
description: Get started with .NET and search across your own data using a chat app sample implemented using Azure OpenAI Service and Retrieval Augmented Generation (RAG) in Azure AI Search. Easily deploy with Azure Developer CLI. This article uses the Azure AI Reference Template sample.
ms.date: 05/16/2024
ms.date: 12/19/2024
ms.topic: get-started
ms.custom: devx-track-dotnet, devx-track-dotnet-ai
# CustomerIntent: As a .NET developer new to Azure OpenAI, I want to deploy and use sample code to interact with an app infused with my own business data so that I can learn from the sample code.
28 changes: 9 additions & 19 deletions docs/ai/how-to/content-filtering.md
@@ -5,13 +5,13 @@ ms.custom: devx-track-dotnet, devx-track-dotnet-ai
author: alexwolfmsft
ms.author: alexwolf
ms.topic: how-to
ms.date: 05/13/2024
ms.date: 12/19/2024

#customer intent: As a .NET developer, I want to manage OpenAI Content Filtering in a .NET app

---

# Work with OpenAI content filtering in a .NET app
# Work with Azure OpenAI content filtering in a .NET app

This article demonstrates how to handle content filtering concerns in a .NET app. Azure OpenAI Service includes a content filtering system that works alongside core models. This system works by running both the prompt and completion through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design might affect completions and thus filtering behavior.

@@ -27,33 +27,23 @@ The [Content Filtering](/azure/ai-services/openai/concepts/content-filter) docum

To use the sample code in this article, you need to create and assign a content filter to your OpenAI model.

1. [Create and assign a content filter](/azure/ai-services/openai/how-to/content-filters) to your provisioned GPT-35 or GPT-4 model.
1. [Create and assign a content filter](/azure/ai-services/openai/how-to/content-filters) to your provisioned model.

1. Add the [`Azure.AI.OpenAI`](https://www.nuget.org/packages/Azure.AI.OpenAI) NuGet package to your project.

```dotnetcli
dotnet add package Azure.AI.OpenAI
```

1. Create a simple chat completion flow in your .NET app using the `OpenAiClient`. Replace the `YOUR_OPENAI_ENDPOINT`, `YOUR_OPENAI_KEY`, and `YOUR_OPENAI_DEPLOYMENT` values with your own.
1. Create a simple chat completion flow in your .NET app using the `AzureOpenAIClient`. Replace the `YOUR_MODEL_ENDPOINT` and `YOUR_MODEL_DEPLOYMENT_NAME` values with your own.

:::code language="csharp" source="./snippets/content-filtering/program.cs" id="chatCompletionFlow":::
:::code language="csharp" source="./snippets/content-filtering/program.cs" :::

1. Print out the content filtering results for each category.
1. Replace the `YOUR_PROMPT` placeholder with your own message and run the app to experiment with content filtering results. If you enter a prompt the AI considers unsafe, Azure OpenAI returns a `400 Bad Request` response, and the app prints a message in the console similar to the following (a sketch after these steps shows one way to handle that response more specifically):

:::code language="csharp" source="./snippets/content-filtering/program.cs" id="printContentFilteringResult":::

1. Replace the `YOUR_PROMPT` placeholder with your own message and run the app to experiment with content filtering results. The following output shows an example of a prompt that triggers a low severity content filtering result:

```output
I am sorry if I have done anything to upset you.
Is there anything I can do to assist you and make things better?

Hate category is filtered: False with low severity.
SelfHarm category is filtered: False with safe severity.
Sexual category is filtered: False with safe severity.
Violence category is filtered: False with low severity.
```
```output
The response was filtered due to the prompt triggering Azure OpenAI's content management policy...
```
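
Rather than catching every exception as the sample does, a possible refinement (shown here only as a sketch) is to catch the service exception type and check its HTTP status. This assumes the `System.ClientModel` `ClientResultException` that the current Azure OpenAI client libraries throw for failed requests, and it reuses the `client` from the preceding sample:

```csharp
// Sketch only: assumes failed requests surface as ClientResultException
// from System.ClientModel, and reuses the IChatClient named `client`
// from the preceding sample.
using System.ClientModel;

try
{
    ChatCompletion completion = await client.CompleteAsync("YOUR_PROMPT");
    Console.WriteLine(completion.Message);
}
catch (ClientResultException ex) when (ex.Status == 400)
{
    // A 400 response typically indicates the prompt was blocked by content filtering.
    Console.WriteLine($"Prompt was filtered: {ex.Message}");
}
catch (Exception ex)
{
    Console.WriteLine($"Request failed: {ex.Message}");
}
```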

## Related content

@@ -8,8 +8,10 @@
</PropertyGroup>

<ItemGroup>
<PackageReference Include="Azure.AI.OpenAI" Version="1.0.0-beta.17" />
<PackageReference Include="Azure.Core" Version="1.43.0" />
<PackageReference Include="Azure.AI.OpenAI" />
<PackageReference Include="Azure.Identity" />
<PackageReference Include="Microsoft.Extensions.AI" Version="9.0.1-preview.1.24570.5" />
<PackageReference Include="Microsoft.Extensions.AI.OpenAI" Version="9.0.1-preview.1.24570.5" />
</ItemGroup>

</Project>
45 changes: 13 additions & 32 deletions docs/ai/how-to/snippets/content-filtering/Program.cs
@@ -1,38 +1,19 @@
// <chatCompletionFlow>
using Azure;
using Azure.AI.OpenAI;

string endpoint = "YOUR_OPENAI_ENDPOINT";
string key = "YOUR_OPENAI_KEY";

OpenAIClient client = new(new Uri(endpoint), new AzureKeyCredential(key));

var chatCompletionsOptions = new ChatCompletionsOptions()
{
    DeploymentName = "YOUR_DEPLOYMENT_NAME",
    Messages =
    {
        new ChatRequestSystemMessage("You are a helpful assistant."),
        new ChatRequestUserMessage("YOUR_PROMPT")
    }
};

Response<ChatCompletions> response = client.GetChatCompletions(chatCompletionsOptions);
Console.WriteLine(response.Value.Choices[0].Message.Content);
Console.WriteLine();
// </chatCompletionFlow>

// <printContentFilteringResult>
foreach (var promptFilterResult in response.Value.PromptFilterResults)
{
    var results = promptFilterResult.ContentFilterResults;
    Console.WriteLine(@$"Hate category is filtered:
        {results.Hate.Filtered} with {results.Hate.Severity} severity.");
    Console.WriteLine(@$"Self-harm category is filtered:
        {results.SelfHarm.Filtered} with {results.SelfHarm.Severity} severity.");
    Console.WriteLine(@$"Sexual category is filtered:
        {results.Sexual.Filtered} with {results.Sexual.Severity} severity.");
    Console.WriteLine(@$"Violence category is filtered:
        {results.Violence.Filtered} with {results.Violence.Severity} severity.");
}
// </printContentFilteringResult>

using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Extensions.AI;

IChatClient client =
    new AzureOpenAIClient(
        new Uri("YOUR_MODEL_ENDPOINT"),
        new DefaultAzureCredential()).AsChatClient("YOUR_MODEL_DEPLOYMENT_NAME");

try
{
    ChatCompletion completion = await client.CompleteAsync("YOUR_PROMPT");

    Console.WriteLine(completion.Message);
}
catch (Exception e)
{
    Console.WriteLine(e.Message);
}
2 changes: 1 addition & 1 deletion docs/ai/index.yml
@@ -8,7 +8,7 @@ metadata:
description: Samples, tutorials, and education for using AI with .NET
ms.topic: hub-page
ms.service: dotnet
ms.date: 05/13/2024
ms.date: 12/19/2024
author: alexwolfmsft
ms.author: alexwolf

2 changes: 1 addition & 1 deletion docs/ai/quickstarts/quickstart-local-ai.md
@@ -1,7 +1,7 @@
---
title: Quickstart - Connect to and chat with a local AI using .NET
description: Set up a local AI model and chat with it using a .NET console app and the Microsoft.Extensions.AI libraries
ms.date: 11/24/2024
ms.date: 12/19/2024
ms.topic: quickstart
ms.custom: devx-track-dotnet, devx-track-dotnet-ai
author: alexwolfmsft
61 changes: 61 additions & 0 deletions docs/csharp/language-reference/operators/deconstruction.md
@@ -0,0 +1,61 @@
---
title: "Deconstruction expression - extract properties or fields from a tuple or other type"
description: "Learn about deconstruction expressions: expressions that extract individual properties or fields from a tuple or user defined type into discrete expressions."
ms.date: 12/17/2024
---
# Deconstruction expression - Extract properties or fields from a tuple or other user-defined type

A *deconstruction expression* extracts data fields from an instance of an object. Each discrete data element is written to a distinct variable, as shown in the following example:

:::code language="csharp" source="./snippets/shared/Deconstruction.cs" id="TupleDeconstruction":::

The preceding code snippet creates a [tuple](../builtin-types/value-tuples.md) that has two integer values, `X` and `Y`. The second statement *deconstructs* that tuple and stores the tuple elements in discrete variables `x` and `y`.
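
A minimal sketch of this pattern, using illustrative names rather than the article's snippet file, looks like this:

```csharp
// Illustrative sketch; not the referenced snippet file.
(int X, int Y) point = (10, 20);

// Deconstruct the tuple into two discrete variables.
(int x, int y) = point;

Console.WriteLine($"x = {x}, y = {y}");  // x = 10, y = 20
```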

## Tuple deconstruction

All [tuple types](../builtin-types/value-tuples.md) support deconstruction expressions. Tuple deconstruction extracts all the tuple's elements. If you only want some of the tuple elements, use a [discard](../tokens/discard.md) for the unused tuple members, as shown in the following example:

:::code language="csharp" source="./snippets/shared/Deconstruction.cs" id="TupleDeconstructionWithDiscard":::

In the preceding example, the `Y` and `label` members are discarded. You can specify multiple discards in the same deconstruction expression. You can use discards for all the members of the tuple. The following example is legal, although not useful:

:::code language="csharp" source="./snippets/shared/Deconstruction.cs" id="AllDiscards":::
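
As a rough sketch of those discard forms (again with illustrative names rather than the snippet file):

```csharp
// Illustrative sketch of discards in a deconstruction expression.
(int X, int Y, string Label) tuple = (10, 20, "origin");

// Keep only X; discard Y and Label.
(int x, _, _) = tuple;

// Legal, although not useful: discard every element.
(_, _, _) = tuple;

Console.WriteLine(x);  // 10
```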

## Record deconstruction

[Record](../builtin-types/record.md) types that have a [primary constructor](../builtin-types/record.md#positional-syntax-for-property-definition) support deconstruction for positional parameters. The compiler synthesizes a `Deconstruct` method that extracts the properties synthesized from the positional parameters in the primary constructor. The compiler-synthesized `Deconstruct` method doesn't extract properties that are declared explicitly in the body of the record type.

The `record` shown in the following code declares two positional properties, `SquareFeet` and `Address`, along with another property, `RealtorNotes`:

:::code language="csharp" source="./snippets/shared/Deconstruction.cs" id="RecordDeconstruction":::

When you deconstruct a `House` object, all positional properties, and only positional properties, are deconstructed, as shown in the following example:

:::code language="csharp" source="./snippets/shared/Deconstruction.cs" id="RecordDeconstructionUsage":::

You can make use of this behavior to specify which properties of your record types are part of the compiler-synthesized `Deconstruct` method.
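
A sketch of the `House` record and its deconstruction, reconstructed from the article's description rather than copied from the snippet file:

```csharp
var house = new House(1800, "123 Coded Way") { RealtorNotes = "Needs paint" };

// Only the positional properties participate in deconstruction.
(int squareFeet, string address) = house;

Console.WriteLine($"{squareFeet} sq ft at {address}");

// Two positional parameters plus one ordinary property that the
// compiler-synthesized Deconstruct method doesn't include.
public record House(int SquareFeet, string Address)
{
    public string? RealtorNotes { get; set; }
}
```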

## Declare `Deconstruct` methods

You can add deconstruction support to any class, struct, or interface you declare. You declare one or more `Deconstruct` methods in your type, or as extension methods on that type. A deconstruction expression calls a method of the form `void Deconstruct(out T1 p1, ..., out Tn pn)`. The `Deconstruct` method can be either an instance method or an extension method. The type of each parameter in the `Deconstruct` method must match the type of the corresponding variable in the deconstruction expression. The deconstruction expression assigns the value of each `out` parameter to the corresponding variable in the expression. If multiple `Deconstruct` methods match the deconstruction expression, the compiler reports an error for the ambiguity.

The following code declares a `Point3D` struct that has two `Deconstruct` methods:

:::code language="csharp" source="./snippets/shared/Deconstruction.cs" id="StructDeconstruction":::

The first method supports deconstruction expressions that extract all three axis values: `X`, `Y`, and `Z`. The second method supports deconstructing only the planar values: `X` and `Y`. The first method has an *arity* of 3; the second has an arity of 2.
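
A sketch of such a struct, based on the description rather than the snippet file:

```csharp
// Illustrative sketch of a struct with two Deconstruct overloads.
var point = new Point3D { X = 1, Y = 2, Z = 3 };

(double x, double y, double z) = point;  // arity-3 overload
(double px, double py) = point;          // arity-2 overload

Console.WriteLine($"({x}, {y}, {z}) projects to ({px}, {py})");

public struct Point3D
{
    public double X { get; init; }
    public double Y { get; init; }
    public double Z { get; init; }

    // Extracts all three axis values (arity 3).
    public void Deconstruct(out double x, out double y, out double z) =>
        (x, y, z) = (X, Y, Z);

    // Extracts only the planar values (arity 2).
    public void Deconstruct(out double x, out double y) =>
        (x, y) = (X, Y);
}
```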

The preceding section described the compiler-synthesized `Deconstruct` method for `record` types with a primary constructor. You can declare more `Deconstruct` methods in record types. These methods can either add other properties, remove some of the default properties, or both. You can also declare a `Deconstruct` method that matches the compiler-synthesized signature. If you declare such a `Deconstruct` method, the compiler doesn't synthesize one.

Multiple `Deconstruct` methods are allowed as long as the compiler can determine one unique `Deconstruct` method for a deconstruction expression. Typically, multiple `Deconstruct` methods for the same type have different numbers of parameters. You can also create multiple `Deconstruct` methods that differ by parameter types. However, in many cases, too many `Deconstruct` methods can lead to ambiguity errors and misleading results.
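
Because the extension-method option isn't illustrated by a snippet here, the following sketch shows one way it can look; the `Version` type and the member names are illustrative choices:

```csharp
// Illustrative sketch: add deconstruction to a type you don't own
// by declaring Deconstruct as an extension method.
var sdk = new Version(8, 0, 100);
(int major, int minor) = sdk;   // calls the extension method below

Console.WriteLine($"{major}.{minor}");  // 8.0

public static class VersionExtensions
{
    public static void Deconstruct(this Version version, out int major, out int minor) =>
        (major, minor) = (version.Major, version.Minor);
}
```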

## C# language specification

For more information, see the deconstruction section of the [C# Standard](~/_csharpstandard/standard/expressions.md#127-deconstruction).

## See also

- [C# operators and expressions](index.md)
- [Tuple types](../builtin-types/value-tuples.md)
- [Records](../builtin-types/record.md)
- [Structure types](../builtin-types/struct.md)