docs/ai/azure-ai-for-dotnet-developers.md (1 addition, 1 deletion)

@@ -1,7 +1,7 @@
 ---
 title: Develop .NET apps that use Azure AI services
 description: This article provides an organized list of resources about Azure AI scenarios for .NET developers, including documentation and code samples.

docs/ai/conceptual/embeddings.md (2 additions, 2 deletions)

@@ -3,7 +3,7 @@ title: "How Embeddings Extend Your AI Model's Reach"
 description: "Learn how embeddings extend the limits and capabilities of AI models in .NET."
 author: catbutler
 ms.topic: concept-article #Don't change.
-ms.date: 05/14/2024
+ms.date: 12/19/2024

 #customer intent: As a .NET developer, I want to understand how embeddings extend LLM limits and capabilities in .NET so that I have more semantic context and better outcomes for my AI apps.

@@ -34,7 +34,7 @@ Use embeddings to help a model understand the meaning and context of text, and t

 Use audio embeddings to process audio files or inputs in your app.

-For example, [Speech service](/azure/ai-services/speech-service/) supports a range of audio embeddings, including [speech to text](/azure/ai-services/speech-service/speech-to-text) and [text to speech](/azure/ai-services/speech-service/text-to-speech). You can process audio in real-time or in batches.
+For example, [Azure AI Speech](/azure/ai-services/speech-service/) supports a range of audio embeddings, including [speech to text](/azure/ai-services/speech-service/speech-to-text) and [text to speech](/azure/ai-services/speech-service/text-to-speech). You can process audio in real-time or in batches.
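
For context on the embeddings workflow this article covers, the following is a minimal sketch of generating a text embedding from .NET. It assumes the Azure.AI.OpenAI 2.x client library and an existing embedding model deployment; the endpoint, key, and deployment name are placeholder values, not part of the article.

```csharp
using System;
using System.ClientModel;
using Azure.AI.OpenAI;
using OpenAI.Embeddings;

// Placeholders: replace the endpoint, key, and deployment name with your own values.
AzureOpenAIClient azureClient = new(
    new Uri("https://YOUR_MODEL_ENDPOINT"),
    new ApiKeyCredential("YOUR_API_KEY"));

EmbeddingClient embeddingClient = azureClient.GetEmbeddingClient("YOUR_EMBEDDING_DEPLOYMENT_NAME");

// Generate an embedding vector that captures the semantic meaning of the text.
OpenAIEmbedding embedding = embeddingClient.GenerateEmbedding(
    "Azure AI Speech supports speech to text and text to speech.");

ReadOnlyMemory<float> vector = embedding.ToFloats();
Console.WriteLine($"Generated an embedding with {vector.Length} dimensions.");
```

The resulting vector can then be stored in a vector index and compared against other embeddings to find semantically similar content.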

docs/ai/conceptual/understanding-openai-functions.md (1 addition, 1 deletion)

@@ -3,7 +3,7 @@ title: "Understanding OpenAI Function Calling"
 description: "Understand how function calling enables you to integrate external tools with your OpenAI application."
 author: haywoodsloan
 ms.topic: concept-article
-ms.date: 05/14/2024
+ms.date: 12/19/2024

 #customer intent: As a .NET developer, I want to understand OpenAI function calling so that I can integrate external tools with AI completions in my .NET project.
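
As a rough sketch of the function calling flow this article describes, the example below defines a tool and inspects the model's tool-call request. It assumes the Azure.AI.OpenAI 2.x client library; the endpoint, key, deployment name, tool name, and JSON schema are illustrative placeholders.

```csharp
using System;
using System.ClientModel;
using System.Collections.Generic;
using Azure.AI.OpenAI;
using OpenAI.Chat;

AzureOpenAIClient azureClient = new(
    new Uri("https://YOUR_MODEL_ENDPOINT"),
    new ApiKeyCredential("YOUR_API_KEY"));
ChatClient chatClient = azureClient.GetChatClient("YOUR_MODEL_DEPLOYMENT_NAME");

// Describe an external tool the model is allowed to call. The name and JSON
// schema are illustrative, not part of any SDK.
ChatTool weatherTool = ChatTool.CreateFunctionTool(
    "get_current_weather",
    "Gets the current weather for a city.",
    BinaryData.FromString("""
        {
          "type": "object",
          "properties": { "city": { "type": "string" } },
          "required": ["city"]
        }
        """));

ChatCompletionOptions options = new();
options.Tools.Add(weatherTool);

List<ChatMessage> messages = [new UserChatMessage("What's the weather in Seattle right now?")];
ChatCompletion completion = chatClient.CompleteChat(messages, options);

// If the model decides the tool is needed, it returns a tool call instead of text.
if (completion.FinishReason == ChatFinishReason.ToolCalls)
{
    foreach (ChatToolCall toolCall in completion.ToolCalls)
    {
        Console.WriteLine($"Model requested {toolCall.FunctionName} with arguments {toolCall.FunctionArguments}");
        // Invoke your own code here, then return the result to the model as a ToolChatMessage.
    }
}
```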

docs/ai/conceptual/understanding-tokens.md (3 additions, 7 deletions)

@@ -3,15 +3,15 @@ title: "Understanding tokens"
 description: "Understand how large language models (LLMs) use tokens to analyze semantic relationships and generate natural language outputs"
 author: haywoodsloan
 ms.topic: concept-article
-ms.date: 05/14/2024
+ms.date: 12/19/2024

 #customer intent: As a .NET developer, I want understand how large language models (LLMs) use tokens so I can add semantic analysis and text generation capabilities to my .NET projects.

 ---

 # Understand tokens

-Tokens are words, character sets, or combinations of words and punctuation that are used by large language models (LLMs) to decompose text into. Tokenization is the first step in training. The LLM analyzes the semantic relationships between tokens, such as how commonly they're used together or whether they're used in similar contexts. After training, the LLM uses those patterns and relationships to generate a sequence of output tokens based on the input sequence.
+Tokens are words, character sets, or combinations of words and punctuation that are generated by large language models (LLMs) when they decompose text. Tokenization is the first step in training. The LLM analyzes the semantic relationships between tokens, such as how commonly they're used together or whether they're used in similar contexts. After training, the LLM uses those patterns and relationships to generate a sequence of output tokens based on the input sequence.

 ## Turning text into tokens

@@ -89,11 +89,7 @@ Output generation is an iterative operation. The model appends the predicted tok

 ### Token limits

-LLMs have limitations regarding the maximum number of tokens that can be used as input or generated as output. This limitation often causes the input and output tokens to be combined into a maximum context window.
-
-For example, GPT-4 supports up to 8,192 tokens of context. The combined size of the input and output tokens can't exceed 8,192.
-
-Taken together, a model's token limit and tokenization method determine the maximum length of text that can be provided as input or generated as output.
+LLMs have limitations regarding the maximum number of tokens that can be used as input or generated as output. This limitation often causes the input and output tokens to be combined into a maximum context window. Taken together, a model's token limit and tokenization method determine the maximum length of text that can be provided as input or generated as output.

 For example, consider a model that has a maximum context window of 100 tokens. The model processes our example sentences as input text:
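
To make the context-window arithmetic concrete, here is a minimal sketch of counting a prompt's tokens and checking it against a budget. It assumes the Microsoft.ML.Tokenizers NuGet package; the model name, budget values, and sample sentence are illustrative only, and real limits depend on the model you deploy.

```csharp
using System;
using Microsoft.ML.Tokenizers;

// Illustrative budget values; actual limits depend on the deployed model.
const int contextWindow = 8192;
const int reservedForOutput = 1000;

// Tiktoken is the tokenizer family used by GPT-style models.
Tokenizer tokenizer = TiktokenTokenizer.CreateForModel("gpt-4o");

string prompt = "I heard a dog bark loudly at a cat.";
int promptTokens = tokenizer.CountTokens(prompt);

Console.WriteLine($"Prompt uses {promptTokens} tokens.");
Console.WriteLine(promptTokens <= contextWindow - reservedForOutput
    ? "Prompt fits within the context window budget."
    : "Prompt is too long; trim it or summarize earlier context.");
```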

docs/ai/get-started-app-chat-template.md (1 addition, 1 deletion)

@@ -1,7 +1,7 @@
 ---
 title: Get started with the chat using your own data sample for .NET
 description: Get started with .NET and search across your own data using a chat app sample implemented using Azure OpenAI Service and Retrieval Augmented Generation (RAG) in Azure AI Search. Easily deploy with Azure Developer CLI. This article uses the Azure AI Reference Template sample.
 # CustomerIntent: As a .NET developer new to Azure OpenAI, I want deploy and use sample code to interact with app infused with my own business data so that learn from the sample code.

 #customer intent: As a .NET developer, I want to manage OpenAI Content Filtering in a .NET app

 ---

-# Work with OpenAI content filtering in a .NET app
+# Work with Azure OpenAI content filtering in a .NET app

 This article demonstrates how to handle content filtering concerns in a .NET app. Azure OpenAI Service includes a content filtering system that works alongside core models. This system works by running both the prompt and completion through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design might affect completions and thus filtering behavior.

@@ -27,33 +27,23 @@ The [Content Filtering](/azure/ai-services/openai/concepts/content-filter) docum

 To use the sample code in this article, you need to create and assign a content filter to your OpenAI model.

-1.[Create and assign a content filter](/azure/ai-services/openai/how-to/content-filters) to your provisioned GPT-35 or GPT-4 model.
+1.[Create and assign a content filter](/azure/ai-services/openai/how-to/content-filters) to your provisioned model.

 1. Add the [`Azure.AI.OpenAI`](https://www.nuget.org/packages/Azure.AI.OpenAI) NuGet package to your project.

 ```dotnetcli
 dotnet add package Azure.AI.OpenAI
 ```

-1. Create a simple chat completion flow in your .NET app using the `OpenAiClient`. Replace the `YOUR_OPENAI_ENDPOINT`, `YOUR_OPENAI_KEY`, and `YOUR_OPENAI_DEPLOYMENT` values with your own.
+1. Create a simple chat completion flow in your .NET app using the `AzureOpenAiClient`. Replace the `YOUR_MODEL_ENDPOINT` and `YOUR_MODEL_DEPLOYMENT_NAME` values with your own.
 1. Print out the content filtering results for each category.
+1. Replace the `YOUR_PROMPT` placeholder with your own message and run the app to experiment with content filtering results. If you enter a prompt the AI considers unsafe, Azure OpenAI returns a `400 Bad Request` code. The app prints a message in the console similar to the following:
-1. Replace the `YOUR_PROMPT` placeholder with your own message and run the app to experiment with content filtering results. The following output shows an example of a prompt that triggers a low severity content filtering result:
-
-```output
-I am sorry if I have done anything to upset you.
-Is there anything I can do to assist you and make things better?
-
-Hate category is filtered: False with low severity.
-SelfHarm category is filtered: False with safe severity.
-Sexual category is filtered: False with safe severity.
-Violence category is filtered: False with low severity.
-```
+```output
+The response was filtered due to the prompt triggering Azure OpenAI's content management policy...
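
As a hedged sketch of the chat completion flow these steps describe, the example below sends a prompt, prints the per-category filter results, and handles a filtered prompt. It assumes the Azure.AI.OpenAI 2.x client library; the endpoint, key, deployment name, and prompt are placeholders, and the content-filter extension method is an assumption whose exact name and return type may differ across package versions.

```csharp
using System;
using System.ClientModel;
using System.Collections.Generic;
using Azure.AI.OpenAI;
using Azure.AI.OpenAI.Chat;
using OpenAI.Chat;

AzureOpenAIClient azureClient = new(
    new Uri("https://YOUR_MODEL_ENDPOINT"),
    new ApiKeyCredential("YOUR_API_KEY"));
ChatClient chatClient = azureClient.GetChatClient("YOUR_MODEL_DEPLOYMENT_NAME");

try
{
    List<ChatMessage> messages = [new UserChatMessage("YOUR_PROMPT")];
    ChatCompletion completion = chatClient.CompleteChat(messages);

    Console.WriteLine(completion.Content[0].Text);

    // Assumption: Azure.AI.OpenAI 2.x exposes per-category filter results through
    // an extension method on ChatCompletion; names vary by package version.
    var filter = completion.GetResponseContentFilterResult();
    Console.WriteLine($"Hate category is filtered: {filter.Hate.Filtered} with {filter.Hate.Severity} severity.");
    Console.WriteLine($"SelfHarm category is filtered: {filter.SelfHarm.Filtered} with {filter.SelfHarm.Severity} severity.");
    Console.WriteLine($"Sexual category is filtered: {filter.Sexual.Filtered} with {filter.Sexual.Severity} severity.");
    Console.WriteLine($"Violence category is filtered: {filter.Violence.Filtered} with {filter.Violence.Severity} severity.");
}
catch (ClientResultException ex) when (ex.Status == 400)
{
    // Prompts that trigger the content management policy come back as 400 Bad Request.
    Console.WriteLine($"The prompt was filtered: {ex.Message}");
}
```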