articles/ai-services/openai/concepts/content-filter.md
Text and image models support Drugs as an additional classification.

| Sexual | Sexual describes language related to anatomical organs and genitals, romantic relationships and sexual acts, acts portrayed in erotic or affectionate terms, including those portrayed as an assault or a forced sexual violent act against one’s will. <br><br> This includes but is not limited to:<ul><li>Vulgar content</li><li>Prostitution</li><li>Nudity and Pornography</li><li>Abuse</li><li>Child exploitation, child abuse, child grooming</li></ul> |
| Violence | Violence describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, guns, and related entities. <br><br>This includes, but isn't limited to: <ul><li>Weapons</li><li>Bullying and intimidation</li><li>Terrorist and violent extremism</li><li>Stalking</li></ul> |
| Self-Harm | Self-harm describes language related to physical actions intended to purposely hurt, injure, or damage one’s body, or kill oneself. <br><br> This includes, but isn't limited to: <ul><li>Eating Disorders</li><li>Bullying and intimidation</li></ul> |
| Protected Material for Text<sup>1</sup> | Protected material text describes known text content (for example, song lyrics, articles, recipes, and selected web content) that can be outputted by large language models. |
| Protected Material for Code | Protected material code describes source code that matches a set of source code from public repositories, which can be outputted by large language models without proper citation of source repositories. |
| User Prompt Attacks | User prompt attacks are user prompts designed to provoke the generative AI model into exhibiting behaviors it was trained to avoid or to break the rules set in the system message. Such attacks can vary from intricate role-play to subtle subversion of the safety objective. |
| Indirect Attacks | Indirect attacks, also referred to as indirect prompt attacks or cross-domain prompt injection attacks, are a potential vulnerability where third parties place malicious instructions inside of documents that the generative AI system can access and process. Requires [document embedding and formatting](#embedding-documents-in-your-prompt). |
| Groundedness<sup>2</sup> | Groundedness detection flags whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. Ungrounded material refers to instances where the LLMs produce information that is nonfactual or inconsistent with the source materials. Requires [document embedding and formatting](#embedding-documents-in-your-prompt). |

<sup>1</sup> If you're an owner of text material and want to submit text content for protection, [file a request](https://aka.ms/protectedmaterialsform).

<sup>2</sup> Available only in streaming scenarios. The following regions support groundedness detection: Central US, East US, France Central, and Canada East.
When annotations are enabled as shown in the code snippets below, the following information is returned:

|indirect attacks|detected (true or false), </br>filtered (true or false)|
|protected material text|detected (true or false), </br>filtered (true or false)|
|protected material code|detected (true or false), </br>filtered (true or false), </br>Example citation of public GitHub repository where code snippet was found, </br>The license of the repository|
|Groundedness|detected (true or false)</br>filtered (true or false)</br>details (`completion_end_offset`, `completion_start_offset`)|

When displaying code in your application, we strongly recommend that the application also displays the example citation from the annotations. Compliance with the cited license may also be required for Customer Copyright Commitment coverage.

See the following table for the annotation availability in each API version:
| Hate | ✅ | ✅ | ✅ | ✅ | ✅ |
| Violence | ✅ | ✅ | ✅ | ✅ | ✅ |
| Sexual | ✅ | ✅ | ✅ | ✅ | ✅ |
| Self-harm | ✅ | ✅ | ✅ | ✅ | ✅ |
| Prompt Shield for user prompt attacks | ✅ | ✅ | ✅ | ✅ | ✅ |
| Prompt Shield for indirect attacks | | | ✅ | | |
| Protected material text | ✅ | ✅ | ✅ | ✅ | ✅ |
| Protected material code | ✅ | ✅ | ✅ | ✅ | ✅ |
| Profanity blocklist | ✅ | ✅ | ✅ | ✅ | ✅ |
| Custom blocklist | ✅ | | ✅ | ✅ | ✅ |
| Groundedness<sup>1</sup> | ✅ | | | | |

<sup>1</sup> Available only in streaming scenarios. The following regions support groundedness detection: Central US, East US, France Central, and Canada East.
For details on the inference REST API endpoints for Azure OpenAI and how to create chat and completions calls, see the [Azure OpenAI Service REST API reference](../reference.md). Annotations are returned for all scenarios when using any preview API version starting from `2023-06-01-preview`, as well as the GA API version `2024-02-01`.
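When annotations are returned, a client can inspect them programmatically. The sketch below assumes a parsed response dict with `content_filter_results` on each choice; treat the exact field names (including the `citation` object in the hypothetical example) as illustrative of the annotation format described above, not as the definitive schema for every API version.

```python
# Sketch: summarizing content filter annotations from a parsed completions
# response (a plain dict, e.g. from response.json()). Field names follow the
# annotation format described in this article and may vary by API version.
def summarize_annotations(response: dict) -> dict:
    results = response["choices"][0].get("content_filter_results", {})
    summary = {}
    for category, info in results.items():
        if "severity" in info:  # Hate, Violence, Sexual, Self-harm
            summary[category] = {"filtered": info["filtered"], "severity": info["severity"]}
        else:  # detection-style annotations (protected material, prompt attacks)
            summary[category] = {"detected": info.get("detected"), "filtered": info.get("filtered")}
    return summary

# Hypothetical response used only to exercise the helper.
example_response = {
    "choices": [{
        "content_filter_results": {
            "hate": {"filtered": False, "severity": "safe"},
            "protected_material_code": {
                "filtered": False,
                "detected": True,
                "citation": {"URL": "https://github.com/example/repo", "license": "MIT"},
            },
        }
    }]
}
print(summarize_annotations(example_response))
```

A real application would also surface the citation URL and license to the user, as recommended above.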
### Groundedness

#### Annotate only

Returns offsets referencing the ungrounded completion content.

```json
{
  "ungrounded_material": {
    "details": [
      {
        "completion_end_offset": 127,
        "completion_start_offset": 27
      }
    ],
    "detected": true,
    "filtered": false
  }
}
```

#### Annotate and filter

Blocks completion content when ungrounded completion content was detected.

```json
{
  "ungrounded_material": {
    "detected": true,
    "filtered": true
  }
}
```
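The offsets in the annotate-only response let a client pull out the flagged span. A minimal sketch, assuming the offsets are character indices into the completion string:

```python
# Sketch: extracting ungrounded spans flagged by groundedness detection.
# Assumes completion_start_offset/completion_end_offset are character
# indices into the completion text (verify against the API reference).
def ungrounded_spans(completion: str, annotation: dict) -> list[str]:
    if not annotation.get("detected"):
        return []
    return [
        completion[d["completion_start_offset"]:d["completion_end_offset"]]
        for d in annotation.get("details", [])
    ]

annotation = {
    "details": [{"completion_start_offset": 4, "completion_end_offset": 8}],
    "detected": True,
    "filtered": False,
}
print(ungrounded_spans("The moon is made of cheese.", annotation))  # → ['moon']
```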
### Example scenario: An input prompt containing content that is classified at a filtered category and severity level is sent to the completions API
articles/ai-services/speech-service/text-to-speech-avatar/what-is-text-to-speech-avatar.md
- Throughout an avatar real-time session or batch content creation, the text-to-speech, speech-to-text, Azure OpenAI, or other Azure services are charged separately.
- Refer to [text to speech avatar pricing note](../text-to-speech.md#text-to-speech-avatar) to learn how billing works for the text-to-speech avatar feature.
- For the detailed pricing, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). Note that avatar pricing will only be visible for service regions where the feature is available, including Southeast Asia, North Europe, West Europe, Sweden Central, South Central US, East US 2, and West US 2.

## Available locations

The text to speech avatar feature is only available in the following service regions: Southeast Asia, North Europe, West Europe, Sweden Central, South Central US, East US 2, and West US 2.
> Using the [Azure AI model inference service](https://aka.ms/aiservices/inference) requires version `0.2.4` for `llama-index-llms-azure-inference` or `llama-index-embeddings-azure-inference`.
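To illustrate the version requirement, here is a hypothetical helper that compares a dotted version string against the `0.2.4` minimum (a real project should use `importlib.metadata.version` plus `packaging.version` instead of hand-rolled parsing):

```python
# Sketch: checking a dotted version string against the 0.2.4 minimum
# required for the Azure AI model inference service integration.
# meets_minimum is a hypothetical helper, not part of LlamaIndex.
def meets_minimum(installed: str, minimum: str = "0.2.4") -> bool:
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) >= parse(minimum)

print(meets_minimum("0.2.3"))  # → False
print(meets_minimum("0.3.0"))  # → True
```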
Once configured, create a client to connect to the endpoint.

```python
import os
```
> [!TIP]
> If your model is an OpenAI model deployed to Azure OpenAI service or AI services resource, configure the client as indicated at [Azure OpenAI models and Azure AI model inference service](#azure-openai-models-and-azure-ai-model-inference-service).
If your endpoint is serving more than one model, like with the [Azure AI model inference service](../../ai-services/model-inference.md) or [GitHub Models](https://github.com/marketplace/models), you have to indicate the `model_name` parameter:
```python
import os
from llama_index.llms.azure_inference import AzureAICompletionsModel
```
### Azure OpenAI models and Azure AI model inference service
If you are using Azure OpenAI models or the [Azure AI model inference service](../../ai-services/model-inference.md), ensure you have at least version `0.2.4` of the LlamaIndex integration. Use the `api_version` parameter in case you need to select a specific `api_version`. For the [Azure AI model inference service](../../ai-services/model-inference.md), you need to pass the `model_name` parameter:
```python
from llama_index.llms.azure_inference import AzureAICompletionsModel
```
articles/machine-learning/concept-customer-managed-keys.md
Pipelines metadata that previously was stored in a storage account in a managed resource group is now stored in the storage account in your subscription that is associated with the Azure Machine Learning workspace. Since this Azure Storage resource is managed separately in your subscription, you're responsible for configuring encryption settings on it.
To opt in for this preview, set the `enableServiceSideCMKEncryption` property on a REST API call or in your Bicep or Resource Manager template. You can also use the Azure portal.
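As a rough illustration of the REST API route, the opt-in flag can be included in the request body when creating or updating the workspace. This is only a sketch of building that payload: the property name comes from this article, but its exact placement under `properties` and the surrounding request details are assumptions to verify against the current Azure Machine Learning REST reference.

```python
# Sketch: building a workspace update payload that opts in to service-side
# CMK encryption. The placement of the flag under "properties" is an
# assumption; check the Azure Machine Learning REST API reference.
import json

workspace_update = {
    "properties": {
        "enableServiceSideCMKEncryption": True,
    }
}

# This JSON body would be sent with a PUT/PATCH to the workspace resource.
body = json.dumps(workspace_update)
print(body)
```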
:::image type="content" source="./media/concept-customer-managed-keys/cmk-service-side-encryption.png" alt-text="Screenshot of the encryption tab with the option for server side encryption selected." lightbox="./media/concept-customer-managed-keys/cmk-service-side-encryption.png":::

> [!NOTE]
> During this preview, key rotation and data labeling capabilities aren't supported. Server-side encryption currently isn't supported when the Azure Key Vault that stores your encryption key has public network access disabled.
For templates that create a workspace with service-side encryption of metadata, see:

- [Bicep template for creating a default workspace](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-workspace-cmk-service-side-encryption)
- [Bicep template for creating a hub workspace](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/aistudio-cmk-service-side-encryption)
articles/search/includes/quickstarts/java.md
ms.service: azure-ai-search
ms.custom:
  - ignite-2023
ms.topic: include
ms.date: 11/01/2024
---
Build a Java console application using the [Azure.Search.Documents](/java/api/overview/azure/search) library to create, load, and query a search index.
Use the following tools to create this quickstart.

Local development using roles includes these steps:

- Assign your personal identity to RBAC roles on the specific resource.
- Use a tool like the Azure CLI or Azure PowerShell to authenticate with Azure.
- Establish environment variables for your resource.
### Roles for local development

As a local developer, your Azure identity needs full control over data plane operations. These are the suggested roles:

- Search Service Contributor, create and manage objects
- Search Index Data Contributor, load an index
- Search Index Data Reader, query an index

Find your personal identity with one of the following tools. Use that identity as the `<identity-id>` value.