Commit 8dedabf

Merge pull request #1612 from MicrosoftDocs/main
11/19/2024 AM Publish
2 parents 4189658 + 8daaa1b commit 8dedabf

158 files changed: +5771 −641 lines


.openpublishing.publish.config.json

Lines changed: 6 additions & 0 deletions

@@ -110,6 +110,12 @@
       "branch": "dantaylo/nov2024",
       "branch_mapping": {}
     },
+    {
+      "path_to_root": "azureai-samples-csharp",
+      "url": "https://github.com/Azure-Samples/azureai-samples",
+      "branch": "dantaylo/csharp",
+      "branch_mapping": {}
+    },
     {
       "path_to_root": "azureml-examples-main",
       "url": "https://github.com/azure/azureml-examples",

articles/ai-services/computer-vision/concept-tag-images-40.md

Lines changed: 10 additions & 12 deletions

@@ -1,22 +1,22 @@
 ---
 title: Content tags - Image Analysis 4.0
 titleSuffix: Azure AI services
-description: Learn concepts related to the images tagging feature of the Image Analysis 4.0 API.
+description: Learn how to generate image tags for a wide variety of objects by using the Image Analysis 4.0 API.
 #services: cognitive-services
 author: PatrickFarley
 manager: nitinme
 
 ms.service: azure-ai-vision
 ms.topic: conceptual
-ms.date: 01/19/2024
+ms.date: 11/01/2024
 ms.author: pafarley
 ---
 
-# Image tagging (version 4.0)
+# Image tagging with Image Analysis version 4.0
 
-Image Analysis can return content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on. Tags aren't organized as a taxonomy and don't have inheritance hierarchies. When tags are ambiguous or not common knowledge, the API response provides hints to clarify the meaning of the tag in context of a known setting.
+Image Analysis can return content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images. Tagging isn't limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on. Tags aren't organized as a taxonomy and don't have inheritance hierarchies. When tags are ambiguous or not common knowledge, the API response provides hints to clarify the meaning of the tag in the context of a known setting.
 
-Try out the image tagging features quickly and easily in your browser using Vision Studio.
+Try out the image tagging feature quickly and easily in your browser by using Azure AI Vision Studio.
 
 > [!div class="nextstepaction"]
 > [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
@@ -25,8 +25,7 @@ Try out the image tagging features quickly and easily in your browser using Visi
 
 The following JSON response illustrates what Azure AI Vision returns when tagging visual features detected in the example image.
 
-![A blue house and the front yard](./Images/house_yard.png).
-
+:::image type="content" source="images/house_yard.png" alt-text="Photograph of a blue house and the front yard.":::
 
 ```json
 {
@@ -138,12 +137,11 @@ The following JSON response illustrates what Azure AI Vision returns when taggin
 
 ## Use the API
 
-The tagging feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Tags` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"tags"` section.
-
+The tagging feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Tags` in the `features` query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"tags"` section.
 
-* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-csharp)
+* [Quickstart: Image Analysis 4.0](./quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-csharp)
 
-## Next steps
+## Related content
 
-* Learn the related concept of [describing images](concept-describe-images-40.md).
+* Learn the related concept of [describing images](concept-describe-images-40.md)
 * [Call the Analyze Image API](./how-to/call-analyze-image-40.md)
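
To make the revised "Use the API" guidance above concrete, here's a minimal sketch of a REST call that requests tags. It isn't part of this commit: the endpoint path, `api-version` value, and placeholder resource name and key are assumptions based on the general Image Analysis 4.0 pattern, so verify them against the [Analyze Image](https://aka.ms/vision-4-0-ref) reference.

```python
# Sketch of an Image Analysis 4.0 call that requests tags (not part of this commit).
# The endpoint path, api-version, and placeholders are assumptions; verify them
# against the Analyze Image 4.0 reference.
import requests

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"  # hypothetical placeholder
key = "<your-key>"  # hypothetical placeholder

response = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={"api-version": "2024-02-01", "features": "tags"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://example.com/house_yard.png"},  # any publicly accessible image URL
)
response.raise_for_status()

result = response.json()
# Parse the tags section of the response; in 4.0 the tags typically appear
# under a "tagsResult" object containing name/confidence pairs.
for tag in result.get("tagsResult", {}).get("values", []):
    print(tag["name"], tag["confidence"])
```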

articles/ai-services/computer-vision/how-to/call-analyze-image.md

Lines changed: 75 additions & 41 deletions
Large diffs are not rendered by default.

articles/ai-services/computer-vision/overview-vision-studio.md

Lines changed: 17 additions & 17 deletions

@@ -7,59 +7,59 @@ manager: nitinme
 
 ms.service: azure-ai-vision
 ms.topic: overview
-ms.date: 01/19/2024
+ms.date: 11/04/2024
 ms.author: pafarley
 ---
 
 # What is Vision Studio?
 
-[Vision Studio](https://portal.vision.cognitive.azure.com/) is a set of UI-based tools that lets you explore, build, and integrate features from Azure AI Vision.
+[Vision Studio](https://portal.vision.cognitive.azure.com) is a set of UI-based tools that lets you explore, build, and integrate features from Azure AI Vision.
 
-Vision Studio lets you try several service features and sample their returned data in a quick, straightforward manner. Using Vision Studio, you can start experimenting with the services and learning what they offer without needing to write any code. Then, use the available client libraries and REST APIs to get started embedding these services into your own applications.
+Vision Studio lets you try several service features and sample their returned data in a quick, straightforward manner. Using Vision Studio, you can start experimenting with the services and learning what they offer without needing to write any code. Then, use the available client libraries and REST APIs to embed these services into your own applications.
 
 ## Get started using Vision Studio
 
-To use Vision Studio, you'll need an Azure subscription and a resource for Azure AI services for authentication. You can also use this resource to call the services in the try-it-out experiences. Follow these steps to get started.
+To use Vision Studio, you need an Azure subscription and a resource for Azure AI services for authentication. You can also use this resource to call the services in the try-it-out experiences. Follow these steps to get started.
 
-1. Create an Azure Subscription if you don't have one already. You can [create one for free](https://azure.microsoft.com/free/ai/).
+1. Create an Azure subscription if you don't have one already. You can [create one for free](https://azure.microsoft.com/free/ai/).
 
-1. Go to the [Vision Studio website](https://portal.vision.cognitive.azure.com/). If it's your first time logging in, you'll see a popup window that prompts you to sign in to Azure and then choose or create a Computer Vision resource. You can skip this step and do it later.
+1. Go to the [Vision Studio website](https://portal.vision.cognitive.azure.com). If it's your first visit, a popup window prompts you to sign in to Azure and then choose or create a Vision resource. You can skip this step and do it later.
 
 :::image type="content" source="./Images/vision-studio-wizard-1.png" alt-text="Screenshot of Vision Studio startup wizard.":::
 
-1. Select **Choose resource**, then select an existing resource within your subscription. If you'd like to create a new one, select **Create a new resource**. Then enter information for your new resource, such as a name, location, and resource group.
+1. Select **Choose resource**, then select an existing resource within your subscription. If you want to create a new one, select **Create a new resource**. Then enter information for your new resource, such as a name, location, and resource group.
 
 :::image type="content" source="./Images/vision-studio-wizard-2.png" alt-text="Screenshot of Vision Studio resource selection panel.":::
 
 > [!TIP]
-> * When you select a location for your Azure resource, choose one that's closest to you for lower latency.
+> * For lower latency, select a location for your Azure resource that's closest to you.
 > * If you use the free pricing tier, you can keep using the Vision service even after your Azure free trial or service credit expires.
 
-1. Select **Create resource**. Your resource will be created, and you'll be able to try the different features offered by Vision Studio.
+1. Select **Create resource**. After your resource is created, you can try the different features offered by Vision Studio.
 
 :::image type="content" source="./Images/vision-studio-home-page.png" alt-text="Screenshot of Vision Studio home page.":::
 
 1. From here, you can select any of the different features offered by Vision Studio. Some of them are outlined in the service quickstarts:
-* [OCR quickstart](quickstarts-sdk/client-library.md?pivots=vision-studio)
+* [Optical character recognition (OCR) quickstart](quickstarts-sdk/client-library.md?pivots=vision-studio)
 * Image Analysis [4.0 quickstart](quickstarts-sdk/image-analysis-client-library-40.md?pivots=vision-studio) and [3.2 quickstart](quickstarts-sdk/image-analysis-client-library.md?pivots=vision-studio)
 * [Face quickstart](quickstarts-sdk/identity-client-library.md?pivots=vision-studio)
 
-## Pre-configured features
+## Preconfigured features
 
-Azure AI Vision offers multiple features that use prebuilt, pre-configured models for performing various tasks, such as: understanding how people move through a space, detecting faces in images, and extracting text from images. See the [Azure AI Vision overview](overview.md) for a list of features offered by the Vision service.
+Azure AI Vision offers multiple features that use prebuilt, preconfigured models for performing various tasks, such as: understanding how people move through a space, detecting faces in images, and extracting text from images. See the [Azure AI Vision overview](overview.md) for a list of features offered by the Vision service.
 
 Each of these features has one or more try-it-out experiences in Vision Studio that allow you to upload images and receive JSON and text responses. These experiences help you quickly test the features using a no-code approach.
 
-## Cleaning up resources
+## Clean up resources
 
 If you want to remove an Azure AI services resource after using Vision Studio, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. You can't delete your resource directly from Vision Studio, so use one of the following methods:
-* [Using the Azure portal](../multi-service-resource.md?pivots=azportal#clean-up-resources)
-* [Using the Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources)
+* [Use the Azure portal](../multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Use the Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources)
 
 > [!TIP]
 > In Vision Studio, you can find your resource's details (such as its name and pricing tier) as well as switch resources by selecting the Settings icon in the top-right corner of the Vision Studio screen.
 
-## Next steps
+## Related content
 
-* Go to [Vision Studio](https://portal.vision.cognitive.azure.com/) to begin using features offered by the service.
+* Go to [Vision Studio](https://portal.vision.cognitive.azure.com) to begin using features offered by the service.
 * For more information on the features offered, see the [Azure AI Vision overview](overview.md).

articles/ai-services/content-safety/whats-new.md

Lines changed: 1 addition & 1 deletion

@@ -21,7 +21,7 @@ Learn what's new in the service. These items might be release notes, videos, blo
 ### Upcoming deprecations
 
 To align with Content Safety versioning and lifecycle management policies, the following versions are scheduled for deprecation:
-* **Effective January 28, 2024**: All versions except `2024-09-01`, `2024-09-15-preview`, and `2024-09-30-preview` will be deprecated and no longer supported. We encourage users to transition to the latest available versions to continue receiving full support and updates. If you have any questions about this process or need assistance with the transition, please reach out to our support team.
+* **Effective October 28, 2024**: All versions except `2024-09-01`, `2024-09-15-preview`, and `2024-09-30-preview` will be deprecated and no longer supported. We encourage users to transition to the latest available versions to continue receiving full support and updates. If you have any questions about this process or need assistance with the transition, please reach out to our support team.
 
 ## September 2024
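
Because the deprecation applies to the API version that each request selects, the practical transition step is to pin a supported `api-version` on every call. The following is a minimal sketch assuming the Content Safety text-analysis route and a standard resource; the route, placeholders, and request body are assumptions for illustration, not part of this commit.

```python
# Sketch: pin a supported Content Safety API version on each request.
# The route and placeholders are assumptions; confirm them in the Content Safety reference.
import requests

endpoint = "https://<your-content-safety-resource>.cognitiveservices.azure.com"  # hypothetical placeholder
key = "<your-key>"  # hypothetical placeholder

response = requests.post(
    f"{endpoint}/contentsafety/text:analyze",
    params={"api-version": "2024-09-01"},  # a version that remains supported after October 28, 2024
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"text": "Sample text to screen for harmful content."},
)
response.raise_for_status()
print(response.json())
```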

Lines changed: 86 additions & 0 deletions

@@ -0,0 +1,86 @@
+---
+title: Azure AI Content Understanding audio overview
+titleSuffix: Azure AI services
+description: Learn about Azure AI Content Understanding audio solutions
+author: laujan
+ms.author: lajanuar
+manager: nitinme
+ms.service: azure
+ms.topic: overview
+ms.date: 11/19/2024
+ms.custom: ignite-2024-understanding-release
+---
+
+# Content Understanding audio solutions (preview)
+
+> [!IMPORTANT]
+>
+> * Azure AI Content Understanding is available in preview. Public preview releases provide early access to features that are in active development.
+> * Features, approaches, and processes may change or have constrained capabilities prior to General Availability (GA).
+> * For more information, *see* [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms).
+
+Content Understanding audio analyzers enable transcription and diarization of conversational audio, extracting structured fields such as summaries, sentiments, and key topics. Customize an audio analyzer template to your business needs using [Azure AI Foundry](https://ai.azure.com/) to start generating results.
+
+Here are common scenarios for using Content Understanding with conversational audio data:
+
+* Gain customer insights through summarization and sentiment analysis.
+* Assess and verify call quality and compliance in call centers.
+* Create automated summaries and metadata for podcast publishing.
+
+## Audio analyzer capabilities
+
+:::image type="content" source="../media/audio/overview/workflow-diagram.png" lightbox="../media/audio/overview/workflow-diagram.png" alt-text="Illustration of Content Understanding audio workflow.":::
+
+Content Understanding serves as a cornerstone for Media Asset Management solutions, enabling the following capabilities for audio files:
+
+### Content extraction
+
+* **Transcription**. Converts conversational audio into searchable and analyzable text-based transcripts in WebVTT format. Customizable fields can be generated from transcription data. Sentence-level and word-level timestamps are available upon request.
+
+* **`Diarization`**. Distinguishes between speakers in a conversation, attributing parts of the transcript to specific speakers.
+
+* **Speaker role detection**. Identifies agent and customer roles within contact center call data.
+
+* **Language detection**. Automatically detects the language in the audio or uses specified language/locale hints.
+
+### Field extraction
+
+Field extraction allows you to extract structured data from audio files, such as summaries, sentiments, and mentioned entities from call logs. You can begin by customizing a suggested analyzer template or creating one from scratch.
+
+## Key benefits
+
+Content Understanding offers advanced audio capabilities, including:
+
+* **Customizable data extraction**. Tailor the output to your specific needs by modifying the field schema, allowing for precise data generation and extraction.
+
+* **Generative models**. Utilize generative AI models to specify in natural language the content you want to extract, and the service generates the desired output.
+
+* **Integrated preprocessing**. Benefit from built-in preprocessing steps like transcription, diarization, and role detection, providing rich context for generative models.
+
+* **Scenario adaptability**. Adapt the service to your requirements by generating custom fields and extracting relevant data.
+
+## Content Understanding audio analyzer templates
+
+Content Understanding offers customizable audio analyzer templates:
+
+* **Post-call analytics**. Analyze call recordings to generate conversation transcripts, call summaries, sentiment assessments, and more.
+
+* **Conversation summarization**. Generate transcriptions, summaries, and sentiment assessments from conversation audio recordings.
+
+Start with a template or create a custom analyzer to meet your specific business needs.
+
+## Input requirements
+
+For a detailed list of supported audio formats, refer to our [Service limits and codecs](../service-limits.md) page.
+
+## Supported languages and regions
+
+For a complete list of supported regions, languages, and locales, see our [Language and region support](../language-region-support.md) page.
+
+## Data privacy and security
+
+Developers using Content Understanding should review Microsoft's policies on customer data. For more information, visit our [Data, protection, and privacy](https://www.microsoft.com/trust-center/privacy) page.
+
+## Next steps
+
+* Try processing your audio content using Content Understanding in [Azure AI Foundry](https://ai.azure.com/).
+* Learn more about audio [**analyzer templates**](../quickstart/use-ai-foundry.md).
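
For readers who want to script the workflow this overview describes (submit conversational audio to an analyzer, then read back the transcript and extracted fields), here's a rough sketch. Content Understanding is in preview, and the routes, `api-version`, analyzer ID, and polling pattern below are assumptions for illustration only; follow the analyzer templates quickstart linked above for the actual calls.

```python
# Rough sketch of calling a Content Understanding audio analyzer (preview).
# All routes, the api-version, the analyzer ID, and the polling pattern are
# assumptions for illustration; the real preview API may differ.
import time
import requests

endpoint = "https://<your-ai-services-resource>.cognitiveservices.azure.com"  # hypothetical placeholder
key = "<your-key>"  # hypothetical placeholder
analyzer_id = "my-post-call-analytics-analyzer"  # hypothetical analyzer built from a template

# Submit an audio recording (by URL) for analysis.
submit = requests.post(
    f"{endpoint}/contentunderstanding/analyzers/{analyzer_id}:analyze",  # assumed route
    params={"api-version": "2024-12-01-preview"},  # assumed preview version
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://example.com/call-recording.wav"},
)
submit.raise_for_status()
operation_url = submit.headers["Operation-Location"]  # assumed async polling pattern

# Poll until the analysis completes, then inspect the transcript and extracted fields.
while True:
    status = requests.get(operation_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if status.get("status") in ("Succeeded", "Failed"):
        break
    time.sleep(5)

print(status)  # expected to include the WebVTT transcript and fields such as summary and sentiment
```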
