**articles/ai-services/computer-vision/concept-tag-images-40.md** (10 additions, 12 deletions)

````diff
@@ -1,22 +1,22 @@
 ---
 title: Content tags - Image Analysis 4.0
 titleSuffix: Azure AI services
-description: Learn concepts related to the images tagging feature of the Image Analysis 4.0 API.
+description: Learn how to generate image tags for a wide variety of objects by using the Image Analysis 4.0 API.
 #services: cognitive-services
 author: PatrickFarley
 manager: nitinme
 
 ms.service: azure-ai-vision
 ms.topic: conceptual
-ms.date: 01/19/2024
+ms.date: 11/01/2024
 ms.author: pafarley
 ---
 
-# Image tagging (version 4.0)
+# Image tagging with Image Analysis version 4.0
 
-Image Analysis can return content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on. Tags aren't organized as a taxonomy and don't have inheritance hierarchies. When tags are ambiguous or not common knowledge, the API response provides hints to clarify the meaning of the tag in context of a known setting.
+Image Analysis can return content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images. Tagging isn't limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on. Tags aren't organized as a taxonomy and don't have inheritance hierarchies. When tags are ambiguous or not common knowledge, the API response provides hints to clarify the meaning of the tag in the context of a known setting.
 
-Try out the image tagging features quickly and easily in your browser using Vision Studio.
+Try out the image tagging feature quickly and easily in your browser by using Azure AI Vision Studio.
@@ -25,8 +25,7 @@
 
 The following JSON response illustrates what Azure AI Vision returns when tagging visual features detected in the example image.
 
-.
-
+:::image type="content" source="images/house_yard.png" alt-text="Photograph of a blue house and the front yard.":::
 
 ```json
 {
@@ -138,12 +137,11 @@
 
 ## Use the API
 
-The tagging feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Tags` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"tags"` section.
-
+The tagging feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Tags` in the `features` query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"tags"` section.
 
-* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-csharp)
````
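
For reference, a minimal Python sketch of the REST call that the updated paragraph describes might look like the following. The `api-version` value, the placeholder endpoint, key, and image URL, and the exact name of the tags section in the response (`tagsResult` versus `tags`) are assumptions to verify against the [Analyze Image](https://aka.ms/vision-4-0-ref) reference.

```python
import requests

# Placeholders/assumptions: replace with your own resource endpoint, key, and
# a publicly accessible image URL; confirm the current api-version in the docs.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

response = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={"api-version": "2023-10-01", "features": "tags"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "<publicly-accessible-image-url>"},
)
response.raise_for_status()
analysis = response.json()

# Depending on the API version, the tags may appear under "tagsResult" or
# "tags"; parse whichever section the response contains.
tags_section = analysis.get("tagsResult") or analysis.get("tags") or {}
for tag in tags_section.get("values", []):
    print(f"{tag['name']}: {tag['confidence']:.2f}")
```

The same endpoint also accepts binary image data (with a `Content-Type` of `application/octet-stream`) in place of a URL body.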
**articles/ai-services/computer-vision/overview-vision-studio.md** (17 additions, 17 deletions)

```diff
@@ -7,59 +7,59 @@
 
 ms.service: azure-ai-vision
 ms.topic: overview
-ms.date: 01/19/2024
+ms.date: 11/04/2024
 ms.author: pafarley
 ---
 
 # What is Vision Studio?
 
-[Vision Studio](https://portal.vision.cognitive.azure.com/) is a set of UI-based tools that lets you explore, build, and integrate features from Azure AI Vision.
+[Vision Studio](https://portal.vision.cognitive.azure.com) is a set of UI-based tools that lets you explore, build, and integrate features from Azure AI Vision.
 
-Vision Studio lets you try several service features and sample their returned data in a quick, straightforward manner. Using Vision Studio, you can start experimenting with the services and learning what they offer without needing to write any code. Then, use the available client libraries and REST APIs to get started embedding these services into your own applications.
+Vision Studio lets you try several service features and sample their returned data in a quick, straightforward manner. Using Vision Studio, you can start experimenting with the services and learning what they offer without needing to write any code. Then, use the available client libraries and REST APIs to embed these services into your own applications.
 
 ## Get started using Vision Studio
 
-To use Vision Studio, you'll need an Azure subscription and a resource for Azure AI services for authentication. You can also use this resource to call the services in the try-it-out experiences. Follow these steps to get started.
+To use Vision Studio, you need an Azure subscription and a resource for Azure AI services for authentication. You can also use this resource to call the services in the try-it-out experiences. Follow these steps to get started.
 
-1. Create an Azure Subscription if you don't have one already. You can [create one for free](https://azure.microsoft.com/free/ai/).
+1. Create an Azure subscription if you don't have one already. You can [create one for free](https://azure.microsoft.com/free/ai/).
 
-1. Go to the [Vision Studio website](https://portal.vision.cognitive.azure.com/). If it's your first time logging in, you'll see a popup window that prompts you to sign in to Azure and then choose or create a Computer Vision resource. You can skip this step and do it later.
+1. Go to the [Vision Studio website](https://portal.vision.cognitive.azure.com). If it's your first visit, a popup window prompts you to sign in to Azure and then choose or create a Vision resource. You can skip this step and do it later.
 
     :::image type="content" source="./Images/vision-studio-wizard-1.png" alt-text="Screenshot of Vision Studio startup wizard.":::
 
-1. Select **Choose resource**, then select an existing resource within your subscription. If you'd like to create a new one, select **Create a new resource**. Then enter information for your new resource, such as a name, location, and resource group.
+1. Select **Choose resource**, then select an existing resource within your subscription. If you want to create a new one, select **Create a new resource**. Then enter information for your new resource, such as a name, location, and resource group.
 
    :::image type="content" source="./Images/vision-studio-wizard-2.png" alt-text="Screenshot of Vision Studio resource selection panel.":::
 
   > [!TIP]
-  > * When you select a location for your Azure resource, choose one that's closest to you for lower latency.
+  > * For lower latency, select a location for your Azure resource that's closest to you.
   > * If you use the free pricing tier, you can keep using the Vision service even after your Azure free trial or service credit expires.
 
-1. Select **Create resource**. Your resource will be created, and you'll be able to try the different features offered by Vision Studio.
+1. Select **Create resource**. After your resource is created, you can try the different features offered by Vision Studio.
 
    :::image type="content" source="./Images/vision-studio-home-page.png" alt-text="Screenshot of Vision Studio home page.":::
 
 1. From here, you can select any of the different features offered by Vision Studio. Some of them are outlined in the service quickstarts:
-Azure AI Vision offers multiple features that use prebuilt, pre-configured models for performing various tasks, such as: understanding how people move through a space, detecting faces in images, and extracting text from images. See the [Azure AI Vision overview](overview.md) for a list of features offered by the Vision service.
+Azure AI Vision offers multiple features that use prebuilt, preconfigured models for performing various tasks, such as: understanding how people move through a space, detecting faces in images, and extracting text from images. See the [Azure AI Vision overview](overview.md) for a list of features offered by the Vision service.
 
 Each of these features has one or more try-it-out experiences in Vision Studio that allow you to upload images and receive JSON and text responses. These experiences help you quickly test the features using a no-code approach.
 
-## Cleaning up resources
+## Clean up resources
 
 If you want to remove an Azure AI services resource after using Vision Studio, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. You can't delete your resource directly from Vision Studio, so use one of the following methods:
-* [Using the Azure portal](../multi-service-resource.md?pivots=azportal#clean-up-resources)
-* [Using the Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources)
+* [Use the Azure portal](../multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Use the Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources)
 
 > [!TIP]
 > In Vision Studio, you can find your resource's details (such as its name and pricing tier) as well as switch resources by selecting the Settings icon in the top-right corner of the Vision Studio screen.
 
-## Next steps
+## Related content
 
-* Go to [Vision Studio](https://portal.vision.cognitive.azure.com/) to begin using features offered by the service.
+* Go to [Vision Studio](https://portal.vision.cognitive.azure.com) to begin using features offered by the service.
 * For more information on the features offered, see the [Azure AI Vision overview](overview.md).
```
**articles/ai-services/content-safety/whats-new.md** (1 addition, 1 deletion)

```diff
@@ -21,7 +21,7 @@
 ### Upcoming deprecations
 
 To align with Content Safety versioning and lifecycle management policies, the following versions are scheduled for deprecation:
-* **Effective January 28, 2024**: All versions except `2024-09-01`, `2024-09-15-preview`, and `2024-09-30-preview` will be deprecated and no longer supported. We encourage users to transition to the latest available versions to continue receiving full support and updates. If you have any questions about this process or need assistance with the transition, please reach out to our support team.
+* **Effective October 28, 2024**: All versions except `2024-09-01`, `2024-09-15-preview`, and `2024-09-30-preview` will be deprecated and no longer supported. We encourage users to transition to the latest available versions to continue receiving full support and updates. If you have any questions about this process or need assistance with the transition, please reach out to our support team.
```
**Azure AI Content Understanding audio overview** (new content; all lines added)

```diff
+title: Azure AI Content Understanding audio overview
+titleSuffix: Azure AI services
+description: Learn about Azure AI Content Understanding audio solutions
+author: laujan
+ms.author: lajanuar
+manager: nitinme
+ms.service: azure
+ms.topic: overview
+ms.date: 11/19/2024
+ms.custom: ignite-2024-understanding-release
+---
+
+# Content Understanding audio solutions (preview)
+
+> [!IMPORTANT]
+>
+> * Azure AI Content Understanding is available in preview. Public preview releases provide early access to features that are in active development.
+> * Features, approaches, and processes may change or have constrained capabilities prior to general availability (GA).
+> * For more information, *see* [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms).
+
+Content Understanding audio analyzers enable transcription and diarization of conversational audio, extracting structured fields such as summaries, sentiments, and key topics. Customize an audio analyzer template to your business needs using [Azure AI Foundry](https://ai.azure.com/) to start generating results.
+
+Here are common scenarios for using Content Understanding with conversational audio data:
+
+* Gain customer insights through summarization and sentiment analysis.
+* Assess and verify call quality and compliance in call centers.
+* Create automated summaries and metadata for podcast publishing.
+
+## Audio analyzer capabilities
+
+:::image type="content" source="../media/audio/overview/workflow-diagram.png" lightbox="../media/audio/overview/workflow-diagram.png" alt-text="Illustration of Content Understanding audio workflow.":::
+
+Content Understanding serves as a cornerstone for Media Asset Management solutions, enabling the following capabilities for audio files:
+
+### Content extraction
+
+* **Transcription**. Converts conversational audio into searchable and analyzable text-based transcripts in WebVTT format. Customizable fields can be generated from transcription data. Sentence-level and word-level timestamps are available upon request.
+
+* **`Diarization`**. Distinguishes between speakers in a conversation, attributing parts of the transcript to specific speakers.
+
+* **Speaker role detection**. Identifies agent and customer roles within contact center call data.
+
+* **Language detection**. Automatically detects the language in the audio or uses specified language/locale hints.
+
+### Field extraction
+
+Field extraction allows you to extract structured data from audio files, such as summaries, sentiments, and mentioned entities from call logs. You can begin by customizing a suggested analyzer template or creating one from scratch.
+
+* **Customizable data extraction**. Tailor the output to your specific needs by modifying the field schema, allowing for precise data generation and extraction.
+
+* **Generative models**. Utilize generative AI models to specify in natural language the content you want to extract, and the service generates the desired output.
+
+* **Integrated preprocessing**. Benefit from built-in preprocessing steps like transcription, diarization, and role detection, providing rich context for generative models.
+
+* **Scenario adaptability**. Adapt the service to your requirements by generating custom fields and extracting relevant data.
+
+* **Post-call analytics**. Analyze call recordings to generate conversation transcripts, call summaries, sentiment assessments, and more.
+
+* **Conversation summarization**. Generate transcriptions, summaries, and sentiment assessments from conversation audio recordings.
+
+Start with a template or create a custom analyzer to meet your specific business needs.
+
+## Input requirements
+
+For a detailed list of supported audio formats, refer to our [Service limits and codecs](../service-limits.md) page.
+
+## Supported languages and regions
+
+For a complete list of supported regions, languages, and locales, see our [Language and region support](../language-region-support.md) page.
+
+## Data privacy and security
+
+Developers using Content Understanding should review Microsoft's policies on customer data. For more information, visit our [Data, protection, and privacy](https://www.microsoft.com/trust-center/privacy) page.
+
+## Next steps
+
+* Try processing your audio content using Content Understanding in [Azure AI Foundry](https://ai.azure.com/).
+* Learn more about audio [**analyzer templates**](../quickstart/use-ai-foundry.md).
```
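
To make the field extraction section above concrete, here's a hypothetical Python sketch of the kind of field schema an audio analyzer might define for post-call analytics. The field names, types, and overall structure are illustrative assumptions rather than the service's actual schema format; start from a suggested analyzer template in [Azure AI Foundry](https://ai.azure.com/) to see the real shape.

```python
# Illustrative only: a hypothetical field schema for a post-call analytics
# analyzer. Field names and structure are assumptions, not the actual
# Content Understanding schema format.
call_analytics_fields = {
    "summary": {
        "type": "string",
        "description": "Two-to-three sentence summary of the call.",
    },
    "sentiment": {
        "type": "string",
        "enum": ["positive", "neutral", "negative"],
        "description": "Overall customer sentiment.",
    },
    "keyTopics": {
        "type": "array",
        "items": {"type": "string"},
        "description": "Main topics discussed during the call.",
    },
    "followUpRequired": {
        "type": "boolean",
        "description": "Whether the agent committed to a follow-up action.",
    },
}

# Each field's natural-language description tells the generative model what to
# extract from the transcript produced by the built-in preprocessing steps
# (transcription, diarization, and speaker role detection).
```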