
Commit 016eedc (merge, 2 parents: ea779f7 + d2c4967)

fixing merge conflict

632 files changed: +11531 −10899 lines changed


.openpublishing.publish.config.json — 5 additions, 4 deletions

```diff
@@ -3,13 +3,14 @@
     {
       "docset_name": "azure-ai",
       "build_source_folder": ".",
+      "build_output_subfolder": "azure-ai",
+      "locale": "en-us",
+      "monikers": [],
+      "moniker_ranges": [],
       "xref_query_tags": [
         "/dotnet",
         "/python"
       ],
-      "build_output_subfolder": "azure-ai",
-      "locale": "en-us",
-      "monikers": [],
       "open_to_public_contributors": true,
       "type_mapping": {
         "Conceptual": "Content",
@@ -172,4 +173,4 @@
   ],
   "branch_target_mapping": {},
   "targets": {}
-}
+}
```

articles/ai-services/.openpublishing.redirection.ai-services.json — 45 additions, 0 deletions

```diff
@@ -30,6 +30,11 @@
       "redirect_url": "/azure/ai-services/language-service/conversational-language-understanding/how-to/migrate-from-luis",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/ai-services/luis/luis-concept-data-conversion.md",
+      "redirect_url": "/azure/ai-services/language-service/conversational-language-understanding/how-to/migrate-from-luis",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/ai-services/custom-vision-service/update-application-to-3.0-sdk.md",
       "redirect_url": "/azure/ai-services/custom-vision-service/overview",
@@ -40,6 +45,11 @@
       "redirect_url": "/azure/ai-services/custom-vision-service/whats-new",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/ai-services/custom-vision-service/concepts/compare-alternatives.md",
+      "redirect_url": "/azure/ai-services/custom-vision-service/overview",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/ai-services/luis/luis-migration-authoring.md",
       "redirect_url": "/azure/ai-services/language-service/conversational-language-understanding/how-to/migrate-from-luis",
@@ -405,6 +415,21 @@
       "redirect_url": "/azure/ai-services/speech-service/release-notes",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/get-started-speaker-recognition.md",
+      "redirect_url": "/azure/ai-services/speech-service/speaker-recognition-overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/how-to-recognize-intents-from-speech-csharp.md",
+      "redirect_url": "/azure/ai-services/speech-service/intent-recognition",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/how-to-custom-speech-continuous-integration-continuous-deployment.md",
+      "redirect_url": "/azure/ai-services/speech-service/how-to-custom-speech-model-and-endpoint-lifecycle",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/ai-services/anomaly-detector/how-to/postman.md",
       "redirect_url": "/azure/ai-services/anomaly-detector/overview",
@@ -414,6 +439,26 @@
       "source_path_from_root": "/articles/ai-services/anomaly-detector//tutorials/multivariate-anomaly-detection-synapse.md",
       "redirect_url": "/azure/ai-services/anomaly-detector/overview",
       "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/language-service/summarization/custom/how-to/data-formats.md",
+      "redirect_url": "/azure/ai-services//language-service/summarization/overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/language-service/summarization/custom/how-to/deploy-model.md",
+      "redirect_url": "/azure/ai-services//language-service/summarization/overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/language-service/summarization/custom/how-to/test-evaluate.md",
+      "redirect_url": "/azure/ai-services//language-service/summarization/overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/language-service/summarization/custom/quickstart.md",
+      "redirect_url": "/azure/ai-services//language-service/summarization/overview",
+      "redirect_document_id": false
     }
   ]
 }
```

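The redirection entries added in this file all share the same three-key shape, and several of the new `redirect_url` values contain a doubled slash (`/azure/ai-services//language-service/...`). A small hypothetical checker can flag both issues before a docs build; the `check_redirects` helper below mirrors the entry shape shown in the diff, but the function itself is an illustration, not part of the OpenPublishing tooling:

```python
REQUIRED_KEYS = {"source_path_from_root", "redirect_url", "redirect_document_id"}


def check_redirects(entries):
    """Return (source_path, problem) pairs for suspicious redirect entries."""
    problems = []
    for entry in entries:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            problems.append(
                (entry.get("source_path_from_root"), "missing " + ", ".join(sorted(missing)))
            )
        # A '//' anywhere after the leading slash usually means a typo in the target path.
        if "//" in entry.get("redirect_url", "").lstrip("/"):
            problems.append(
                (entry.get("source_path_from_root"), "doubled slash in redirect_url")
            )
    return problems
```

Run against the entries in this commit, such a check would pass the LUIS and Speech redirects but flag the four summarization targets.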
articles/ai-services/anomaly-detector/index.yml — 1 addition, 1 deletion

```diff
@@ -12,7 +12,7 @@ metadata:
   manager: nitinme
   ms.service: azure-ai-anomaly-detector
   ms.topic: landing-page
-  ms.date: 01/18/2024
+  ms.date: 09/20/2024
   ms.author: mbullwin
 
 
```

articles/ai-services/anomaly-detector/overview.md — 1 addition, 1 deletion

```diff
@@ -7,7 +7,7 @@ author: mrbullwinkle
 manager: nitinme
 ms.service: azure-ai-anomaly-detector
 ms.topic: overview
-ms.date: 01/18/2024
+ms.date: 09/20/2024
 ms.author: mbullwin
 keywords: anomaly detection, machine learning, algorithms
 ---
```

articles/ai-services/cognitive-services-container-support.md — 1 addition, 1 deletion

```diff
@@ -7,7 +7,7 @@ author: aahill
 manager: nitinme
 ms.service: azure-ai-services
 ms.topic: overview
-ms.date: 09/17/2024
+ms.date: 09/25/2024
 ms.author: aahi
 keywords: on-premises, Docker, container, Kubernetes
 #Customer intent: As a potential customer, I want to know more about how Azure AI services provides and supports Docker containers for each service.
```

articles/ai-services/computer-vision/concept-describe-images-40.md — 12 additions, 10 deletions

```diff
@@ -8,35 +8,37 @@ manager: nitinme
 
 ms.service: azure-ai-vision
 ms.topic: conceptual
-ms.date: 01/19/2024
+ms.date: 09/25/2024
 ms.author: pafarley
 ---
 
 # Image captions (version 4.0)
-Image captions in Image Analysis 4.0 are available through the **Caption** and **Dense Captions** features.
 
-Caption generates a one-sentence description for all image contents. Dense Captions provides more detail by generating one-sentence descriptions of up to 10 regions of the image in addition to describing the whole image. Dense Captions also returns bounding box coordinates of the described image regions. Both these features use the latest groundbreaking Florence-based AI models.
+Image captions in Image Analysis 4.0 are available through the **Caption** and **Dense Captions** features.
 
-At this time, image captioning is available in English only.
+The Caption feature generates a one-sentence description of all the image contents. Dense Captions provides more detail by generating one-sentence descriptions of up to 10 different regions of the image in addition to describing the whole image. Dense Captions also returns bounding box coordinates of the described image regions. Both of these features use the latest Florence-based AI models.
+
+Image captioning is available in English only.
 
 > [!IMPORTANT]
-> Image captioning in Image Analysis 4.0 is only available in certain Azure data center regions: see [Region availability](./overview-image-analysis.md#region-availability). You must use a Vision resource located in one of these regions to get results from Caption and Dense Captions features.
+> Image captioning in Image Analysis 4.0 is only available in certain Azure data center regions: see [Region availability](./overview-image-analysis.md#region-availability). You must use an Azure AI Vision resource located in one of these regions to get results from Caption and Dense Captions features.
 >
-> If you have to use a Vision resource outside these regions to generate image captions, please use [Image Analysis 3.2](concept-describing-images.md) which is available in all Azure AI Vision regions.
+> If you need to use a Vision resource outside these regions to generate image captions, please use [Image Analysis 3.2](concept-describing-images.md) which is available in all Azure AI Vision regions.
 
 Try out the image captioning features quickly and easily in your browser using Vision Studio.
 
 > [!div class="nextstepaction"]
 > [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
 
-### Gender-neutral captions
-Captions contain gender terms ("man", "woman", "boy" and "girl") by default. You have the option to replace these terms with "person" in your results and receive gender-neutral captions. You can do so by setting the optional API request parameter, **gender-neutral-caption** to `true` in the request URL.
+## Gender-neutral captions
+
+By default, captions contain gender terms ("man", "woman", "boy" and "girl"). You have the option to replace these terms with "person" in your results and receive gender-neutral captions. You can do so by setting the optional API request parameter `gender-neutral-caption` to `true` in the request URL.
 
 ## Caption and Dense Captions examples
 
 #### [Caption](#tab/image)
 
-The following JSON response illustrates what the Analysis 4.0 API returns when describing the example image based on its visual features.
+The following JSON response illustrates what the Image Analysis 4.0 API returns when describing the example image based on its visual features.
 
 ![Photo of a man pointing at a screen](./Media/quickstarts/presentation.png)
 
@@ -51,7 +53,7 @@ The following JSON response illustrates what the Analysis 4.0 API returns when d
 
 #### [Dense Captions](#tab/dense)
 
-The following JSON response illustrates what the Analysis 4.0 API returns when generating dense captions for the example image.
+The following JSON response illustrates what the Image Analysis 4.0 API returns when generating dense captions for the example image.
 
 ![Photo of a tractor on a farm](./Images/farm.png)
 
```

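The reworded paragraph in that diff describes setting `gender-neutral-caption=true` as a query parameter on the request URL. As a minimal sketch of how such a URL might be assembled (the endpoint, `api-version` value, and feature names here are assumptions based on the Image Analysis 4.0 REST API, not taken from this commit):

```python
from urllib.parse import urlencode


def build_analyze_url(endpoint, gender_neutral=True):
    """Build a hypothetical Image Analysis 4.0 request URL asking for captions.

    The api-version and feature names are placeholders; check your
    resource's documentation for the values it actually accepts.
    """
    params = {
        "api-version": "2024-02-01",            # assumed GA version
        "features": "caption,denseCaptions",    # request both caption features
        "gender-neutral-caption": str(gender_neutral).lower(),
    }
    return f"{endpoint}/computervision/imageanalysis:analyze?{urlencode(params)}"
```

The image bytes or URL would then be sent in the POST body along with the subscription key header; only the URL construction is shown here.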
articles/ai-services/computer-vision/concept-describing-images.md — 4 additions, 4 deletions

```diff
@@ -8,15 +8,15 @@ manager: nitinme
 
 ms.service: azure-ai-vision
 ms.topic: conceptual
-ms.date: 04/30/2024
+ms.date: 09/25/2024
 ms.author: pafarley
 ---
 
 # Image descriptions
 
-Azure AI Vision can analyze an image and generate a human-readable phrase that describes its contents. The algorithm returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence.
+Azure AI Vision can analyze an image and generate a human-readable phrase that describes its contents. The service returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence.
 
-At this time, English is the only supported language for image description.
+English is the only supported language for image descriptions.
 
 Try out the image captioning features quickly and easily in your browser using Vision Studio.
 
@@ -25,7 +25,7 @@ Try out the image captioning features quickly and easily in your browser using V
 
 ## Image description example
 
-The following JSON response illustrates what the Analyze API returns when describing the example image based on its visual features.
+The following JSON response illustrates what the Analyze Image API returns when describing the example image based on its visual features.
 
 ![A black and white picture of buildings in Manhattan](./Images/bw_buildings.png)
 
```

articles/ai-services/computer-vision/concept-image-retrieval.md — 7 additions, 8 deletions

```diff
@@ -1,24 +1,24 @@
 ---
 title: Multimodal embeddings concepts - Image Analysis 4.0
 titleSuffix: Azure AI services
-description: Concepts related to image vectorization using the Image Analysis 4.0 API.
+description: Learn about concepts related to image vectorization and search/retrieval using the Image Analysis 4.0 API.
 #services: cognitive-services
 author: PatrickFarley
 manager: nitinme
 
 ms.service: azure-ai-vision
 ms.topic: conceptual
-ms.date: 02/20/2024
+ms.date: 09/25/2024
 ms.author: pafarley
 ---
 
 # Multimodal embeddings (version 4.0)
 
-Multimodal embedding is the process of generating a numerical representation of an image that captures its features and characteristics in a vector format. These vectors encode the content and context of an image in a way that is compatible with text search over the same vector space.
+Multimodal embedding is the process of generating a vector representation of an image that captures its features and characteristics. These vectors encode the content and context of an image in a way that is compatible with text search over the same vector space.
 
-Image retrieval systems have traditionally used features extracted from the images, such as content labels, tags, and image descriptors, to compare images and rank them by similarity. However, vector similarity search is gaining more popularity due to a number of benefits over traditional keyword-based search and is becoming a vital component in popular content search services.
+Image retrieval systems have traditionally used features extracted from the images, such as content labels, tags, and image descriptors, to compare images and rank them by similarity. However, vector similarity search offers a number of benefits over traditional keyword-based search and is becoming a vital component in popular content search services.
 
-## What's the difference between vector search and keyword-based search?
+## Differences between vector search and keyword search
 
 Keyword search is the most basic and traditional method of information retrieval. In that approach, the search engine looks for the exact match of the keywords or phrases entered by the user in the search query and compares it with the labels and tags provided for the images. The search engine then returns images that contain those exact keywords as content tags and image labels. Keyword search relies heavily on the user's ability to use relevant and specific search terms.
 
@@ -50,18 +50,17 @@ Each dimension of the vector corresponds to a different feature or attribute of
 
 The following are the main steps of the image retrieval process using Multimodal embeddings.
 
-:::image type="content" source="media/image-retrieval.png" alt-text="Diagram of image retrieval process.":::
+:::image type="content" source="media/image-retrieval.png" alt-text="Diagram of the multimodal embedding / image retrieval process.":::
 
 1. Vectorize Images and Text: the Multimodal embeddings APIs, **VectorizeImage** and **VectorizeText**, can be used to extract feature vectors out of an image or text respectively. The APIs return a single feature vector representing the entire input.
    > [!NOTE]
    > Multimodal embedding does not do any biometric processing of human faces. For face detection and identification, see the [Azure AI Face service](./overview-identity.md).
-
 1. Measure similarity: Vector search systems typically use distance metrics, such as cosine distance or Euclidean distance, to compare vectors and rank them by similarity. The [Vision studio](https://portal.vision.cognitive.azure.com/) demo uses [cosine distance](./how-to/image-retrieval.md#calculate-vector-similarity) to measure similarity.
 1. Retrieve Images: Use the top _N_ vectors similar to the search query and retrieve images corresponding to those vectors from your photo library to provide as the final result.
 
 ### Relevance score
 
-The image and video retrieval services return a field called "relevance." The term "relevance" denotes a measure of similarity score between a query and image or video frame embeddings. The relevance score is composed of two parts:
+The image and video retrieval services return a field called "relevance." The term "relevance" denotes a measure of similarity between a query and image or video frame embeddings. The relevance score is composed of two parts:
 1. The cosine similarity (that falls in the range of [0,1]) between the query and image or video frame embeddings.
 1. A metadata score, which reflects the similarity between the query and the metadata associated with the image or video frame.
 
```

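The retrieval steps kept by that diff (vectorize, measure similarity, return the top N) can be sketched in a few lines of plain Python; this is an illustration of cosine ranking over toy vectors, not the service's implementation:

```python
import math


def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


def top_n(query_vec, image_vecs, n=3):
    """Rank stored image vectors by similarity to the query vector.

    image_vecs maps an image identifier to its embedding; the real
    service returns embeddings from VectorizeImage / VectorizeText.
    """
    scored = sorted(
        ((cosine_similarity(query_vec, vec), key) for key, vec in image_vecs.items()),
        reverse=True,
    )
    return [key for _, key in scored[:n]]
```

In a production system the sort would be replaced by an approximate nearest-neighbor index, and the metadata score described above would be blended into the final relevance value.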
articles/ai-services/computer-vision/concept-object-detection-40.md — 0 additions, 2 deletions

```diff
@@ -23,8 +23,6 @@ Try out the capabilities of object detection quickly and easily in your browser
 > [!div class="nextstepaction"]
 > [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
 
-> [!TIP]
-> You can use the Object detection feature through the [Azure OpenAI](/azure/ai-services/openai/overview) service. The **GPT-4 Turbo with Vision** model lets you chat with an AI assistant that can analyze the images you share, and the Vision Enhancement option uses Image Analysis to provide the AI assistance with more details (readable text and object locations) about the image. For more information, see the [GPT-4 Turbo with Vision quickstart](/azure/ai-services/openai/gpt-v-quickstart).
 
 ## Object detection example
 
```

articles/ai-services/computer-vision/concept-ocr.md — 0 additions, 2 deletions

```diff
@@ -23,8 +23,6 @@ OCR is a machine-learning-based technique for extracting text from in-the-wild a
 
 The new Azure AI Vision Image Analysis 4.0 REST API offers the ability to extract printed or handwritten text from images in a unified performance-enhanced synchronous API that makes it easy to get all image insights including OCR results in a single API operation. The Read OCR engine is built on top of multiple deep learning models supported by universal script-based models for [global language support](./language-support.md).
 
-> [!TIP]
-> You can also use the OCR feature in conjunction with the [Azure OpenAI](/azure/ai-services/openai/overview) service. The **GPT-4 Turbo with Vision** model lets you chat with an AI assistant that can analyze the images you share, and the Vision Enhancement option uses Image Analysis to give the AI assistant more details (readable text and object locations) about the image. For more information, see the [GPT-4 Turbo with Vision quickstart](/azure/ai-services/openai/gpt-v-quickstart).
 
 ## Text extraction example
 
```
