Commit 24c8c75: Merge branch 'main' into release-2024-openai-oct
2 parents: df2e805 + 5cfea4a

406 files changed: +8891 additions, -10064 deletions


articles/ai-services/.openpublishing.redirection.ai-services.json

Lines changed: 50 additions & 0 deletions
@@ -45,6 +45,11 @@
       "redirect_url": "/azure/ai-services/custom-vision-service/whats-new",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/ai-services/custom-vision-service/concepts/compare-alternatives.md",
+      "redirect_url": "/azure/ai-services/custom-vision-service/overview",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/ai-services/luis/luis-migration-authoring.md",
       "redirect_url": "/azure/ai-services/language-service/conversational-language-understanding/how-to/migrate-from-luis",
@@ -325,6 +330,26 @@
       "redirect_url": "/azure/ai-services/computer-vision/how-to/model-customization",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/ai-services/computer-vision/deploy-computer-vision-on-premises.md",
+      "redirect_url": "/azure/ai-services/computer-vision/",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/computer-vision/spatial-analysis-web-app.md",
+      "redirect_url": "/azure/ai-services/computer-vision/",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/computer-vision/tutorials/storage-lab-tutorial.md",
+      "redirect_url": "/azure/ai-services/computer-vision/",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/custom-vision-service/iot-visual-alerts-tutorial.md",
+      "redirect_url": "/azure/ai-services/computer-vision/",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/ai-services/document-intelligence/concept-document-intelligence-studio.md",
       "redirect_url": "/azure/ai-services/document-intelligence/studio-overview",
@@ -410,6 +435,11 @@
       "redirect_url": "/azure/ai-services/speech-service/release-notes",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/get-started-speaker-recognition.md",
+      "redirect_url": "/azure/ai-services/speech-service/speaker-recognition-overview",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/ai-services/speech-service/how-to-recognize-intents-from-speech-csharp.md",
       "redirect_url": "/azure/ai-services/speech-service/intent-recognition",
@@ -429,6 +459,26 @@
       "source_path_from_root": "/articles/ai-services/anomaly-detector//tutorials/multivariate-anomaly-detection-synapse.md",
       "redirect_url": "/azure/ai-services/anomaly-detector/overview",
       "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/language-service/summarization/custom/how-to/data-formats.md",
+      "redirect_url": "/azure/ai-services//language-service/summarization/overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/language-service/summarization/custom/how-to/deploy-model.md",
+      "redirect_url": "/azure/ai-services//language-service/summarization/overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/language-service/summarization/custom/how-to/test-evaluate.md",
+      "redirect_url": "/azure/ai-services//language-service/summarization/overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/language-service/summarization/custom/quickstart.md",
+      "redirect_url": "/azure/ai-services//language-service/summarization/overview",
+      "redirect_document_id": false
     }
   ]
 }
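
The four summarization entries added above all target `/azure/ai-services//language-service/summarization/overview`, which carries a doubled slash. A lint pass over the redirection file catches this kind of slip before merge; here is a minimal sketch, assuming the file's top-level key is `redirections` (the usual shape for `.openpublishing.redirection` files). The `lint_redirects` helper is hypothetical:

```python
import json
from collections import Counter

def lint_redirects(path: str) -> None:
    """Flag duplicate sources and doubled slashes in an OPS redirection file."""
    with open(path, encoding="utf-8") as f:
        redirects = json.load(f)["redirections"]

    # Two entries redirecting the same source article conflict with each other.
    for source, count in Counter(r["source_path_from_root"] for r in redirects).items():
        if count > 1:
            print(f"duplicate source: {source} ({count} entries)")

    # Root-relative targets such as '/azure/ai-services//language-service/...'
    # contain an accidental doubled slash.
    for r in redirects:
        if "//" in r["redirect_url"]:
            print(f"doubled slash in target: {r['redirect_url']}")

lint_redirects("articles/ai-services/.openpublishing.redirection.ai-services.json")
```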

articles/ai-services/anomaly-detector/index.yml

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ metadata:
   manager: nitinme
   ms.service: azure-ai-anomaly-detector
   ms.topic: landing-page
-  ms.date: 01/18/2024
+  ms.date: 09/20/2024
   ms.author: mbullwin

articles/ai-services/anomaly-detector/overview.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ author: mrbullwinkle
 manager: nitinme
 ms.service: azure-ai-anomaly-detector
 ms.topic: overview
-ms.date: 01/18/2024
+ms.date: 09/20/2024
 ms.author: mbullwin
 keywords: anomaly detection, machine learning, algorithms
 ---

articles/ai-services/cognitive-services-container-support.md

Lines changed: 3 additions & 3 deletions
@@ -7,15 +7,15 @@ author: aahill
 manager: nitinme
 ms.service: azure-ai-services
 ms.topic: overview
-ms.date: 08/23/2024
+ms.date: 09/25/2024
 ms.author: aahi
 keywords: on-premises, Docker, container, Kubernetes
 #Customer intent: As a potential customer, I want to know more about how Azure AI services provides and supports Docker containers for each service.
 ---

 # What are Azure AI containers?

-Azure AI services provides several [Docker containers](https://www.docker.com/what-container) that let you use the same APIs that are available in Azure, on-premises. Using these containers gives you the flexibility to bring Azure AI services closer to your data for compliance, security or other operational reasons. Container support is currently available for a subset of Azure AI services.
+Azure AI services provide several [Docker containers](https://www.docker.com/what-container) that let you use the same APIs that are available in Azure, on-premises. Using these containers gives you the flexibility to bring Azure AI services closer to your data for compliance, security or other operational reasons. Container support is currently available for a subset of Azure AI services.

 > [!VIDEO https://www.youtube.com/embed/hdfbn4Q8jbo]
@@ -48,7 +48,7 @@ Azure AI containers provide the following set of Docker containers, each of which
 | Service | Container | Description | Availability |
 |--|--|--|--|
 | [LUIS][lu-containers] | **LUIS** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/language/luis/about)) | Loads a trained or published Language Understanding model, also known as a LUIS app, into a docker container and provides access to the query predictions from the container's API endpoints. You can collect query logs from the container and upload these back to the [LUIS portal](https://www.luis.ai) to improve the app's prediction accuracy. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
-| [Language service][ta-containers-keyphrase] | **Key Phrase Extraction** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/keyphrase/about)) | Extracts key phrases to identify the main points. For example, for the input text "The food was delicious and there were wonderful staff", the API returns the main talking points: "food" and "wonderful staff". | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Language service][ta-containers-keyphrase] | **Key Phrase Extraction** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/keyphrase/about)) | Extracts key phrases to identify the main points. For example, for the input text "The food was delicious and there were wonderful staff," the API returns the main talking points: "food" and "wonderful staff". | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
 | [Language service][ta-containers-language] | **Text Language Detection** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/language/about)) | For up to 120 languages, detects which language the input text is written in and report a single language code for every document submitted on the request. The language code is paired with a score indicating the strength of the score. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
 | [Language service][ta-containers-sentiment] | **Sentiment Analysis** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/sentiment/about)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
 | [Language service][ta-containers-health] | **Text Analytics for health** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/healthcare/about))| Extract and label medical information from unstructured clinical text. | Generally available |
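
The containers in this table expose the same REST surface as the cloud service. A minimal sketch of querying a locally running Key Phrase Extraction container, assuming it was started on port 5000 and serves the v3.1 `keyPhrases` route; adjust host, port, and API version to match your image:

```python
import requests

# Endpoint of a locally running Key Phrase Extraction container (assumed).
ENDPOINT = "http://localhost:5000/text/analytics/v3.1/keyPhrases"

payload = {
    "documents": [
        {"id": "1", "language": "en",
         "text": "The food was delicious and there were wonderful staff."}
    ]
}

response = requests.post(ENDPOINT, json=payload, timeout=10)
response.raise_for_status()
for doc in response.json()["documents"]:
    print(doc["keyPhrases"])  # expected: ["food", "wonderful staff"]
```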

articles/ai-services/computer-vision/computer-vision-how-to-install-containers.md

Lines changed: 1 addition & 1 deletion
@@ -134,7 +134,7 @@ More [examples](./computer-vision-resource-container-config.md#example-docker-ru
 > [!IMPORTANT]
 > The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).

-If you need higher throughput (for example, when processing multi-page files), consider deploying multiple containers [on a Kubernetes cluster](deploy-computer-vision-on-premises.md), using [Azure Storage](/azure/storage/common/storage-account-create) and [Azure Queue](/azure/storage/queues/storage-queues-introduction).
+<!--If you need higher throughput (for example, when processing multi-page files), consider deploying multiple containers [on a Kubernetes cluster](deploy-computer-vision-on-premises.md), using [Azure Storage](/azure/storage/common/storage-account-create) and [Azure Queue](/azure/storage/queues/storage-queues-introduction).-->

 If you're using Azure Storage to store images for processing, you can create a [connection string](/azure/storage/common/storage-configure-connection-string) to use when calling the container.
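
The connection-string workflow in the last context line above can be sketched with the `azure-storage-blob` package: download the image using the connection string, then post the bytes to the container. This is a sketch only; it assumes a Read 3.2 container listening on localhost:5000, and the connection string, container, and blob names are placeholders:

```python
import requests
from azure.storage.blob import BlobServiceClient

# Placeholders: substitute your storage connection string, container, and blob.
CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
# Assumed route for a locally running Read 3.2 container; adjust to your image.
READ_ENDPOINT = "http://localhost:5000/vision/v3.2/read/analyze"

blob_service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
blob = blob_service.get_blob_client(container="images", blob="scan-page-1.png")

# Download the image from storage and submit it to the container.
image_bytes = blob.download_blob().readall()
response = requests.post(
    READ_ENDPOINT,
    headers={"Content-Type": "application/octet-stream"},
    data=image_bytes,
    timeout=30,
)
response.raise_for_status()
# Read is asynchronous: poll the returned Operation-Location URL for results.
print(response.headers["Operation-Location"])
```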

articles/ai-services/computer-vision/concept-describe-images-40.md

Lines changed: 12 additions & 10 deletions
@@ -8,35 +8,37 @@ manager: nitinme

 ms.service: azure-ai-vision
 ms.topic: conceptual
-ms.date: 01/19/2024
+ms.date: 09/25/2024
 ms.author: pafarley
 ---

 # Image captions (version 4.0)
-Image captions in Image Analysis 4.0 are available through the **Caption** and **Dense Captions** features.

-Caption generates a one-sentence description for all image contents. Dense Captions provides more detail by generating one-sentence descriptions of up to 10 regions of the image in addition to describing the whole image. Dense Captions also returns bounding box coordinates of the described image regions. Both these features use the latest groundbreaking Florence-based AI models.
+Image captions in Image Analysis 4.0 are available through the **Caption** and **Dense Captions** features.

-At this time, image captioning is available in English only.
+The Caption feature generates a one-sentence description of all the image contents. Dense Captions provides more detail by generating one-sentence descriptions of up to 10 different regions of the image in addition to describing the whole image. Dense Captions also returns bounding box coordinates of the described image regions. Both of these features use the latest Florence-based AI models.
+
+Image captioning is available in English only.

 > [!IMPORTANT]
-> Image captioning in Image Analysis 4.0 is only available in certain Azure data center regions: see [Region availability](./overview-image-analysis.md#region-availability). You must use a Vision resource located in one of these regions to get results from Caption and Dense Captions features.
+> Image captioning in Image Analysis 4.0 is only available in certain Azure data center regions: see [Region availability](./overview-image-analysis.md#region-availability). You must use an Azure AI Vision resource located in one of these regions to get results from Caption and Dense Captions features.
 >
-> If you have to use a Vision resource outside these regions to generate image captions, please use [Image Analysis 3.2](concept-describing-images.md) which is available in all Azure AI Vision regions.
+> If you need to use a Vision resource outside these regions to generate image captions, please use [Image Analysis 3.2](concept-describing-images.md) which is available in all Azure AI Vision regions.

 Try out the image captioning features quickly and easily in your browser using Vision Studio.

 > [!div class="nextstepaction"]
 > [Try Vision Studio](https://portal.vision.cognitive.azure.com/)

-### Gender-neutral captions
-Captions contain gender terms ("man", "woman", "boy" and "girl") by default. You have the option to replace these terms with "person" in your results and receive gender-neutral captions. You can do so by setting the optional API request parameter, **gender-neutral-caption** to `true` in the request URL.
+## Gender-neutral captions
+
+By default, captions contain gender terms ("man", "woman", "boy" and "girl"). You have the option to replace these terms with "person" in your results and receive gender-neutral captions. You can do so by setting the optional API request parameter `gender-neutral-caption` to `true` in the request URL.

 ## Caption and Dense Captions examples

 #### [Caption](#tab/image)

-The following JSON response illustrates what the Analysis 4.0 API returns when describing the example image based on its visual features.
+The following JSON response illustrates what the Image Analysis 4.0 API returns when describing the example image based on its visual features.

 ![Photo of a man pointing at a screen](./Media/quickstarts/presentation.png)

@@ -51,7 +53,7 @@ The following JSON response illustrates what the Analysis 4.0 API returns when describing the example image based on its visual features.

 #### [Dense Captions](#tab/dense)

-The following JSON response illustrates what the Analysis 4.0 API returns when generating dense captions for the example image.
+The following JSON response illustrates what the Image Analysis 4.0 API returns when generating dense captions for the example image.

 ![Photo of a tractor on a farm](./Images/farm.png)
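
Only the `gender-neutral-caption` parameter is documented in the text above; the rest of this sketch (route, api-version, response field) reflects the Image Analysis 4.0 REST quickstart as an assumption and should be checked against your resource:

```python
import requests

# Substitute your own Azure AI Vision resource endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
url = f"{endpoint}/computervision/imageanalysis:analyze"

params = {
    "api-version": "2024-02-01",          # assumed current version; verify
    "features": "caption",
    "gender-neutral-caption": "true",     # the parameter documented above
}
headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}
body = {"url": "https://example.com/presentation.png"}  # illustrative image URL

result = requests.post(url, params=params, headers=headers, json=body, timeout=30).json()
print(result["captionResult"]["text"])    # e.g. "a person pointing at a screen"
```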

articles/ai-services/computer-vision/concept-describing-images.md

Lines changed: 4 additions & 4 deletions
@@ -8,15 +8,15 @@ manager: nitinme

 ms.service: azure-ai-vision
 ms.topic: conceptual
-ms.date: 04/30/2024
+ms.date: 09/25/2024
 ms.author: pafarley
 ---

 # Image descriptions

-Azure AI Vision can analyze an image and generate a human-readable phrase that describes its contents. The algorithm returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence.
+Azure AI Vision can analyze an image and generate a human-readable phrase that describes its contents. The service returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence.

-At this time, English is the only supported language for image description.
+English is the only supported language for image descriptions.

 Try out the image captioning features quickly and easily in your browser using Vision Studio.

@@ -25,7 +25,7 @@ Try out the image captioning features quickly and easily in your browser using Vision Studio.

 ## Image description example

-The following JSON response illustrates what the Analyze API returns when describing the example image based on its visual features.
+The following JSON response illustrates what the Analyze Image API returns when describing the example image based on its visual features.

 ![A black and white picture of buildings in Manhattan](./Images/bw_buildings.png)
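
Since the API returns multiple candidate descriptions with confidence scores, a caller typically keeps the top one. A minimal sketch, assuming the documented `description.captions` response shape; the sample payload is illustrative, not a real API response:

```python
# Illustrative payload in the documented description.captions shape.
response = {
    "description": {
        "tags": ["building", "outdoor", "city"],
        "captions": [
            {"text": "a black and white photo of a city", "confidence": 0.96},
            {"text": "a view of buildings in Manhattan", "confidence": 0.64},
        ],
    }
}

# Captions arrive ordered by confidence, but sorting defensively costs little.
captions = sorted(response["description"]["captions"],
                  key=lambda c: c["confidence"], reverse=True)
best = captions[0]
print(f'{best["text"]} (confidence {best["confidence"]:.2f})')
```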

articles/ai-services/computer-vision/concept-image-retrieval.md

Lines changed: 7 additions & 8 deletions
@@ -1,24 +1,24 @@
 ---
 title: Multimodal embeddings concepts - Image Analysis 4.0
 titleSuffix: Azure AI services
-description: Concepts related to image vectorization using the Image Analysis 4.0 API.
+description: Learn about concepts related to image vectorization and search/retrieval using the Image Analysis 4.0 API.
 #services: cognitive-services
 author: PatrickFarley
 manager: nitinme

 ms.service: azure-ai-vision
 ms.topic: conceptual
-ms.date: 02/20/2024
+ms.date: 09/25/2024
 ms.author: pafarley
 ---

 # Multimodal embeddings (version 4.0)

-Multimodal embedding is the process of generating a numerical representation of an image that captures its features and characteristics in a vector format. These vectors encode the content and context of an image in a way that is compatible with text search over the same vector space.
+Multimodal embedding is the process of generating a vector representation of an image that captures its features and characteristics. These vectors encode the content and context of an image in a way that is compatible with text search over the same vector space.

-Image retrieval systems have traditionally used features extracted from the images, such as content labels, tags, and image descriptors, to compare images and rank them by similarity. However, vector similarity search is gaining more popularity due to a number of benefits over traditional keyword-based search and is becoming a vital component in popular content search services.
+Image retrieval systems have traditionally used features extracted from the images, such as content labels, tags, and image descriptors, to compare images and rank them by similarity. However, vector similarity search offers a number of benefits over traditional keyword-based search and is becoming a vital component in popular content search services.

-## What's the difference between vector search and keyword-based search?
+## Differences between vector search and keyword search

 Keyword search is the most basic and traditional method of information retrieval. In that approach, the search engine looks for the exact match of the keywords or phrases entered by the user in the search query and compares it with the labels and tags provided for the images. The search engine then returns images that contain those exact keywords as content tags and image labels. Keyword search relies heavily on the user's ability to use relevant and specific search terms.

@@ -50,18 +50,17 @@ Each dimension of the vector corresponds to a different feature or attribute of

 The following are the main steps of the image retrieval process using Multimodal embeddings.

-:::image type="content" source="media/image-retrieval.png" alt-text="Diagram of image retrieval process.":::
+:::image type="content" source="media/image-retrieval.png" alt-text="Diagram of the multimodal embedding / image retrieval process.":::

 1. Vectorize Images and Text: the Multimodal embeddings APIs, **VectorizeImage** and **VectorizeText**, can be used to extract feature vectors out of an image or text respectively. The APIs return a single feature vector representing the entire input.
    > [!NOTE]
    > Multimodal embedding does not do any biometric processing of human faces. For face detection and identification, see the [Azure AI Face service](./overview-identity.md).
-
 1. Measure similarity: Vector search systems typically use distance metrics, such as cosine distance or Euclidean distance, to compare vectors and rank them by similarity. The [Vision studio](https://portal.vision.cognitive.azure.com/) demo uses [cosine distance](./how-to/image-retrieval.md#calculate-vector-similarity) to measure similarity.
 1. Retrieve Images: Use the top _N_ vectors similar to the search query and retrieve images corresponding to those vectors from your photo library to provide as the final result.

 ### Relevance score

-The image and video retrieval services return a field called "relevance." The term "relevance" denotes a measure of similarity score between a query and image or video frame embeddings. The relevance score is composed of two parts:
+The image and video retrieval services return a field called "relevance." The term "relevance" denotes a measure of similarity between a query and image or video frame embeddings. The relevance score is composed of two parts:
 1. The cosine similarity (that falls in the range of [0,1]) between the query and image or video frame embeddings.
 1. A metadata score, which reflects the similarity between the query and the metadata associated with the image or video frame.
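
The "Measure similarity" step in the list above is straightforward to reproduce. A minimal sketch with NumPy; the vectors here are illustrative placeholders, not real embeddings, which are much higher-dimensional:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity in [-1, 1]; for unit-norm embeddings it is the dot product."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative placeholders standing in for VectorizeText / VectorizeImage outputs.
query_vector = np.array([0.12, -0.45, 0.33, 0.80])
image_vectors = {
    "farm.png": np.array([0.10, -0.40, 0.30, 0.85]),
    "city.png": np.array([-0.70, 0.20, 0.05, -0.10]),
}

# Rank images by similarity to the text query and keep the top N.
ranked = sorted(image_vectors.items(),
                key=lambda kv: cosine_similarity(query_vector, kv[1]),
                reverse=True)
print(ranked[0][0])  # most similar image
```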
