Commit 1baaa90

Merge branch 'main' of github.com:MicrosoftDocs/azure-ai-docs-pr into sdg-freshness

2 parents 960b60e + f2ab553
File tree: 630 files changed, +13773 −12897 lines changed
.openpublishing.publish.config.json
Lines changed: 5 additions & 4 deletions

@@ -3,13 +3,14 @@
 {
   "docset_name": "azure-ai",
   "build_source_folder": ".",
+  "build_output_subfolder": "azure-ai",
+  "locale": "en-us",
+  "monikers": [],
+  "moniker_ranges": [],
   "xref_query_tags": [
     "/dotnet",
     "/python"
   ],
-  "build_output_subfolder": "azure-ai",
-  "locale": "en-us",
-  "monikers": [],
   "open_to_public_contributors": true,
   "type_mapping": {
     "Conceptual": "Content",
@@ -172,4 +173,4 @@
   ],
   "branch_target_mapping": {},
   "targets": {}
-}
+}
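Aside from the moved keys, the only new key in this hunk is `moniker_ranges`. A quick sanity check of that reading (illustrative Python, not part of the build tooling; the fragments are abridged from the hunk above):

```python
import json

# Abridged config fragments: before and after the commit.
before = json.loads("""
{
  "docset_name": "azure-ai",
  "build_source_folder": ".",
  "xref_query_tags": ["/dotnet", "/python"],
  "build_output_subfolder": "azure-ai",
  "locale": "en-us",
  "monikers": []
}
""")
after = json.loads("""
{
  "docset_name": "azure-ai",
  "build_source_folder": ".",
  "build_output_subfolder": "azure-ai",
  "locale": "en-us",
  "monikers": [],
  "moniker_ranges": [],
  "xref_query_tags": ["/dotnet", "/python"]
}
""")

# Key order is irrelevant to JSON consumers, so the reorder is a no-op;
# the only semantic change is the added key.
added = set(after) - set(before)
print(added)  # {'moniker_ranges'}
```

Every pre-existing key keeps its value, so the diff's churn is purely presentational apart from `moniker_ranges`.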

articles/ai-services/.openpublishing.redirection.ai-services.json
Lines changed: 65 additions & 0 deletions

@@ -30,6 +30,11 @@
       "redirect_url": "/azure/ai-services/language-service/conversational-language-understanding/how-to/migrate-from-luis",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/ai-services/luis/luis-concept-data-conversion.md",
+      "redirect_url": "/azure/ai-services/language-service/conversational-language-understanding/how-to/migrate-from-luis",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/ai-services/custom-vision-service/update-application-to-3.0-sdk.md",
       "redirect_url": "/azure/ai-services/custom-vision-service/overview",
@@ -40,6 +45,11 @@
       "redirect_url": "/azure/ai-services/custom-vision-service/whats-new",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/ai-services/custom-vision-service/concepts/compare-alternatives.md",
+      "redirect_url": "/azure/ai-services/custom-vision-service/overview",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/ai-services/luis/luis-migration-authoring.md",
       "redirect_url": "/azure/ai-services/language-service/conversational-language-understanding/how-to/migrate-from-luis",
@@ -320,6 +330,26 @@
       "redirect_url": "/azure/ai-services/computer-vision/how-to/model-customization",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/ai-services/computer-vision/deploy-computer-vision-on-premises.md",
+      "redirect_url": "/azure/ai-services/computer-vision/",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/computer-vision/spatial-analysis-web-app.md",
+      "redirect_url": "/azure/ai-services/computer-vision/",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/computer-vision/tutorials/storage-lab-tutorial.md",
+      "redirect_url": "/azure/ai-services/computer-vision/",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/custom-vision-service/iot-visual-alerts-tutorial.md",
+      "redirect_url": "/azure/ai-services/computer-vision/",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/ai-services/document-intelligence/concept-document-intelligence-studio.md",
       "redirect_url": "/azure/ai-services/document-intelligence/studio-overview",
@@ -405,6 +435,21 @@
       "redirect_url": "/azure/ai-services/speech-service/release-notes",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/get-started-speaker-recognition.md",
+      "redirect_url": "/azure/ai-services/speech-service/speaker-recognition-overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/how-to-recognize-intents-from-speech-csharp.md",
+      "redirect_url": "/azure/ai-services/speech-service/intent-recognition",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/how-to-custom-speech-continuous-integration-continuous-deployment.md",
+      "redirect_url": "/azure/ai-services/speech-service/how-to-custom-speech-model-and-endpoint-lifecycle",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/ai-services/anomaly-detector/how-to/postman.md",
       "redirect_url": "/azure/ai-services/anomaly-detector/overview",
@@ -414,6 +459,26 @@
       "source_path_from_root": "/articles/ai-services/anomaly-detector//tutorials/multivariate-anomaly-detection-synapse.md",
       "redirect_url": "/azure/ai-services/anomaly-detector/overview",
       "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/language-service/summarization/custom/how-to/data-formats.md",
+      "redirect_url": "/azure/ai-services//language-service/summarization/overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/language-service/summarization/custom/how-to/deploy-model.md",
+      "redirect_url": "/azure/ai-services//language-service/summarization/overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/language-service/summarization/custom/how-to/test-evaluate.md",
+      "redirect_url": "/azure/ai-services//language-service/summarization/overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/language-service/summarization/custom/quickstart.md",
+      "redirect_url": "/azure/ai-services//language-service/summarization/overview",
+      "redirect_document_id": false
     }
   ]
 }
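Entries in this redirection file share a three-key schema. A sketch of a validator for entries of this shape (a hypothetical helper, not part of the docs toolchain; the rule set is an assumption):

```python
# Hypothetical validator for redirect entries like those added above.
REQUIRED_KEYS = {"source_path_from_root", "redirect_url", "redirect_document_id"}

def check_redirect(entry: dict) -> list[str]:
    """Return a list of problems found in a single redirect entry."""
    problems = [f"missing key: {k}" for k in REQUIRED_KEYS - entry.keys()]
    url = entry.get("redirect_url", "")
    if not url.startswith("/"):
        problems.append("redirect_url should be site-relative")
    # A doubled slash after the leading one usually signals a typo in the path.
    if "//" in url.lstrip("/"):
        problems.append("redirect_url contains a double slash")
    return problems

entry = {
    "source_path_from_root": "/articles/ai-services/language-service/summarization/custom/quickstart.md",
    "redirect_url": "/azure/ai-services//language-service/summarization/overview",
    "redirect_document_id": False,
}
print(check_redirect(entry))  # ['redirect_url contains a double slash']
```

Note that such a check would flag the `//` present in the summarization `redirect_url` values added in this hunk.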

articles/ai-services/anomaly-detector/index.yml
Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ metadata:
   manager: nitinme
   ms.service: azure-ai-anomaly-detector
   ms.topic: landing-page
-  ms.date: 01/18/2024
+  ms.date: 09/20/2024
   ms.author: mbullwin

articles/ai-services/anomaly-detector/overview.md
Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@ author: mrbullwinkle
 manager: nitinme
 ms.service: azure-ai-anomaly-detector
 ms.topic: overview
-ms.date: 01/18/2024
+ms.date: 09/20/2024
 ms.author: mbullwin
 keywords: anomaly detection, machine learning, algorithms
 ---

articles/ai-services/cognitive-services-container-support.md
Lines changed: 3 additions & 3 deletions

@@ -7,15 +7,15 @@ author: aahill
 manager: nitinme
 ms.service: azure-ai-services
 ms.topic: overview
-ms.date: 08/23/2024
+ms.date: 09/25/2024
 ms.author: aahi
 keywords: on-premises, Docker, container, Kubernetes
 #Customer intent: As a potential customer, I want to know more about how Azure AI services provides and supports Docker containers for each service.
 ---

 # What are Azure AI containers?

-Azure AI services provides several [Docker containers](https://www.docker.com/what-container) that let you use the same APIs that are available in Azure, on-premises. Using these containers gives you the flexibility to bring Azure AI services closer to your data for compliance, security or other operational reasons. Container support is currently available for a subset of Azure AI services.
+Azure AI services provide several [Docker containers](https://www.docker.com/what-container) that let you use the same APIs that are available in Azure, on-premises. Using these containers gives you the flexibility to bring Azure AI services closer to your data for compliance, security or other operational reasons. Container support is currently available for a subset of Azure AI services.

 > [!VIDEO https://www.youtube.com/embed/hdfbn4Q8jbo]

@@ -48,7 +48,7 @@ Azure AI containers provide the following set of Docker containers, each of whic
 | Service | Container | Description | Availability |
 |--|--|--|--|
 | [LUIS][lu-containers] | **LUIS** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/language/luis/about)) | Loads a trained or published Language Understanding model, also known as a LUIS app, into a docker container and provides access to the query predictions from the container's API endpoints. You can collect query logs from the container and upload these back to the [LUIS portal](https://www.luis.ai) to improve the app's prediction accuracy. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
-| [Language service][ta-containers-keyphrase] | **Key Phrase Extraction** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/keyphrase/about)) | Extracts key phrases to identify the main points. For example, for the input text "The food was delicious and there were wonderful staff", the API returns the main talking points: "food" and "wonderful staff". | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Language service][ta-containers-keyphrase] | **Key Phrase Extraction** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/keyphrase/about)) | Extracts key phrases to identify the main points. For example, for the input text "The food was delicious and there were wonderful staff," the API returns the main talking points: "food" and "wonderful staff". | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
 | [Language service][ta-containers-language] | **Text Language Detection** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/language/about)) | For up to 120 languages, detects which language the input text is written in and report a single language code for every document submitted on the request. The language code is paired with a score indicating the strength of the score. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
 | [Language service][ta-containers-sentiment] | **Sentiment Analysis** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/sentiment/about)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
 | [Language service][ta-containers-health] | **Text Analytics for health** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/healthcare/about))| Extract and label medical information from unstructured clinical text. | Generally available |
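The container endpoints accept the same document-batch request shape as the corresponding cloud APIs. A sketch of composing such a body for the key-phrase example in the table above (the exact route and port of a locally running container are assumptions; check the container's own documentation):

```python
import json

# Builds a Text Analytics-style document batch for a key-phrase request.
# The "documents" shape mirrors the cloud API; ids must be unique strings.
def key_phrase_request(texts, language="en"):
    return {
        "documents": [
            {"id": str(i + 1), "language": language, "text": t}
            for i, t in enumerate(texts)
        ]
    }

body = key_phrase_request(
    ["The food was delicious and there were wonderful staff."]
)
print(json.dumps(body, indent=2))
```

You would POST this body to the locally running container's key-phrase endpoint instead of the Azure-hosted one; the response shape is the same in both cases.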

articles/ai-services/computer-vision/computer-vision-how-to-install-containers.md
Lines changed: 1 addition & 1 deletion

@@ -134,7 +134,7 @@ More [examples](./computer-vision-resource-container-config.md#example-docker-ru
 > [!IMPORTANT]
 > The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).

-If you need higher throughput (for example, when processing multi-page files), consider deploying multiple containers [on a Kubernetes cluster](deploy-computer-vision-on-premises.md), using [Azure Storage](/azure/storage/common/storage-account-create) and [Azure Queue](/azure/storage/queues/storage-queues-introduction).
+<!--If you need higher throughput (for example, when processing multi-page files), consider deploying multiple containers [on a Kubernetes cluster](deploy-computer-vision-on-premises.md), using [Azure Storage](/azure/storage/common/storage-account-create) and [Azure Queue](/azure/storage/queues/storage-queues-introduction).-->

 If you're using Azure Storage to store images for processing, you can create a [connection string](/azure/storage/common/storage-configure-connection-string) to use when calling the container.
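The `Eula`, `Billing`, and `ApiKey` requirement in the hunk above can be sketched as a `docker run` invocation assembled in Python (the image tag, endpoint, and resource limits here are placeholders, not values from this commit):

```python
import shlex

# The container won't start unless Eula, Billing, and ApiKey are all passed
# as trailing arguments to docker run.
def read_container_args(image, endpoint, api_key, memory="16g", cpus=8):
    return [
        "docker", "run", "--rm", "-it",
        "-p", "5000:5000",
        "--memory", memory, "--cpus", str(cpus),
        image,
        "Eula=accept",            # required: license acceptance
        f"Billing={endpoint}",    # required: your resource endpoint
        f"ApiKey={api_key}",      # required: your resource key
    ]

args = read_container_args(
    "mcr.microsoft.com/azure-cognitive-services/vision/read:latest",
    "https://<your-resource>.cognitiveservices.azure.com/",
    "<your-api-key>",
)
print(shlex.join(args))
```

Omitting any of the three trailing options reproduces the failure mode the IMPORTANT note warns about: the container exits rather than serving requests.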

articles/ai-services/computer-vision/concept-describe-images-40.md
Lines changed: 12 additions & 10 deletions

@@ -8,35 +8,37 @@ manager: nitinme

 ms.service: azure-ai-vision
 ms.topic: conceptual
-ms.date: 01/19/2024
+ms.date: 09/25/2024
 ms.author: pafarley
 ---

 # Image captions (version 4.0)
-Image captions in Image Analysis 4.0 are available through the **Caption** and **Dense Captions** features.

-Caption generates a one-sentence description for all image contents. Dense Captions provides more detail by generating one-sentence descriptions of up to 10 regions of the image in addition to describing the whole image. Dense Captions also returns bounding box coordinates of the described image regions. Both these features use the latest groundbreaking Florence-based AI models.
+Image captions in Image Analysis 4.0 are available through the **Caption** and **Dense Captions** features.

-At this time, image captioning is available in English only.
+The Caption feature generates a one-sentence description of all the image contents. Dense Captions provides more detail by generating one-sentence descriptions of up to 10 different regions of the image in addition to describing the whole image. Dense Captions also returns bounding box coordinates of the described image regions. Both of these features use the latest Florence-based AI models.
+
+Image captioning is available in English only.

 > [!IMPORTANT]
-> Image captioning in Image Analysis 4.0 is only available in certain Azure data center regions: see [Region availability](./overview-image-analysis.md#region-availability). You must use a Vision resource located in one of these regions to get results from Caption and Dense Captions features.
+> Image captioning in Image Analysis 4.0 is only available in certain Azure data center regions: see [Region availability](./overview-image-analysis.md#region-availability). You must use an Azure AI Vision resource located in one of these regions to get results from Caption and Dense Captions features.
 >
-> If you have to use a Vision resource outside these regions to generate image captions, please use [Image Analysis 3.2](concept-describing-images.md) which is available in all Azure AI Vision regions.
+> If you need to use a Vision resource outside these regions to generate image captions, please use [Image Analysis 3.2](concept-describing-images.md) which is available in all Azure AI Vision regions.

 Try out the image captioning features quickly and easily in your browser using Vision Studio.

 > [!div class="nextstepaction"]
 > [Try Vision Studio](https://portal.vision.cognitive.azure.com/)

-### Gender-neutral captions
-Captions contain gender terms ("man", "woman", "boy" and "girl") by default. You have the option to replace these terms with "person" in your results and receive gender-neutral captions. You can do so by setting the optional API request parameter, **gender-neutral-caption** to `true` in the request URL.
+## Gender-neutral captions
+
+By default, captions contain gender terms ("man", "woman", "boy" and "girl"). You have the option to replace these terms with "person" in your results and receive gender-neutral captions. You can do so by setting the optional API request parameter `gender-neutral-caption` to `true` in the request URL.

 ## Caption and Dense Captions examples

 #### [Caption](#tab/image)

-The following JSON response illustrates what the Analysis 4.0 API returns when describing the example image based on its visual features.
+The following JSON response illustrates what the Image Analysis 4.0 API returns when describing the example image based on its visual features.

 ![Photo of a man pointing at a screen](./Media/quickstarts/presentation.png)

@@ -51,7 +53,7 @@ The following JSON response illustrates what the Analysis 4.0 API returns when d

 #### [Dense Captions](#tab/dense)

-The following JSON response illustrates what the Analysis 4.0 API returns when generating dense captions for the example image.
+The following JSON response illustrates what the Image Analysis 4.0 API returns when generating dense captions for the example image.

 ![Photo of a tractor on a farm](./Images/farm.png)
articles/ai-services/computer-vision/concept-describing-images.md
Lines changed: 4 additions & 4 deletions

@@ -8,15 +8,15 @@ manager: nitinme

 ms.service: azure-ai-vision
 ms.topic: conceptual
-ms.date: 04/30/2024
+ms.date: 09/25/2024
 ms.author: pafarley
 ---

 # Image descriptions

-Azure AI Vision can analyze an image and generate a human-readable phrase that describes its contents. The algorithm returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence.
+Azure AI Vision can analyze an image and generate a human-readable phrase that describes its contents. The service returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence.

-At this time, English is the only supported language for image description.
+English is the only supported language for image descriptions.

 Try out the image captioning features quickly and easily in your browser using Vision Studio.

@@ -25,7 +25,7 @@ Try out the image captioning features quickly and easily in your browser using V

 ## Image description example

-The following JSON response illustrates what the Analyze API returns when describing the example image based on its visual features.
+The following JSON response illustrates what the Analyze Image API returns when describing the example image based on its visual features.

 ![A black and white picture of buildings in Manhattan](./Images/bw_buildings.png)