
Commit 345b9b1

Learn Build Service GitHub App authored and committed
Merging changes synced from https://github.com/MicrosoftDocs/azure-docs-pr (branch live)
2 parents 578638e + 609e7ac commit 345b9b1

File tree

446 files changed: +3459 −2744 lines changed


.openpublishing.redirection.json

Lines changed: 26 additions & 1 deletion

@@ -3043,6 +3043,11 @@
       "redirect_url": "/sql/sql-server/stretch-database/stretch-database-databases-and-tables-stretch-database-advisor",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md",
+      "redirect_url": "/azure/postgresql/migrate/migration-service/tutorial-migration-service-single-to-flexible",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/vs-azure-tools-access-private-azure-clouds-with-visual-studio.md",
       "redirect_url": "/visualstudio/azure/vs-azure-tools-access-private-azure-clouds-with-visual-studio",
@@ -4089,7 +4094,27 @@
       "redirect_document_id": false
     },
     {
-      "source_path_from_root": "/articles/data-factory/continuous-integration-delivery-automate-github-actions.md",
+      "source_path_from_root": "/articles/openshift/tutorial-create-cluster.md",
+      "redirect_url": "/azure/openshift/create-cluster",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/openshift/tutorial-connect-cluster.md",
+      "redirect_url": "/azure/openshift/connect-cluster",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/openshift/tutorial-delete-cluster.md",
+      "redirect_url": "/azure/openshift/delete-cluster",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/openshift/quickstart-portal.md",
+      "redirect_url": "/azure/openshift/create-cluster",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/data-factory/continuous-integration-delivery-automate-github-actions.md",
       "redirect_url": "/azure",
       "redirect_document_id": false
     }
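Redirect entries like the ones added above are easy to sanity-check mechanically before they're merged. A minimal sketch — the validation rules here are my own illustration, not part of the docs build pipeline:

```python
def validate_redirects(entries):
    """Check a list of redirection entries for common mistakes:
    duplicate source paths, redirect URLs that aren't site-rooted,
    and missing redirect_document_id flags.
    Returns a list of human-readable problems (empty when clean)."""
    problems = []
    seen = set()
    for e in entries:
        src = e.get("source_path_from_root", "")
        url = e.get("redirect_url", "")
        if src in seen:
            problems.append(f"duplicate source: {src}")
        seen.add(src)
        if not url.startswith("/"):
            problems.append(f"non-rooted redirect_url: {url}")
        if "redirect_document_id" not in e:
            problems.append(f"missing redirect_document_id: {src}")
    return problems

# Two of the entries added in this commit, abbreviated for illustration.
entries = [
    {
        "source_path_from_root": "/articles/openshift/tutorial-create-cluster.md",
        "redirect_url": "/azure/openshift/create-cluster",
        "redirect_document_id": False,
    },
    {
        "source_path_from_root": "/articles/openshift/quickstart-portal.md",
        "redirect_url": "/azure/openshift/create-cluster",
        "redirect_document_id": False,
    },
]

print(validate_redirects(entries))  # [] — both entries are well-formed
```

Note that two sources redirecting to the same target (as `quickstart-portal.md` and `tutorial-create-cluster.md` do here) is legitimate; only duplicate *sources* are flagged.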

articles/ai-services/computer-vision/how-to/shelf-analyze.md

Lines changed: 37 additions & 49 deletions

@@ -1,7 +1,7 @@
 ---
 title: Analyze a shelf image using pretrained models
 titleSuffix: Azure AI services
-description: Use the Product Understanding API to analyze a shelf image and receive rich product data.
+description: Use the Product Recognition API to analyze a shelf image and receive rich product data.
 author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-vision
@@ -13,7 +13,7 @@ ms.custom: build-2023, build-2023-dataai
 
 # Shelf Product Recognition (preview): Analyze shelf images using pretrained model
 
-The fastest way to start using Product Recognition is to use the built-in pretrained AI models. With the Product Understanding API, you can upload a shelf image and get the locations of products and gaps.
+The fastest way to start using Product Recognition is to use the built-in pretrained AI models. With the Product Recognition API, you can upload a shelf image and get the locations of products and gaps.
 
 :::image type="content" source="../media/shelf/shelf-analysis-pretrained.png" alt-text="Photo of a retail shelf with products and gaps highlighted with rectangles.":::
 
@@ -51,62 +51,50 @@ To analyze a shelf image, do the following steps:
 
 ## Examine the response
 
-A successful response is returned in JSON. The product understanding API results are returned in a `ProductUnderstandingResultApiModel` JSON field:
+A successful response is returned in JSON. The product recognition API results are returned in a `ProductRecognitionResultApiModel` JSON field:
 
 ```json
-{
-  "imageMetadata": {
-    "width": 2000,
-    "height": 1500
-  },
-  "products": [
-    {
-      "id": "string",
-      "boundingBox": {
-        "x": 1234,
-        "y": 1234,
-        "w": 12,
-        "h": 12
-      },
-      "classifications": [
-        {
-          "confidence": 0.9,
-          "label": "string"
-        }
-      ]
-    }
-  ],
-  "gaps": [
-    {
-      "id": "string",
-      "boundingBox": {
-        "x": 1234,
-        "y": 1234,
-        "w": 123,
-        "h": 123
-      },
-      "classifications": [
-        {
-          "confidence": 0.8,
-          "label": "string"
-        }
-      ]
-    }
-  ]
-}
+"ProductRecognitionResultApiModel": {
+  "description": "Results from the product understanding operation.",
+  "required": [
+    "gaps",
+    "imageMetadata",
+    "products"
+  ],
+  "type": "object",
+  "properties": {
+    "imageMetadata": {
+      "$ref": "#/definitions/ImageMetadataApiModel"
+    },
+    "products": {
+      "description": "Products detected in the image.",
+      "type": "array",
+      "items": {
+        "$ref": "#/definitions/DetectedObject"
+      }
+    },
+    "gaps": {
+      "description": "Gaps detected in the image.",
+      "type": "array",
+      "items": {
+        "$ref": "#/definitions/DetectedObject"
+      }
+    }
+  }
+}
 ```
 
 See the following sections for definitions of each JSON field.
 
-### Product Understanding Result API model
+### Product Recognition Result API model
 
-Results from the product understanding operation.
+Results from the product recognition operation.
 
 | Name | Type | Description | Required |
 | ---- | ---- | ----------- | -------- |
 | `imageMetadata` | [ImageMetadataApiModel](#image-metadata-api-model) | The image metadata information such as height, width and format. | Yes |
-| `products` |[DetectedObjectApiModel](#detected-object-api-model) | Products detected in the image. | Yes |
-| `gaps` | [DetectedObjectApiModel](#detected-object-api-model) | Gaps detected in the image. | Yes |
+| `products` |[DetectedObject](#detected-object-api-model) | Products detected in the image. | Yes |
+| `gaps` | [DetectedObject](#detected-object-api-model) | Gaps detected in the image. | Yes |
 
 ### Image Metadata API model
 
@@ -124,8 +112,8 @@ Describes a detected object in an image.
 | Name | Type | Description | Required |
 | ---- | ---- | ----------- | -------- |
 | `id` | string | ID of the detected object. | No |
-| `boundingBox` | [BoundingBoxApiModel](#bounding-box-api-model) | A bounding box for an area inside an image. | Yes |
-| `classifications` | [ImageClassificationApiModel](#image-classification-api-model) | Classification confidences of the detected object. | Yes |
+| `boundingBox` | [BoundingBox](#bounding-box-api-model) | A bounding box for an area inside an image. | Yes |
+| `tags` | [TagsApiModel](#image-tags-api-model) | Classification confidences of the detected object. | Yes |
 
 ### Bounding Box API model
 
@@ -138,18 +126,18 @@ A bounding box for an area inside an image.
 | `w` | integer | Width measured from the top-left point of the area, in pixels. | Yes |
 | `h` | integer | Height measured from the top-left point of the area, in pixels. | Yes |
 
-### Image Classification API model
+### Image Tags API model
 
 Describes the image classification confidence of a label.
 
 | Name | Type | Description | Required |
 | ---- | ---- | ----------- | -------- |
 | `confidence` | float | Confidence of the classification prediction. | Yes |
-| `label` | string | Label of the classification prediction. | Yes |
+| `name` | string | Label of the classification prediction. | Yes |
 
 ## Next steps
 
-In this guide, you learned how to make a basic analysis call using the pretrained Product Understanding REST API. Next, learn how to use a custom Product Recognition model to better meet your business needs.
+In this guide, you learned how to make a basic analysis call using the pretrained Product Recognition REST API. Next, learn how to use a custom Product Recognition model to better meet your business needs.
 
 > [!div class="nextstepaction"]
 > [Train a custom model for Product Recognition](../how-to/shelf-model-customization.md)
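The renamed schema in this diff (a result holding `products` and `gaps` arrays of `DetectedObject`, each with a `boundingBox` and `tags`) can be walked with a few lines of code. A hedged sketch against a hand-made sample response — the field names follow the updated docs above, but the live service payload may differ in detail:

```python
# Hypothetical walk of a Product Recognition result. Field names
# (products, gaps, boundingBox, tags, name, confidence) follow the
# updated schema in the diff; this is not an official SDK helper.

def summarize_result(result: dict) -> dict:
    """Count detected products and gaps and pick each product's top tag."""

    def best_tag(obj: dict) -> str:
        tags = obj.get("tags", [])
        if not tags:
            return "unknown"
        # Highest-confidence classification wins.
        return max(tags, key=lambda t: t["confidence"])["name"]

    return {
        "products": len(result.get("products", [])),
        "gaps": len(result.get("gaps", [])),
        "labels": [best_tag(p) for p in result.get("products", [])],
    }

# Sample payload shaped like the documented schema.
sample = {
    "imageMetadata": {"width": 2000, "height": 1500},
    "products": [
        {
            "id": "p1",
            "boundingBox": {"x": 10, "y": 20, "w": 120, "h": 240},
            "tags": [{"name": "product", "confidence": 0.9}],
        }
    ],
    "gaps": [
        {
            "id": "g1",
            "boundingBox": {"x": 300, "y": 20, "w": 60, "h": 240},
            "tags": [{"name": "gap", "confidence": 0.8}],
        }
    ],
}

print(summarize_result(sample))
# {'products': 1, 'gaps': 1, 'labels': ['product']}
```

This also illustrates why the rename from `classifications`/`label` to `tags`/`name` is a breaking change for callers: code keyed to the old field names would silently fall through to the `"unknown"` branch.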

articles/ai-services/computer-vision/overview-image-analysis.md

Lines changed: 0 additions & 5 deletions

@@ -142,11 +142,6 @@ To use the Image Analysis APIs, you must create your Azure AI Vision resource in
 | Japan East || | || |
 
 
-
-### Query rates
-
-tbd
-
 ## Data privacy and security
 
 As with all of the Azure AI services, developers using the Azure AI Vision service should be aware of Microsoft's policies on customer data. See the [Azure AI services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.

articles/ai-services/openai/how-to/quota.md

Lines changed: 4 additions & 1 deletion

@@ -7,7 +7,7 @@ author: mrbullwinkle
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 05/31/2024
+ms.date: 06/18/2024
 ms.author: mbullwin
 ---
 
@@ -91,6 +91,9 @@ As each request is received, Azure OpenAI computes an estimated max processed-to
 
 As requests come into the deployment endpoint, the estimated max-processed-token count is added to a running token count of all requests that is reset each minute. If at any time during that minute, the TPM rate limit value is reached, then further requests will receive a 429 response code until the counter resets.
 
+> [!IMPORTANT]
+> The token count used in the rate limit calculation is an estimate based in part on the character count of the API request. The rate limit token estimate is not the same as the token calculation that is used for billing/determining that a request is below a model's input token limit. Due to the approximate nature of the rate limit token calculation, it is expected behavior that a rate limit can be triggered prior to what might be expected in comparison to an exact token count measurement for each request.
+
 RPM rate limits are based on the number of requests received over time. The rate limit expects that requests be evenly distributed over a one-minute period. If this average flow isn't maintained, then requests may receive a 429 response even though the limit isn't met when measured over the course of a minute. To implement this behavior, Azure OpenAI Service evaluates the rate of incoming requests over a small period of time, typically 1 or 10 seconds. If the number of requests received during that time exceeds what would be expected at the set RPM limit, then new requests will receive a 429 response code until the next evaluation period. For example, if Azure OpenAI is monitoring request rate on 1-second intervals, then rate limiting will occur for a 600-RPM deployment if more than 10 requests are received during each 1-second period (600 requests per minute = 10 requests per second).
 
 ### Rate limit best practices
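Because both the TPM counter and the short RPM evaluation windows described in this hunk can return 429 before a client expects it, callers generally need retry-with-backoff logic. A minimal sketch of the policy only — `send_request` is a hypothetical stand-in for whatever issues the HTTP call, not an Azure SDK function, and a real client should prefer the service's `Retry-After` header when present:

```python
import random
import time


def call_with_backoff(send_request, max_retries=5, initial_delay=1.0):
    """Retry a callable returning (status_code, body) on 429 responses.

    Models only the retry policy: exponential backoff with a little
    jitter so many clients don't retry in lockstep. `send_request`
    is a placeholder for the actual HTTP call.
    """
    delay = initial_delay
    for _ in range(max_retries):
        status, body = send_request()
        if status != 429:
            return body
        # Back off before the next attempt; double the delay each time.
        time.sleep(delay + random.uniform(0, 0.1))
        delay *= 2
    raise RuntimeError("rate limited: retries exhausted")
```

The doubling delay rides out both failure modes: a burst that tripped the 1- or 10-second RPM window clears quickly, while an exhausted per-minute TPM counter clears when the counter resets.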

articles/ai-services/openai/includes/model-matrix/provisioned-models.md

Lines changed: 3 additions & 3 deletions

@@ -5,15 +5,15 @@ description: PTU-managed model availability by region.
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: include
-ms.date: 06/11/2024
+ms.date: 06/18/2024
 ---
 
 | **Region** | **gpt-4**, **0613** | **gpt-4**, **1106-Preview** | **gpt-4**, **0125-Preview** | **gpt-4**, **turbo-2024-04-09** | **gpt-4o**, **2024-05-13** | **gpt-4-32k**, **0613** | **gpt-35-turbo**, **1106** | **gpt-35-turbo**, **0125** |
 |:-------------------|:-------------------:|:---------------------------:|:---------------------------:|:-------------------------------:|:--------------------------:|:-----------------------:|:--------------------------:|:--------------------------:|
-| australiaeast ||||| - ||||
+| australiaeast ||||| ||||
 | brazilsouth |||| - | - ||| - |
 | canadacentral || - | - | - | - || - ||
-| canadaeast ||| - || - | - || - |
+| canadaeast ||| - || | - || - |
 | eastus ||||| - ||||
 | eastus2 ||||| - ||||
 | francecentral |||| - | - || - ||

articles/ai-services/openai/whats-new.md

Lines changed: 2 additions & 2 deletions

@@ -10,7 +10,7 @@ ms.custom:
   - ignite-2023
   - references_regions
 ms.topic: whats-new
-ms.date: 06/13/2024
+ms.date: 06/18/2024
 recommendations: false
 ---
 
@@ -28,7 +28,7 @@ This article provides a summary of the latest releases and major documentation u
 
 * GPT-4o is now also available in:
     - Sweden Central for standard regional deployment.
-    - Japan East, Korea Central, Sweden Central, Switzerland North, & West US 3 for provisioned deployment.
+    - Australia East, Canada East, Japan East, Korea Central, Sweden Central, Switzerland North, & West US 3 for provisioned deployment.
 
 For the latest information on model availability, see the [models page](./concepts/models.md).
 
articles/ai-studio/concepts/content-filtering.md

Lines changed: 10 additions & 13 deletions

@@ -35,42 +35,43 @@ The content filtering models have been trained and tested on the following langu
 
 ## Create a content filter
 
-## How to create a content filter?
 For any model deployment in [Azure AI Studio](https://ai.azure.com), you can directly use the default content filter, but you might want to have more control. For example, you could make a filter stricter or more lenient, or enable more advanced capabilities like prompt shields and protected material detection.
 
 Follow these steps to create a content filter:
 
-1. Go to [AI Studio](https://ai.azure.com) and select a project.
-1. Select **Content filters** from the left pane and then select **+ New content filter**.
+1. Go to [AI Studio](https://ai.azure.com) and navigate to your hub. Then select the **Content filters** tab on the left nav, and select the **Create content filter** button.
 
     :::image type="content" source="../media/content-safety/content-filter/create-content-filter.png" alt-text="Screenshot of the button to create a new content filter." lightbox="../media/content-safety/content-filter/create-content-filter.png":::
 
 1. On the **Basic information** page, enter a name for your content filter. Select a connection to associate with the content filter. Then select **Next**.
 
    :::image type="content" source="../media/content-safety/content-filter/create-content-filter-basic.png" alt-text="Screenshot of the option to select or enter basic information such as the filter name when creating a content filter." lightbox="../media/content-safety/content-filter/create-content-filter-basic.png":::
 
-1. On the **Input filters** page, you can set the filter for the input prompt. For example, you can enable prompt shields for jailbreak attacks. Then select **Next**.
+1. On the **Input filters** page, you can set the filter for the input prompt. Set the action and severity level threshold for each filter type. You configure both the default filters and other filters (like Prompt Shields for jailbreak attacks) on this page. Then select **Next**.
 
    :::image type="content" source="../media/content-safety/content-filter/configure-threshold.png" alt-text="Screenshot of the option to select input filters when creating a content filter." lightbox="../media/content-safety/content-filter/configure-threshold.png":::
 
    Content will be annotated by category and blocked according to the threshold you set. For the violence, hate, sexual, and self-harm categories, adjust the slider to block content of high, medium, or low severity.
 
-1. On the **Output filters** page, you can set the filter for the output completion. For example, you can enable filters for protected material detection. Then select **Next**.
+1. On the **Output filters** page, you can configure the output filter, which will be applied to all output content generated by your model. Configure the individual filters as before. This page also provides the Streaming mode option, which lets you filter content in near-real-time as it's generated by the model, reducing latency. When you're finished select **Next**.
 
    Content will be annotated by each category and blocked according to the threshold. For violent content, hate content, sexual content, and self-harm content category, adjust the threshold to block harmful content with equal or higher severity levels.
 
-1. Optionally, on the **Deployment** page, you can associate the content filter with a deployment. You can also associate the content filter with a deployment later. Then select **Create**.
+1. Optionally, on the **Deployment** page, you can associate the content filter with a deployment. If a selected deployment already has a filter attached, you must confirm that you want to replace it. You can also associate the content filter with a deployment later. Select **Create**.
 
    :::image type="content" source="../media/content-safety/content-filter/create-content-filter-deployment.png" alt-text="Screenshot of the option to select a deployment when creating a content filter." lightbox="../media/content-safety/content-filter/create-content-filter-deployment.png":::
 
   Content filtering configurations are created at the hub level in AI Studio. Learn more about configurability in the [Azure OpenAI docs](/azure/ai-services/openai/how-to/content-filters).
 
 1. On the **Review** page, review the settings and then select **Create filter**.
 
+### Use a blocklist as a filter
+
+You can apply a blocklist as either an input or output filter, or both. Enable the **Blocklist** option on the **Input filter** and/or **Output filter** page. Select one or more blocklists from the dropdown, or use the built-in profanity blocklist. You can combine multiple blocklists into the same filter.
 
-## How to apply a content filter?
+## Apply a content filter
 
-A default content filter is set when you create a deployment. You can also apply your custom content filter to your deployment.
+The filter creation process gives you the option to apply the filter to the deployments you want. You can also change or remove content filters from your deployments at any time.
 
 Follow these steps to apply a content filter to a deployment:
 
@@ -83,11 +84,7 @@ Follow these steps to apply a content filter to a deployment:
 
    :::image type="content" source="../media/content-safety/content-filter/apply-content-filter.png" alt-text="Screenshot of apply content filter." lightbox="../media/content-safety/content-filter/apply-content-filter.png":::
 
-Now, you can go to the playground to test whether the content filter works as expected!
-
-## Content filtering categories and configurability
-
-You can apply a blocklist as either an input or output filter, or both. Enable the **Blocklist** option on the **Input filter** and/or **Output filter** page. Select one or more blocklists from the dropdown, or use the built-in profanity blocklist. You can combine multiple blocklists into the same filter.
+Now, you can go to the playground to test whether the content filter works as expected.
 
 ### Categories
 
