# Shelf Product Recognition (preview): Analyze shelf images using pretrained model

-The fastest way to start using Product Recognition is to use the built-in pretrained AI models. With the Product Understanding API, you can upload a shelf image and get the locations of products and gaps.
+The fastest way to start using Product Recognition is to use the built-in pretrained AI models. With the Product Recognition API, you can upload a shelf image and get the locations of products and gaps.

:::image type="content" source="../media/shelf/shelf-analysis-pretrained.png" alt-text="Photo of a retail shelf with products and gaps highlighted with rectangles.":::

@@ -51,62 +51,50 @@ To analyze a shelf image, do the following steps:
## Examine the response

-A successful response is returned in JSON. The product understanding API results are returned in a `ProductUnderstandingResultApiModel` JSON field:
+A successful response is returned in JSON. The product recognition API results are returned in a `ProductRecognitionResultApiModel` JSON field:

```json
-{
-  "imageMetadata": {
-    "width": 2000,
-    "height": 1500
-  },
-  "products": [
-    {
-      "id": "string",
-      "boundingBox": {
-        "x": 1234,
-        "y": 1234,
-        "w": 12,
-        "h": 12
-      },
-      "classifications": [
-        {
-          "confidence": 0.9,
-          "label": "string"
-        }
-      ]
-    }
+"ProductRecognitionResultApiModel": {
+  "description": "Results from the product understanding operation.",
+  "required": [
+    "gaps",
+    "imageMetadata",
+    "products"
   ],
-  "gaps": [
-    {
-      "id": "string",
-      "boundingBox": {
-        "x": 1234,
-        "y": 1234,
-        "w": 123,
-        "h": 123
-      },
-      "classifications": [
-        {
-          "confidence": 0.8,
-          "label": "string"
-        }
-      ]
+  "type": "object",
+  "properties": {
+    "imageMetadata": {
+      "$ref": "#/definitions/ImageMetadataApiModel"
+    },
+    "products": {
+      "description": "Products detected in the image.",
+      "type": "array",
+      "items": {
+        "$ref": "#/definitions/DetectedObject"
+      }
+    },
+    "gaps": {
+      "description": "Gaps detected in the image.",
+      "type": "array",
+      "items": {
+        "$ref": "#/definitions/DetectedObject"
+      }
     }
-  ]
+  }
 }
```
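
To make the shape of this payload concrete, here's a minimal parsing sketch. It isn't part of the API reference: the dictionary below is hand-made illustrative data that follows the field names documented in the sections that follow (`imageMetadata`, `products`, `gaps`, `boundingBox`, `tags`), and the IDs, coordinates, and tag values are placeholders.

```python
# Illustrative result shaped like the Product Recognition response documented below.
result = {
    "imageMetadata": {"width": 2000, "height": 1500},
    "products": [
        {
            "id": "product-1",  # placeholder ID
            "boundingBox": {"x": 120, "y": 340, "w": 180, "h": 220},
            "tags": [{"confidence": 0.92, "name": "product"}],  # placeholder tag
        }
    ],
    "gaps": [
        {
            "id": "gap-1",
            "boundingBox": {"x": 310, "y": 340, "w": 95, "h": 220},
            "tags": [{"confidence": 0.81, "name": "gap"}],
        }
    ],
}

meta = result["imageMetadata"]
print(f"Analyzed a {meta['width']}x{meta['height']} px image")

# Products and gaps share the same detected-object shape:
# an ID, a pixel bounding box, and classification tags.
for kind in ("products", "gaps"):
    for obj in result.get(kind, []):
        box = obj["boundingBox"]
        print(f"{kind[:-1]} {obj['id']}: x={box['x']}, y={box['y']}, w={box['w']}, h={box['h']}")
```
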
See the following sections for definitions of each JSON field.

-### Product Understanding Result API model
+### Product Recognition Result API model

-Results from the product understanding operation.
+Results from the product recognition operation.

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
|`imageMetadata`| [ImageMetadataApiModel](#image-metadata-api-model) | The image metadata information such as height, width and format. | Yes |
-|`products`|[DetectedObjectApiModel](#detected-object-api-model) | Products detected in the image. | Yes |
-|`gaps`| [DetectedObjectApiModel](#detected-object-api-model) | Gaps detected in the image. | Yes |
+|`products`|[DetectedObject](#detected-object-api-model) | Products detected in the image. | Yes |
+|`gaps`| [DetectedObject](#detected-object-api-model) | Gaps detected in the image. | Yes |
### Image Metadata API model
@@ -124,8 +112,8 @@ Describes a detected object in an image.
| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
|`id`| string | ID of the detected object. | No |
-|`boundingBox`| [BoundingBoxApiModel](#bounding-box-api-model) | A bounding box for an area inside an image. | Yes |
-|`classifications`| [ImageClassificationApiModel](#image-classification-api-model) | Classification confidences of the detected object. | Yes |
+|`boundingBox`| [BoundingBox](#bounding-box-api-model) | A bounding box for an area inside an image. | Yes |
+|`tags`| [TagsApiModel](#image-tags-api-model) | Classification confidences of the detected object. | Yes |
### Bounding Box API model
@@ -138,18 +126,18 @@ A bounding box for an area inside an image.
|`w`| integer | Width measured from the top-left point of the area, in pixels. | Yes |
|`h`| integer | Height measured from the top-left point of the area, in pixels. | Yes |
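
Because `x` and `y` locate the top-left corner and `w`/`h` extend right and down in pixels, a bounding box maps directly onto an image crop. The sketch below is illustrative only: it assumes Pillow is installed, `shelf.png` stands in for the image you analyzed, and the sample box values are placeholders rather than real detections.

```python
from PIL import Image  # pip install pillow


def crop_detection(image_path: str, box: dict) -> Image.Image:
    """Cut out the area described by a bounding box (top-left origin, pixel units)."""
    left, top = box["x"], box["y"]
    right, bottom = left + box["w"], top + box["h"]
    return Image.open(image_path).crop((left, top, right, bottom))


# Placeholder box; in practice, pass a boundingBox taken from the analysis response.
crop_detection("shelf.png", {"x": 120, "y": 340, "w": 180, "h": 220}).save("detection-1.png")
```
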

-### Image Classification API model
+### Image Tags API model
Describes the image classification confidence of a label.
| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
|`confidence`| float | Confidence of the classification prediction. | Yes |
-|`label`| string | Label of the classification prediction. | Yes |
+|`name`| string | Label of the classification prediction. | Yes |
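
A detected object can carry more than one tag, so a common post-processing step is to keep only the most confident one and discard weak detections. Here's a small sketch under that assumption; the 0.5 threshold and the sample tag names are illustrative choices, not defaults from the service:

```python
def best_tag(detected_object: dict, min_confidence: float = 0.5):
    """Return (name, confidence) for the most confident tag, or None if all are weak."""
    tags = detected_object.get("tags", [])
    if not tags:
        return None
    top = max(tags, key=lambda tag: tag["confidence"])
    return (top["name"], top["confidence"]) if top["confidence"] >= min_confidence else None


# Illustrative input shaped like the Detected Object model above.
print(best_tag({"tags": [{"confidence": 0.81, "name": "gap"}, {"confidence": 0.12, "name": "product"}]}))
# -> ('gap', 0.81)
```
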
## Next steps

-In this guide, you learned how to make a basic analysis call using the pretrained Product Understanding REST API. Next, learn how to use a custom Product Recognition model to better meet your business needs.
+In this guide, you learned how to make a basic analysis call using the pretrained Product Recognition REST API. Next, learn how to use a custom Product Recognition model to better meet your business needs.

> [!div class="nextstepaction"]
> [Train a custom model for Product Recognition](../how-to/shelf-model-customization.md)

`articles/ai-studio/concepts/content-filtering.md` (10 additions & 13 deletions)

@@ -35,42 +35,43 @@ The content filtering models have been trained and tested on the following langu
## Create a content filter

-## How to create a content filter?
For any model deployment in [Azure AI Studio](https://ai.azure.com), you can directly use the default content filter, but you might want to have more control. For example, you could make a filter stricter or more lenient, or enable more advanced capabilities like prompt shields and protected material detection.
Follow these steps to create a content filter:

-1. Go to [AI Studio](https://ai.azure.com) and select a project.
-1. Select **Content filters** from the left pane and then select **+ New content filter**.
+1. Go to [AI Studio](https://ai.azure.com) and navigate to your hub. Then select the **Content filters** tab on the left nav, and select the **Create content filter** button.

:::image type="content" source="../media/content-safety/content-filter/create-content-filter.png" alt-text="Screenshot of the button to create a new content filter." lightbox="../media/content-safety/content-filter/create-content-filter.png":::
1. On the **Basic information** page, enter a name for your content filter. Select a connection to associate with the content filter. Then select **Next**.
:::image type="content" source="../media/content-safety/content-filter/create-content-filter-basic.png" alt-text="Screenshot of the option to select or enter basic information such as the filter name when creating a content filter." lightbox="../media/content-safety/content-filter/create-content-filter-basic.png":::

-1. On the **Input filters** page, you can set the filter for the input prompt. For example, you can enable prompt shields for jailbreak attacks. Then select **Next**.
+1. On the **Input filters** page, you can set the filter for the input prompt. Set the action and severity level threshold for each filter type. You configure both the default filters and other filters (like Prompt Shields for jailbreak attacks) on this page. Then select **Next**.

:::image type="content" source="../media/content-safety/content-filter/configure-threshold.png" alt-text="Screenshot of the option to select input filters when creating a content filter." lightbox="../media/content-safety/content-filter/configure-threshold.png":::
Content will be annotated by category and blocked according to the threshold you set. For the violence, hate, sexual, and self-harm categories, adjust the slider to block content of high, medium, or low severity.

-1. On the **Output filters** page, you can set the filter for the output completion. For example, you can enable filters for protected material detection. Then select **Next**.
+1. On the **Output filters** page, you can configure the output filter, which will be applied to all output content generated by your model. Configure the individual filters as before. This page also provides the Streaming mode option, which lets you filter content in near-real-time as it's generated by the model, reducing latency. When you're finished, select **Next**.


Content will be annotated by each category and blocked according to the threshold. For the violence, hate, sexual, and self-harm content categories, adjust the threshold to block harmful content with equal or higher severity levels.

-1. Optionally, on the **Deployment** page, you can associate the content filter with a deployment. You can also associate the content filter with a deployment later. Then select**Create**.
+1. Optionally, on the **Deployment** page, you can associate the content filter with a deployment. If a selected deployment already has a filter attached, you must confirm that you want to replace it. You can also associate the content filter with a deployment later. Select **Create**.

:::image type="content" source="../media/content-safety/content-filter/create-content-filter-deployment.png" alt-text="Screenshot of the option to select a deployment when creating a content filter." lightbox="../media/content-safety/content-filter/create-content-filter-deployment.png":::
Content filtering configurations are created at the hub level in AI Studio. Learn more about configurability in the [Azure OpenAI docs](/azure/ai-services/openai/how-to/content-filters).
1. On the **Review** page, review the settings and then select **Create filter**.

+### Use a blocklist as a filter
+
+You can apply a blocklist as either an input or output filter, or both. Enable the **Blocklist** option on the **Input filter** and/or **Output filter** page. Select one or more blocklists from the dropdown, or use the built-in profanity blocklist. You can combine multiple blocklists into the same filter.

-## How to apply a content filter?
+## Apply a content filter

-A default content filter is set when you create a deployment. You can also apply your custom content filter to your deployment.
+The filter creation process gives you the option to apply the filter to the deployments you want. You can also change or remove content filters from your deployments at any time.

Follow these steps to apply a content filter to a deployment:
@@ -83,11 +84,7 @@ Follow these steps to apply a content filter to a deployment:
:::image type="content" source="../media/content-safety/content-filter/apply-content-filter.png" alt-text="Screenshot of apply content filter." lightbox="../media/content-safety/content-filter/apply-content-filter.png":::

-Now, you can go to the playground to test whether the content filter works as expected!
-
-## Content filtering categories and configurability
-
-You can apply a blocklist as either an input or output filter, or both. Enable the **Blocklist** option on the **Input filter** and/or **Output filter** page. Select one or more blocklists from the dropdown, or use the built-in profanity blocklist. You can combine multiple blocklists into the same filter.
+Now, you can go to the playground to test whether the content filter works as expected.