**`articles/ai-services/content-safety/quickstart-groundedness.md`** (1 addition, 1 deletion)

### Connect your own GPT deployment
> [!TIP]
> Currently, the correction feature supports only **Azure OpenAI GPT4o (0513, 0806 version)** resources. To minimize latency and adhere to data privacy guidelines, we recommend that you deploy your Azure OpenAI GPT4o (0513, 0806 version) resource in the same region as your content safety resources. For more details on data privacy, refer to the [Data, privacy and security guidelines for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context) and [Data, privacy, and security for Azure AI Content Safety](/legal/cognitive-services/content-safety/data-privacy?context=/azure/ai-services/content-safety/context/context).

To use your Azure OpenAI GPT4o (0513, 0806 version) resource to enable the correction feature, use Managed Identity to allow your Content Safety resource to access the Azure OpenAI resource. Follow the steps in the [earlier section](#connect-your-own-gpt-deployment) to set up the Managed Identity.
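As an illustrative sketch only, a groundedness detection request that enables correction points at your own GPT deployment through an `llmResource` object in the request body. The field names below follow the public groundedness detection API reference and should be verified against it; the endpoint and deployment name are placeholders, not values from this article:

```json
{
    "domain": "Generic",
    "task": "QnA",
    "qna": {
        "query": "<user question>"
    },
    "text": "<text to check for groundedness>",
    "groundingSources": [
        "<grounding source text>"
    ],
    "correction": true,
    "llmResource": {
        "resourceType": "AzureOpenAI",
        "azureOpenAIEndpoint": "https://<your-openai-resource>.openai.azure.com/",
        "azureOpenAIDeploymentName": "<your-gpt4o-deployment>"
    }
}
```

The idea behind the Managed Identity setup above is that no API key for the Azure OpenAI resource appears in this request; the Content Safety resource authenticates to the GPT deployment directly.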
**`articles/ai-services/openai/how-to/gpt-with-vision.md`** (23 additions, 8 deletions)

### Detail parameter settings
You can optionally define a `"detail"` parameter in the `"image_url"` field. Choose one of three values, `low`, `high`, or `auto`, to adjust the way the model interprets and processes images.
- `auto` setting: The default setting. The model decides between low and high based on the size of the image input.
- `low` setting: The model doesn't activate "high res" mode; instead, it processes a lower resolution 512x512 version, resulting in quicker responses and reduced token consumption for scenarios where fine detail isn't crucial.
- `high` setting: The model activates "high res" mode. Here, the model initially views the low-resolution image and then generates detailed 512x512 segments from the input image. Each segment uses double the token budget, allowing for a more detailed interpretation of the image.
You set the value using the format shown in this example:

```json
{
    "type": "image_url",
    "image_url": {
        "url": "<image URL>",
        "detail": "high"
    }
}
```
For details on how the image parameters impact tokens used and pricing, see [What is Azure OpenAI? Image Tokens](../overview.md#image-tokens).
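For context, the `"image_url"` object with its `"detail"` field sits inside the `content` array of a user message in the chat completions request body. A minimal sketch (the prompt text, image URL, and `max_tokens` value here are placeholders, not values from this article):

```json
{
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe this picture:"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "<image URL>",
                        "detail": "high"
                    }
                }
            ]
        }
    ],
    "max_tokens": 100
}
```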
## Output

The API response should look like the following.
Every response includes a `"finish_reason"` field. It has the following possible values:
- `length`: Incomplete model output due to the `max_tokens` input parameter or the model's token limit.
- `content_filter`: Omitted content due to a flag from our content filters.
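The `"finish_reason"` field appears in each element of the `choices` array of the response. A trimmed illustration (the field values here are invented for the example, not taken from this article):

```json
{
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "<model's description of the image>"
            }
        }
    ]
}
```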