
Commit eae6f91

Merge pull request #3124 from PatrickFarley/consaf-updates
UUF updates
2 parents: 6754479 + 2b2e808

File tree

2 files changed: +24, -9 lines


articles/ai-services/content-safety/quickstart-groundedness.md

Lines changed: 1 addition & 1 deletion
@@ -354,7 +354,7 @@ The groundedness detection API includes a correction feature that automatically
 ### Connect your own GPT deployment
 
 > [!TIP]
-> Currently, the correction feature supports only **Azure OpenAI GPT4o (0513, 0806 version) ** resources. To minimize latency and adhere to data privacy guidelines, it's recommended to deploy your Azure OpenAI GPT4o (0513, 0806 version) in the same region as your content safety resources. For more details on data privacy, please refer to the [Data, privacy and security guidelines for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context)
+> Currently, the correction feature supports only **Azure OpenAI GPT4o (0513, 0806 version)** resources. To minimize latency and adhere to data privacy guidelines, it's recommended to deploy your Azure OpenAI GPT4o (0513, 0806 version) in the same region as your content safety resources. For more details on data privacy, please refer to the [Data, privacy and security guidelines for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context)
 and [Data, privacy, and security for Azure AI Content Safety](/legal/cognitive-services/content-safety/data-privacy?context=/azure/ai-services/content-safety/context/context).
 
 To use your Azure OpenAI GPT4o (0513, 0806 version) resource for enabling the correction feature, use Managed Identity to allow your Content Safety resource to access the Azure OpenAI resource. Follow the steps in the [earlier section](#connect-your-own-gpt-deployment) to set up the Managed Identity.
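For orientation, the quickstart this hunk edits covers the groundedness detection API's correction feature. A minimal sketch of a detection request with correction enabled might look like the following; the API version, the `correction` flag, and the `llmResource` field names are assumptions drawn from the public preview, so verify them against the quickstart itself:

```python
# Hypothetical sketch of a groundedness detection call with correction enabled.
# The API version and the "correction"/"llmResource" field names are assumptions
# based on the public preview; check the quickstart for the current values.
import requests

endpoint = "https://<your-content-safety-resource>.cognitiveservices.azure.com"
url = f"{endpoint}/contentsafety/text:detectGroundedness"
params = {"api-version": "2024-09-15-preview"}  # assumed preview API version

body = {
    "domain": "Generic",
    "task": "QnA",
    "qna": {"query": "How much does she currently get paid per hour?"},
    "text": "12/hour",
    "groundingSources": ["Hourly pay: 10/hour; effective next month: 12/hour."],
    "correction": True,  # ask the connected GPT-4o deployment to rewrite ungrounded text
    "llmResource": {
        "resourceType": "AzureOpenAI",
        "azureOpenAIEndpoint": "https://<your-openai-resource>.openai.azure.com",
        "azureOpenAIDeploymentName": "<your-gpt-4o-deployment>",
    },
}

resp = requests.post(url, params=params, json=body,
                     headers={"Ocp-Apim-Subscription-Key": "<your-key>"})
resp.raise_for_status()
print(resp.json())  # reports whether ungrounded text was found and, if requested, a correction
```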

articles/ai-services/openai/how-to/gpt-with-vision.md

Lines changed: 23 additions & 8 deletions
@@ -162,7 +162,29 @@ The following is a sample request body. The format is the same as the chat compl
 > ...
 > ```
 
-### Output
+### Detail parameter settings
+
+You can optionally define a `"detail"` parameter in the `"image_url"` field. Choose one of three values, `low`, `high`, or `auto`, to adjust the way the model interprets and processes images.
+- `auto` setting: The default setting. The model decides between low or high based on the size of the image input.
+- `low` setting: The model doesn't activate "high res" mode; instead, it processes a lower-resolution 512x512 version of the image, resulting in quicker responses and reduced token consumption for scenarios where fine detail isn't crucial.
+- `high` setting: The model activates "high res" mode. Here, the model initially views the low-resolution image and then generates detailed 512x512 segments from the input image. Each segment uses double the token budget, allowing for a more detailed interpretation of the image.
+
+You set the value using the format shown in this example:
+
+```json
+{
+    "type": "image_url",
+    "image_url": {
+        "url": "<image URL>",
+        "detail": "high"
+    }
+}
+```
+
+For details on how the image parameters impact tokens used and pricing, see [What is Azure OpenAI? Image tokens](../overview.md#image-tokens).
+
+
+## Output
 
 The API response should look like the following.
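For context, the JSON fragment added above plugs into the `content` array of a user message in a chat completions request. A minimal, hypothetical end-to-end sketch with the `openai` Python package follows; the endpoint, key, API version, and deployment name are placeholders, not values from this commit:

```python
# Minimal sketch of a chat completions call that sets the "detail" parameter.
# Endpoint, key, API version, and deployment name below are placeholder assumptions.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-02-15-preview",  # assumed; use the version your resource supports
)

response = client.chat.completions.create(
    model="<your-vision-capable-deployment>",  # deployment name, not the model family name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this picture."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "<image URL>",
                        "detail": "low",  # skip high-res mode to save tokens
                    },
                },
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)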

@@ -236,13 +258,6 @@ Every response includes a `"finish_reason"` field. It has the following possible
 - `length`: Incomplete model output due to the `max_tokens` input parameter or model's token limit.
 - `content_filter`: Omitted content due to a flag from our content filters.
 
-### Detail parameter settings in image processing: Low, High, Auto
-
-The _detail_ parameter in the model offers three choices: `low`, `high`, or `auto`, to adjust the way the model interprets and processes images. The default setting is auto, where the model decides between low or high based on the size of the image input.
-- `low` setting: the model does not activate the "high res" mode, instead processes a lower resolution 512x512 version, resulting in quicker responses and reduced token consumption for scenarios where fine detail isn't crucial.
-- `high` setting: the model activates "high res" mode. Here, the model initially views the low-resolution image and then generates detailed 512x512 segments from the input image. Each segment uses double the token budget, allowing for a more detailed interpretation of the image.''
-
-For details on how the image parameters impact tokens used and pricing please see - [What is Azure OpenAI? Image Tokens](../overview.md#image-tokens)
 
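Since the section being moved says each high-res segment uses double the token budget, a rough cost model can make the low/high trade-off concrete. The 85-token base and 170-tokens-per-segment figures below are assumptions taken from commonly published GPT-4 Turbo with Vision pricing, not from this commit; the linked Image Tokens article is authoritative.

```python
# Back-of-the-envelope token cost for the "detail" settings described above.
# BASE_TOKENS and TOKENS_PER_TILE are assumed figures from commonly published
# GPT-4 vision pricing; verify against the Image Tokens page linked in the docs.

BASE_TOKENS = 85        # low-res pass; also the full cost of detail="low"
TOKENS_PER_TILE = 170   # each 512x512 segment in high-res mode (double the base budget)

def image_tokens(tiles: int, detail: str = "high") -> int:
    """Estimate the prompt tokens consumed by one image."""
    if detail == "low":
        return BASE_TOKENS
    return BASE_TOKENS + TOKENS_PER_TILE * tiles

# A 1024x1024 image split into four 512x512 segments:
print(image_tokens(tiles=4))           # 765 tokens with detail="high"
print(image_tokens(0, detail="low"))   # 85 tokens with detail="low"
```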
