`articles/ai-foundry/openai/how-to/gpt-with-vision.md` — 1 addition, 1 deletion

```diff
@@ -37,7 +37,7 @@ Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployme
 `api-key`: {API_KEY}

 **Body**:
-The following is a sample request body. The format is the same as the chat completions API for GPT-4, except that the message content can be an array containing text and images (either a valid HTTP or HTTPS URL to an image, or a base-64-encoded image).
+The following is a sample request body. The format is the same as the chat completions API for GPT-4, except that the message content can be an array containing text and images (either a valid publicly accessible HTTP or HTTPS URL to an image, or a base-64-encoded image).

 > [!IMPORTANT]
 > Remember to set a `"max_tokens"` value, or the return output will be cut off.
```
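The request-body format the changed sentence describes can be sketched as follows. This is a minimal illustration, not the article's official sample: the resource name, deployment name, key, `api-version` value, and image URL are all hypothetical placeholders.

```python
# Sketch of a chat completions request body whose message content mixes
# text and an image URL. All identifiers below are placeholders.
import json

RESOURCE_NAME = "my-resource"    # placeholder: your Azure OpenAI resource
DEPLOYMENT_NAME = "my-gpt4v"     # placeholder: your vision-capable deployment
API_KEY = "my-api-key"           # placeholder: your key

url = (f"https://{RESOURCE_NAME}.openai.azure.com/openai/deployments/"
       f"{DEPLOYMENT_NAME}/chat/completions?api-version=2024-02-15-preview")
headers = {"Content-Type": "application/json", "api-key": API_KEY}

body = {
    "messages": [
        {
            "role": "user",
            # The content is an array of parts: text plus one or more images.
            "content": [
                {"type": "text", "text": "Describe this picture:"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/image.jpg"}},
            ],
        }
    ],
    # Always set max_tokens, or the returned output may be cut off.
    "max_tokens": 300,
}

print(json.dumps(body, indent=2))
# To send it: requests.post(url, headers=headers, json=body)
```

The image part could equally carry a base-64 data URL instead of an HTTPS URL, as noted in the article.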
`articles/ai-foundry/openai/includes/gpt-v-javascript.md` — 1 addition, 1 deletion

```diff
@@ -68,7 +68,7 @@ For the recommended keyless authentication with Microsoft Entra ID, you need to:

 ## Create a new JavaScript application for image prompts

-Select an image from the [azure-samples/cognitive-services-sample-data-files](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/ComputerVision/Images). Use the image URL in the code below or set the `IMAGE_URL` environment variable to the image URL.
+Select an image from the [azure-samples/cognitive-services-sample-data-files](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/ComputerVision/Images). Enter your publicly accessible image URL in the code below or set the `IMAGE_URL` environment variable to it.

 > [!IMPORTANT]
 > If you use a SAS URL to an image stored in Azure blob storage, you need to enable Managed Identity and assign the **Storage Blob Reader** role to your Azure OpenAI resource (do this in the Azure portal). This allows the model to access the image in blob storage.
```
`articles/ai-foundry/openai/includes/gpt-v-python.md` — 1 addition, 1 deletion

```diff
@@ -87,7 +87,7 @@ Create a new Python file named _quickstart.py_. Open the new file in your prefer

 1. Make the following changes:
    1. Enter the name of your GPT-4 Turbo with Vision deployment in the appropriate field.
-   1. Change the value of the `"url"` field to the URL of your image.
+   1. Change the value of the `"url"` field to the publicly accessible URL of your image.

 > [!TIP]
 > You can also use base-64-encoded image data instead of a URL. For more information, see the [GPT-4 Turbo with Vision how-to guide](../how-to/gpt-with-vision.md#use-a-local-image).
```
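The base-64 alternative mentioned in the tip above can be sketched like this: encode a local file and pass the resulting data URL in the same `"url"` field. The file path, MIME type, and helper name are hypothetical, not part of the quickstart.

```python
# Sketch: using a local image as base-64 data instead of a public URL.
# The helper name, file path, and MIME type are placeholders.
import base64
import os
import tempfile

def image_to_data_url(path: str, mime: str = "image/jpeg") -> str:
    """Encode a local image file as a data URL for the "url" field."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# Demo with a tiny throwaway file standing in for a real image:
with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as tmp:
    tmp.write(b"\xff\xd8\xff\xe0 fake jpeg bytes")
    tmp_path = tmp.name
data_url = image_to_data_url(tmp_path)
os.unlink(tmp_path)

print(data_url[:30])
# The string then goes where the image URL would, e.g.:
# {"type": "image_url", "image_url": {"url": data_url}}
```

Data URLs avoid hosting the image publicly, at the cost of a larger request body.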
`articles/ai-foundry/openai/includes/gpt-v-rest.md` — 1 addition, 1 deletion

```diff
@@ -88,7 +88,7 @@ Create a new Python file named _quickstart.py_. Open the new file in your prefer
 1. Make the following changes:
    1. Enter your endpoint URL and key in the appropriate fields.
    1. Enter your GPT-4 Turbo with Vision deployment name in the appropriate field.
-   1. Change the value of the `"image"` field to the URL of your image.
+   1. Change the value of the `"image"` field to the publicly accessible URL of your image.

 > [!TIP]
 > You can also use base-64-encoded image data instead of a URL. For more information, see the [GPT-4 Turbo with Vision how-to guide](../how-to/gpt-with-vision.md#use-a-local-image).
```
`articles/ai-services/translator/solutions/overview.md` — 2 additions, 2 deletions

```diff
@@ -19,10 +19,10 @@ ms.author: lajanuar

 Azure AI Translator offers the following prebuilt solutions:

-* [**Language Studio**](../document-translation/language-studio.md). Azure AI Translator in the [Azure AI Language Studio](https://language.cognitive.azure.com/home) is a no-code user interface that lets you interactively translate documents from local or Azure Blob Storage.
-
 * [**Microsoft Translator Pro**](translator-pro/overview.md). Microsoft Translator Pro is an advanced mobile application, designed specifically for enterprises, that enables seamless speech-to-speech translation in real time.

+* [**Language Studio**](../document-translation/language-studio.md). Azure AI Translator in the [Azure AI Language Studio](https://language.cognitive.azure.com/home) is a no-code user interface that lets you interactively translate documents from local or Azure Blob Storage.
+
 * [**Translator v3 connector for documents**](../connector/document-translation-flow.md) and [**Translator v3 connector for text**](../solutions/connector/text-translator-flow.md). The Microsoft Translator v3 connectors create a link between your Translator Service instance and Microsoft Power Automate enabling you to incorporate one or more prebuilt operations into your apps and workflows.
```