articles/ai-services/openai/how-to/dall-e.md
2 additions & 1 deletion
@@ -26,7 +26,7 @@ OpenAI's image generation models render images based on user-provided text promp
- Deploy a `dall-e-3` or `gpt-image-1` model with your Azure OpenAI resource. For more information on deployments, see [Create a resource and deploy a model with Azure OpenAI](/azure/ai-services/openai/how-to/create-resource).
- GPT-image-1 is the newer model and features a number of improvements over DALL-E 3. It's available in limited access: apply for access with [this form](https://aka.ms/oai/gptimage1access).
-## Call the Image Generation APIs
+## Call the Image Generation API
The following command shows the most basic way to use an image model with code. If this is your first time using these models programmatically, we recommend starting with the [quickstart](/azure/ai-services/openai/dall-e-quickstart).
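As a concrete illustration of that basic call, here is a minimal sketch using the `openai` Python package against an Azure OpenAI resource. The resource endpoint, deployment name, and `api-version` value below are placeholder assumptions, not values from this change; see the linked quickstart and API reference for the current ones.

```python
# Minimal sketch: generate one image from an Azure OpenAI image model deployment.
# Endpoint, deployment name, and api-version are placeholders (assumptions).
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_version="2024-02-01",  # assumed version; check the API reference for the latest
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

result = client.images.generate(
    model="<your-image-model-deployment>",  # deployment name, not the base model name
    prompt="A watercolor painting of a lighthouse at dawn",
    n=1,
    size="1024x1024",
)

# The response contains a URL (or base64 data, depending on settings) for each image.
print(result.data[0].url)
```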
@@ -322,6 +322,7 @@ DALL-E models don't support the Image Edit API.
* [What is Azure OpenAI Service?](../overview.md)
* [Quickstart: Generate images with Azure OpenAI Service](../dall-e-quickstart.md)
* [Image generation API reference](/azure/ai-services/openai/reference#image-generation)
+* [Image generation API (preview) reference](/azure/ai-services/openai/reference-preview)