
Commit 932734c

Merge pull request #5523 from MicrosoftDocs/main
6/13/2025 AM Publish
2 parents da67188 + 57dc0fb commit 932734c

29 files changed: +288 −260 lines

articles/ai-services/openai/includes/dall-e-rest.md

Lines changed: 96 additions & 2 deletions
@@ -18,7 +18,7 @@ Use this guide to get started calling the Azure OpenAI in Azure AI Foundry Model
 - <a href="https://www.python.org/" target="_blank">Python 3.8 or later version</a>.
 - The following Python libraries installed: `os`, `requests`, `json`.
 - An Azure OpenAI resource created in a supported region. See [Region availability](/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability).
-- Then, you need to deploy a `dalle3` model with your Azure resource. For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).
+- Then, you need to deploy a `gpt-image-1` or `dalle3` model with your Azure resource. For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).
 
 ## Setup

@@ -41,6 +41,98 @@ Go to your resource in the Azure portal. On the navigation pane, select **Keys a
 
 Create a new Python file named _quickstart.py_. Open the new file in your preferred editor or IDE.
 
+#### [GPT-image-1](#tab/gpt-image-1)
+
+1. Replace the contents of _quickstart.py_ with the following code. Change the value of `prompt` to your preferred text. Also set `deployment` to the deployment name you chose when you deployed the GPT-image-1 model.
+
+    ```python
+    import os
+    import requests
+    import base64
+    from PIL import Image
+    from io import BytesIO
+
+    # set environment variables
+    endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+    subscription_key = os.getenv("AZURE_OPENAI_API_KEY")
+
+    deployment = "gpt-image-1" # the name of your GPT-image-1 deployment
+    api_version = "2025-04-01-preview" # or later version
+
+    def decode_and_save_image(b64_data, output_filename):
+        image = Image.open(BytesIO(base64.b64decode(b64_data)))
+        image.show()
+        image.save(output_filename)
+
+    def save_all_images_from_response(response_data, filename_prefix):
+        for idx, item in enumerate(response_data['data']):
+            b64_img = item['b64_json']
+            filename = f"{filename_prefix}_{idx+1}.png"
+            decode_and_save_image(b64_img, filename)
+            print(f"Image saved to: '{filename}'")
+
+    base_path = f'openai/deployments/{deployment}/images'
+    params = f'?api-version={api_version}'
+
+    generation_url = f"{endpoint}{base_path}/generations{params}"
+    generation_body = {
+        "prompt": "girl falling asleep",
+        "n": 1,
+        "size": "1024x1024",
+        "quality": "medium",
+        "output_format": "png"
+    }
+    generation_response = requests.post(
+        generation_url,
+        headers={
+            'Api-Key': subscription_key,
+            'Content-Type': 'application/json',
+        },
+        json=generation_body
+    ).json()
+    save_all_images_from_response(generation_response, "generated_image")
+
+    # In addition to generating images, you can edit them.
+    edit_url = f"{endpoint}{base_path}/edits{params}"
+    edit_body = {
+        "prompt": "girl falling asleep",
+        "n": 1,
+        "size": "1024x1024",
+        "quality": "medium"
+    }
+    files = {
+        "image": ("generated_image_1.png", open("generated_image_1.png", "rb"), "image/png"),
+        # You can use a mask to specify which parts of the image you want to edit.
+        # The mask must be the same size as the input image.
+        # "mask": ("mask.png", open("mask.png", "rb"), "image/png"),
+    }
+    edit_response = requests.post(
+        edit_url,
+        headers={'Api-Key': subscription_key},
+        data=edit_body,
+        files=files
+    ).json()
+    save_all_images_from_response(edit_response, "edited_image")
+    ```
+
+    The script makes a synchronous image generation API call.
+
+    > [!IMPORTANT]
+    > Remember to remove the key from your code when you're done, and never post your key publicly. For production, use a secure way of storing and accessing your credentials. For more information, see [Azure Key Vault](/azure/key-vault/general/overview).
+
+1. Run the application with the `python` command:
+
+    ```console
+    python quickstart.py
+    ```
+
+    Wait a few moments to get the response.
+
+#### [DALL-E](#tab/dall-e-3)
+
 1. Replace the contents of _quickstart.py_ with the following code. Change the value of `prompt` to your preferred text.
 
     You also need to replace `<dalle3>` in the URL with the deployment name you chose when you deployed the DALL-E 3 model. Entering the model name results in an error unless you chose a deployment name that's identical to the underlying model name. If you encounter an error, double-check that you don't have a doubled `/` where your endpoint joins `/openai/deployments`.
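The DALL-E 3 request that this step describes isn't shown in the hunk above. As a rough sketch only, not taken from this commit (the `2024-02-01` API version, header name, prompt, and response handling are assumptions), the call can look like this:

```python
import os
import requests

# Hedged sketch of the DALL-E 3 image generation request described above.
# API version, header, and response fields are assumptions; follow the quickstart for the exact code.
endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")   # for example, https://<your-resource>.openai.azure.com/
api_key = os.getenv("AZURE_OPENAI_API_KEY")
deployment = "<dalle3>"                         # your DALL-E 3 deployment name

url = f"{endpoint}openai/deployments/{deployment}/images/generations?api-version=2024-02-01"
body = {"prompt": "a watercolor of a lighthouse at dawn", "n": 1, "size": "1024x1024"}

response = requests.post(url, headers={"api-key": api_key}, json=body)
response.raise_for_status()
print(response.json()["data"][0]["url"])        # DALL-E 3 responses include a download URL
```

Note how the endpoint and `openai/deployments` are joined by a single `/`, which is exactly the doubling the step above warns about.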
@@ -83,6 +175,8 @@ Create a new Python file named _quickstart.py_. Open the new file in your prefer
 
 Wait a few moments to get the response.
 
+---
+
 ## Output
 
 The output from a successful image generation API call looks like the following example. The `url` field contains a URL where you can download the generated image. The URL stays active for 24 hours.
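Because the `url` field is a plain downloadable link, saving the image takes nothing beyond an HTTP GET. A minimal sketch, assuming the response shape described above (the function and file names are illustrative):

```python
import requests

def download_generated_image(response_json: dict, filename: str = "generated_image.png") -> None:
    # The `url` field of a successful generations response stays active for 24 hours.
    image_url = response_json["data"][0]["url"]
    image_bytes = requests.get(image_url, timeout=30).content
    with open(filename, "wb") as f:
        f.write(image_bytes)
```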
@@ -99,7 +193,7 @@ The output from a successful image generation API call looks like the following
 }
 ```
 
-The Image APIs come with a content moderation filter. If the service recognizes your prompt as harmful content, it doesn't generate an image. For more information, see [Content filtering](../concepts/content-filter.md). For examples of error responses, see the [DALL-E how-to guide](../how-to/dall-e.md).
+The Image APIs come with a content moderation filter. If the service recognizes your prompt as harmful content, it doesn't generate an image. For more information, see [Content filtering](../concepts/content-filter.md). For examples of error responses, see the [Image generation how-to guide](../how-to/dall-e.md).
 
 The system returns an operation status of `Failed` and the `error.code` value in the message is set to `contentFilter`. Here's an example:
 
articles/ai-services/speech-service/embedded-speech.md

Lines changed: 1 addition & 1 deletion
@@ -248,7 +248,7 @@ embeddedSpeechConfig.setSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.
 
 You can find ready-to-use embedded speech samples on [GitHub](https://aka.ms/embedded-speech-samples). For remarks on projects from scratch, see the sample-specific documentation:
 
-- [C# (.NET 6.0)](https://aka.ms/embedded-speech-samples-csharp)
+- [C# (.NET 8.0)](https://aka.ms/embedded-speech-samples-csharp)
 - [C# (.NET MAUI)](https://aka.ms/embedded-speech-samples-csharp-maui)
 - [C# for Unity](https://aka.ms/embedded-speech-samples-csharp-unity)
 ::: zone-end

articles/ai-services/speech-service/includes/quickstarts/voice-live-api/realtime-python.md

Lines changed: 1 addition & 1 deletion
@@ -353,7 +353,7 @@ For the recommended keyless authentication with Microsoft Entra ID, you need to:
 
 if event.get("type") == "session.created":
     session = event.get("session")
-    logger.info(f"Session created: {session.get("id")}")
+    logger.info(f"Session created: {session.get('id')}")
 
 elif event.get("type") == "response.audio.delta":
     if event.get("item_id") != last_audio_item_id:
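The one-character change above is a quoting fix: before Python 3.12, an f-string expression can't reuse the quote character that delimits the string itself, so nesting `session.get("id")` inside `f"..."` is a syntax error. A minimal illustration (the `session` value is made up):

```python
# Made-up stand-in for the event payload, just to show the quoting rule.
session = {"id": "sess_123"}

# SyntaxError on Python 3.11 and earlier, because the inner double quotes
# end the f-string early:
#   f"Session created: {session.get("id")}"

# Works on every supported Python 3 version:
print(f"Session created: {session.get('id')}")
```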

articles/ai-services/speech-service/includes/release-notes/release-notes-sdk.md

Lines changed: 4 additions & 4 deletions
@@ -893,7 +893,7 @@ This table shows the previous and new object names for real-time diarization and
 ### Speech SDK 1.16.0: 2021-March release
 
 > [!NOTE]
-> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
+> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019.
 
 #### New features
 
@@ -938,7 +938,7 @@ This table shows the previous and new object names for real-time diarization and
 ### Speech SDK 1.15.0: 2021-January release
 
 > [!NOTE]
-> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
+> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019.
 
 #### Highlights summary
 - Smaller memory and disk footprint making the SDK more efficient.
@@ -992,7 +992,7 @@ This table shows the previous and new object names for real-time diarization and
 ### Speech SDK 1.14.0: 2020-October release
 
 > [!NOTE]
-> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
+> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019.
 
 #### New features
 - **Linux**: Added support for Debian 10 and Ubuntu 20.04 LTS.
@@ -1040,7 +1040,7 @@ Stay healthy!
 ### Speech SDK 1.13.0: 2020-July release
 
 > [!NOTE]
-> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download and install it from [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
+> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019.
 
 #### New features
 - **C#**: Added support for asynchronous conversation transcription. See documentation [here](../../get-started-stt-diarization.md).

articles/ai-services/speech-service/includes/spx-setup.md

Lines changed: 3 additions & 3 deletions
@@ -12,7 +12,7 @@ ms.author: eur
 Follow these steps to install the Speech CLI on Windows:
 
 1. Install the [Microsoft Visual C++ Redistributable for Visual Studio](/cpp/windows/latest-supported-vc-redist#latest-microsoft-visual-c-redistributable-version) for your platform. Installing it for the first time might require a restart.
-1. Install [.NET 6](/dotnet/core/install/windows?tabs=net60#runtime-information).
+1. Install [.NET 8](/dotnet/core/install/windows?tabs=net60#runtime-information).
 1. Install the Speech CLI via the .NET CLI by entering this command:
 
 ```dotnetcli
@@ -44,7 +44,7 @@ The following Linux distributions are supported for x64 architectures that use t
 
 Follow these steps to install the Speech CLI on Linux on an x64 CPU:
 
-1. Install the [.NET 6](/dotnet/core/install/linux).
+1. Install the [.NET 8](/dotnet/core/install/linux).
 2. Install the Speech CLI via the .NET CLI by entering this command:
 
 ```dotnetcli
@@ -64,7 +64,7 @@ Enter `spx` to see help for the Speech CLI.
 
 Follow these steps to install the Speech CLI on macOS 10.14 or later:
 
-1. Install [.NET 6](/dotnet/core/install/macos#runtime-information).
+1. Install [.NET 8](/dotnet/core/install/macos#runtime-information).
 1. Install the Speech CLI via the .NET CLI by entering this command:
 
 ```dotnetcli

articles/machine-learning/concept-train-model-git-integration.md

Lines changed: 2 additions & 2 deletions
@@ -9,7 +9,7 @@ ms.topic: concept-article
 author: Blackmist
 ms.author: larryfr
 ms.reviewer: osiotugo
-ms.date: 06/12/2024
+ms.date: 06/13/2025
 ms.custom: sdkv2, build-2023
 ---
 # Git integration for Azure Machine Learning
@@ -78,7 +78,7 @@ The command displays the contents of your public key file. Copy the output.
 > To copy and paste in the terminal window, use these keyboard shortcuts, depending on your operating system:
 >
 > - Windows: Ctrl+C or Ctrl+Insert to copy, Ctrl+V or Ctrl+Shift+V to paste.
-> - MacOS: Cmd+C to copy and Cmd+V to paste.
+> - macOS: Cmd+C to copy and Cmd+V to paste.
 >
 > Some browsers might not support clipboard permissions properly.

articles/machine-learning/how-to-manage-workspace-cli.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ ms.subservice: core
 ms.author: larryfr
 author: Blackmist
 ms.reviewer: deeikele
-ms.date: 06/17/2024
+ms.date: 06/13/2025
 ms.topic: how-to
 ms.custom: devx-track-azurecli, cliv2
 ---

articles/machine-learning/how-to-manage-workspace-terraform.md

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ ms.custom: devx-track-terraform
 ms.author: larryfr
 author: Blackmist
 ms.reviewer: deeikele
-ms.date: 06/25/2024
+ms.date: 06/13/2025
 ms.topic: how-to
 ms.tool: terraform
 ---

articles/machine-learning/how-to-secure-inferencing-vnet.md

Lines changed: 2 additions & 2 deletions
@@ -9,7 +9,7 @@ ms.topic: how-to
 ms.reviewer: None
 ms.author: larryfr
 author: Blackmist
-ms.date: 05/31/2024
+ms.date: 06/13/2025
 ---
 
 # Secure an Azure Machine Learning inferencing environment with virtual networks
@@ -56,7 +56,7 @@ To use Azure Kubernetes Service cluster for secure inference, use the following
 1. Create or configure a [secure Kubernetes inferencing environment](how-to-secure-kubernetes-inferencing-environment.md).
 2. Deploy [Azure Machine Learning extension](how-to-deploy-kubernetes-extension.md).
 3. [Attach the Kubernetes cluster to the workspace](how-to-attach-kubernetes-anywhere.md).
-4. Model deployment with Kubernetes online endpoint can be done using CLI v2, Python SDK v2 and Studio UI.
+4. Model deployment with Kubernetes online endpoint can be done using CLI v2, Python SDK v2, and Studio UI.
 
 * CLI v2 - https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/kubernetes
 * Python SDK V2 - https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/kubernetes
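As a hedged sketch only (the resource names, model path, and compute name are placeholders, and the exact options come from the linked examples rather than this commit), a Python SDK v2 deployment to an attached Kubernetes cluster looks roughly like this:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import KubernetesOnlineEndpoint, KubernetesOnlineDeployment, Model
from azure.identity import DefaultAzureCredential

# Connect to the workspace (placeholder values, not taken from this commit).
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Endpoint bound to the Kubernetes compute attached in step 3.
endpoint = KubernetesOnlineEndpoint(
    name="k8s-endpoint",
    compute="<attached-kubernetes-compute>",
    auth_mode="key",
)
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Minimal deployment; real models usually also need an environment and scoring code.
deployment = KubernetesOnlineDeployment(
    name="blue",
    endpoint_name=endpoint.name,
    model=Model(path="<path-to-model>"),
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```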

0 commit comments
