Follow these steps to try the transcription capability:
1. Select **Generate transcription** to send the audio input to the model and receive a transcribed output in both text and JSON formats.
:::image type="content" source="../media/concept-playgrounds/audio-playground-transcribe.png" alt-text="Screenshot of the Audio playground interface demonstrating transcription output from audio input." lightbox="../media/concept-playgrounds/audio-playground-transcribe.png":::
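If you later move this step from the playground into code, the transcription call might look like the following minimal sketch. This assumes the `openai` Python SDK; the endpoint, key, API version, and deployment name are all placeholders to replace with values from your project:

```python
def transcript_text(response: dict) -> str:
    """Pull the plain-text transcript out of a JSON transcription response."""
    return str(response.get("text", "")).strip()


def transcribe(audio_path: str) -> str:
    # Placeholder wiring: substitute the endpoint, key, API version, and
    # deployment name from your own Azure AI Foundry project.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        api_version="2025-03-01-preview",  # hypothetical; check your resource
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
        api_key="YOUR-API-KEY",
    )
    with open(audio_path, "rb") as audio:
        result = client.audio.transcriptions.create(
            model="YOUR-AUDIO-DEPLOYMENT",  # your audio model deployment name
            file=audio,
            response_format="json",
        )
    return transcript_text(result.to_dict())
```

The JSON response format mirrors what the playground shows alongside the text output.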
## Video playground
When using the video playground as you plan your production workload, you can explore and validate the following:
- How long does it take to generate video for different prompt types or resolutions?
- What's the cost-performance tradeoff of generating 5s vs. 15s clips?
## Images playground
The images playground is ideal for developers who build image generation flows. It's a full-featured, controlled environment for high-fidelity experimentation with model-specific APIs that generate and edit images.
> [!TIP]
> See the [60-second reel of the Images playground for gpt-image-1](https://youtu.be/btA8njJjLXY) and our DevBlog on transforming your [enterprise-ready use case by industry](https://devblogs.microsoft.com/foundry/images-playground-may-2025/).
You can use the images playground with these models:
- [gpt-image-1](https://ai.azure.com/explore/models/gpt-image-1/version/2025-04-15/registry/azure-openai) from Azure OpenAI.
- [Bria 2.3 Fast](https://ai.azure.com/explore/models/Bria-2.3-Fast/version/1/registry/azureml-bria) from Bria AI.
Follow these steps to use the images playground:
1. Select **Try the Images playground** to open it.
1. If you don't have a deployment already, select **Create new deployment** and deploy a model such as `gpt-image-1`.
1. **Start with a prebuilt text prompt**: Select an option to get started with a prebuilt text prompt that automatically fills the prompt bar.
1. **Explore the model API-specific generation controls after model deployment**: Adjust key controls (for example, number of variants, quality, strength) to understand each model's responsiveness and constraints.
1. **Make side-by-side observations in grid view**: Visually compare outputs across prompt tweaks or parameter changes.
1. **Transform with API tooling**: Inpainting with text transformation is available for gpt-image-1. Alter parts of your original image with inpainting selection, and use text prompts to specify the change.
1. **Port to production with multilingual code samples**: Use the Python, Java, JavaScript, and C# code samples available through **View code**. The images playground is your launchpad to development work in VS Code.
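The **View code** option exports ready-to-run samples for your deployment. As a rough illustration, a Python version might resemble the following sketch; the endpoint, key, API version, and deployment name are placeholders, and the SDK assumed is the `openai` Python package:

```python
import base64
from pathlib import Path


def save_b64_image(b64_data: str, path: str) -> int:
    """Decode a base64 image payload (the format gpt-image-1 returns) and
    write it to disk. Returns the number of bytes written."""
    raw = base64.b64decode(b64_data)
    Path(path).write_bytes(raw)
    return len(raw)


def generate_and_save(prompt: str, out_path: str) -> None:
    # Placeholder wiring: substitute your endpoint, key, API version, and
    # deployment name from the playground's View code pane.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        api_version="2025-04-01-preview",  # hypothetical; check your resource
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
        api_key="YOUR-API-KEY",
    )
    result = client.images.generate(
        model="gpt-image-1",  # your deployment name
        prompt=prompt,
        size="1024x1024",
    )
    save_b64_image(result.data[0].b64_json, out_path)
```

The base64 decode step matters because gpt-image-1 returns image bytes inline rather than a hosted URL.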
### What to validate when experimenting in the images playground
By using the images playground, you can explore and validate the following as you plan your production workload:
- **Prompt effectiveness**
  - What kind of visual output does this prompt generate for my enterprise use case?
  - How specific or abstract can my language be and still get good results?
  - Does the model understand style references like "surrealist" or "cyberpunk" accurately?

- **Stylistic consistency**
  - How do I maintain the same character, style, or theme across multiple images?
  - Can I iterate on variations of the same base prompt with minimal drift?

- **Parameter tuning**
  - What's the effect of changing model parameters like guidance scale, seed, and steps?
  - How can I balance creativity against prompt fidelity?

- **Model comparison**
  - How do results differ between models (for example, SDXL vs. DALL·E)?
  - Which model performs better for realistic faces vs. artistic compositions?

- **Composition control**
  - What happens when I use spatial constraints like bounding boxes or inpainting masks?
  - Can I guide the model toward specific layouts or focal points?

- **Input variation**
  - How do slight changes in prompt wording or structure impact results?
  - What's the best way to prompt for symmetry, specific camera angles, or emotions?

- **Integration readiness**
  - Will this image meet the constraints of my product's UI (aspect ratio, resolution, content safety)?
  - Does the output conform to brand guidelines or customer expectations?
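The parameter-tuning and side-by-side questions above lend themselves to a scripted sweep once you move beyond the playground. A minimal sketch, assuming a gpt-image-1 deployment and an `AzureOpenAI` client supplied by the caller (all names are placeholders):

```python
from itertools import product


def sweep_grid(prompts, sizes, qualities):
    """Enumerate every (prompt, size, quality) combination so outputs can be
    compared side by side, like the playground's grid view."""
    return [
        {"prompt": p, "size": s, "quality": q}
        for p, s, q in product(prompts, sizes, qualities)
    ]


def run_sweep(client, deployment, grid):
    # Hypothetical driver: 'client' is an AzureOpenAI client and 'deployment'
    # is your gpt-image-1 deployment name -- both supplied by the caller.
    return [client.images.generate(model=deployment, **params) for params in grid]
```

Saving and reviewing the sweep's outputs reproduces the grid-view comparison, but in a form you can rerun and diff as prompts evolve.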
## Related content
- [Use the chat playground in Azure AI Foundry portal](../quickstarts/get-started-playground.md)