articles/ai-foundry/concepts/concept-playgrounds.md
For all reasoning models, the chat playground includes a chain-of-thought summary drop-down so you can see how the model reasoned through its response before it produced the output.
:::image type="content" source="../media/concept-playgrounds/chat-playground-cot-summary.png" alt-text="Chat Playground interface for exploring, prototyping, and testing chat models without code." lightbox="../media/concept-playgrounds/chat-playground-cot-summary.png":::
## Audio playground
### How to use images playground
1. **Start with a pre-built text prompt**: Select an option to get started with a pre-built text prompt that automatically fills the prompt bar.

1. **Explore the model API-specific generation controls after model deployment:** Adjust key controls (for example, number of variants, quality, strength) to understand each model's responsiveness and constraints.

1. **Side-by-side observations in grid view:** Visually compare outputs across prompt tweaks or parameter changes.

1. **Transform with API tooling:** Inpainting with text transformation is available for gpt-image-1. Alter parts of your original image with an inpainting selection, and use text prompts to specify the change.

1. **Port to production with multi-lingual code samples:** Use the Python, Java, JavaScript, and C# code samples behind "View Code". The images playground is your launchpad to development work in VS Code.
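The "View Code" Python sample can be adapted into your own scripts. The sketch below is illustrative rather than the exact generated sample: the deployment name and `IMAGE_DEPLOYMENT` environment variable are placeholder assumptions, and the commented-out lines show where the `openai` package's `AzureOpenAI` client would be invoked with your credentials.

```python
# Illustrative sketch of a "View Code"-style Python sample for image generation.
# The deployment name and environment variable are placeholders, not real values.
import os


def build_image_request(prompt: str, n: int = 2, size: str = "1024x1024",
                        quality: str = "high") -> dict:
    """Gather the playground's generation controls (variants, quality, size)
    into a single request payload."""
    return {
        "model": os.environ.get("IMAGE_DEPLOYMENT", "gpt-image-1"),  # deployment name
        "prompt": prompt,
        "n": n,          # number of variants, as in the playground control
        "size": size,
        "quality": quality,
    }


request = build_image_request("A watercolor skyline at dusk", n=3)

# With credentials configured, the call itself might look like:
#   from openai import AzureOpenAI
#   client = AzureOpenAI(azure_endpoint=..., api_key=..., api_version=...)
#   result = client.images.generate(**request)
```

Collecting the controls in one payload makes it easy to reproduce a playground configuration exactly once you move to code.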
### Applicable models
By using the images playground, you can explore and validate the following as you plan your production workload:

- **Prompt Effectiveness**

  - What kind of visual output does this prompt generate for my enterprise use case?
  - How specific or abstract can my language be and still get good results?
  - Does the model understand style references like "surrealist" or "cyberpunk" accurately?

- **Stylistic Consistency**

  - How do I maintain the same character, style, or theme across multiple images?
  - Can I iterate on variations of the same base prompt with minimal drift?

- **Parameter Tuning**

  - What's the effect of changing model parameters like guidance scale, seed, and steps?
  - How can I balance creativity vs. prompt fidelity?

- **Model Comparison**

  - How do results differ between models (for example, SDXL vs. DALL·E)?
  - Which model performs better for realistic faces vs. artistic compositions?

- **Composition Control**

  - What happens when I use spatial constraints like bounding boxes or inpainting masks?
  - Can I guide the model toward specific layouts or focal points?

- **Input Variation**

  - How do slight changes in prompt wording or structure impact results?
  - What's the best way to prompt for symmetry, specific camera angles, or emotions?

- **Integration Readiness**

  - Will this image meet the constraints of my product's UI (aspect ratio, resolution, content safety)?
  - Does the output conform to brand guidelines or customer expectations?
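Grid-view comparisons like these can also be scripted once you move to code: sweep a small set of prompts against a few parameter settings and review the outputs side by side. A minimal sketch, with the generation call itself left as a placeholder:

```python
# Plan a side-by-side comparison grid: every prompt paired with every setting.
from itertools import product

prompts = [
    "product photo of a ceramic mug, studio lighting",
    "product photo of a ceramic mug, surrealist style",
]
settings = [
    {"quality": "medium", "n": 1},
    {"quality": "high", "n": 1},
]


def plan_grid(prompts: list[str], settings: list[dict]) -> list[dict]:
    """One cell per (prompt, settings) pair, mirroring the playground grid view."""
    return [{"prompt": p, **s} for p, s in product(prompts, settings)]


grid = plan_grid(prompts, settings)
# Each cell would then be passed to your image-generation call, and the
# resulting images compared for stylistic consistency and parameter effects.
```

This keeps the sweep definition separate from the generation loop, so you can rerun the same grid against a different model for model comparison.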
## Video playground
The video playground is your rapid iteration environment for exploring, refining, and validating generative video workflows. It's designed for developers who need to go from idea to prototype with precision, control, and speed. The playground gives you a low-friction interface to test prompt structures, assess motion fidelity, evaluate model consistency across frames, and compare outputs across models, all without writing boilerplate or wasting compute cycles. It also doubles as a polished demo surface for stakeholders such as your Chief Product Officer or VP of Engineering.
### Applicable models
> Generated videos are retained for 24 hours for data privacy. Download videos locally if you need longer retention.
1. Once your model is deployed, navigate to the Video playground and get inspired by **pre-built prompts sorted by industry filter**. From here, you can view the videos in full display and copy the prompt to build from it.

1. **Understand the model API-specific generation controls in your prompt bar:** Enter your text prompt and adjust key controls (for example, aspect ratio and resolution) to understand each model's responsiveness and constraints.

1. **Rewrite your text prompt** with gpt-4o by using "Rewrite with AI", which applies industry-based system prompts. Switch on the capability, select the industry, and specify the change required for your original prompt.

1. From the **Generation history** tab, review your generations in a grid or list view. When you select a video, open it in full-screen mode for full immersion. Visually compare outputs across prompt tweaks or parameter changes.

1. In full-screen mode, edit the prompt and submit it for regeneration.

1. Either in full-screen mode or through the overflow button, download the video locally, view the generation information tag, or delete the video.

1. **Port to production with multi-lingual code samples:** Use the Python, Java, JavaScript, and C# contextual code samples behind "View Code" that reflect your generations, and copy them into VS Code.

1. **Azure AI Content Safety integration:** All model endpoints are integrated with Azure AI Content Safety, so harmful and unsafe images are filtered out before they surface in the video playground. If your text prompt or video generation is flagged by content moderation policies, you get a warning notification.
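Outside the playground, video generation typically runs as an asynchronous job: you submit a prompt, poll the job until it finishes, then download the result within the 24-hour retention window. The sketch below assumes a generic job-status shape rather than any documented API; the `poll` callable is a stand-in for an HTTP GET against your job endpoint.

```python
# Generic polling loop for an asynchronous video-generation job.
# The job/status shape here is an assumption, not a documented API contract.
import time


def wait_for_video(poll, job_id: str, timeout_s: float = 300.0,
                   interval_s: float = 0.0) -> str:
    """Poll until the job succeeds or fails; return the download URL.

    `poll(job_id)` should return a dict like {"status": ..., "url": ...};
    in real code it would wrap an HTTP GET against the job endpoint.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        job = poll(job_id)
        if job["status"] == "succeeded":
            return job["url"]  # download promptly: clips expire after 24 hours
        if job["status"] == "failed":
            raise RuntimeError(f"generation failed: {job.get('error')}")
        time.sleep(interval_s)
    raise TimeoutError(f"job {job_id} did not finish in {timeout_s}s")


# Demo with a stubbed poller that succeeds on the second check.
states = iter([
    {"status": "running"},
    {"status": "succeeded", "url": "https://example.invalid/clip.mp4"},
])
url = wait_for_video(lambda _job_id: next(states), "job-123")
```

Injecting `poll` as a callable keeps the retry logic testable without network access, and the timeout guards against jobs that never resolve.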
### Video generation: what you can validate or de-risk
When using the video playground as you plan your production workload, you can explore and validate the following:

- **Prompt-to-Motion Translation**

  - Does the video model interpret my prompt in a way that makes logical and temporal sense?
  - Is motion coherent with the described action or scene?

- **Frame Consistency**

  - Do characters, objects, and styles remain consistent across frames?
  - Are there visual artifacts, jitter, or unnatural transitions?

- **Scene Control**

  - How well can I control scene composition, subject behavior, or camera angles?
  - Can I guide scene transitions or background environments?

- **Length and Timing**

  - How do different prompt structures affect video length and pacing?
  - Does the video feel too fast, too slow, or too short?

- **Multimodal Input Integration**

  - What happens when I provide a reference image, pose data, or audio input?
  - Can I generate video with lip-sync to a given voiceover?

- **Post-Processing Needs**

  - What level of raw fidelity can I expect before I need editing tools?
  - Do I need to upscale, stabilize, or retouch the video before using it in production?

- **Latency & Performance**

  - How long does it take to generate video for different prompt types or resolutions?
  - What's the cost-performance tradeoff of generating 5s vs. 15s clips?
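The latency and cost questions lend themselves to a quick back-of-envelope estimate before you commit to a clip length. The per-second rate below is a made-up placeholder, not real pricing; substitute your model's actual rates.

```python
# Back-of-envelope cost comparison for different clip lengths.
# RATE is a placeholder figure, not a real price.

def clip_cost(seconds: float, rate_per_second: float) -> float:
    """Linear cost model: billed per second of generated video."""
    return seconds * rate_per_second


RATE = 0.10  # placeholder $/second of generated output

for length in (5, 15):
    print(f"{length:>2}s clip costs about ${clip_cost(length, RATE):.2f}")
```

Pair estimates like this with measured generation times from the playground to judge whether longer clips are worth the extra latency for your workload.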