
Commit 6d96721

Learn Build Service GitHub App authored and committed
Merging changes synced from https://github.com/MicrosoftDocs/azure-ai-docs-pr (branch live)
2 parents acdd843 + da843e2 commit 6d96721

File tree

6 files changed: +123 −10 lines changed

articles/ai-foundry/openai/concepts/video-generation.md

Lines changed: 5 additions & 3 deletions

@@ -11,7 +11,7 @@ ms.date: 5/29/2025
 # Sora video generation (preview)
 
-Sora is an AI model from OpenAI that can create realistic and imaginative video scenes from text instructions. The model is capable of generating a wide range of video content, including realistic scenes, animations, and special effects. Several video resolutions and durations are supported.
+Sora is an AI model from OpenAI that can create realistic and imaginative video scenes from text instructions and/or input images or video. The model is capable of generating a wide range of video content, including realistic scenes, animations, and special effects. Several video resolutions and durations are supported.
 
 ## Supported features

@@ -21,7 +21,7 @@ Sora can generate complex scenes with multiple characters, diverse motions, and
 
 **Image to video**: Sora can generate video content from a still image. You can specify where in the generated video the image appears (it doesn't need to be the first frame) and which region of the image to use.
 
-
+**Video to video**: Sora can generate new video content from an existing video clip. You can specify where in the generated video the input video appears (it doesn't need to be the beginning).
 
 ## How it works
2727

@@ -44,10 +44,12 @@ Sora has some technical limitations to be aware of:
 
 - Sora supports the following output resolution dimensions:
 480x480, 480x854, 854x480, 720x720, 720x1280, 1280x720, 1080x1080, 1080x1920, 1920x1080.
-- Sora supports video durations between 1 and 20 seconds.
+- Sora can produce videos between 1 and 20 seconds long.
 - You can request multiple video variants in a single job: for 1080p resolutions, this feature is disabled; for 720p, the maximum is two variants; for other resolutions, the maximum is four variants.
 - You can have two video creation jobs running at the same time. You must wait for one of the jobs to finish before you can create another.
 - Jobs are available for up to 24 hours after they're created. After that, you must create a new job to generate the video again.
+- Up to two images can be used as input (the generated video interpolates content between them).
+- One video of up to five seconds can be used as input.
 
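As a quick illustration of the limits listed in this hunk, a client-side pre-check might look like the following sketch (the helper names and the check itself are ours, not part of any SDK; the service enforces these limits server-side):

```python
# Illustrative client-side check of the Sora job limits described above.
# These helpers are hypothetical, not part of the Azure OpenAI SDK.

SUPPORTED_RESOLUTIONS = {
    (480, 480), (480, 854), (854, 480),
    (720, 720), (720, 1280), (1280, 720),
    (1080, 1080), (1080, 1920), (1920, 1080),
}

def max_variants(width: int, height: int) -> int:
    # 1080p: multiple variants disabled (one only); 720p: up to two; else four.
    if 1080 in (width, height) or 1920 in (width, height):
        return 1
    if 720 in (width, height) or 1280 in (width, height):
        return 2
    return 4

def validate_job(width: int, height: int, n_seconds: int, n_variants: int) -> None:
    if (width, height) not in SUPPORTED_RESOLUTIONS:
        raise ValueError(f"unsupported resolution {width}x{height}")
    if not 1 <= n_seconds <= 20:
        raise ValueError("duration must be between 1 and 20 seconds")
    if n_variants > max_variants(width, height):
        raise ValueError("too many variants for this resolution")

validate_job(854, 480, 10, 4)  # passes: 854x480 allows up to four variants
```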
 ## Responsible AI

articles/ai-foundry/openai/how-to/reinforcement-fine-tuning.md

Lines changed: 25 additions & 3 deletions

@@ -188,6 +188,18 @@ Models which we're supporting as grader models are:
 
 To use a score model grader, the input is a list of chat messages, each containing a role and content. The output of the grader will be truncated to the given range, and defaults to 0 for all non-numeric outputs.
 
+### Custom Code Grader
+
+The custom code grader allows you to execute arbitrary Python code to grade the model output. The grader expects a `grade` function to be present that takes two arguments and returns a float value. Any other result (an exception, an invalid float value, and so on) is marked as invalid and returns a grade of 0.
+
+```json
+{
+    "type": "python",
+    "source": "def grade(sample, item):\n    return 1.0",
+    "image_tag": "2025-05-08"
+}
+```
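For context, a `source` string like the one above can be built and sanity-checked locally before submitting the job. A minimal sketch follows; the field names `output_text` and `reference_answer` are illustrative assumptions, not documented payload fields:

```python
import json

# Hypothetical grade function source: exact match scores 1.0, otherwise 0.0.
GRADE_SOURCE = (
    "def grade(sample, item) -> float:\n"
    "    return 1.0 if sample.get('output_text') == item.get('reference_answer') else 0.0\n"
)

grader = {
    "type": "python",
    "source": GRADE_SOURCE,
    "image_tag": "2025-05-08",
}

# Sanity-check the function locally before submitting the job.
namespace = {}
exec(GRADE_SOURCE, namespace)
print(namespace["grade"]({"output_text": "42"}, {"reference_answer": "42"}))  # 1.0
print(json.dumps(grader)[:40])
```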
 ### Multi Grader
 
 A multigrader object combines the output of multiple graders to produce a single score.
@@ -272,6 +284,19 @@ Models which we're supporting as grader models are `gpt-4o-2024-08-06` and `o3-mi
 }
 ```
 
+**Custom code grader** - This is a Python code grader where you can use any Python code to grade the training output.
+
+The Python libraries which are supported by the custom code grader are
+
+```json
+{
+    "type": "python",
+    "image_tag": "alpha",
+    "source": "import json\nimport re\n\ndef extract_numbers_from_expression(expression: str):\n    return [int(num) for num in re.findall(r'-?\\d+', expression)]\n\ndef grade(sample, item) -> float:\n    expression_str = sample['output_json']['expression']\n    try:\n        math_expr_eval = eval(expression_str)\n    except Exception:\n        return 0\n    expr_nums_list = extract_numbers_from_expression(expression_str)\n    input_nums_list = [int(x) for x in json.loads(item['nums'])]\n    if sorted(expr_nums_list) != sorted(input_nums_list):\n        return 0\n    sample_result_int = int(sample['output_json']['result'])\n    item_result_int = int(item['target'])\n    if math_expr_eval != sample_result_int:\n        return 1\n    if sample_result_int == item_result_int:\n        return 5\n    if abs(sample_result_int - item_result_int) <= 1:\n        return 4\n    if abs(sample_result_int - item_result_int) <= 5:\n        return 3\n    return 2"
+}
+```
+If you don't want to manually put your grading function in a string, you can also load it from a Python file using `importlib` and `inspect`.
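The `importlib`/`inspect` approach mentioned in the added text could be sketched like this (the grader file name and path are hypothetical):

```python
import importlib.util
import inspect

# Load a grader module from a standalone file (hypothetical path) and
# extract the source of its grade() function for the "source" field.
def load_grader_source(path: str, func_name: str = "grade") -> str:
    spec = importlib.util.spec_from_file_location("my_grader", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return inspect.getsource(getattr(module, func_name))

# Example: write a trivial grader file, then load its source.
with open("my_grader.py", "w") as f:
    f.write("def grade(sample, item) -> float:\n    return 1.0\n")

source = load_grader_source("my_grader.py")
grader = {"type": "python", "source": source, "image_tag": "2025-05-08"}
```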
 **Multi Grader** - A multigrader object combines the output of multiple graders to produce a single score.
 
 ```json
@@ -294,9 +319,6 @@ Models which we're supporting as grader models are `gpt-4o-2024-08-06` and `o3-mi
 }
 ```
 
-> [!Note]
-> : Currently we don’t support `multi` with model grader as a sub grader. `Multi` grader is supported only with `text_Similarity` and `string_check`.
 
 Example of the response format, which is an optional field:
 
 If we need the response for the same puzzles problem used in the training data example, then we can add the response format as shown below, where the fields ‘solution’ and ‘final answer’ are shared in structured outputs.
articles/ai-foundry/openai/includes/video-generation-intro.md

Lines changed: 1 addition & 1 deletion

@@ -7,6 +7,6 @@ ms.topic: include
 ms.date: 5/29/2025
 ---
 
-In this quickstart, you generate video clips using the Azure OpenAI service. The example uses the Sora model, which is a video generation model that creates realistic and imaginative video scenes from text instructions and/or image inputs. This guide shows you how to create a video generation job, poll for its status, and retrieve the generated video.
+In this quickstart, you generate video clips using the Azure OpenAI service. The example uses the Sora model, which is a video generation model that creates realistic and imaginative video scenes from text instructions and/or image or video inputs. This guide shows you how to create a video generation job, poll for its status, and retrieve the generated video.
 
 For more information on video generation, see [Video generation concepts](../concepts/video-generation.md).

articles/ai-foundry/openai/includes/video-generation-rest.md

Lines changed: 86 additions & 0 deletions

@@ -244,6 +244,92 @@ You can generate a video with the Sora model by creating a video generation job,
     else:
         raise Exception(f"Job didn't succeed. Status: {status}")
 ```
+
+## [Video prompt](#tab/video-prompt)
+
+Replace the `"file_name"` field in `"inpaint_items"` with the name of your input video file. Also replace the construction of the `files` array, which associates the path to the actual file with the file name that the API uses.
+
+Use the `"crop_bounds"` data (crop distances from each edge, as a fraction of the total frame dimensions) to specify which part of the video frame should be used in video generation.
+
+You can optionally set `"frame_index"` to the frame in the generated video where your input video should start (the default is 0, the beginning).
+
+```python
+# 1. Create a video generation job with video inpainting (multipart upload)
+create_url = f"{endpoint}/openai/v1/video/generations/jobs?api-version=preview"
+
+# Flatten the body for multipart/form-data
+data = {
+    "prompt": "A serene forest scene transitioning into autumn",
+    "height": str(1080),
+    "width": str(1920),
+    "n_seconds": str(10),
+    "n_variants": str(1),
+    "model": "sora",
+    # inpaint_items must be a JSON string
+    "inpaint_items": json.dumps([
+        {
+            "frame_index": 0,
+            "type": "video",
+            "file_name": "dog_swimming.mp4",
+            "crop_bounds": {
+                "left_fraction": 0.1,
+                "top_fraction": 0.1,
+                "right_fraction": 0.9,
+                "bottom_fraction": 0.9
+            }
+        }
+    ])
+}
+
+# Replace with your own video file path
+with open("dog_swimming.mp4", "rb") as video_file:
+    files = [
+        ("files", ("dog_swimming.mp4", video_file, "video/mp4"))
+    ]
+    multipart_headers = {k: v for k, v in headers.items() if k.lower() != "content-type"}
+    response = requests.post(
+        create_url,
+        headers=multipart_headers,
+        data=data,
+        files=files
+    )
+
+if not response.ok:
+    print("Error response:", response.status_code, response.text)
+    response.raise_for_status()
+print("Full response JSON:", response.json())
+job_id = response.json()["id"]
+print(f"Job created: {job_id}")
+
+# 2. Poll for job status
+status_url = f"{endpoint}/openai/v1/video/generations/jobs/{job_id}?api-version=preview"
+status = None
+while status not in ("succeeded", "failed", "cancelled"):
+    time.sleep(5)
+    status_response = requests.get(status_url, headers=headers).json()
+    status = status_response.get("status")
+    print(f"Job status: {status}")
+
+# 3. Retrieve generated video
+if status == "succeeded":
+    generations = status_response.get("generations", [])
+    if generations:
+        generation_id = generations[0].get("id")
+        video_url = f"{endpoint}/openai/v1/video/generations/{generation_id}/content/video?api-version=preview"
+        video_response = requests.get(video_url, headers=headers)
+        if video_response.ok:
+            output_filename = "output.mp4"
+            with open(output_filename, "wb") as file:
+                file.write(video_response.content)
+            print(f'✅ Generated video saved as "{output_filename}"')
+    else:
+        raise Exception("No generations found in job result.")
+else:
+    raise Exception(f"Job didn't succeed. Status: {status}")
+```
 ---
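The polling loop in the added sample waits indefinitely. A variant with a timeout, sketched here as a standalone helper (ours, not part of the documented sample; `fetch_status` stands in for the `requests.get(...)` call), might look like:

```python
import time

# Illustrative polling helper with a timeout, wrapping the pattern used
# in the sample above.
def poll_until_done(fetch_status, interval: float = 5.0, timeout: float = 600.0) -> str:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("succeeded", "failed", "cancelled"):
            return status
        time.sleep(interval)
    raise TimeoutError("Video generation job did not finish in time")

# Example with a fake status sequence instead of a live API call.
statuses = iter(["queued", "running", "succeeded"])
print(poll_until_done(lambda: next(statuses), interval=0.0))  # succeeded
```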
249335

articles/ai-foundry/openai/whats-new.md

Lines changed: 5 additions & 2 deletions

@@ -18,12 +18,15 @@ ms.custom:
 
 This article provides a summary of the latest releases and major documentation updates for Azure OpenAI.
 
+## Sora video-to-video support
+
+The Sora model from OpenAI now supports video-to-video generation. You can provide a short video as input to generate a new, longer video that incorporates the input video. See the [quickstart](./video-generation-quickstart.md) to get started.
+
 ## August 2025
 
 ### Sora image-to-video support
 
-The Sora model from OpenAI now supports image-to-video generation. You can provide an image as input to the model to generate a video that incorporates the content of the image. You can also specify the frame of the video in which the image should appear: it doesn't need to be the beginning.
+The Sora model from OpenAI now supports image-to-video generation. You can provide an image as input to the model to generate a video that incorporates the content of the image. You can also specify the frame of the video in which the image should appear: it doesn't need to be the beginning. See the [quickstart](./video-generation-quickstart.md) to get started.
 
 Sora is now available in the Sweden Central region as well as East US 2.

articles/machine-learning/how-to-secure-workspace-vnet.md

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@ ms.subservice: enterprise-readiness
 ms.reviewer: None
 ms.author: scottpolly
 author: s-polly
-ms.date: 07/08/2024
+ms.date: 09/10/2025
 ms.topic: how-to
 ms.custom:
 - tracking-python
