`articles/ai-foundry/openai/concepts/video-generation.md` (5 additions, 3 deletions)
@@ -11,7 +11,7 @@ ms.date: 5/29/2025
 # Sora video generation (preview)
-Sora is an AI model from OpenAI that can create realistic and imaginative video scenes from text instructions. The model is capable of generating a wide range of video content, including realistic scenes, animations, and special effects. Several video resolutions and durations are supported.
+Sora is an AI model from OpenAI that can create realistic and imaginative video scenes from text instructions and/or input images or video. The model is capable of generating a wide range of video content, including realistic scenes, animations, and special effects. Several video resolutions and durations are supported.
 ## Supported features
@@ -21,7 +21,7 @@ Sora can generate complex scenes with multiple characters, diverse motions, and
 **Image to video**: Sora can generate video content from a still image. You can specify where in the generated video the image appears (it doesn't need to be the first frame) and which region of the image to use.
-
+**Video to video**: Sora can generate new video content from an existing video clip. You can specify where in the generated video the input video appears (it doesn't need to be the beginning).
 ## How it works
@@ -44,10 +44,12 @@ Sora has some technical limitations to be aware of:
 - Sora supports the following output resolution dimensions:
-- Sora supports video durations between 1 and 20 seconds.
+- Sora can produce videos between 1 and 20 seconds long.
 - You can request multiple video variants in a single job: for 1080p resolutions, this feature is disabled; for 720p, the maximum is two variants; for other resolutions, the maximum is four variants.
 - You can have two video creation jobs running at the same time. You must wait for one of the jobs to finish before you can create another.
 - Jobs are available for up to 24 hours after they're created. After that, you must create a new job to generate the video again.
+- Up to two images can be used as input (the generated video interpolates content between them).
+- One video up to five seconds can be used as input.
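The duration and variant limits in this hunk can be captured in a small client-side check. This is a sketch only: the limits come from the list above, while the function names and the pixel thresholds used to classify 1080p/720p are our assumptions (the article lists limits by named resolution only).

```python
def max_variants(width: int, height: int) -> int:
    """Maximum video variants per job for a resolution, per the documented limits.

    Assumption: resolutions are classified by the longer edge
    (1080p = 1920x1080, 720p = 1280x720).
    """
    longest = max(width, height)
    if longest >= 1920:
        return 1  # 1080p: requesting multiple variants is disabled
    if longest >= 1280:
        return 2  # 720p: at most two variants
    return 4      # other resolutions: at most four variants


def check_job(width: int, height: int, n_seconds: int, n_variants: int) -> None:
    """Raise ValueError if a request would exceed the documented limits."""
    if not 1 <= n_seconds <= 20:
        raise ValueError("Sora videos must be between 1 and 20 seconds long")
    limit = max_variants(width, height)
    if n_variants > limit:
        raise ValueError(f"at most {limit} variant(s) at {width}x{height}")
```

Validating locally like this avoids submitting a job that the service would reject, which matters given the two-concurrent-jobs limit noted above.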
`articles/ai-foundry/openai/includes/video-generation-intro.md` (1 addition, 1 deletion)
@@ -7,6 +7,6 @@ ms.topic: include
 ms.date: 5/29/2025
 ---
-In this quickstart, you generate video clips using the Azure OpenAI service. The example uses the Sora model, which is a video generation model that creates realistic and imaginative video scenes from text instructions and/or image inputs. This guide shows you how to create a video generation job, poll for its status, and retrieve the generated video.
+In this quickstart, you generate video clips using the Azure OpenAI service. The example uses the Sora model, which is a video generation model that creates realistic and imaginative video scenes from text instructions and/or image or video inputs. This guide shows you how to create a video generation job, poll for its status, and retrieve the generated video.
 For more information on video generation, see [Video generation concepts](../concepts/video-generation.md).
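The create/poll/retrieve flow the intro describes can be sketched as a payload builder plus a generic polling loop. This is illustrative only: the field names and status strings below are assumptions, not the documented API schema, and the network calls are left to a caller-supplied function; see the quickstart for the real endpoints.

```python
import time


def build_job_payload(prompt: str, width: int = 480, height: int = 480,
                      n_seconds: int = 5, n_variants: int = 1) -> dict:
    # Request body for a video generation job. These field names are
    # assumptions for illustration; check the quickstart for the exact schema.
    return {"prompt": prompt, "width": width, "height": height,
            "n_seconds": n_seconds, "n_variants": n_variants}


def poll_until_done(get_status, job_id: str,
                    interval: float = 5.0, timeout: float = 600.0) -> str:
    # Poll get_status(job_id) until the job reaches a terminal state.
    # get_status is any callable returning the job's current status string
    # (terminal state names here are assumed).
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)
        if status in ("succeeded", "failed", "cancelled"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

Once the job reports success, a separate request downloads the generated video content, as the quickstart walks through.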
+Replace the `"file_name"` field in `"inpaint_items"` with the name of your input video file. Also replace the construction of the `files` array, which associates the path to the actual file with the filename that the API uses.
+
+Use the `"crop_bounds"` data (image crop distances, from each direction, as a fraction of the total frame dimensions) to specify which part of the video frame should be used in video generation.
+
+You can optionally set the `"frame_index"` to the frame in the generated video where your input video should start (the default is 0, the beginning).
+
+```python
+# 1. Create a video generation job with video inpainting (multipart upload)
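The sample that hunk begins is cut off in this diff, so as a sketch under assumptions, the `inpaint_items`, `crop_bounds`, and `frame_index` fields it describes might be assembled like this. The helper names, the list shape of `crop_bounds`, and the multipart tuple layout are our assumptions; only the field names and their meanings come from the text above.

```python
import os


def build_inpaint_item(file_name: str, crop_bounds=None,
                       frame_index: int = 0) -> dict:
    # One inpaint item: which uploaded file to use, which part of its frame
    # to keep (crop distances from each edge as fractions of the frame
    # dimensions), and at which frame of the generated video it starts
    # (0 = the beginning).
    item = {"file_name": file_name, "frame_index": frame_index}
    if crop_bounds is not None:
        item["crop_bounds"] = crop_bounds
    return item


def build_files(paths):
    # Multipart entries pairing each on-disk path with the file name that
    # the API references from inpaint_items. The tuple shape matches the
    # `files` argument of common HTTP clients such as requests (assumed).
    return [("files", (os.path.basename(p), open(p, "rb"), "video/mp4"))
            for p in paths]
```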
`articles/ai-foundry/openai/whats-new.md` (5 additions, 2 deletions)
@@ -18,12 +18,15 @@ ms.custom:
 This article provides a summary of the latest releases and major documentation updates for Azure OpenAI.
+## Sora video-to-video support
+
+The Sora model from OpenAI now supports video-to-video generation. You can provide a short video as input to generate a new, longer video that incorporates the input video. See the [quickstart](./video-generation-quickstart.md) to get started.
+
 ## August 2025
 ### Sora image-to-video support
-The Sora model from OpenAI now supports image-to-video generation. You can provide an image as input to the model to generate a video that incorporates the content of the image. You can also specify the frame of the video in which the image should appear: it doesn't need to be the beginning.
-
+The Sora model from OpenAI now supports image-to-video generation. You can provide an image as input to the model to generate a video that incorporates the content of the image. You can also specify the frame of the video in which the image should appear: it doesn't need to be the beginning. See the [quickstart](./video-generation-quickstart.md) to get started.
 Sora is now available in the Sweden Central region as well as East US 2.