# Video generation with Sora (preview)
Sora is an AI model from OpenAI that creates realistic and imaginative video scenes from text instructions, input images, or video. The model can generate a wide range of video content, including realistic scenes, animations, and special effects. It supports several video resolutions and durations.
Azure OpenAI supports two versions of Sora:
- Sora (or Sora 1): Azure OpenAI–specific implementation released as an API in early preview.
- Sora 2: The latest OpenAI-based API, now being adapted for Azure OpenAI.
## Overview
- Modalities: text → video, image → video, video (generated) → video
- Audio: Sora 2 supports audio generation in output videos (similar to the Sora app).
- Remix: Sora 2 introduces the ability to remix existing videos by making targeted adjustments instead of regenerating from scratch.
The following table compares the two versions:

| Feature | Sora 1 | Sora 2 |
|---------|--------|--------|
|**Model type**| Azure-specific API implementation | Adapts OpenAI's latest Sora API |
|**Availability**| Available exclusively on Azure OpenAI (preview) | Rolling out on Azure; **Sora 2 Pro** coming later |
|**Modalities supported**| text → video, image → video, video → video | text → video, image → video, **video (generated) → video** |
|**Audio generation**| ❌ Not supported | ✅ Supported in outputs |
|**Remix capability**| ❌ Not supported | ✅ Supported: make targeted edits to existing videos |
|**API behavior**| Uses Azure-specific API schema | Aligns with OpenAI's native Sora 2 schema |
|**Performance & fidelity**| Early preview; limited realism and motion range | Enhanced realism, physics, and temporal consistency |
|**Intended use**| Enterprise preview deployments | Broader developer availability with improved API parity |
## Sora 2 API
The Sora 2 API provides five endpoints, each with distinct capabilities:

- Create Video: Start a new render job from a prompt, with optional reference inputs or a remix ID.
- Get Video Status: Retrieve the current state of a render job and monitor its progress.
- Download Video: Fetch the finished MP4 once the job is completed.
- List Videos: Enumerate your videos with pagination for history, dashboards, or housekeeping.
- Delete Videos: Delete an individual video ID from Azure OpenAI's storage.
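The create, poll, and download flow across these endpoints can be sketched as follows. This is a minimal illustration rather than an official SDK sample: the URL path, the `api-key` header, and the JSON field names are assumptions modeled on OpenAI's video API, so verify them against the Azure OpenAI reference.

```python
# Minimal sketch of the Create Video -> Get Video Status flow described
# above. The URL path, "api-key" header, and JSON field names are
# assumptions; verify them against the Azure OpenAI reference.
import json
import time
import urllib.request

ENDPOINT = "https://YOUR-RESOURCE.openai.azure.com"  # your resource URL
API_KEY = "YOUR-API-KEY"

def videos_url(path=""):
    # Assumed route for the video endpoints.
    return f"{ENDPOINT}/openai/v1/videos{path}"

def _call(url, payload=None):
    # POST when a payload is given, otherwise GET.
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(
        url,
        data=data,
        headers={"api-key": API_KEY, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def create_video(prompt, deployment="sora2-test"):
    # On Azure OpenAI, "model" carries the deployment name.
    return _call(videos_url(), {"model": deployment, "prompt": prompt})["id"]

def wait_for_video(video_id, poll_seconds=10):
    # Poll Get Video Status until the render job finishes.
    while True:
        job = _call(videos_url(f"/{video_id}"))
        if job.get("status") in ("completed", "failed"):
            return job
        time.sleep(poll_seconds)
```

Once the job reports a completed status, the Download Video endpoint returns the finished MP4.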
### API parameters
| Parameter | Type | Description |
|-----------|------|-------------|
|**Prompt**| String (required) | Natural-language description of the shot. Include shot type, subject, action, setting, lighting, and any desired camera motion to reduce ambiguity. Keep it *single-purpose* for best adherence. |
|**Input reference**| File (optional) | Single reference image used as a visual anchor for the first frame. <br> Accepted MIME types: `image/jpeg`, `image/png`, `image/webp`. The image dimensions must exactly match the requested video size. |
|**Remix_video_id**| String (optional) | ID of a previously completed video (for example, `video_...`) to reuse its structure, motion, and framing. |
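As an illustration, a request payload combining these parameters might look like the following. The field names mirror OpenAI's video API and are assumptions for Azure OpenAI; the prompt follows the single-purpose guidance above.

```python
# Hypothetical request body combining the parameters above; field names
# are assumptions based on OpenAI's video API, not a confirmed schema.
payload = {
    # Deployment name, not the underlying model name, on Azure OpenAI.
    "model": "sora2-test",
    # Single-purpose prompt: shot type, subject, action, setting,
    # lighting, and camera motion.
    "prompt": (
        "Wide shot: a red kite rising over a foggy coastal cliff at dawn, "
        "soft golden light, slow upward camera tilt"
    ),
    # Optional: ID of a completed video to remix (reuses structure,
    # motion, and framing).
    "remix_video_id": "video_...",
}
```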
The API is the same as the [OpenAI video generation API](https://platform.openai.com/docs/guides/video-generation), with the following exception:

- In the Azure OpenAI API, you replace the model name with the name of your deployment, for example `sora2-test`.
## Supported features
Sora can generate complex scenes with multiple characters, diverse motions, and detailed backgrounds.
**Text to video**: The model interprets prompts with contextual and physical world understanding, enabling accurate scene composition and character persistence across multiple shots. Sora demonstrates strong language comprehension for prompt interpretation and emotional character generation.
**Image to video**: Sora generates video content from a still image. You can specify where in the generated video the image appears (it doesn't need to be the first frame) and which region of the image to use.
**Video to video**: Sora generates new video content from an existing video clip. You can specify where in the generated video the input video appears (it doesn't need to be the beginning).
## How it works
Write text prompts in English or other Latin script languages for the best video generation results.
Sora might have difficulty with complex physics, causal relationships (for example, bite marks on a cookie), spatial reasoning (for example, knowing left from right), and precise time-based event sequencing such as camera movement.
### Sora 2 technical limitations
- See the Sora 2 API details above.
- Jobs are available for up to 24 hours after they're created. After that, you must create a new job to generate the video again.
- You can have two video creation jobs running at the same time. You must wait for one of the jobs to finish before you can create another.
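Because only two render jobs can run at once, a client that batches many prompts needs to throttle submissions. A minimal sketch, assuming `list_jobs` and `create_job` wrap the List Videos and Create Video endpoints and that in-flight jobs report an assumed `queued` or `in_progress` status:

```python
# Sketch: respect the two-concurrent-jobs limit by checking active jobs
# before submitting. The status values are assumptions.
import time

MAX_CONCURRENT_JOBS = 2  # limit described above

def count_active(jobs):
    # Count jobs that are still rendering.
    return sum(1 for j in jobs if j.get("status") in ("queued", "in_progress"))

def submit_when_free(list_jobs, create_job, prompt, poll_seconds=15):
    # list_jobs()        -> list of job dicts (List Videos endpoint)
    # create_job(prompt) -> new job ID (Create Video endpoint)
    while count_active(list_jobs()) >= MAX_CONCURRENT_JOBS:
        time.sleep(poll_seconds)
    return create_job(prompt)
```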
### Sora 1 technical limitations

Sora 1 has some technical limitations to be aware of:
- Sora supports the following output resolution dimensions: