Commit 9370234

[feat] Add gradio local inference demo (#847)
Co-authored-by: ainsley <[email protected]>
1 parent 50da62e commit 9370234

22 files changed: 933 additions & 47 deletions

.github/workflows/pr-test.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -352,7 +352,7 @@ jobs:
       volume_size: 100
       disk_size: 100
       image: "ghcr.io/${{ github.repository }}/fastvideo-dev:py3.12-latest"
-      test_command: "uv pip install -e .[test] && pytest ./fastvideo/dataset/ -vs && pytest ./fastvideo/workflow/ -vs"
+      test_command: "uv pip install -e .[test] && pytest ./fastvideo/dataset/ -vs && pytest ./fastvideo/workflow/ -vs && pytest ./fastvideo/entrypoints/ -vs"
       timeout_minutes: 30
     secrets:
       RUNPOD_API_KEY: ${{ secrets.RUNPOD_API_KEY }}
```

assets/full.svg

Lines changed: 18 additions & 0 deletions

assets/icon-simple.svg

Lines changed: 6 additions & 0 deletions
Lines changed: 56 additions & 0 deletions
@@ -0,0 +1,56 @@
# FastVideo Gradio Local Demo

This is a Gradio-based web interface for generating videos using the FastVideo framework. The demo allows users to create videos from text prompts with various customization options.

## Overview

The demo uses the FastVideo framework to generate videos based on text prompts. It provides a simple web interface built with Gradio that allows users to:

- Enter text prompts to generate videos
- Customize video parameters (dimensions, number of frames, etc.)
- Use negative prompts to guide the generation process
- Set or randomize seeds for reproducibility

---

## Usage

Run the demo with:

```bash
python examples/inference/gradio/local/gradio_local_demo.py
```

This will start a web server at `http://0.0.0.0:7860` where you can access the interface.

---

## Model Initialization

The demo initializes a `VideoGenerator` with the minimum arguments required for inference. Users can seamlessly adjust inference options between generations, including prompts, resolution, and video length, *without ever needing to reload the model*.
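A minimal sketch of that one-time initialization, assuming the `VideoGenerator.from_pretrained` entry point; the model path and GPU count below are illustrative placeholders, not the demo's exact configuration:

```python
# Sketch: load the model once at startup so later generations reuse it.
# The model path and num_gpus value are illustrative, not the demo's defaults.
from fastvideo import VideoGenerator

MODEL_PATH = "FastVideo/FastHunyuan-diffusers"  # hypothetical example checkpoint

generator = VideoGenerator.from_pretrained(
    MODEL_PATH,
    num_gpus=1,  # single local GPU
)
```

Because the generator is created once, only the sampling options change between requests and the model weights stay resident on the GPU.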
## Video Generation

The core functionality is in the `generate_video` function, which:

1. Processes user inputs
2. Uses the FastVideo `VideoGenerator` initialized above to run inference via `generator.generate_video()`, as sketched below
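A sketch of what that callback might look like. The keyword arguments passed to `generator.generate_video()` (`height`, `width`, `num_frames`, `guidance_scale`, `negative_prompt`, `seed`, `output_path`, `save_video`) are assumptions about the sampling parameters, not the demo's exact signature:

```python
import os
import random

def generate_video(prompt, negative_prompt, height, width, num_frames,
                   guidance_scale, seed, randomize_seed, output_dir="outputs"):
    """Gradio callback: process inputs, run inference, return a video path."""
    # Draw a fresh seed when the user asked for a random one.
    if randomize_seed:
        seed = random.randint(0, 2**31 - 1)

    os.makedirs(output_dir, exist_ok=True)

    # Run inference with the generator created at startup (see the sketch above).
    # Keyword names are assumed sampling parameters, not the demo's exact API.
    generator.generate_video(
        prompt,
        negative_prompt=negative_prompt,
        height=int(height),
        width=int(width),
        num_frames=int(num_frames),
        guidance_scale=float(guidance_scale),
        seed=int(seed),
        output_path=output_dir,
        save_video=True,
    )

    # Hand the most recently written file back to Gradio for display.
    videos = sorted(
        (os.path.join(output_dir, f) for f in os.listdir(output_dir)),
        key=os.path.getmtime,
    )
    return videos[-1]
```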
## Gradio Interface

The interface is built with several components (wired together as sketched below):

- A text input for the prompt
- A video display for the result
- Inference options in a collapsible accordion:
  - Height and width sliders
  - Number of frames slider
  - Guidance scale slider
  - Negative prompt options
  - Seed controls
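A condensed sketch of how these components might be wired together with Gradio Blocks; widget defaults, ranges, and labels here are illustrative rather than the demo's exact layout:

```python
import gradio as gr

with gr.Blocks(title="FastVideo Gradio Local Demo") as demo:
    prompt = gr.Textbox(label="Prompt", lines=2)
    video = gr.Video(label="Generated video")

    with gr.Accordion("Inference options", open=False):
        height = gr.Slider(256, 1024, value=720, step=16, label="Height")
        width = gr.Slider(256, 1280, value=1280, step=16, label="Width")
        num_frames = gr.Slider(16, 163, value=81, step=1, label="Number of frames")
        guidance_scale = gr.Slider(1.0, 12.0, value=6.0, step=0.5, label="Guidance scale")
        negative_prompt = gr.Textbox(label="Negative prompt")
        seed = gr.Number(value=42, precision=0, label="Seed")
        randomize_seed = gr.Checkbox(value=True, label="Randomize seed")

    generate_btn = gr.Button("Generate")
    generate_btn.click(
        fn=generate_video,  # the callback sketched in the previous section
        inputs=[prompt, negative_prompt, height, width, num_frames,
                guidance_scale, seed, randomize_seed],
        outputs=video,
    )

# Matches the address mentioned in Usage: http://0.0.0.0:7860
demo.launch(server_name="0.0.0.0", server_port=7860)
```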
### Inference Options

- **Height/Width**: Control the resolution of the generated video
- **Number of Frames**: Set how many frames to generate
- **Guidance Scale**: Control how closely the generation follows the prompt
- **Negative Prompt**: Specify what you don't want to see in the video
- **Seed**: Control randomness for reproducible results
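For reference, the same options could be passed straight to `generator.generate_video()` in a plain script to reproduce a result from the UI; as above, the keyword names are assumptions rather than the demo's exact API:

```python
# Reproducing a UI result outside Gradio (assumed keyword names).
generator.generate_video(
    "A corgi surfing a small wave at sunset",
    negative_prompt="blurry, low quality",
    height=720,
    width=1280,
    num_frames=81,
    guidance_scale=6.0,
    seed=1024,            # fixed seed -> reproducible output
    output_path="outputs/",
    save_video=True,
)
```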
