44 changes: 41 additions & 3 deletions docs/hub/spaces-zerogpu.md
@@ -33,9 +33,24 @@ ZeroGPU Spaces are designed to be compatible with most PyTorch-based GPU Spaces.

### Supported Versions

- Gradio: 4+
- PyTorch: 2.1.2, 2.2.2, 2.4.0, 2.5.1 (Note: 2.3.x is not supported due to a [PyTorch bug](https://github.com/pytorch/pytorch/issues/122085))
- Python: 3.10.13
- **Gradio**: 4+
- **PyTorch**: Almost all versions from **2.1.0** to the **latest** release are supported
<details>
<summary>See full list</summary>

- 2.1.0
- 2.1.1
- 2.1.2
- 2.2.0
- 2.2.2
- 2.4.0
- 2.5.1
- 2.6.0
- 2.7.1
- 2.8.0

</details>
- **Python**: 3.10.13

## Getting started with ZeroGPU

@@ -81,6 +96,29 @@ def generate(prompt):

This sets the maximum function runtime to 120 seconds. Specifying shorter durations for quicker functions improves queue priority for Space visitors.
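
For instance, a lightweight variant of the function above could declare a much shorter window. This is a hedged sketch: the 30-second value, the `generate_preview` name, and the `num_inference_steps=4` setting are illustrative, and `pipe` is assumed to be the pipeline defined earlier in this guide:

```python
import spaces

@spaces.GPU(duration=30)  # short window: quick tasks get better queue priority
def generate_preview(prompt):
    # `pipe` is assumed to be defined as in the earlier example
    return pipe(prompt, num_inference_steps=4).images
```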

### Dynamic duration

`@spaces.GPU` also supports dynamic durations.

Instead of passing a fixed duration, pass a callable that takes the same arguments as your decorated function and returns a duration in seconds:

```python
def get_duration(prompt, steps):
    # Estimate the runtime from the number of inference steps
    step_duration = 3.75
    return steps * step_duration

@spaces.GPU(duration=get_duration)
def generate(prompt, steps):
    return pipe(prompt, num_inference_steps=steps).images
```


## Compilation

ZeroGPU does not support `torch.compile`, but you can use PyTorch **ahead-of-time (AoT)** compilation instead (requires torch `2.8+`).

Check out this [blog post](https://huggingface.co/blog/zerogpu-aoti) for a complete guide to compilation on ZeroGPU.
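
As a rough illustration of what AoT compilation looks like in plain PyTorch (not the ZeroGPU-specific helpers covered in the blog post), here is a minimal sketch using `torch.export` and AOTInductor; the `TinyModel` module and input shapes are made up for the example:

```python
# Minimal, generic PyTorch AoT sketch (torch >= 2.8 assumed); model and shapes are illustrative.
import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x) * 2

model = TinyModel().eval()
example_inputs = (torch.randn(1, 16),)

# Export the model graph, then compile it ahead of time with AOTInductor
exported = torch.export.export(model, example_inputs)
package_path = torch._inductor.aoti_compile_and_package(exported)

# Load the compiled artifact and run it without torch.compile at runtime
compiled = torch._inductor.aoti_load_package(package_path)
output = compiled(*example_inputs)
```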

## Hosting Limitations

- **Personal accounts ([PRO subscribers](https://huggingface.co/subscribe/pro))**: Maximum of 10 ZeroGPU Spaces.