A Simple Design for Serving Video Generation Models with Distributed Inference — ROCm Blogs #145
Replies: 2 comments 4 replies
Beautiful architecture, but I don't think it's very practical, as it takes a long time to instantiate.
3 replies
Wow, this blog post on serving video generation models is incredibly insightful! I'm curious about how the distributed inference setup scales with different workloads. Have you run any benchmarks that highlight its efficiency? Thanks for sharing such valuable information! Sprunki
1 reply
A Simple Design for Serving Video Generation Models with Distributed Inference — ROCm Blogs
Minimalist FastAPI + Redis + Torchrun design for serving video generation models with distributed inference.
https://rocm.blogs.amd.com/artificial-intelligence/serving-videogen-v1/README.html
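
For context, the linked post describes a queue-based layout: presumably a FastAPI frontend that enqueues generation requests into Redis, and torchrun-launched model workers that consume them. The two sketches below illustrate that kind of layout under those assumptions; they are not the blog's code, and the endpoint paths, the `videogen:jobs` / `videogen:result:*` Redis keys, and the `load_video_pipeline` placeholder are all hypothetical.

```python
# api.py — hypothetical FastAPI frontend that only enqueues work (not the blog's code).
import json
import uuid

import redis
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

class GenRequest(BaseModel):
    prompt: str
    num_frames: int = 16

@app.post("/generate")
def generate(req: GenRequest) -> dict:
    job_id = str(uuid.uuid4())
    # Enqueue the request; the distributed workers pop jobs from this list.
    r.rpush("videogen:jobs", json.dumps({"id": job_id, **req.model_dump()}))
    return {"job_id": job_id, "status": "queued"}

@app.get("/result/{job_id}")
def result(job_id: str) -> dict:
    # Workers write results back under a per-job key when they finish.
    payload = r.get(f"videogen:result:{job_id}")
    return json.loads(payload) if payload else {"job_id": job_id, "status": "pending"}
```

```python
# worker.py — hypothetical worker loop, launched as:
#   torchrun --nproc_per_node=<num_gpus> worker.py
import json
import os

import redis
import torch
import torch.distributed as dist

def worker_main() -> None:
    dist.init_process_group(backend="nccl")          # one process per GPU
    rank = dist.get_rank()
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    q = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # The model is instantiated once at worker startup and reused for every
    # request; only the generation step runs per job.
    # pipe = load_video_pipeline(device=torch.cuda.current_device())  # placeholder

    while True:
        job = None
        if rank == 0:
            # Rank 0 blocks on the Redis queue, then shares the job with the
            # other ranks so they all run the same distributed forward pass.
            _, raw = q.blpop("videogen:jobs")
            job = json.loads(raw)
        holder = [job]
        dist.broadcast_object_list(holder, src=0)
        job = holder[0]

        # frames = pipe(job["prompt"], num_frames=job["num_frames"])  # placeholder
        if rank == 0:
            q.set(f"videogen:result:{job['id']}",
                  json.dumps({"job_id": job["id"], "status": "done"}))

if __name__ == "__main__":
    worker_main()
```

A loop like this pays the model-loading cost once per worker process rather than per request, which may be relevant to the instantiation concern raised above; whether the blog's actual code works this way, and how it scales with concurrent workloads, is exactly what the questions in this thread ask.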