Replies: 3 comments
-
imagine if it was combined with meta's AITemplate (#2099)
-
Prompt visualization as we type, lol.
-
The Twitter posts don't do the research justice: the paper says it's also 256x faster, not 256% but 256x! Whether that's true, we can only find out one way.
-
Published a few hours ago and retweeted by Emad: this distillation approach allows diffusion generation with just 5 steps at good quality (look at the 4-step samples on the right):
Paper here: https://arxiv.org/abs/2210.03142
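A rough back-of-envelope on why fewer steps matter (this is a toy sketch, not the paper's method; it assumes sampling cost is dominated by denoiser network evaluations, one per step):

```python
# Toy illustration: sampling cost ~ number of denoiser calls.
# `sample` is a hypothetical stand-in, not a real diffusion sampler.
def sample(steps):
    evals = 0
    x = 1.0
    for _ in range(steps):
        evals += 1   # each step costs one network evaluation
        x *= 0.5     # stand-in for one denoising update
    return x, evals

_, baseline_evals = sample(256)   # typical many-step sampler
_, distilled_evals = sample(5)    # distilled 5-step sampler
print(baseline_evals / distilled_evals)  # 51.2x fewer calls
```

So cutting 256 steps down to 5 alone gives ~51x fewer network calls; the paper's larger speedup claims would come from further distillation tricks on top of that.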