Replies: 2 comments 4 replies
-
What are the VRAM requirements? Do they stay the same? I saw that they only offer txt2img via TensorRT. It's still a very nice boost, but I really want to use it for animation, which requires img2img. It should be possible, though. I believe inpainting, masks, etc. all need to be re-implemented for TensorRT because the model changed a lot.
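For reference, here is a minimal sketch of what img2img looks like with the plain Hugging Face diffusers pipeline (not the TensorRT demo; the model ID, frame file names, and strength value are placeholders). The main difference from txt2img is that the initial latents come from an encoded, partially noised input frame, so in principle the same accelerated UNet could serve both paths once the surrounding pipeline is ported:

```python
# Sketch with the standard diffusers img2img pipeline (assumed model ID,
# file names, and strength). img2img differs from txt2img mainly in how
# the initial latents are prepared from the input image.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("frame_000.png").convert("RGB").resize((512, 512))
result = pipe(
    prompt="an animated character walking, studio lighting",
    image=init_image,        # encoded to latents, then partially noised
    strength=0.5,            # how far the result may drift from the frame
    num_inference_steps=25,
).images[0]
result.save("frame_000_out.png")
```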
-
Took ages to get running on Windows; needing a precompiled version of TensorRT was a pain. The normal release doesn't work, and building it yourself is not a task for everyone. It wants CUDA 11.8 but tries to grab the lowest possible version, leading to errors. In the end, for standard 1.5-based models it ran at 24 it/s, which is double what Automatic1111 gives me by default, albeit at 512x512 and using the whole card (RTX 3070). It absolutely crapped its pants when I threw Anything v3.0 at it: it couldn't run it, or even compile it. The premade version also only runs on 40-series cards and above. It's good for generating a bunch of classification images, but the lack of dynamic shapes and the high cost in every other area make it off-putting.
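If anyone else hits the CUDA mismatch, a quick sanity check I'd suggest (my own snippet, not part of the TensorRT demo) is to confirm that the installed TensorRT wheel and the CUDA runtime PyTorch was built against actually line up before attempting an engine build:

```python
# My own sanity check, not part of the demo: verify the TensorRT wheel and
# the CUDA runtime PyTorch reports actually match before building engines.
import torch
import tensorrt as trt

print("TensorRT:", trt.__version__)
print("CUDA (torch build):", torch.version.cuda)   # expect 11.8 here
print("GPU:", torch.cuda.get_device_name(0))
```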
-
About 2-3 days ago there was a Reddit post about a "Stable Diffusion Accelerated" API which uses TensorRT.
Today I actually got VoltaML working with TensorRT, and for a 512x512 image at 25 steps I got:
[I] Running StableDiffusion pipeline
100%|█████████| 25/25 [00:00<00:00, 88.71it/s]
|------------|--------------|
| Module | Latency |
|------------|--------------|
| CLIP | 2.33 ms |
| UNet x 25 | 299.25 ms |
| VAE | 13.31 ms |
|------------|--------------|
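Back-of-envelope math on those numbers (my own arithmetic, not VoltaML output): the it/s on the progress bar is dominated by the UNet loop, with CLIP and the VAE adding only a few extra milliseconds:

```python
# Back-of-envelope only (not VoltaML output): relate the per-module latencies
# above to the it/s shown on the progress bar.
clip_ms = 2.33
unet_total_ms = 299.25   # 25 denoising steps
vae_ms = 13.31
steps = 25

per_step_ms = unet_total_ms / steps            # ~11.97 ms per UNet step
unet_its = 1000.0 / per_step_ms                # ~84 it/s from the UNet alone
total_ms = clip_ms + unet_total_ms + vae_ms    # ~314.9 ms end to end
print(f"{per_step_ms:.2f} ms/step, ~{unet_its:.1f} it/s, {total_ms:.1f} ms total")
```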