x-stable-diffusion port for massive speed up? (TensorRT, nvFuser etc.) #4161
Replies: 4 comments 3 replies
-
The question is whether it can be combined with xformers. If it is so much faster, we might ditch xformers entirely. There are also open questions about image quality and animation capabilities.
-
The TensorRT kernels are the main speed booster; I have seen them used in GPT models, and it is great that they can be used here as well: https://colab.research.google.com/drive/1mT9CzFgZCCrakC0N-GkmvDgSkIbE2cWq#scrollTo=SaJmSjD0VXgu The approach transforms the model to ONNX, then to TensorRT. https://www.photoroom.com/tech/stable-diffusion-25-percent-faster-and-save-seconds/
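The two-step flow mentioned above (export the model to ONNX, then compile the ONNX graph into a TensorRT engine) can be sketched as below. This is a hedged illustration, not the linked repo's actual code: the file names `unet.onnx` and `unet.plan` and the helper `trtexec_command` are assumptions for the example; `trtexec` is the CLI that ships with TensorRT.

```python
# Sketch of the ONNX -> TensorRT conversion flow described above.
# File names (unet.onnx, unet.plan) are illustrative assumptions,
# not paths from the x-stable-diffusion repo.

def trtexec_command(onnx_path, engine_path, fp16=True):
    """Build the trtexec invocation that compiles an exported ONNX
    graph into a serialized TensorRT engine."""
    cmd = ["trtexec", f"--onnx={onnx_path}", f"--saveEngine={engine_path}"]
    if fp16:
        # Half precision is a large part of the reported speedup.
        cmd.append("--fp16")
    return cmd

if __name__ == "__main__":
    # Step 1 (done elsewhere, e.g. with torch.onnx.export): produce unet.onnx.
    # Step 2: compile it with TensorRT's bundled CLI.
    print(" ".join(trtexec_command("unet.onnx", "unet.plan")))
```

At inference time the serialized engine is loaded and executed with the TensorRT runtime instead of the original PyTorch module.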
-
Maybe one of the devs will take a look at it :)
-
I suppose this is relevant.
-
I came across this implementation of SD that applies many optimizations, most of which I think can be ported and (optionally) enabled by the user. The speedup shown is very significant.
https://github.com/stochasticai/x-stable-diffusion
Is anyone familiar with this repo and has an idea of how easy or hard it would be to port over these optimizations? I can work on them in December if it is not excessively hard.