Transpose the code for more efficient VRAM and faster inference? #57
remybonnav started this conversation in Ideas
Replies: 1 comment
I was wondering if it would be possible for someone to port the code to xformers/JAX/Transformers/diffusers in order to reduce the VRAM required and to speed up inference (as is done in other generative models such as Stable Diffusion)?
Maybe some people from https://github.com/OpenBioML could help?

Reply:
I think I wrote too fast; I did not know that SE3 Transformers is actually already a huge optimization and even provides a low-memory mode.
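As a side note for anyone exploring this idea: the main VRAM saving in xformers-style memory-efficient attention comes from never materializing the full (queries × keys) score matrix at once. A minimal NumPy sketch of that principle (illustrative only, not this repository's code or the actual xformers kernel, which also fuses the softmax):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_naive(q, k, v):
    # Materializes the full (n_q, n_k) score matrix at once,
    # which dominates peak memory for long sequences.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def attention_chunked(q, k, v, chunk=32):
    # Processes queries in chunks so peak memory for the scores
    # is (chunk, n_k) instead of (n_q, n_k). The result is
    # mathematically identical to the naive version.
    outs = []
    for i in range(0, q.shape[0], chunk):
        qi = q[i:i + chunk]
        scores = qi @ k.T / np.sqrt(q.shape[-1])
        outs.append(softmax(scores) @ v)
    return np.concatenate(outs, axis=0)
```

Libraries like xformers implement this chunking (plus an online softmax) in fused GPU kernels, which is why they cut VRAM without changing the model's outputs.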