Replies: 1 comment
-
There's an open PR for that: #134. It's far from finished, and we're targeting LLMs with it (so multi-node/multi-process), but it could fit the use case you're mentioning!
-
I'm trying to convert a 7 GB pickle file to safetensors on a Colab notebook with 12 GB of memory. Initially it took ~8 GB of memory on `torch.load`, and there was another spike when running `safetensors.torch.save_file`. Eventually the notebook crashed. Is it possible to avoid the memory spikes by streaming the tensors to disk?
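A minimal sketch of one way to keep the peak down, separate from the PR mentioned above: it assumes PyTorch 2.1+ (where `torch.load` accepts `mmap=True`) and a checkpoint saved in the default zipfile format; the file names are placeholders.

```python
import torch
from safetensors.torch import save_file

# Hypothetical paths for illustration.
src = "pytorch_model.bin"
dst = "model.safetensors"

# mmap=True (PyTorch >= 2.1, zipfile-format checkpoints only) maps the
# tensor storages from disk instead of materializing the whole state dict
# in RAM; weights_only=True avoids executing arbitrary pickle code.
state_dict = torch.load(src, map_location="cpu", mmap=True, weights_only=True)

# save_file reads each memory-mapped tensor as it serializes it, so the
# resident memory stays well below the checkpoint size. Caveat: save_file
# rejects tensors that share storage (e.g. tied weights); those would need
# to be cloned or deduplicated first.
save_file(state_dict, dst)
```

Whether this stays within Colab's 12 GB depends on the checkpoint format and how aggressively the OS evicts the mapped pages, so treat it as a starting point rather than a guarantee.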