Replies: 1 comment 3 replies
- Does the memory usage increase further when you start a new recognition? If not, then it is expected.
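One way to answer this question is to log GPU memory usage between recognitions. The sketch below (an assumption, not part of the thread) shells out to `nvidia-smi`, which requires an NVIDIA GPU and driver; the parsing helper `gpu_memory_used_mib` is a hypothetical name introduced here for illustration.

```python
import subprocess


def gpu_memory_used_mib(raw: str) -> list[int]:
    """Parse the output of
    `nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits`
    into a list of per-GPU used-memory values in MiB."""
    return [int(line.strip()) for line in raw.splitlines() if line.strip()]


def query_gpu_memory() -> list[int]:
    # Requires nvidia-smi on PATH; returns used memory (MiB) per GPU.
    raw = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return gpu_memory_used_mib(raw)
```

Calling `query_gpu_memory()` before and after each recognition shows whether the usage plateaus after the first recognition (a reused workspace, the expected behavior) or keeps growing with every new recognition (a genuine leak).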
- Hello! Thank you very much for your work. I have run into an issue with releasing GPU memory when using https://github.com/k2-fsa/sherpa-onnx/blob/master/sherpa-onnx/python/sherpa_onnx/offline_recognizer.py. During speech recognition, the model requests additional GPU memory, but after the recognition finishes this extra memory is not released (as it would be by gc.collect() or torch.cuda.empty_cache()), even after the stream object is deleted. Instead, it stays occupied by the model object. As a result, the only way I can free the memory is to delete the model object, which is inconvenient because I then have to reload the model for each new recognition. Could you suggest a way to free this additional GPU memory without deleting the model object?
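A minimal reproduction of the pattern described above might look like the sketch below. The model file paths are placeholders you would substitute with your own, a CUDA build of sherpa-onnx is assumed, and the helper names (`make_test_waveform`, `recognize_once`) are hypothetical, introduced here only for illustration.

```python
import gc

import numpy as np


def make_test_waveform(seconds: float, sample_rate: int = 16000) -> np.ndarray:
    """Generate a silent float32 waveform of the given duration,
    just to have something to feed the recognizer."""
    return np.zeros(int(seconds * sample_rate), dtype=np.float32)


def recognize_once(recognizer, sample_rate: int = 16000) -> None:
    """One offline recognition. Decoding allocates extra GPU memory;
    deleting the stream and collecting garbage frees the Python objects,
    but the extra GPU memory reportedly stays with the model object."""
    stream = recognizer.create_stream()
    stream.accept_waveform(sample_rate, make_test_waveform(30.0, sample_rate))
    recognizer.decode_stream(stream)
    del stream
    gc.collect()  # releases the Python-side stream, not the GPU workspace


def build_recognizer():
    """Construct an offline transducer recognizer on CUDA.
    Model paths below are placeholders, not real files."""
    import sherpa_onnx

    return sherpa_onnx.OfflineRecognizer.from_transducer(
        encoder="encoder.onnx",   # placeholder path
        decoder="decoder.onnx",   # placeholder path
        joiner="joiner.onnx",     # placeholder path
        tokens="tokens.txt",      # placeholder path
        provider="cuda",
    )
```

Running `recognize_once(build_recognizer())` repeatedly while watching GPU memory (e.g. with `nvidia-smi`) would show whether the extra allocation is reused on the next recognition or accumulates.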