Reduce memory utilization during reader fine-tuning #4992
Unanswered
explorer2024 asked this question in Questions
I am using the Haystack reader training module to fine-tune the reader model on a GPU. During the pre-processing stage, the process is getting killed due to memory utilization.
How can I reduce the memory utilization here?
Is it possible to divide the dataset into different parts and pass them to training part by part? Will it affect the performance of the model? Will there be any inaccuracy since the pre-processing inside training would be done on the separate parts?
The training package used is as below:
Replies: 1 comment
- I have had the same issue, with even small datasets consuming huge amounts of memory. Splitting up the dataset fixes the problem, though. There is not really a difference in performance :)
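As a rough illustration of the splitting approach suggested in the reply, here is a minimal sketch. It assumes Haystack's `FARMReader` and a SQuAD-format training file; the file names, chunk size, and base model are placeholders I made up and would need to be adapted to the actual setup.

```python
import json
from pathlib import Path

from haystack.nodes import FARMReader

DATA_DIR = Path("data/squad")            # hypothetical data directory
TRAIN_FILE = DATA_DIR / "answers.json"   # hypothetical SQuAD-format training file
CHUNK_SIZE = 500                         # articles per chunk; tune to available RAM

# Split the SQuAD-format file into smaller files so each pre-processing
# pass only has to hold one chunk in memory.
squad = json.loads(TRAIN_FILE.read_text())
articles = squad["data"]
chunk_files = []
for i in range(0, len(articles), CHUNK_SIZE):
    chunk = {"version": squad.get("version", "v2.0"),
             "data": articles[i : i + CHUNK_SIZE]}
    chunk_path = DATA_DIR / f"answers_chunk_{i // CHUNK_SIZE}.json"
    chunk_path.write_text(json.dumps(chunk))
    chunk_files.append(chunk_path.name)

# Fine-tune on one chunk at a time, saving after each pass so the next
# pass continues from the previously fine-tuned weights.
save_dir = "my_reader"
model_name = "deepset/roberta-base-squad2"  # placeholder base model
for chunk_file in chunk_files:
    reader = FARMReader(model_name_or_path=model_name, use_gpu=True)
    reader.train(
        data_dir=str(DATA_DIR),
        train_filename=chunk_file,
        use_gpu=True,
        n_epochs=1,
        save_dir=save_dir,
    )
    model_name = save_dir  # reload the just-saved model for the next chunk
```

Training chunk by chunk like this is not byte-for-byte identical to one pass over the full file, but the reply above suggests the resulting performance is essentially the same.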