Memory error: Unable to allocate 18.4 MiB for an array with shape (50178, 96) and data type int32 #10626
How to reproduce the behaviour

My objective is to train a document classification model, but I am facing memory issues. Data consists of … The model failed to train even on …
Training Config
Error Logs
Your Environment
Replies: 1 comment
It looks like you're running into issues with the default corpus handling. By default, all training data is read into memory so it can be shuffled. You can instead stream your data by setting `max_epochs` to `-1`, see here. If that doesn't fix things, let us know.
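For reference, here's a minimal sketch of what that change looks like in the training config - the surrounding settings in your own `config.cfg` will differ, so treat this as illustrative only:

```ini
[training]
# -1 streams the train corpus instead of loading it all into memory;
# note that shuffling between epochs is skipped in streaming mode.
max_epochs = -1
```

Since streaming skips the in-memory shuffle, you may want to shuffle your data once on disk before training.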
A couple of other things to check / be aware of:

- You are using Python 3.6, which has reached end of life and is no longer supported. I don't think it's related to this issue at all, but you should upgrade if possible.
- Reporting the size of your training data is helpful, but you might want to check how long your longest document is - that is more important than the average for out-of-memory errors. (A quick way to check is sketched below.)
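In case it's useful, here's a rough sketch for checking that - it assumes your training data is stored as a DocBin in a file named `train.spacy` (the path and the `en` language are assumptions, adjust to your setup):

```python
# Rough sketch: report document lengths in a .spacy (DocBin) training file.
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("en")            # blank pipeline, only used for its vocab
doc_bin = DocBin().from_disk("train.spacy")
lengths = [len(doc) for doc in doc_bin.get_docs(nlp.vocab)]

print(f"docs:    {len(lengths)}")
print(f"longest: {max(lengths)} tokens")
print(f"average: {sum(lengths) / len(lengths):.1f} tokens")
```

If the longest document is orders of magnitude above the average, that single outlier is usually what blows up memory during training.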