NER + GPU + Accuracy = Out of memory (Nvidia 3090) #12342
-
Hello, I am trying to train a custom NER model with my own annotations; I have about 13,000 annotated texts. Following https://spacy.io/usage/training I created the .cfg for training, but I can never complete a run locally because it always goes out of memory. I tried it in Colab and it works, but it required 30 GB of VRAM to train the model (using the default model name = "roberta-base"). When I asked about this error in the past, the recommendation was to reduce [nlp] batch_size, but that doesn't help. Could someone help me? The .spacy train file is 40 MB, and the config I am using is the following:
Commands used:
I also trained the model using tok2vec, but it doesn't work properly (it doesn't "extract" entities that are not in the original list). I am getting a little desperate about this (months of work), so any help will be appreciated! Thank you :)
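
For reference, the memory-related sections of a transformer + ner quickstart-style config look roughly like the sketch below. The values are illustrative only (smaller than the usual defaults) and are not the actual settings used here; these sections, rather than [nlp] batch_size alone, are what typically control GPU memory during training:

```ini
# Illustrative sketch only: memory-related sections of a transformer + ner
# quickstart-style config. Section and registry names assume that layout;
# the values are examples, not recommendations for this specific dataset.

[nlp]
# Default batching for nlp.pipe(); this does not size the training update
# batches (that is [training.batcher]), which is why lowering it alone
# often does not reduce training memory.
batch_size = 64

[components.transformer]
# Upper bound on the size of a padded batch inside the transformer component.
max_batch_items = 1024

[components.transformer.model.get_spans]
# Splits each doc into overlapping spans so long texts are not passed to the
# transformer in one piece.
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 96

[training]
# Smaller batches plus gradient accumulation keeps the effective batch size up.
accumulate_gradient = 3

[training.batcher]
# Batches by padded size; discard_oversize drops single examples that would
# exceed the batch size on their own.
@batchers = "spacy.batch_by_padded.v1"
discard_oversize = true
size = 1000
buffer = 256
```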
-
Please see this related discussion: #8600 (comment)
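
If editing the file is inconvenient, settings like those sketched above can also be tried as command-line overrides to `spacy train`. This is a sketch only; it assumes a single local GPU (`--gpu-id 0`) and .spacy files at the paths shown:

```bash
# Illustrative only: pass config overrides directly on the command line.
python -m spacy train config.cfg --gpu-id 0 \
  --paths.train ./train.spacy --paths.dev ./dev.spacy \
  --components.transformer.max_batch_items 1024 \
  --training.batcher.size 1000 \
  --training.accumulate_gradient 3
```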