CUDA out of memory while training Entity Linking #11190
Unanswered
rdemorais asked this question in Help: Model Advice
Replies: 2 comments 2 replies
- UPDATE: I've changed the above
-
For reference, note this is a continuation of part of #7892.
-
I'm trying to train an Entity Linking model using the following configuration:
The transformer used can be replaced by any model trained with a masked-language-modelling (MASK) objective. The knowledge base has 26k entries; the train/dev sets have 527 and 127 docs respectively.
I'm able to train the Entity Linking component using the script provided by the NEL Emerson example, but when I switch to the config approach, the error occurs.
What I have already tried:
- Lowering [nlp] batch_size to 32, 16, 8...
- Lowering [training.batcher.size].
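For context, those two batch-size settings live in different places in a spaCy v3 config file and control different things ([nlp] batch_size is the number of texts buffered per pipe() call, while [training.batcher.size] controls the training batch schedule). A minimal sketch of where they sit, using the standard spaCy config schema with illustrative values (not the poster's actual config):

```ini
[nlp]
# Texts buffered per batch during nlp.pipe(); lowering this reduces peak GPU memory
batch_size = 8

[training.batcher]
@batchers = "spacy.batch_by_words.v1"
# Skip examples that exceed the batch size instead of padding a huge batch
discard_oversize = true

[training.batcher.size]
# Compounding schedule: batch size grows from `start` to `stop` over training
@schedules = "compounding.v1"
start = 100
stop = 500
compound = 1.001
```

Note that with a word-based batcher, `size` counts words rather than documents, so a single very long document can still produce an oversized batch unless discard_oversize is enabled.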
How to reproduce the behaviour
Execute
python -m spacy train ${vars.trf_config_final} --output training/ --gpu-id 0
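Beyond the batcher settings, two other knobs in the standard spaCy v3 / spacy-transformers config schema are commonly suggested for CUDA OOM during transformer training; a hedged sketch with illustrative values (these section names come from the default schema, not from the original post):

```ini
[components.transformer]
# Cap on padded tokens per inner transformer batch; lowering it bounds
# peak GPU memory during the forward pass
max_batch_items = 2048

[training]
# Accumulate gradients over N batches before each weight update, so smaller
# batches keep a larger effective batch size
accumulate_gradient = 3
```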
Your Environment
ERROR Description: