Evaluation step for textcat using transformer doesn't use GPU #10224
Unanswered
samehraban
asked this question in Help: Other Questions
Replies: 1 comment 11 replies
-
The evaluation does use the GPU if you're using the GPU for training, but if you have a large dev corpus it can still take a noticeable amount of time. You might want to raise the evaluation batch size: the batch sizes and other settings that manage the length of the training and eval steps can be set separately in the config, see: #8600 (comment)
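As a rough sketch of where those knobs live (assuming a spaCy v3 training config; the values below are illustrative, not recommendations): `[nlp] batch_size` is the batch size `nlp.evaluate` uses during the eval step, while `[training] eval_frequency` controls how often the dev corpus is scored.

```ini
[nlp]
# Batch size used by nlp.pipe() / nlp.evaluate() during the eval step;
# a larger value lets the transformer process more dev docs per forward pass.
batch_size = 256

[training]
# Score the dev corpus every N training steps; evaluating less often
# reduces the total time spent in the eval loop.
eval_frequency = 500
```

Passing `--gpu-id 0` to `spacy train` runs both the training and the eval step on that GPU.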
-
How to reproduce the behaviour
I noticed that training a textcat with a transformer takes forever at the end of the first iteration. It turns out this loop takes about 40-50 seconds per iteration on my machine, while training itself runs at about 2.5 it/s.
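To measure the eval step in isolation, here is a minimal sketch (assuming spaCy v3; the model and corpus paths are placeholders) that times `nlp.evaluate` on the dev corpus with the GPU enabled:

```python
import time

import spacy
from spacy.tokens import DocBin
from spacy.training import Example

spacy.require_gpu()  # run the pipeline on the GPU, as during training

# Placeholder paths for the trained pipeline and the dev corpus.
nlp = spacy.load("training/model-best")
doc_bin = DocBin().from_disk("corpus/dev.spacy")
gold_docs = list(doc_bin.get_docs(nlp.vocab))

# nlp.evaluate() runs the predicted docs through the pipeline and
# scores them against the gold annotations.
examples = [Example(nlp.make_doc(doc.text), doc) for doc in gold_docs]

start = time.time()
scores = nlp.evaluate(examples)
print(f"eval took {time.time() - start:.1f}s")
print("cats_macro_f:", scores.get("cats_macro_f"))
```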
Your Environment