Fine tune on specific domain data? Is it needed? #4891
Replies: 1 comment 1 reply
-
Hi @agario8864! If you found a model that already works well enough for your domain, you don't need to fine-tune the reader model. If you're still interested in fine-tuning and in how to produce labeled training data, this guide in our documentation might be interesting for you.
Yes, that's right: you're not training a model but using the data as a knowledge base for your question-answering system.
-
Currently I have scraped some Wikipedia articles in the political domain. Say I am using FARMReader with a specific language model that has been pretrained on SQuAD v2, and it does pretty well at answering my questions in the political domain. Should I fine-tune the FARMReader with the articles I found?
I am also confused about what is meant by a "dataset" in question answering. If I am not going to use the scraped articles as a dataset (in SQuAD format) to train the Reader, but just store them as documents in a document store, does that mean I am not training a model, but rather using someone else's pretrained model to build a question-answering system?
Appreciate any answers.
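To make the distinction concrete, here is a minimal sketch in plain Python (the article text and IDs are hypothetical placeholders, not real data) contrasting a SQuAD v2-style labeled training example, which is what fine-tuning a Reader requires, with a plain document entry as you would store it in a document store for retrieval only:

```python
# The context below is a made-up example sentence.
context = "The parliament passed the reform bill in 2021."

# SQuAD v2-style *training* example: pairs a context with questions and
# gold answer spans. Only needed if you fine-tune the Reader.
squad_example = {
    "title": "Example political article",
    "paragraphs": [
        {
            "context": context,
            "qas": [
                {
                    "id": "q1",
                    "question": "When was the reform bill passed?",
                    "answers": [
                        {"text": "2021", "answer_start": context.find("2021")}
                    ],
                    "is_impossible": False,
                }
            ],
        }
    ],
}

# Plain *document* entry: no questions, no answer labels. Stored in a
# document store, it is only searched at query time; no training happens.
document = {
    "content": context,
    "meta": {"source": "wikipedia"},
}

# The labeled example carries gold answers; the stored document does not.
print("qas" in squad_example["paragraphs"][0])  # True
print("qas" in document)                        # False
```

So if you only load the scraped articles as documents (the second shape), you never touch the model's weights; the SQuAD-format labels (the first shape) come into play only if you decide to fine-tune.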