Finetuning Generative Q&A Model (e.g. google/flan-t5-base) #5772
Replies: 1 comment
-
@demongolem-biz2 Finetuning generative models is something Haystack currently does not support. If you really want to finetune such a model, I'd recommend having a look at the Seq2SeqTrainer class in HuggingFace transformers. That said, I wouldn't recommend it right away, because it is a lot of effort: even the dataset preparation will take some time, and then you still need to tune the training hyperparameters, and so on.
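For concreteness, here is a minimal sketch of what that Seq2SeqTrainer route could look like for google/flan-t5-base. The Q&A record format (`{"question": ..., "answer": ...}`), the task prefix, and the hyperparameter values are illustrative assumptions, not an official recipe; it assumes `transformers` and `datasets` are installed.

```python
def build_examples(qa_pairs, prefix="answer the question: "):
    """Turn raw Q&A dicts into input/target text pairs for a seq2seq model.

    The prefix is an assumption; T5-style models are often prompted with a
    short task description, but the exact wording is up to you.
    """
    return [
        {"input_text": prefix + pair["question"], "target_text": pair["answer"]}
        for pair in qa_pairs
    ]


def finetune(qa_pairs, model_name="google/flan-t5-base", output_dir="flan-t5-qa"):
    """Prepare the dataset and run Seq2SeqTrainer (requires transformers, datasets).

    Call this with your own domain-specific qa_pairs to launch training.
    """
    from datasets import Dataset
    from transformers import (
        AutoModelForSeq2SeqLM,
        AutoTokenizer,
        DataCollatorForSeq2Seq,
        Seq2SeqTrainer,
        Seq2SeqTrainingArguments,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    examples = build_examples(qa_pairs)
    # Tokenize questions as model inputs and answers as labels.
    encodings = tokenizer(
        [e["input_text"] for e in examples],
        max_length=512,
        truncation=True,
    )
    labels = tokenizer(
        text_target=[e["target_text"] for e in examples],
        max_length=128,
        truncation=True,
    )
    encodings["labels"] = labels["input_ids"]
    train_dataset = Dataset.from_dict(dict(encodings))

    args = Seq2SeqTrainingArguments(
        output_dir=output_dir,
        per_device_train_batch_size=4,   # illustrative values; tune for your data
        num_train_epochs=3,
        learning_rate=3e-4,
        logging_steps=10,
    )
    trainer = Seq2SeqTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
    trainer.train()
    trainer.save_model(output_dir)


# Dataset-preparation step alone, on a toy example:
pairs = [{"question": "What is Haystack?", "answer": "An NLP framework."}]
prepared = build_examples(pairs)
```

Note that this is exactly the effort mentioned above: you own the prompt format, the tokenization lengths, and the hyperparameters, none of which Haystack manages for you.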
-
I have gone through the example for finetuning with DistilBERT, and that all worked as it should. However, when we turn to generative Q&A, where flan is used by default, we can't use that same process for finetuning. If I had questions and answers from a very specific domain, how would I go about finetuning for generative Q&A, and how does it differ from https://haystack.deepset.ai/tutorials/02_finetune_a_model_on_your_data?