Evaluating Table Question Answering System #4792
Replies: 3 comments
-
Hey @MariaDavid30, can you provide some more details? Some code snippets of what you're trying to do and some example data would be helpful too.
-
Hey @silvanocerza, I followed the tutorial Open-Domain QA on Tables (https://haystack.deepset.ai/tutorials/15_tableqa). I am using a PDF file as my data. These are the reader and retriever models used:

```python
retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="deepset/all-mpnet-base-v2-table",
)
```

This is the pipeline:

```python
text_table_qa_pipeline = Pipeline()
```
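For context, the pipeline in that tutorial routes retrieved documents to a text reader or a table reader and then joins the answers. Below is a minimal sketch of that setup, assuming the Haystack v1 nodes and model names from the tutorial (they do not come from the snippet above, and the document store setup is only an assumption):

```python
from haystack import Pipeline
from haystack.document_stores import ElasticsearchDocumentStore
from haystack.nodes import (
    EmbeddingRetriever,
    FARMReader,
    TableReader,
    RouteDocuments,
    JoinAnswers,
)

# Assumed setup: an Elasticsearch document store holding both text and table documents, as in the tutorial.
document_store = ElasticsearchDocumentStore(embedding_dim=768)

retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="deepset/all-mpnet-base-v2-table",
)
text_reader = FARMReader("deepset/roberta-base-squad2")  # extractive reader for text passages
table_reader = TableReader("deepset/tapas-large-nq-hybrid-supervised")  # TAPAS-based reader for tables
route_documents = RouteDocuments()  # splits retrieved documents by content_type ("text" vs. "table")
join_answers = JoinAnswers()  # merges answers from both readers into one ranked list

text_table_qa_pipeline = Pipeline()
text_table_qa_pipeline.add_node(component=retriever, name="EmbeddingRetriever", inputs=["Query"])
text_table_qa_pipeline.add_node(component=route_documents, name="RouteDocuments", inputs=["EmbeddingRetriever"])
text_table_qa_pipeline.add_node(component=text_reader, name="TextReader", inputs=["RouteDocuments.output_1"])
text_table_qa_pipeline.add_node(component=table_reader, name="TableReader", inputs=["RouteDocuments.output_2"])
text_table_qa_pipeline.add_node(component=join_answers, name="JoinAnswers", inputs=["TextReader", "TableReader"])
```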
-
I would suggest following this guide from Hugging Face on how to fine-tune a TableQA model. That might be helpful, let us know.
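As a rough illustration of what such fine-tuning involves, here is a minimal sketch based on the Transformers TAPAS documentation; the checkpoint, table, question, and answer coordinates are invented examples, not the guide's actual code, and a real run would wrap the forward pass in a training loop with an optimizer:

```python
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

model_name = "google/tapas-base"  # assumption: a cell-selection (SQA-style) configuration
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

# Toy supervision: the answer text plus the (row, column) coordinates of the answer cells.
table = pd.DataFrame(
    {"City": ["Berlin", "Paris"], "Population": ["3,600,000", "2,100,000"]}
).astype(str)
queries = ["What is the population of Paris?"]
answer_coordinates = [[(1, 1)]]
answer_text = [["2,100,000"]]

inputs = tokenizer(
    table=table,
    queries=queries,
    answer_coordinates=answer_coordinates,
    answer_text=answer_text,
    padding="max_length",
    return_tensors="pt",
)

# Forward pass with cell-selection labels produced by the tokenizer.
outputs = model(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    token_type_ids=inputs["token_type_ids"],
    labels=inputs["labels"],
)
loss = outputs.loss
loss.backward()
```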
-
I am working on a table question answering system. I evaluated the text+table QA pipeline using the eval function and got a recall of 0.8 and a precision of 0.08.
What are some ways to improve the precision?
How can I create better text and table labels manually?
What methods are there for evaluating a table question answering system apart from Haystack's evaluation feature, which uses the eval function?
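On creating labels manually and on evaluation: in Haystack v1, gold labels can be built programmatically from Label, Answer, and Document objects and then passed to Pipeline.eval(). A minimal sketch, assuming the text_table_qa_pipeline and document_store from above; the query, answer, and table contents are invented placeholders:

```python
import pandas as pd
from haystack.schema import Label, Answer, Document

# One gold label pointing at a table document; text documents work the same way with content_type="text".
table_doc = Document(
    content=pd.DataFrame({"City": ["Berlin", "Paris"], "Population": ["3,600,000", "2,100,000"]}),
    content_type="table",
)
label = Label(
    query="What is the population of Paris?",
    answer=Answer(answer="2,100,000"),
    document=table_doc,
    is_correct_answer=True,
    is_correct_document=True,
    origin="gold-label",
)
document_store.write_labels([label])

# Evaluate the pipeline against the aggregated gold labels and inspect per-node metrics.
eval_labels = document_store.get_all_labels_aggregated(
    drop_negative_labels=True, drop_no_answers=True
)
eval_result = text_table_qa_pipeline.eval(
    labels=eval_labels,
    params={"EmbeddingRetriever": {"top_k": 10}},
)
metrics = eval_result.calculate_metrics()
print(metrics)  # per-node metrics: recall/precision/mAP/MRR for the retriever, EM/F1 for the readers
```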