Negation
-
CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation
- Summary: Creates a dataset of question-answer pairs focused on negation, covering a variety of negation cues. The data are derived from Wikipedia passages that contain negation. Crowdworkers paraphrase the negation, modify its scope, and undo it, then write questions about the implications of the negated statement and answer them to produce the QA pairs.
- Future work: Sec. 6 notes that models are not sensitive to scope edits (i.e., when the original answer and the scope-edit answer are the same the model performs well, but when they differ the model performs poorly). Increasing sensitivity to scope edits remains open.
- Contributions: CondaQA dataset, consistency metric, fine-tuned models
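
The consistency metric rewards a model only when it answers every variant in a bundle of contrastive edits (original passage plus its paraphrase, scope, and undone-negation edits) correctly. Below is a minimal sketch of how such a bundle-level consistency score could be computed; the function name and the `bundle_ids` grouping key are illustrative assumptions, not the paper's code.

```python
from collections import defaultdict

def bundle_consistency(predictions, gold_answers, bundle_ids):
    """Fraction of bundles (a question over the original passage and its
    contrastive edits) for which the model answers *every* variant correctly.
    `bundle_ids` is an assumed per-example grouping key, not CondaQA's field name."""
    correct_by_bundle = defaultdict(list)
    for pred, gold, bid in zip(predictions, gold_answers, bundle_ids):
        correct_by_bundle[bid].append(pred == gold)
    return sum(all(flags) for flags in correct_by_bundle.values()) / len(correct_by_bundle)

# Example: two bundles; only the first is answered consistently.
preds = ["yes", "no", "no", "yes"]
golds = ["yes", "no", "no", "no"]
bids  = [1, 1, 2, 2]
print(bundle_consistency(preds, golds, bids))  # 0.5
```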
-
Making Language Models Robust Against Negation
- Summary: Introduces two pre-training tasks to improve LLM performance on negation. NSPP (next sentence polarity prediction) asks, given a sentence, whether the next sentence will contain a negation. NSP with polarity switch asks, given an ordered pair of sentences, whether the second sentence is in fact the next sentence; if the second sentence's polarity has been switched, the answer should be no.
- Future work:
- Contributions: Next sentence polarity prediction (NSPP) and next sentence prediction with polarity switch
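
As a rough illustration of how training examples for these two tasks could be constructed, the sketch below assumes a `contains_negation` cue detector and a `switch_polarity` function that flips a sentence's polarity; both helpers are hypothetical placeholders, not the paper's implementation.

```python
import random

def make_nspp_example(sentence, next_sentence, contains_negation):
    # NSPP: given the current sentence, predict whether the NEXT sentence
    # will contain a negation cue (label derived from the following sentence).
    return {"text": sentence, "label": int(contains_negation(next_sentence))}

def make_nsp_polarity_example(sentence, next_sentence, switch_polarity, p_switch=0.5):
    # NSP with polarity switch: given an ordered pair, predict whether the
    # second sentence truly follows the first. If its polarity was switched,
    # the correct answer is "not the next sentence" (label 0).
    if random.random() < p_switch:
        return {"text_a": sentence, "text_b": switch_polarity(next_sentence), "label": 0}
    return {"text_a": sentence, "text_b": next_sentence, "label": 1}
```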