Commit cd404a7

Update README.md
1 parent 18478dd commit cd404a7

File tree

1 file changed: +1 −1 lines changed


README.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ This repository provides an evaluation metric for generative question answering
 <h2> Dataset </h2>
 We provide human judgments of correctness for 4 datasets:MS-MARCO NLG, AVSD, Narrative QA and SemEval 2018 Task 11 (SemEval). <br>
 For MS-MARCO NLG and AVSD, we generate the answer using two models for each dataset.
-For NarrativeQA and SemEval, we preprocessed the dataset from [The paper](https://www.aclweb.org/anthology/D19-5817). <br>
+For NarrativeQA and SemEval, we preprocessed the dataset from [Evaluating Question Answering Evaluation](https://www.aclweb.org/anthology/D19-5817). <br>

 <h2> Usage </h2>
