
Commit 3943265

Add prompts for Squad adversarial (#696)
* Update templates.py
* Update templates.yaml
* Update templates.py
1 parent 45ba5dc commit 3943265

File tree

1 file changed: 58 additions, 242 deletions
@@ -1,286 +1,102 @@
 dataset: squad_adversarial
 subset: AddSent
 templates:
-  048c2159-2c8c-40e2-90f7-18c9623381ba: !Template
+  22a2f318-5302-479e-93be-215453060624: !Template
     answer_choices: null
-    id: 048c2159-2c8c-40e2-90f7-18c9623381ba
-    jinja: 'Generate a possible question for the following short passage:
+    id: 22a2f318-5302-479e-93be-215453060624
+    jinja: '{{context}}
 
 
-      {{context}} |||
-
-      {{question}}'
-    metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
-      original_task: false
-    name: possible_qn
-    reference: ''
-  08fb6eac-6321-4b25-8578-14a799a103ed: !Template
-    answer_choices: null
-    id: 08fb6eac-6321-4b25-8578-14a799a103ed
-    jinja: 'After reading the following paragraph, please answer this question: {{question}}
-
-
-      {{context}}
+      Q: {{question}}
 
 
-      |||
-
-      {{answers[''text''] | most_frequent | choice}}'
+      Referring to the passage above, the correct answer to the given question is
+      ||| {{answers["text"][0]}}'
     metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
+      choices_in_prompt: false
+      metrics:
+      - Squad
       original_task: true
-    name: after
-    reference: ''
-  1f2c2108-441a-4b3c-a5c8-8ece28edb6e1: !Template
-    answer_choices: null
-    id: 1f2c2108-441a-4b3c-a5c8-8ece28edb6e1
-    jinja: 'At what character does the text "{{answers["text"][0]}}" start in the
-      following paragraph?
-
-
-      {{context}}
-
-
-      |||
-
-      {{answers["answer_start"][0]}}'
-    metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
-      original_task: false
-    name: find text
-    reference: ''
-  279e4019-8d67-498d-8832-a7905bc0c68d: !Template
-    answer_choices: null
-    id: 279e4019-8d67-498d-8832-a7905bc0c68d
-    jinja: 'Use the following non-answers to generate a possible short passage-question
-      pair:
-
-      {{answers["text"]|join('', '')}} |||
-
-      {{context}}
-
-      {{question}}
-
-      '
-    metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
-      original_task: false
-    name: answers_question
+    name: answer_given_context_and_question
     reference: ''
-  44df6bac-bffa-4e46-b2d4-f3eb5b43cefa: !Template
+  402adce7-4857-4524-8ad3-6270b66a5e0f: !Template
     answer_choices: null
-    id: 44df6bac-bffa-4e46-b2d4-f3eb5b43cefa
-    jinja: 'Generate a title for the following short passage:
+    id: 402adce7-4857-4524-8ad3-6270b66a5e0f
+    jinja: 'Refer to the passage below and answer the following question:
 
 
-      {{context}} |||
+      Passage: {{context}}
 
-      {{title|replace("_"," ")}}
-
-      '
-    metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
-      original_task: false
-    name: title
-    reference: ''
-  60ae905d-d5fa-4f60-bbcb-acb8d0ec2cf1: !Template
-    answer_choices: null
-    id: 60ae905d-d5fa-4f60-bbcb-acb8d0ec2cf1
-    jinja: "Q: {{question}}\n\nA: \n|||\n{{answers['text'] | most_frequent | choice}}"
-    metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
-      original_task: false
-    name: cbqa qa
-    reference: ''
-  6118ec43-d051-4599-b24f-8779f66b9ad6: !Template
-    answer_choices: null
-    id: 6118ec43-d051-4599-b24f-8779f66b9ad6
-    jinja: '{{question}}
 
+      Question: {{question}}
 
 
       |||
 
-
-      {{answers[''text''] | most_frequent | choice}}'
+      {{answers["text"][0]}}'
     metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
-      original_task: false
-    name: cbqa
-    reference: ''
-  754e8bad-454f-4ae3-9747-299506955569: !Template
-    answer_choices: null
-    id: 754e8bad-454f-4ae3-9747-299506955569
-    jinja: 'Please come up with a good question to test reading comprehension about
-      the following paragraph:
-
-
-      {{context}}
-
-
-      |||
-
-
-      {{question}}'
-    metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
-      original_task: null
-    name: generate question
-    reference: ''
-  7ff4bc14-08d4-47c1-9cd3-b7473d6505e7: !Template
-    answer_choices: null
-    id: 7ff4bc14-08d4-47c1-9cd3-b7473d6505e7
-    jinja: 'For the following passage-question pair, list all possible wrong answers
-      (pitfalls) test-takers may choose:
-
-
-      {{context}}
-
-      {{question}} |||
-
-      {{answers["text"]|join(", ")}}'
-    metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
+      choices_in_prompt: false
+      metrics:
+      - Squad
       original_task: true
-    name: possible_pitfalls
+    name: answer_question_given_context
     reference: ''
-  88b952a3-3784-43bb-a463-4a34478785d5: !Template
+  b4994c82-bfb2-4e0c-a5d7-081053830097: !Template
     answer_choices: null
-    id: 88b952a3-3784-43bb-a463-4a34478785d5
-    jinja: '{{["Question", "Problem"] | choice}} {{range(1, 12) | choice}}: {{question}}
-
-
-      Hint: {{context}}
+    id: b4994c82-bfb2-4e0c-a5d7-081053830097
+    jinja: '{{context}}
 
 
-      |||
-
-      {{answers["text"] | most_frequent | choice}}'
+      From the above passage, a reasonable question with "{{answers["text"][0]}}"
+      as the answer would be: ||| {{question}}'
     metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
+      choices_in_prompt: false
+      metrics:
+      - BLEU
+      - ROUGE
       original_task: false
-    name: question/hint
-    reference: ''
-  8bcc0d77-6925-4fa1-b8cc-e6da3b272197: !Template
+    name: jeopardy
+    reference: jeopardy style- wiki_qa
+  b60cd43d-7026-434b-abf8-f67cc965316a: !Template
     answer_choices: null
-    id: 8bcc0d77-6925-4fa1-b8cc-e6da3b272197
-    jinja: "Question: {{question}}\n\nAnswer: \n|||\n{{answers['text'] | most_frequent\
-      \ | choice}}"
-    metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
-      original_task: false
-    name: cbqa question answer
-    reference: ''
-  a99d7cf5-d723-4c7a-b843-e2b8a476754d: !Template
-    answer_choices: null
-    id: a99d7cf5-d723-4c7a-b843-e2b8a476754d
-    jinja: 'I''ve always wondered: {{question}}
-
-
-      I searched Wikipedia and this is what I found. What''s the answer?
-
-
-      {{context}}
-
+    id: b60cd43d-7026-434b-abf8-f67cc965316a
+    jinja: '{{context}}
 
-      |||
 
-      {{answers[''text''] | most_frequent | choice}}'
+      Generate a question from the above passage : ||| {{question}}'
     metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
+      choices_in_prompt: false
+      metrics:
+      - BLEU
+      - ROUGE
       original_task: false
-    name: wondered
+    name: given_context_generate_question
     reference: ''
-  a9d70ff7-8080-4eaa-9be2-1b67fe9b44f4: !Template
+  dada0334-1dc2-4e39-a7e1-258ac622ab4f: !Template
     answer_choices: null
-    id: a9d70ff7-8080-4eaa-9be2-1b67fe9b44f4
-    jinja: 'I''m working on the final exam for my class and am trying to figure out
-      the answer to the question "{{question}}" I found the following info on Wikipedia
-      and I think it has the answer. Can you tell me the answer?
-
-
-      {{context}}
-
-
-      |||
-
-      {{answers[''text''] | most_frequent | choice}}'
-    metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
-      original_task: false
-    name: exam
-    reference: ''
-  f086fa63-6ca2-48d2-857d-179ab88fce48: !Template
-    answer_choices: null
-    id: f086fa63-6ca2-48d2-857d-179ab88fce48
-    jinja: 'I''m creating a final exam for my reading class. Can you please come up
-      with a good question to quiz how well students have read the following text
-      snippet?
-
-
-      {{context}}
-
-
-      |||
-
-
-      {{question}}'
+    id: dada0334-1dc2-4e39-a7e1-258ac622ab4f
+    jinja: "{{context}}\n\nWith reference to the above context, {{question}} ||| \n\
+      \n{{answers.text[0]}}"
     metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
-      original_task: null
-    name: exam creation help
-    reference: ''
-  f9b51e3b-a41a-47a5-b929-76a1e0efd430: !Template
-    answer_choices: null
-    id: f9b51e3b-a41a-47a5-b929-76a1e0efd430
-    jinja: 'Count the characters up until "{{answers["text"][0]}}" appears in the
-      following chunk of text.
-
-
-      {{context}}
-
-
-      |||
-
-
-      {{answers["answer_start"][0]}}'
-    metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
-      original_task: null
-    name: count letters
+      choices_in_prompt: false
+      metrics:
+      - Squad
+      original_task: true
+    name: answer_the_question
     reference: ''
-  fb81ba4d-341a-43f0-a94f-fa7e350d10c0: !Template
+  e638bc9e-5059-4ace-a6f9-4871f548342f: !Template
     answer_choices: null
-    id: fb81ba4d-341a-43f0-a94f-fa7e350d10c0
-    jinja: 'List all possible non-answers that have a lot of words in common with
-      the following context-question pair:
-
+    id: e638bc9e-5059-4ace-a6f9-4871f548342f
+    jinja: '{{context}}
 
-      {{context}}
 
-      {{question}} |||
+      Q: {{question}}
 
-      {{answers["text"]|join('', '')}}
 
-      '
+      A: ||| {{answers["text"][0]}}'
     metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
+      choices_in_prompt: false
+      metrics:
+      - Squad
       original_task: true
-    name: incorrect_answers
+    name: given_context_answer_question_variation
     reference: ''
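For readers unfamiliar with the template format: in each `jinja` field, the text before `|||` is the rendered prompt and the text after it is the target. Below is a minimal, stdlib-only sketch of that pairing for the added `given_context_answer_question_variation` template. It is not the promptsource implementation (promptsource renders with Jinja2); plain `str.replace` stands in for Jinja here, which suffices for this template's simple substitutions, and the `apply_template` helper and example record are illustrative inventions.

```python
# Sketch (not the promptsource implementation): map a SQuAD-style example
# to an (input, target) pair using the template's "|||" separator.

TEMPLATE = (
    '{{context}}\n\n'
    'Q: {{question}}\n\n'
    'A: ||| {{answers["text"][0]}}'
)

def apply_template(example: dict) -> tuple[str, str]:
    """Fill in the template fields, then split prompt from target on '|||'."""
    rendered = (
        TEMPLATE
        .replace('{{context}}', example['context'])
        .replace('{{question}}', example['question'])
        .replace('{{answers["text"][0]}}', example['answers']['text'][0])
    )
    prompt, target = (part.strip() for part in rendered.split('|||'))
    return prompt, target

example = {
    'context': 'The quick brown fox jumped over the lazy dog near the river.',
    'question': 'What did the quick brown fox jump over?',
    'answers': {'text': ['the lazy dog'], 'answer_start': [32]},
}
prompt, target = apply_template(example)
# prompt ends with "A:" and target is "the lazy dog"
```

The `|||` convention is what lets the same YAML file drive both generation-style templates (target is `{{question}}`) and QA-style templates (target is the first answer string).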
