Commit e65186c

link to t-zero repo + p3 materialized version

1 parent 4f69052 commit e65186c

File tree

1 file changed: +6 -4 lines changed

README.md

Lines changed: 6 additions & 4 deletions
@@ -3,12 +3,14 @@ Promptsource is a toolkit for collecting and applying prompts to NLP datasets.
 
 Promptsource uses a simple templating language to programmatically map an example of a dataset into a text input and a text target.
 
-Promptsource contains a growing collection of prompts (which we call **P3**: **P**ublic **P**ool of **P**rompts). As of October 18th, there are ~2,000 prompts for 170+ datasets in P3.
+Promptsource contains a growing collection of prompts (which we call **P3**: **P**ublic **P**ool of **P**rompts). As of October 18th, there are ~2,000 prompts for 170+ datasets in [P3](https://huggingface.co/datasets/bigscience/P3).
 Feel free to use these prompts as they are (you'll find citation details [here](#Citation)).
 
 Note that a subset of the prompts is still *Work in Progress*. You'll find the list of the prompts which may be modified in the near future [here](WIP.md). Modifications will mostly consist of metadata collection, but in some cases will impact the templates themselves. To facilitate traceability, Promptsource is currently pinned at version `0.1.0`.
 
-Promptsource and P3 were originally developed as part of the paper [Multitask Prompted Training Enables Zero-Shot Task Generalization](https://arxiv.org/abs/2110.08207). We release T0* (pronounced "T Zero"), a series of models trained on P3. Checkpoints are available [here](https://huggingface.co/bigscience/T0pp). In particular, we recommend using T0++ (pronounced "T Zero Plus Plus") as it leads (on average) to the best performance on a variety of NLP tasks
+Promptsource and P3 were originally developed as part of the paper [Multitask Prompted Training Enables Zero-Shot Task Generalization](https://arxiv.org/abs/2110.08207). We release T0* (pronounced "T Zero"), a series of models trained on [P3](https://huggingface.co/datasets/bigscience/P3). Checkpoints are available [here](https://huggingface.co/bigscience/T0pp). In particular, we recommend using T0++ (pronounced "T Zero Plus Plus") as it leads (on average) to the best performance on a variety of NLP tasks.
+
+**You will find the official repository to reproduce the results of T Zero here: https://github.com/bigscience-workshop/t-zero.**
 
 ## Setup
 1. Download the repo
@@ -52,7 +54,7 @@ from promptsource.templates import DatasetTemplates
 ag_news_prompts = DatasetTemplates('ag_news')
 # Select a prompt by name
 prompt = ag_news_prompts["classify_question_first"]
-# Apply the prompt to the example
+# Apply the prompt to the example
 result = prompt.apply(example)
 print("INPUT: ", result[0])
 print("TARGET: ", result[1])
@@ -107,7 +109,7 @@ Promptsource was developed as part of the [BigScience project for open research
 If you want to cite P3 or Promptsource, you can use this bibtex:
 ```bibtex
 @misc{sanh2021multitask,
-      title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
+      title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
       author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush},
       year={2021},
       eprint={2110.08207},
