
Template Filling with Generative Transformers (NAACL 2021 short)

paper link

Dependencies

  • Python 3.6.10
  • transformers==2.4.1 (installed from source)
  • pytorch-lightning==0.7.1
  • torch==1.4.0
  • seqeval
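The dependencies above can be installed roughly as follows. This is a sketch, not the authors' documented setup: the conda environment name and the source-checkout steps for transformers v2.4.1 are assumptions based on the versions listed.

```shell
# Hypothetical environment setup for the pinned versions above
conda create -n gtt python=3.6.10
conda activate gtt

# Pinned releases from PyPI
pip install torch==1.4.0 pytorch-lightning==0.7.1 seqeval

# transformers 2.4.1 installed from source, as noted in the list
git clone https://github.com/huggingface/transformers.git
cd transformers
git checkout v2.4.1
pip install -e .
```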

Dataset

  • ./data/muc; see ./data/muc/README.md for details

Eval

  • Evaluate predictions in preds_gtt.out:

    python eval.py --pred_file model_gtt/preds_gtt.out

GTT model

Citation

If you use our materials, please cite:

@inproceedings{du2021gtt,
  title={Template Filling with Generative Transformers},
  author={Du, Xinya and Rush, Alexander M. and Cardie, Claire},
  booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
  year={2021}
}

Max Ziff's modifications to the original repo

My modifications consist of the following:

  • Minor bug-fixes and feature enhancements to the primary implementation files: ./model_gtt/run_pl_gtt.py and ./model_gtt/transformer_base.py
  • New testing harness and data collection: ./model_gtt/run_pl_max.sh, ./gather_scores.sh, and ./clean.sh
  • Test scripts: ./model_gtt/experiment*.sh and ./model_gtt/test*.sh
  • Jupyter notebook for producing graphs: ./graphs.ipynb

I ran the experiment and test scripts, creating many model checkpoints derived from various BERT models.

I gathered the results as follows:

bash ./gather_scores.sh > results.txt
cat results.txt | bash clean.sh > clean-results.csv
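The resulting clean-results.csv can then be loaded for plotting (as in graphs.ipynb). A minimal sketch with Python's csv module, using a hypothetical column layout — the actual columns produced by clean.sh may differ:

```python
import csv
import io

# Hypothetical sample rows standing in for clean-results.csv;
# the real file's columns and values are not shown in this README.
sample = """model,precision,recall,f1
bert-base-uncased,0.55,0.48,0.51
bert-large-uncased,0.58,0.50,0.54
"""

# Parse each row into a dict keyed by the header fields
rows = list(csv.DictReader(io.StringIO(sample)))

# Pick the checkpoint with the highest F1 score
best = max(rows, key=lambda r: float(r["f1"]))
print(best["model"], best["f1"])
```

To read the real file, replace the io.StringIO wrapper with `open("clean-results.csv")`.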
