- Python 3.6.10
- Transformers: transformers 2.4.1, installed from source
- pytorch-lightning==0.7.1
- torch==1.4.0
- seqeval
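The versions above can be pinned in a fresh environment roughly as follows (a sketch, assuming a Python 3.6 virtualenv is already active and the transformers repository is cloned locally at `./transformers` for the source install):

```shell
# Install the pinned dependencies (package names as on PyPI).
pip install torch==1.4.0 pytorch-lightning==0.7.1 seqeval
# Source install of transformers 2.4.1 from a local clone (path is an assumption).
pip install -e ./transformers
```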
`./data/muc`, refer to `./data/muc/README.md` for details
- Evaluate on `preds_gtt.out`:
python eval.py --pred_file model_gtt/preds_gtt.out
- The encoder-decoder model (code is written based on Hugging Face transformers/examples/ner at commit 3ee431d)
- How to run: see the README in `./model_gtt`
If you use our materials, please cite:
@inproceedings{du2021gtt,
  title     = {Template Filling with Generative Transformers},
  author    = {Du, Xinya and Rush, Alexander M. and Cardie, Claire},
  booktitle = {Proceedings of the 2021 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies},
  year      = {2021}
}
My modifications consist of the following:
- Minor bug fixes and feature enhancements to the primary implementation files: `./model_gtt/run_pl_gtt.py` and `./model_gtt/transformer_base.py`
- New testing harness and data collection: `./model_gtt/run_pl_max.sh`, `./gather_scores.sh` and `clean.sh`
- Test scripts: `./model_gtt/experiment*.sh` and `./model_gtt/test*.sh`
- Jupyter notebook for producing graphs: `./graphs.ipynb`
I ran the experiment and test scripts, creating many model checkpoints derived from the various BERT models described above.
I gathered the results as follows:
bash ./gather_scores.sh > results.txt
cat results.txt | bash clean.sh > clean-results.csv
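The resulting CSV can then be loaded for plotting, as in `graphs.ipynb`. A minimal sketch using the standard `csv` module; the `model` and `f1` column names and the sample rows are illustrative assumptions, not actual output of `clean.sh`:

```python
import csv
import io

# Stand-in for clean-results.csv; header and values are assumed for illustration.
sample = io.StringIO(
    "model,f1\n"
    "bert-base-uncased,0.65\n"
    "bert-large-uncased,0.68\n"
)

# Parse rows into dicts keyed by the header, then pick the best checkpoint.
rows = list(csv.DictReader(sample))
best = max(rows, key=lambda r: float(r["f1"]))
print(best["model"])  # prints the model with the highest F1 in the file
```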