Pretrained FEVER models are available here.
However, if you are interested in evaluating or retraining, follow the steps below.
To run the evaluation, first download the evaluation sets to ../../eval/ and then run run_eval.sh.
# download evaluation sets to ../../eval/
cd code/bert-concat
bash run_eval.sh

Note: Due to the randomized initialization of the abstract entity markers' embeddings, results on the Anon. evaluation set may vary slightly from those reported in our paper.
We use a simple BERT-based classifier on the claim and extracted evidence sentences. For the evidence extraction, we use the state-of-the-art retrieval results from KGAT. First download the train and dev files from here.
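The claim and its retrieved evidence sentences are fed to BERT as a single concatenated sequence. A minimal sketch of that concatenation step is shown below; the function name, marker tokens, and evidence cap are illustrative assumptions, and the actual preprocessing in train.py may differ.

```python
# Hypothetical sketch: concatenating a claim with its retrieved evidence
# sentences into one BERT-style [CLS]/[SEP] input sequence.
# build_bert_input and max_evidence are illustrative, not the repo's API.

def build_bert_input(claim, evidence_sentences, max_evidence=5):
    """Join a claim and up to max_evidence evidence sentences into one sequence."""
    evidence = " ".join(evidence_sentences[:max_evidence])
    return f"[CLS] {claim} [SEP] {evidence} [SEP]"

claim = "The Eiffel Tower is in Berlin."
evidence = [
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
]
print(build_bert_input(claim, evidence))
```

In practice a tokenizer (e.g. from the Hugging Face `transformers` library) would insert these special tokens and handle truncation itself; the sketch only shows the sequence layout the classifier sees.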
# Original training strategy
# as downloaded above
export INPUT_PATH='fever_train'
# add path to save checkpoints
export MODEL_PATH=''
python train.py train \
-input ${INPUT_PATH} \
-bs 16 \
-save-dir ${MODEL_PATH} \
-eval-steps 1000
# CWA/Skip-fact training strategy
# provide the path to the relevant RuleTaker fine-tuned BERT checkpoint
export CHECKPOINT_PATH=''
python train.py train \
-input ${INPUT_PATH} \
-checkpoint ${CHECKPOINT_PATH} \
-bs 16 \
-save-dir ${MODEL_PATH} \
-eval-steps 1000
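The difference between the two strategies above is the initialization: the CWA/Skip-fact run warm-starts from a RuleTaker fine-tuned checkpoint instead of the base pretrained weights. A minimal sketch of that resolution logic is below; `resolve_init_weights` and the default model name are assumptions for illustration, not the repo's actual code.

```python
# Hypothetical sketch of warm-start resolution: if a fine-tuned checkpoint
# path is supplied (as via -checkpoint above), initialize from it;
# otherwise fall back to the base pretrained model.
import os

def resolve_init_weights(checkpoint_path, base_model="bert-base-uncased"):
    """Return the weights to initialize training from."""
    if checkpoint_path and os.path.exists(checkpoint_path):
        return checkpoint_path  # RuleTaker fine-tuned checkpoint
    return base_model  # original training strategy: start from base BERT
```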