### Get Preprocessed CNN Data

To reproduce the authors' results on the CNN/Daily Mail dataset, you need the CNN and Daily Mail stories originally distributed on Kyunghyun Cho's website (the links next to "Stories"). A preprocessed copy is hosted on the Hugging Face S3 bucket; download and uncompress it by running:

```bash
wget https://s3.amazonaws.com/datasets.huggingface.co/summarization/cnn_dm.tgz
tar -xzvf cnn_dm.tgz
```

This should create a directory called `cnn_dm/` containing files like `test.source`. To use your own data, copy that file format: each article to be summarized is on its own line.
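
As a quick sanity check of the format (in this dataset, line *i* of `test.target` holds the reference summary for line *i* of `test.source`), you can peek at the first pair. A minimal sketch, assuming the `cnn_dm/` paths created above:

```python
# Print the first article/summary pair to confirm the one-example-per-line format.
with open("cnn_dm/test.source") as src, open("cnn_dm/test.target") as tgt:
    article = next(src).strip()
    summary = next(tgt).strip()
print("ARTICLE:", article[:200], "...")
print("SUMMARY:", summary)
```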

### Evaluation

To create summaries for each article in the dataset, run:

```bash
python evaluate_cnn.py <path_to_test.source> cnn_test_summaries.txt
```

The default batch size of 8 fits in 16GB of GPU memory, but you may need to adjust it for your system.
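
Under the hood, the script batches the source articles through a pretrained BART model's beam-search generation. Below is a minimal sketch of that loop, not the script itself; the checkpoint name and generation settings are illustrative assumptions, so check `evaluate_cnn.py` for the values it actually uses:

```python
# Minimal sketch of batched summary generation with BART.
# num_beams/max_length are assumptions, not evaluate_cnn.py's actual settings.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn").to(device)

articles = [line.strip() for line in open("cnn_dm/test.source")][:8]  # one batch of 8
batch = tokenizer(articles, truncation=True, padding=True, return_tensors="pt").to(device)
summary_ids = model.generate(
    batch["input_ids"],
    attention_mask=batch["attention_mask"],
    num_beams=4,       # assumed beam width
    max_length=142,    # assumed cap on summary length
    early_stopping=True,
)
for summary in tokenizer.batch_decode(summary_ids, skip_special_tokens=True):
    print(summary)
```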

### Training

Run or modify `run_train.sh`, which invokes `finetune.py` with a default set of hyperparameters.
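
For orientation, the objective being optimized is ordinary teacher-forced cross-entropy on the reference summaries. The sketch below shows one gradient step on a toy batch; it is an illustration under assumed hyperparameters (checkpoint, learning rate), not `finetune.py`'s actual training loop:

```python
# One gradient step of seq2seq fine-tuning: cross-entropy on reference
# summaries. Checkpoint and learning rate are assumptions for illustration.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

articles = [l.strip() for l in open("cnn_dm/train.source")][:2]   # toy batch
summaries = [l.strip() for l in open("cnn_dm/train.target")][:2]

inputs = tokenizer(articles, truncation=True, padding=True, return_tensors="pt")
labels = tokenizer(summaries, truncation=True, padding=True, return_tensors="pt").input_ids
labels[labels == tokenizer.pad_token_id] = -100  # don't score padding tokens

loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
```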

### Where is the code?

The core model is in `src/transformers/modeling_bart.py`. This directory only contains examples.

### (WIP) Rouge Scores

#### Stanford CoreNLP Setup

```bash
# Helper: PTB-tokenize the file $1 and write the result to $2.
ptb_tokenize () {
    cat $1 | java edu.stanford.nlp.process.PTBTokenizer -ioFileList -preserveLines > $2
}
```

```bash
# Install Java, download Stanford CoreNLP, and put its jars on the classpath
# so the tokenizer class above can be found.
sudo apt install openjdk-8-jre-headless
sudo apt-get install ant
wget http://nlp.stanford.edu/software/stanford-corenlp-full-2018-10-05.zip
unzip stanford-corenlp-full-2018-10-05.zip
cd stanford-corenlp-full-2018-10-05
export CLASSPATH=stanford-corenlp-3.9.2.jar:stanford-corenlp-3.9.2-models.jar
```

Then run `ptb_tokenize` on `test.target` and on your generated hypotheses, e.g. `ptb_tokenize cnn_test_summaries.txt cnn_test_summaries.tokenized` (the output filename is your choice).

#### Rouge Setup

Install `files2rouge` following the instructions at https://github.com/pltrdy/files2rouge. I also needed to run `sudo apt-get install libxml-parser-perl`.

```python
from files2rouge import files2rouge
from files2rouge import settings

# Replace the angle-bracket placeholders with the tokenized files produced above.
files2rouge.run(<path_to_tokenized_hypo>,
                <path_to_tokenized_target>,
                saveto='rouge_output.txt')
```
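
If everything is installed correctly, this should print ROUGE-1, ROUGE-2, and ROUGE-L scores for the hypotheses against the references and save the same report to `rouge_output.txt`.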