### Get CNN Data

To reproduce the authors' results on the CNN/Daily Mail dataset, you first need to download both the CNN and Daily Mail datasets [from Kyunghyun Cho's website](https://cs.nyu.edu/~kcho/DMQA/) (the links next to "Stories") into the same folder. Then uncompress the archives by running:

```bash
wget https://s3.amazonaws.com/datasets.huggingface.co/summarization/cnn_dm.tgz
tar -xzvf cnn_dm.tgz
```

This should create a directory called `cnn_dm/` containing files like `test.source`.

To use your own data, follow the same file format: each article to be summarized goes on its own line, as in the sketch below.
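
For instance, a minimal pair of files in this format (the file names and contents here are purely illustrative) could be created like so:

```bash
# Each line of the .source file holds one full article; the matching line
# of the .target file holds its reference summary. Contents are placeholders.
printf '%s\n' "Text of the first article ..." "Text of the second article ..." > test.source
printf '%s\n' "Summary of the first article." "Summary of the second article." > test.target
```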
### Evaluation

To create summaries for each article in the dataset, run:

```bash
python evaluate_cnn.py <path_to_test.source> test_generations.txt <model-name> --score_path rouge_scores.txt
```

The default batch size, 8, fits in 16GB GPU memory, but may need to be adjusted to fit your system.
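
As a concrete example, evaluating the pretrained `facebook/bart-large-cnn` checkpoint (the model name here is just one possibility) on the extracted test set looks like:

```bash
# Writes one generated summary per line to test_generations.txt and the
# ROUGE scores to rouge_scores.txt; the model name is illustrative.
python evaluate_cnn.py cnn_dm/test.source test_generations.txt facebook/bart-large-cnn --score_path rouge_scores.txt
```
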
### Training

Run or modify `finetune_bart.sh` or `finetune_t5.sh`.
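
Both scripts are meant to be opened and edited before launching; the flags they set internally vary between versions, so treat the following as a sketch of the workflow rather than an exact recipe:

```bash
# Adjust data paths, output directory, and batch size inside the script
# to match your setup, then launch it (the same applies to finetune_t5.sh).
bash finetune_bart.sh
```
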
### Stanford CoreNLP Setup

```bash
ptb_tokenize () {
    cat $1 | java edu.stanford.nlp.process.PTBTokenizer -ioFileList -preserveLines > $2
}

sudo apt install openjdk-8-jre-headless
sudo apt-get install ant
wget http://nlp.stanford.edu/software/stanford-corenlp-full-2018-10-05.zip
unzip stanford-corenlp-full-2018-10-05.zip
cd stanford-corenlp-full-2018-10-05
export CLASSPATH=stanford-corenlp-3.9.2.jar:stanford-corenlp-3.9.2-models.jar
```

Then run `ptb_tokenize` on `test.target` and your generated hypotheses.
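
For example, using the files from the steps above (the output names are illustrative):

```bash
# Tokenize the references and the generated summaries identically so
# that ROUGE compares like with like.
ptb_tokenize cnn_dm/test.target test.target.tokenized
ptb_tokenize test_generations.txt test_generations.tokenized
```
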
### Rouge Setup

Install `files2rouge` following the instructions [here](https://github.com/pltrdy/files2rouge).

I also needed to run `sudo apt-get install libxml-parser-perl`.

```python
from files2rouge import files2rouge
from files2rouge import settings

files2rouge.run(<path_to_tokenized_hypo>,
                <path_to_tokenized_target>,
                saveto='rouge_output.txt')
```
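
Equivalently, assuming the `files2rouge` command-line entry point installed by the package, you can score the tokenized files from the CoreNLP step directly from the shell (file names are illustrative):

```bash
# Hypotheses first, references second, matching the Python call above.
files2rouge test_generations.tokenized test.target.tokenized
```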