split seq2seq script into summarization & translation (#10611)
* split seq2seq script, update docs
* needless diff
* fix readme
* remove test diff
* s/summarization/translation

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* cr
* fix arguments & better mbart/t5 refs
* copyright

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* reword readme

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* s/summarization/translation
* short script names
* fix tests
* fix isort, include mbart doc
* delete old script, update tests
* automate source prefix
* automate source prefix for translation
* s/translation/trans

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* fix script name (short version)
* typos

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* exact parameter

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* remove superfluous source_prefix calls in docs
* rename scripts & warn for source prefix
* black
* flake8

Co-authored-by: theo <theo@matussie.re>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
This commit is contained in:
parent 505494a86f
commit 6f840990a7
@@ -168,13 +168,13 @@ Here is an example of how this can be used on a filesystem that is shared between

On the instance with normal network access, run your program, which will download and cache models (and optionally datasets if you use 🤗 Datasets). For example:

```
-python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
+python examples/seq2seq/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

and then, with the same filesystem, you can run the same program on a firewalled instance:

```
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
-python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
+python examples/seq2seq/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

and it should succeed without hanging while waiting to time out.
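If that instance is always offline, one way to avoid prefixing every command is to export the variables once, e.g. in the shell profile — a minimal sketch, not part of the documented workflow above:

```
# persist offline mode for 🤗 Transformers / 🤗 Datasets (sketch; adjust to your shell)
export HF_DATASETS_OFFLINE=1
export TRANSFORMERS_OFFLINE=1
```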
@@ -279,16 +279,16 @@ To deploy this feature:

and make sure you have added the distributed launcher ``-m torch.distributed.launch
--nproc_per_node=NUMBER_OF_GPUS_YOU_HAVE`` if you haven't been using it already.

-For example, here is how you could use it for ``run_seq2seq.py`` with 2 GPUs:
+For example, here is how you could use it for ``run_translation.py`` with 2 GPUs:

.. code-block:: bash

-    python -m torch.distributed.launch --nproc_per_node=2 examples/seq2seq/run_seq2seq.py \
+    python -m torch.distributed.launch --nproc_per_node=2 examples/seq2seq/run_translation.py \
    --model_name_or_path t5-small --per_device_train_batch_size 1 \
    --output_dir output_dir --overwrite_output_dir \
    --do_train --max_train_samples 500 --num_train_epochs 1 \
    --dataset_name wmt16 --dataset_config "ro-en" \
    --task translation_en_to_ro --source_prefix "translate English to Romanian: " \
    --source_lang en --target_lang ro \
    --fp16 --sharded_ddp simple

Notes:
@@ -304,16 +304,16 @@ Notes:

to the command line arguments, and make sure you have added the distributed launcher ``-m torch.distributed.launch
--nproc_per_node=NUMBER_OF_GPUS_YOU_HAVE`` if you haven't been using it already.

-For example, here is how you could use it for ``run_seq2seq.py`` with 2 GPUs:
+For example, here is how you could use it for ``run_translation.py`` with 2 GPUs:

.. code-block:: bash

-    python -m torch.distributed.launch --nproc_per_node=2 examples/seq2seq/run_seq2seq.py \
+    python -m torch.distributed.launch --nproc_per_node=2 examples/seq2seq/run_translation.py \
    --model_name_or_path t5-small --per_device_train_batch_size 1 \
    --output_dir output_dir --overwrite_output_dir \
    --do_train --max_train_samples 500 --num_train_epochs 1 \
    --dataset_name wmt16 --dataset_config "ro-en" \
    --task translation_en_to_ro --source_prefix "translate English to Romanian: " \
    --source_lang en --target_lang ro \
    --fp16 --sharded_ddp zero_dp_2

:obj:`zero_dp_2` is an optimized version of the simple wrapper, while :obj:`zero_dp_3` fully shards model weights,
@@ -333,7 +333,7 @@ Notes:

Known caveats:

-- This feature is incompatible with :obj:`--predict_with_generate` in the `run_seq2seq.py` script.
+- This feature is incompatible with :obj:`--predict_with_generate` in the `run_translation.py` script.
- Using :obj:`--sharded_ddp zero_dp_3` requires wrapping each layer of the model in the special container
  :obj:`FullyShardedDataParallelism` of fairscale. It should be used with the option :obj:`auto_wrap` if you are not
  doing this yourself: :obj:`--sharded_ddp "zero_dp_3 auto_wrap"`.
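For illustration, a minimal sketch of the :obj:`auto_wrap` variant mentioned in the last caveat, reusing the WMT16 command from above (all flags as documented; only the :obj:`--sharded_ddp` value changes):

.. code-block:: bash

    python -m torch.distributed.launch --nproc_per_node=2 examples/seq2seq/run_translation.py \
    --model_name_or_path t5-small --per_device_train_batch_size 1 \
    --output_dir output_dir --overwrite_output_dir \
    --do_train --max_train_samples 500 --num_train_epochs 1 \
    --dataset_name wmt16 --dataset_config "ro-en" \
    --task translation_en_to_ro --source_prefix "translate English to Romanian: " \
    --source_lang en --target_lang ro \
    --fp16 --sharded_ddp "zero_dp_3 auto_wrap"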
@@ -402,17 +402,17 @@ In fact, you can continue using ``-m torch.distributed.launch`` with DeepSpeed a

the ``deepspeed`` launcher. But since in the DeepSpeed documentation it'll be used everywhere, for consistency we will
use it here as well.

-Here is an example of running ``run_seq2seq.py`` under DeepSpeed deploying all available GPUs:
+Here is an example of running ``run_translation.py`` under DeepSpeed deploying all available GPUs:

.. code-block:: bash

-    deepspeed examples/seq2seq/run_seq2seq.py \
+    deepspeed examples/seq2seq/run_translation.py \
    --deepspeed examples/tests/deepspeed/ds_config.json \
    --model_name_or_path t5-small --per_device_train_batch_size 1 \
    --output_dir output_dir --overwrite_output_dir --fp16 \
    --do_train --max_train_samples 500 --num_train_epochs 1 \
    --dataset_name wmt16 --dataset_config "ro-en" \
    --task translation_en_to_ro --source_prefix "translate English to Romanian: " \
    --source_lang en --target_lang ro

Note that in the DeepSpeed documentation you are likely to see ``--deepspeed --deepspeed_config ds_config.json`` - i.e.
@@ -431,13 +431,13 @@ To deploy DeepSpeed with one GPU adjust the :class:`~transformers.Trainer` command line arguments as follows:

.. code-block:: bash

-    deepspeed --num_gpus=1 examples/seq2seq/run_seq2seq.py \
+    deepspeed --num_gpus=1 examples/seq2seq/run_translation.py \
    --deepspeed examples/tests/deepspeed/ds_config.json \
    --model_name_or_path t5-small --per_device_train_batch_size 1 \
    --output_dir output_dir --overwrite_output_dir --fp16 \
    --do_train --max_train_samples 500 --num_train_epochs 1 \
    --dataset_name wmt16 --dataset_config "ro-en" \
    --task translation_en_to_ro --source_prefix "translate English to Romanian: " \
    --source_lang en --target_lang ro

This is almost the same as with multiple GPUs, but here we tell DeepSpeed explicitly to use just one GPU. By default,
DeepSpeed deploys all GPUs it can see. If you have only 1 GPU to start with, then you don't need this argument. The
@@ -483,7 +483,7 @@ Notes:

.. code-block:: bash

-    deepspeed --include localhost:1 examples/seq2seq/run_seq2seq.py ...
+    deepspeed --include localhost:1 examples/seq2seq/run_translation.py ...

In this example, we tell DeepSpeed to use GPU 1 (the second GPU).
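The same ``host:gpu`` list should also accept several GPUs — a hedged sketch, assuming DeepSpeed's documented comma-separated ``--include`` format:

.. code-block:: bash

    deepspeed --include localhost:0,2 examples/seq2seq/run_translation.py ...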
@@ -574,7 +574,7 @@ with:

.. code-block::

-    !deepspeed examples/seq2seq/run_seq2seq.py ...
+    !deepspeed examples/seq2seq/run_translation.py ...

or with bash magic, where you can write multi-line code for the shell to run:

@@ -583,7 +583,7 @@ or with bash magic, where you can write multi-line code for the shell to run:

    %%bash

    cd /somewhere
-    deepspeed examples/seq2seq/run_seq2seq.py ...
+    deepspeed examples/seq2seq/run_translation.py ...
@@ -742,8 +742,8 @@ Summarization

-----------------------------------------------------------------------------------------------------------------------

Summarization is the task of summarizing a document or an article into a shorter text. If you would like to fine-tune a
-model on a summarization task, you may leverage the `run_seq2seq.py
-<https://github.com/huggingface/transformers/tree/master/examples/seq2seq/run_seq2seq.py>`__ script.
+model on a summarization task, you may leverage the `run_summarization.py
+<https://github.com/huggingface/transformers/tree/master/examples/seq2seq/run_summarization.py>`__ script.

An example of a summarization dataset is the CNN / Daily Mail dataset, which consists of long news articles and was
created for the task of summarization. If you would like to fine-tune a model on a summarization task, various
@@ -822,8 +822,8 @@ Translation

-----------------------------------------------------------------------------------------------------------------------

Translation is the task of translating a text from one language to another. If you would like to fine-tune a model on a
-translation task, you may leverage the `run_seq2seq.py
-<https://github.com/huggingface/transformers/tree/master/examples/seq2seq/run_seq2seq.py>`__ script.
+translation task, you may leverage the `run_translation.py
+<https://github.com/huggingface/transformers/tree/master/examples/seq2seq/run_translation.py>`__ script.

An example of a translation dataset is the WMT English to German dataset, which has sentences in English as the input
data and the corresponding sentences in German as the target data. If you would like to fine-tune a model on a
@@ -30,7 +30,7 @@ For the old `finetune_trainer.py` and related utils, see [`examples/legacy/seq2seq`]

- `FSMTForConditionalGeneration` (translation only)
- `T5ForConditionalGeneration`

-`run_seq2seq.py` is a lightweight example of how to download and preprocess a dataset from the [🤗 Datasets](https://github.com/huggingface/datasets) library or use your own files (jsonlines or csv), then fine-tune one of the architectures above on it.
+`run_summarization.py` and `run_translation.py` are lightweight examples of how to download and preprocess a dataset from the [🤗 Datasets](https://github.com/huggingface/datasets) library or use your own files (jsonlines or csv), then fine-tune one of the architectures above on it.

For custom datasets in `jsonlines` format please see: https://huggingface.co/docs/datasets/loading_datasets.html#json-files;
you will also find examples of these below.
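As a reference for such files, a single summarization record on one jsonline could look like this (the `text`/`summary` keys here are purely illustrative — point `--text_column`/`--summary_column` at whatever column names your file actually uses):

```json
{"text": "The full article or document body goes here ...", "summary": "A short reference summary goes here."}
```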
@@ -39,11 +39,10 @@ and you will also find examples of these below.

Here is an example on a summarization task:

```bash
-python examples/seq2seq/run_seq2seq.py \
+python examples/seq2seq/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
-    --task summarization \
    --dataset_name xsum \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
@@ -60,11 +59,10 @@ And here is how you would use it on your own files, after adjusting the values for the arguments

`--train_file`, `--validation_file`, `--text_column` and `--summary_column` to match your setup:

```bash
-python examples/seq2seq/run_seq2seq.py \
+python examples/seq2seq/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
-    --task summarization \
    --train_file path_to_csv_or_jsonlines_file \
    --validation_file path_to_csv_or_jsonlines_file \
    --output_dir /tmp/tst-summarization \
@@ -140,14 +138,14 @@ And as with the CSV files, you can specify which values to select from the file,

Here is an example of translation fine-tuning with T5:

```bash
-python examples/seq2seq/run_seq2seq.py \
+python examples/seq2seq/run_translation.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --task translation_en_to_ro \
    --source_lang en \
    --target_lang ro \
    --dataset_name wmt16 \
    --dataset_config_name ro-en \
    --source_prefix "translate English to Romanian: " \
    --output_dir /tmp/tst-translation \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
@@ -160,11 +158,10 @@ python examples/seq2seq/run_seq2seq.py \

And the same with MBart:

```bash
-python examples/seq2seq/run_seq2seq.py \
+python examples/seq2seq/run_translation.py \
    --model_name_or_path facebook/mbart-large-en-ro \
    --do_train \
    --do_eval \
-    --task translation_en_to_ro \
    --dataset_name wmt16 \
    --dataset_config_name ro-en \
    --source_lang en_XX \
@@ -180,18 +177,8 @@ python examples/seq2seq/run_seq2seq.py \

Note that, depending on the model used, additional language-specific command-line arguments are sometimes required. Specifically:

-* MBart models require:
-  ```
-  --source_lang en_XX \
-  --target_lang ro_RO \
-  ```
-* T5 requires:
-
-  ```
-  --source_prefix "translate English to Romanian: "
-  ```
-
-* yet, other models, require neither.
+* MBart models require different `--{source,target}_lang` values, e.g. in place of `en` it expects `en_XX`, and for `ro` it expects `ro_RO`. The full MBart specification for language codes can be looked up [here](https://huggingface.co/facebook/mbart-large-cc25).
+* T5 models can use a `--source_prefix` argument to override the otherwise automated prefix of the form `translate {source_lang} to {target_lang}` for `run_translation.py` and `summarize: ` for `run_summarization.py`.

Also, if you switch to a different language pair, make sure to adjust the source and target values in all command-line arguments.
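For the file-based example below, the translation script expects each jsonline to carry a single `translation` field with one key per language (see the comments in `run_translation.py` further down this diff); an illustrative record — the sentences are made up:

```json
{"translation": {"en": "The weather is nice today.", "ro": "Vremea este frumoasă astăzi."}}
```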
@@ -199,14 +186,14 @@ And here is how you would use the translation fine-tuning on your own files, after adjusting the

values for the arguments `--train_file`, `--validation_file` to match your setup:

```bash
-python examples/seq2seq/run_seq2seq.py \
+python examples/seq2seq/run_translation.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --task translation_en_to_ro \
    --source_lang en \
    --target_lang ro \
    --dataset_name wmt16 \
    --dataset_config_name ro-en \
    --source_prefix "translate English to Romanian: " \
    --train_file path_to_jsonlines_file \
    --validation_file path_to_jsonlines_file \
    --output_dir /tmp/tst-translation \
@@ -229,13 +216,13 @@ Here the languages are Romanian (`ro`) and English (`en`).

If you want to use a pre-processed dataset that leads to high BLEU scores for the `en-de` language pair, you can use `--dataset_name wmt14-en-de-pre-processed`, as follows:

```bash
-python examples/seq2seq/run_seq2seq.py \
+python examples/seq2seq/run_translation.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --task translation_en_to_de \
    --source_lang en \
    --target_lang de \
    --dataset_name wmt14-en-de-pre-processed \
    --source_prefix "translate English to German: " \
    --output_dir /tmp/tst-translation \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
@@ -1,6 +1,6 @@

#!/usr/bin/env python
# coding=utf-8
-# Copyright The HuggingFace Team and The HuggingFace Inc. team. All rights reserved.
+# Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -20,7 +20,6 @@ Fine-tuning the library models for sequence to sequence.

import logging
import os
-import re
import sys
from dataclasses import dataclass, field
from typing import Optional
@@ -37,8 +36,6 @@ from transformers import (

    AutoTokenizer,
    DataCollatorForSeq2Seq,
    HfArgumentParser,
-    MBartTokenizer,
-    MBartTokenizerFast,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    default_data_collator,
@@ -103,13 +100,6 @@ class DataTrainingArguments:

    Arguments pertaining to what data we are going to input our model for training and eval.
    """

-    task: str = field(
-        default="summarization",
-        metadata={
-            "help": "The name of the task, should be summarization (or summarization_{dataset} for evaluating "
-            "pegasus) or translation (or translation_{xx}_to_{yy})."
-        },
-    )
    dataset_name: Optional[str] = field(
        default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
    )
@@ -130,15 +120,14 @@ class DataTrainingArguments:

    validation_file: Optional[str] = field(
        default=None,
        metadata={
-            "help": "An optional input evaluation data file to evaluate the metrics (rouge/sacreblue) on "
+            "help": "An optional input evaluation data file to evaluate the metrics (rouge) on "
            "(a jsonlines or csv file)."
        },
    )
    test_file: Optional[str] = field(
        default=None,
        metadata={
-            "help": "An optional input test data file to evaluate the metrics (rouge/sacreblue) on "
-            "(a jsonlines or csv file)."
+            "help": "An optional input test data file to evaluate the metrics (rouge) on " "(a jsonlines or csv file)."
        },
    )
    overwrite_cache: bool = field(
@@ -200,8 +189,6 @@ class DataTrainingArguments:

            "value if set."
        },
    )
-    source_lang: Optional[str] = field(default=None, metadata={"help": "Source language id for translation."})
-    target_lang: Optional[str] = field(default=None, metadata={"help": "Target language id for translation."})
    num_beams: Optional[int] = field(
        default=None,
        metadata={
@@ -229,10 +216,6 @@ class DataTrainingArguments:

        if self.validation_file is not None:
            extension = self.validation_file.split(".")[-1]
            assert extension in ["csv", "json"], "`validation_file` should be a csv or a json file."
-        if not self.task.startswith("summarization") and not self.task.startswith("translation"):
-            raise ValueError(
-                "`task` should be summarization, summarization_{dataset}, translation or translation_{xx}_to_{yy}."
-            )
        if self.val_max_target_length is None:
            self.val_max_target_length = self.max_target_length
@@ -265,6 +248,18 @@ def main():

    else:
        model_args, data_args, training_args = parser.parse_args_into_dataclasses()

+    if data_args.source_prefix is None and model_args.model_name_or_path in [
+        "t5-small",
+        "t5-base",
+        "t5-large",
+        "t5-3b",
+        "t5-11b",
+    ]:
+        logger.warning(
+            "You're running a t5 model but didn't provide a source prefix, which is expected, e.g. with "
+            "`--source_prefix 'summarize: '`"
+        )

    # Detecting last checkpoint.
    last_checkpoint = None
    if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
@@ -305,11 +300,8 @@ def main():

    # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
    # (the dataset will be downloaded automatically from the datasets Hub).
    #
-    # For CSV/JSON files in the summarization task, this script will use the first column for the full texts and the
-    # second column for the summaries (unless you specify column names for this with the `text_column` and
-    # `summary_column` arguments).
-    # For translation, only JSON files are supported, with one field named "translation" containing two keys for the
-    # source and target languages (unless you adapt what follows).
+    # For CSV/JSON files this script will use the first column for the full texts and the second column for the
+    # summaries (unless you specify column names for this with the `text_column` and `summary_column` arguments).
    #
    # In distributed training, the load_dataset function guarantees that only one local process can concurrently
    # download the dataset.
@@ -358,16 +350,6 @@ def main():

        use_auth_token=True if model_args.use_auth_token else None,
    )

-    # Set decoder_start_token_id
-    if model.config.decoder_start_token_id is None and isinstance(tokenizer, (MBartTokenizer, MBartTokenizerFast)):
-        assert (
-            data_args.target_lang is not None and data_args.source_lang is not None
-        ), "mBart requires --target_lang and --source_lang"
-        if isinstance(tokenizer, MBartTokenizer):
-            model.config.decoder_start_token_id = tokenizer.lang_code_to_id[data_args.target_lang]
-        else:
-            model.config.decoder_start_token_id = tokenizer.convert_tokens_to_ids(data_args.target_lang)
-
    if model.config.decoder_start_token_id is None:
        raise ValueError("Make sure that `config.decoder_start_token_id` is correctly defined")
@@ -385,55 +367,24 @@ def main():

        logger.info("There is nothing to do. Please pass `do_train`, `do_eval` and/or `do_predict`.")
        return

-    # For translation we set the codes of our source and target languages (only useful for mBART, the others will
-    # ignore those attributes).
-    if data_args.task.startswith("translation") or isinstance(tokenizer, (MBartTokenizer, MBartTokenizerFast)):
-        if data_args.source_lang is not None:
-            tokenizer.src_lang = data_args.source_lang
-        if data_args.target_lang is not None:
-            tokenizer.tgt_lang = data_args.target_lang
-
-    # To serialize preprocess_function below, each of those four variables needs to be defined (even if we won't use
-    # them all).
-    source_lang, target_lang, text_column, summary_column = None, None, None, None
-
-    if data_args.task.startswith("summarization"):
-        # Get the column names for input/target.
-        dataset_columns = summarization_name_mapping.get(data_args.dataset_name, None)
-        if data_args.text_column is None:
-            text_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
-        else:
-            text_column = data_args.text_column
-        if text_column not in column_names:
-            raise ValueError(
-                f"--text_column' value '{data_args.text_column}' needs to be one of: {', '.join(column_names)}"
-            )
-        if data_args.summary_column is None:
-            summary_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
-        else:
-            summary_column = data_args.summary_column
-        if summary_column not in column_names:
-            raise ValueError(
-                f"--summary_column' value '{data_args.summary_column}' needs to be one of: {', '.join(column_names)}"
-            )
+    # Get the column names for input/target.
+    dataset_columns = summarization_name_mapping.get(data_args.dataset_name, None)
+    if data_args.text_column is None:
+        text_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
+    else:
-        # Get the language codes for input/target.
-        lang_search = re.match("translation_([a-z]+)_to_([a-z]+)", data_args.task)
-        if data_args.source_lang is not None:
-            source_lang = data_args.source_lang.split("_")[0]
-        else:
-            assert (
-                lang_search is not None
-            ), "Provide a source language via --source_lang or rename your task 'translation_xx_to_yy'."
-            source_lang = lang_search.groups()[0]
-
-        if data_args.target_lang is not None:
-            target_lang = data_args.target_lang.split("_")[0]
-        else:
-            assert (
-                lang_search is not None
-            ), "Provide a target language via --target_lang or rename your task 'translation_xx_to_yy'."
-            target_lang = lang_search.groups()[1]
+        text_column = data_args.text_column
+    if text_column not in column_names:
+        raise ValueError(
+            f"--text_column' value '{data_args.text_column}' needs to be one of: {', '.join(column_names)}"
+        )
+    if data_args.summary_column is None:
+        summary_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
+    else:
+        summary_column = data_args.summary_column
+    if summary_column not in column_names:
+        raise ValueError(
+            f"--summary_column' value '{data_args.summary_column}' needs to be one of: {', '.join(column_names)}"
+        )

    # Temporarily set max_target_length for training.
    max_target_length = data_args.max_target_length
@@ -446,12 +397,8 @@ def main():

    )

    def preprocess_function(examples):
-        if data_args.task.startswith("translation"):
-            inputs = [ex[source_lang] for ex in examples["translation"]]
-            targets = [ex[target_lang] for ex in examples["translation"]]
-        else:
-            inputs = examples[text_column]
-            targets = examples[summary_column]
+        inputs = examples[text_column]
+        targets = examples[summary_column]
        inputs = [prefix + inp for inp in inputs]
        model_inputs = tokenizer(inputs, max_length=data_args.max_source_length, padding=padding, truncation=True)
@@ -526,19 +473,15 @@ def main():

    )

    # Metric
-    metric_name = "rouge" if data_args.task.startswith("summarization") else "sacrebleu"
-    metric = load_metric(metric_name)
+    metric = load_metric("rouge")

    def postprocess_text(preds, labels):
        preds = [pred.strip() for pred in preds]
        labels = [label.strip() for label in labels]

        # rougeLSum expects newline after each sentence
-        if metric_name == "rouge":
-            preds = ["\n".join(nltk.sent_tokenize(pred)) for pred in preds]
-            labels = ["\n".join(nltk.sent_tokenize(label)) for label in labels]
-        else:  # sacrebleu
-            labels = [[label] for label in labels]
+        preds = ["\n".join(nltk.sent_tokenize(pred)) for pred in preds]
+        labels = ["\n".join(nltk.sent_tokenize(label)) for label in labels]

        return preds, labels
@@ -555,13 +498,9 @@ def main():

        # Some simple post-processing
        decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)

-        if metric_name == "rouge":
-            result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
-            # Extract a few results from ROUGE
-            result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
-        else:
-            result = metric.compute(predictions=decoded_preds, references=decoded_labels)
-            result = {"bleu": result["score"]}
+        result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
+        # Extract a few results from ROUGE
+        result = {key: value.mid.fmeasure * 100 for key, value in result.items()}

        prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
        result["gen_len"] = np.mean(prediction_lens)
@@ -601,6 +540,7 @@ def main():

        trainer.save_state()

    # Evaluation
+    results = {}
    if training_args.do_eval:
        logger.info("*** Evaluate ***")
@@ -613,7 +553,6 @@ def main():

        trainer.log_metrics("eval", metrics)
        trainer.save_metrics("eval", metrics)

-    # predict
    if training_args.do_predict:
        logger.info("*** Test ***")
@@ -640,6 +579,8 @@ def main():

            with open(output_test_preds_file, "w") as writer:
                writer.write("\n".join(test_preds))

+    return results
+

def _mp_fn(index):
    # For xla_spawn (TPUs)
examples/seq2seq/run_translation.py (new executable file, 558 lines)

@@ -0,0 +1,558 @@
#!/usr/bin/env python
# coding=utf-8
# Copyright The HuggingFace Team and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Fine-tuning the library models for sequence to sequence.
"""
# You can also adapt this script on your own sequence to sequence task. Pointers for this are left as comments.

import logging
import os
import sys
from dataclasses import dataclass, field
from typing import Optional

import numpy as np
from datasets import load_dataset, load_metric

import transformers
from transformers import (
    AutoConfig,
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    HfArgumentParser,
    MBartTokenizer,
    MBartTokenizerFast,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    default_data_collator,
    set_seed,
)
from transformers.trainer_utils import get_last_checkpoint, is_main_process


logger = logging.getLogger(__name__)


@dataclass
class ModelArguments:
    """
    Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
    """

    model_name_or_path: str = field(
        metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
    )
    config_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
    )
    tokenizer_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
    )
    cache_dir: Optional[str] = field(
        default=None,
        metadata={"help": "Where to store the pretrained models downloaded from huggingface.co"},
    )
    use_fast_tokenizer: bool = field(
        default=True,
        metadata={"help": "Whether to use one of the fast tokenizers (backed by the tokenizers library) or not."},
    )
    model_revision: str = field(
        default="main",
        metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
    )
    use_auth_token: bool = field(
        default=False,
        metadata={
            "help": "Will use the token generated when running `transformers-cli login` (necessary to use this script "
            "with private models)."
        },
    )


@dataclass
class DataTrainingArguments:
    """
    Arguments pertaining to what data we are going to input our model for training and eval.
    """

    source_lang: str = field(default=None, metadata={"help": "Source language id for translation."})
    target_lang: str = field(default=None, metadata={"help": "Target language id for translation."})

    dataset_name: Optional[str] = field(
        default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
    )
    dataset_config_name: Optional[str] = field(
        default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
    )
    train_file: Optional[str] = field(
        default=None, metadata={"help": "The input training data file (a jsonlines file)."}
    )
    validation_file: Optional[str] = field(
        default=None,
        metadata={
            "help": "An optional input evaluation data file to evaluate the metrics (sacrebleu) on "
            "a jsonlines file."
        },
    )
    test_file: Optional[str] = field(
        default=None,
        metadata={
            "help": "An optional input test data file to evaluate the metrics (sacrebleu) on " "a jsonlines file."
        },
    )
    overwrite_cache: bool = field(
        default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
    )
    preprocessing_num_workers: Optional[int] = field(
        default=None,
        metadata={"help": "The number of processes to use for the preprocessing."},
    )
    max_source_length: Optional[int] = field(
        default=1024,
        metadata={
            "help": "The maximum total input sequence length after tokenization. Sequences longer "
            "than this will be truncated, sequences shorter will be padded."
        },
    )
    max_target_length: Optional[int] = field(
        default=128,
        metadata={
            "help": "The maximum total sequence length for target text after tokenization. Sequences longer "
            "than this will be truncated, sequences shorter will be padded."
        },
    )
    val_max_target_length: Optional[int] = field(
        default=None,
        metadata={
            "help": "The maximum total sequence length for validation target text after tokenization. Sequences longer "
            "than this will be truncated, sequences shorter will be padded. Will default to `max_target_length`. "
            "This argument is also used to override the ``max_length`` param of ``model.generate``, which is used "
            "during ``evaluate`` and ``predict``."
        },
    )
    pad_to_max_length: bool = field(
        default=False,
        metadata={
            "help": "Whether to pad all samples to model maximum sentence length. "
            "If False, will pad the samples dynamically when batching to the maximum length in the batch. More "
            "efficient on GPU but very bad for TPU."
        },
    )
    max_train_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": "For debugging purposes or quicker training, truncate the number of training examples to this "
            "value if set."
        },
    )
    max_val_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": "For debugging purposes or quicker training, truncate the number of validation examples to this "
            "value if set."
        },
    )
    max_test_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": "For debugging purposes or quicker training, truncate the number of test examples to this "
            "value if set."
        },
    )
    num_beams: Optional[int] = field(
        default=None,
        metadata={
            "help": "Number of beams to use for evaluation. This argument will be passed to ``model.generate``, "
            "which is used during ``evaluate`` and ``predict``."
        },
    )
    ignore_pad_token_for_loss: bool = field(
        default=True,
        metadata={
            "help": "Whether to ignore the tokens corresponding to padded labels in the loss computation or not."
        },
    )
    source_prefix: Optional[str] = field(
        default=None, metadata={"help": "A prefix to add before every source text (useful for T5 models)."}
    )

    def __post_init__(self):
        if self.dataset_name is None and self.train_file is None and self.validation_file is None:
            raise ValueError("Need either a dataset name or a training/validation file.")
        elif self.source_lang is None or self.target_lang is None:
            raise ValueError("Need to specify the source language and the target language.")

        if self.train_file is not None:
            extension = self.train_file.split(".")[-1]
            assert extension == "json", "`train_file` should be a json file."
        if self.validation_file is not None:
            extension = self.validation_file.split(".")[-1]
            assert extension == "json", "`validation_file` should be a json file."
        if self.val_max_target_length is None:
            self.val_max_target_length = self.max_target_length


def main():
    # See all possible arguments in src/transformers/training_args.py
    # or by passing the --help flag to this script.
    # We now keep distinct sets of args, for a cleaner separation of concerns.

    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
    if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
        # If we pass only one argument to the script and it's the path to a json file,
        # let's parse it to get our arguments.
        model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
    else:
        model_args, data_args, training_args = parser.parse_args_into_dataclasses()

    if data_args.source_prefix is None and model_args.model_name_or_path in [
        "t5-small",
        "t5-base",
        "t5-large",
        "t5-3b",
        "t5-11b",
    ]:
        logger.warning(
            "You're running a t5 model but didn't provide a source prefix, which is expected, e.g. with "
            "`--source_prefix 'translate English to German: '`"
        )

    # Detecting last checkpoint.
    last_checkpoint = None
    if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
        last_checkpoint = get_last_checkpoint(training_args.output_dir)
        if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
            raise ValueError(
                f"Output directory ({training_args.output_dir}) already exists and is not empty. "
                "Use --overwrite_output_dir to overcome."
            )
        elif last_checkpoint is not None:
            logger.info(
                f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
                "the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
            )

    # Setup logging
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        handlers=[logging.StreamHandler(sys.stdout)],
    )
    logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN)

    # Log on each process the small summary:
    logger.warning(
        f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, "
        + f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
    )
    # Set the verbosity to info of the Transformers logger (on main process only):
    if is_main_process(training_args.local_rank):
        transformers.utils.logging.set_verbosity_info()
    logger.info("Training/evaluation parameters %s", training_args)

    # Set seed before initializing model.
    set_seed(training_args.seed)

    # Get the datasets: you can either provide your own JSON training and evaluation files (see below)
    # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
    # (the dataset will be downloaded automatically from the datasets Hub).
    #
    # For translation, only JSON files are supported, with one field named "translation" containing two keys for the
    # source and target languages (unless you adapt what follows).
    #
    # In distributed training, the load_dataset function guarantees that only one local process can concurrently
    # download the dataset.
    if data_args.dataset_name is not None:
        # Downloading and loading a dataset from the hub.
        datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
    else:
        data_files = {}
        if data_args.train_file is not None:
            data_files["train"] = data_args.train_file
            extension = data_args.train_file.split(".")[-1]
        if data_args.validation_file is not None:
            data_files["validation"] = data_args.validation_file
            extension = data_args.validation_file.split(".")[-1]
        if data_args.test_file is not None:
            data_files["test"] = data_args.test_file
            extension = data_args.test_file.split(".")[-1]
        datasets = load_dataset(extension, data_files=data_files)
    # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
    # https://huggingface.co/docs/datasets/loading_datasets.html.

    # Load pretrained model and tokenizer
    #
    # Distributed training:
    # The .from_pretrained methods guarantee that only one local process can concurrently
    # download model & vocab.
    config = AutoConfig.from_pretrained(
        model_args.config_name if model_args.config_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        use_auth_token=True if model_args.use_auth_token else None,
    )
    tokenizer = AutoTokenizer.from_pretrained(
        model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
        use_fast=model_args.use_fast_tokenizer,
        revision=model_args.model_revision,
        use_auth_token=True if model_args.use_auth_token else None,
    )
    model = AutoModelForSeq2SeqLM.from_pretrained(
        model_args.model_name_or_path,
        from_tf=bool(".ckpt" in model_args.model_name_or_path),
        config=config,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        use_auth_token=True if model_args.use_auth_token else None,
    )

    # Set decoder_start_token_id
    if model.config.decoder_start_token_id is None and isinstance(tokenizer, (MBartTokenizer, MBartTokenizerFast)):
        assert (
            data_args.target_lang is not None and data_args.source_lang is not None
        ), "mBart requires --target_lang and --source_lang"
        if isinstance(tokenizer, MBartTokenizer):
            model.config.decoder_start_token_id = tokenizer.lang_code_to_id[data_args.target_lang]
        else:
            model.config.decoder_start_token_id = tokenizer.convert_tokens_to_ids(data_args.target_lang)

    if model.config.decoder_start_token_id is None:
        raise ValueError("Make sure that `config.decoder_start_token_id` is correctly defined")

    prefix = data_args.source_prefix if data_args.source_prefix is not None else ""

    # Preprocessing the datasets.
    # We need to tokenize inputs and targets.
    if training_args.do_train:
        column_names = datasets["train"].column_names
    elif training_args.do_eval:
        column_names = datasets["validation"].column_names
    elif training_args.do_predict:
        column_names = datasets["test"].column_names
    else:
        logger.info("There is nothing to do. Please pass `do_train`, `do_eval` and/or `do_predict`.")
        return

    # For translation we set the codes of our source and target languages (only useful for mBART, the others will
    # ignore those attributes).
    if isinstance(tokenizer, (MBartTokenizer, MBartTokenizerFast)):
        if data_args.source_lang is not None:
            tokenizer.src_lang = data_args.source_lang
        if data_args.target_lang is not None:
            tokenizer.tgt_lang = data_args.target_lang

    # Get the language codes for input/target.
    source_lang = data_args.source_lang.split("_")[0]
    target_lang = data_args.target_lang.split("_")[0]

    # Temporarily set max_target_length for training.
    max_target_length = data_args.max_target_length
    padding = "max_length" if data_args.pad_to_max_length else False

    if training_args.label_smoothing_factor > 0 and not hasattr(model, "prepare_decoder_input_ids_from_labels"):
        logger.warning(
            "label_smoothing is enabled but the `prepare_decoder_input_ids_from_labels` method is not defined for "
            f"`{model.__class__.__name__}`. This will lead to the loss being calculated twice and will take up more memory."
        )

    def preprocess_function(examples):
        inputs = [ex[source_lang] for ex in examples["translation"]]
        targets = [ex[target_lang] for ex in examples["translation"]]
        inputs = [prefix + inp for inp in inputs]
        model_inputs = tokenizer(inputs, max_length=data_args.max_source_length, padding=padding, truncation=True)

        # Setup the tokenizer for targets
        with tokenizer.as_target_tokenizer():
            labels = tokenizer(targets, max_length=max_target_length, padding=padding, truncation=True)

        # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
        # padding in the loss.
        if padding == "max_length" and data_args.ignore_pad_token_for_loss:
            labels["input_ids"] = [
                [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
            ]

        model_inputs["labels"] = labels["input_ids"]
        return model_inputs

    if training_args.do_train:
        if "train" not in datasets:
            raise ValueError("--do_train requires a train dataset")
        train_dataset = datasets["train"]
        if data_args.max_train_samples is not None:
            train_dataset = train_dataset.select(range(data_args.max_train_samples))
        train_dataset = train_dataset.map(
            preprocess_function,
            batched=True,
            num_proc=data_args.preprocessing_num_workers,
            remove_columns=column_names,
            load_from_cache_file=not data_args.overwrite_cache,
        )

    if training_args.do_eval:
        max_target_length = data_args.val_max_target_length
        if "validation" not in datasets:
            raise ValueError("--do_eval requires a validation dataset")
        eval_dataset = datasets["validation"]
        if data_args.max_val_samples is not None:
            eval_dataset = eval_dataset.select(range(data_args.max_val_samples))
        eval_dataset = eval_dataset.map(
            preprocess_function,
            batched=True,
            num_proc=data_args.preprocessing_num_workers,
            remove_columns=column_names,
            load_from_cache_file=not data_args.overwrite_cache,
        )

    if training_args.do_predict:
        max_target_length = data_args.val_max_target_length
        if "test" not in datasets:
            raise ValueError("--do_predict requires a test dataset")
        test_dataset = datasets["test"]
        if data_args.max_test_samples is not None:
            test_dataset = test_dataset.select(range(data_args.max_test_samples))
        test_dataset = test_dataset.map(
            preprocess_function,
            batched=True,
            num_proc=data_args.preprocessing_num_workers,
            remove_columns=column_names,
            load_from_cache_file=not data_args.overwrite_cache,
        )

    # Data collator
    label_pad_token_id = -100 if data_args.ignore_pad_token_for_loss else tokenizer.pad_token_id
    if data_args.pad_to_max_length:
        data_collator = default_data_collator
    else:
        data_collator = DataCollatorForSeq2Seq(
            tokenizer,
            model=model,
            label_pad_token_id=label_pad_token_id,
            pad_to_multiple_of=8 if training_args.fp16 else None,
        )

    # Metric
    metric = load_metric("sacrebleu")

    def postprocess_text(preds, labels):
        preds = [pred.strip() for pred in preds]
        labels = [[label.strip()] for label in labels]

        return preds, labels

    def compute_metrics(eval_preds):
        preds, labels = eval_preds
        if isinstance(preds, tuple):
            preds = preds[0]
        decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
        if data_args.ignore_pad_token_for_loss:
            # Replace -100 in the labels as we can't decode them.
            labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
        decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

        # Some simple post-processing
        decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)

        result = metric.compute(predictions=decoded_preds, references=decoded_labels)
        result = {"bleu": result["score"]}

        prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
        result["gen_len"] = np.mean(prediction_lens)
        result = {k: round(v, 4) for k, v in result.items()}
        return result

    # Initialize our Trainer
    trainer = Seq2SeqTrainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset if training_args.do_train else None,
        eval_dataset=eval_dataset if training_args.do_eval else None,
        tokenizer=tokenizer,
        data_collator=data_collator,
        compute_metrics=compute_metrics if training_args.predict_with_generate else None,
    )

    # Training
    if training_args.do_train:
        if last_checkpoint is not None:
            checkpoint = last_checkpoint
        elif os.path.isdir(model_args.model_name_or_path):
            checkpoint = model_args.model_name_or_path
        else:
            checkpoint = None
        train_result = trainer.train(resume_from_checkpoint=checkpoint)
        trainer.save_model()  # Saves the tokenizer too for easy upload

        metrics = train_result.metrics
        max_train_samples = (
            data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
        )
        metrics["train_samples"] = min(max_train_samples, len(train_dataset))

        trainer.log_metrics("train", metrics)
        trainer.save_metrics("train", metrics)
        trainer.save_state()

    # Evaluation
    results = {}
    if training_args.do_eval:
        logger.info("*** Evaluate ***")

        metrics = trainer.evaluate(
            max_length=data_args.val_max_target_length, num_beams=data_args.num_beams, metric_key_prefix="eval"
        )
        max_val_samples = data_args.max_val_samples if data_args.max_val_samples is not None else len(eval_dataset)
        metrics["eval_samples"] = min(max_val_samples, len(eval_dataset))

        trainer.log_metrics("eval", metrics)
        trainer.save_metrics("eval", metrics)

    if training_args.do_predict:
        logger.info("*** Test ***")

        test_results = trainer.predict(
            test_dataset,
            metric_key_prefix="test",
            max_length=data_args.val_max_target_length,
            num_beams=data_args.num_beams,
        )
        metrics = test_results.metrics
        max_test_samples = data_args.max_test_samples if data_args.max_test_samples is not None else len(test_dataset)
        metrics["test_samples"] = min(max_test_samples, len(test_dataset))

        trainer.log_metrics("test", metrics)
        trainer.save_metrics("test", metrics)

        if trainer.is_world_process_zero():
            if training_args.predict_with_generate:
                test_preds = tokenizer.batch_decode(
                    test_results.predictions, skip_special_tokens=True, clean_up_tokenization_spaces=True
                )
                test_preds = [pred.strip() for pred in test_preds]
                output_test_preds_file = os.path.join(training_args.output_dir, "test_generations.txt")
                with open(output_test_preds_file, "w") as writer:
                    writer.write("\n".join(test_preds))

    return results


def _mp_fn(index):
    # For xla_spawn (TPUs)
    main()


if __name__ == "__main__":
    main()
@@ -49,8 +49,9 @@ if SRC_DIRS is not None:

    import run_mlm
    import run_ner
    import run_qa as run_squad
-    import run_seq2seq
+    import run_summarization
    import run_swag
+    import run_translation


logging.basicConfig(level=logging.DEBUG)
@@ -277,15 +278,14 @@ class ExamplesTests(TestCasePlus):

        self.assertGreaterEqual(len(result[0]), 10)

    @slow
-    def test_run_seq2seq_summarization(self):
+    def test_run_summarization(self):
        stream_handler = logging.StreamHandler(sys.stdout)
        logger.addHandler(stream_handler)

        tmp_dir = self.get_auto_remove_tmp_dir()
        testargs = f"""
-            run_seq2seq.py
+            run_summarization.py
            --model_name_or_path t5-small
-            --task summarization
            --train_file tests/fixtures/tests_samples/xsum/sample.json
            --validation_file tests/fixtures/tests_samples/xsum/sample.json
            --output_dir {tmp_dir}
@@ -301,7 +301,7 @@ class ExamplesTests(TestCasePlus):

        """.split()

        with patch.object(sys, "argv", testargs):
-            run_seq2seq.main()
+            run_summarization.main()
            result = get_results(tmp_dir)
            self.assertGreaterEqual(result["eval_rouge1"], 10)
            self.assertGreaterEqual(result["eval_rouge2"], 2)
@@ -309,15 +309,16 @@ class ExamplesTests(TestCasePlus):

        self.assertGreaterEqual(result["eval_rougeLsum"], 7)

    @slow
-    def test_run_seq2seq_translation(self):
+    def test_run_translation(self):
        stream_handler = logging.StreamHandler(sys.stdout)
        logger.addHandler(stream_handler)

        tmp_dir = self.get_auto_remove_tmp_dir()
        testargs = f"""
-            run_seq2seq.py
+            run_translation.py
            --model_name_or_path sshleifer/student_marian_en_ro_6_1
-            --task translation_en_to_ro
+            --source_lang en
+            --target_lang ro
            --train_file tests/fixtures/tests_samples/wmt16/sample.json
            --validation_file tests/fixtures/tests_samples/wmt16/sample.json
            --output_dir {tmp_dir}
@@ -335,6 +336,6 @@ class ExamplesTests(TestCasePlus):

        """.split()

        with patch.object(sys, "argv", testargs):
-            run_seq2seq.main()
+            run_translation.main()
            result = get_results(tmp_dir)
            self.assertGreaterEqual(result["eval_bleu"], 30)
@@ -233,7 +233,6 @@ class TestDeepSpeed(TestCasePlus):

            --group_by_length
            --label_smoothing_factor 0.1
            --adafactor
-            --task translation
            --target_lang ro_RO
            --source_lang en_XX
        """.split()
@@ -246,7 +245,7 @@ class TestDeepSpeed(TestCasePlus):

        args = [x for x in args if x not in remove_args]

        ds_args = f"--deepspeed {self.test_file_dir_str}/ds_config.json".split()
-        script = [f"{self.examples_dir_str}/seq2seq/run_seq2seq.py"]
+        script = [f"{self.examples_dir_str}/seq2seq/run_translation.py"]
        num_gpus = get_gpu_count() if distributed else 1
        launcher = f"deepspeed --num_gpus {num_gpus}".split()
@@ -35,7 +35,7 @@ from transformers.trainer_utils import set_seed

bindir = os.path.abspath(os.path.dirname(__file__))
sys.path.append(f"{bindir}/../../seq2seq")
-from run_seq2seq import main  # noqa
+from run_translation import main  # noqa


set_seed(42)
@@ -209,7 +209,6 @@ class TestTrainerExt(TestCasePlus):

            --group_by_length
            --label_smoothing_factor 0.1
            --adafactor
-            --task translation
            --target_lang ro_RO
            --source_lang en_XX
        """
@@ -226,12 +225,12 @@ class TestTrainerExt(TestCasePlus):

            distributed_args = f"""
                -m torch.distributed.launch
                --nproc_per_node={n_gpu}
-                {self.examples_dir_str}/seq2seq/run_seq2seq.py
+                {self.examples_dir_str}/seq2seq/run_translation.py
            """.split()
            cmd = [sys.executable] + distributed_args + args
            execute_subprocess_async(cmd, env=self.get_env())
        else:
-            testargs = ["run_seq2seq.py"] + args
+            testargs = ["run_translation.py"] + args
            with patch.object(sys, "argv", testargs):
                main()