cache in run_classifier + various fixes to the examples
commit 15ebd67d4e (parent e6e5f19257)

README.md (56 lines changed)
@@ -541,6 +541,7 @@ where
 - `bert-base-german-cased`: Trained on German data only, 12-layer, 768-hidden, 12-heads, 110M parameters [Performance Evaluation](https://deepset.ai/german-bert)
 - `bert-large-uncased-whole-word-masking`: 24-layer, 1024-hidden, 16-heads, 340M parameters - Trained with Whole Word Masking (mask all of the the tokens corresponding to a word at once)
 - `bert-large-cased-whole-word-masking`: 24-layer, 1024-hidden, 16-heads, 340M parameters - Trained with Whole Word Masking (mask all of the the tokens corresponding to a word at once)
+- `bert-large-uncased-whole-word-masking-finetuned-squad`: The `bert-large-uncased-whole-word-masking` model finetuned on SQuAD (using the `run_squad.py` examples). Results: *exact_match: 86.91579943235573, f1: 93.1532499015869*
 - `openai-gpt`: OpenAI GPT English model, 12-layer, 768-hidden, 12-heads, 110M parameters
 - `gpt2`: OpenAI GPT-2 English model, 12-layer, 768-hidden, 12-heads, 117M parameters
 - `gpt2-medium`: OpenAI GPT-2 English model, 24-layer, 1024-hidden, 16-heads, 345M parameters
@@ -608,13 +609,15 @@ There are three types of files you need to save to be able to reload a fine-tuned model:
 
 - the configuration file of the model which is saved as a JSON file, and
 - the vocabulary (and the merges for the BPE-based models GPT and GPT-2).
 
-The defaults files names of these files are as follow:
+The *default filenames* of these files are as follow:
 
 - the model weights file: `pytorch_model.bin`,
 - the configuration file: `config.json`,
 - the vocabulary file: `vocab.txt` for BERT and Transformer-XL, `vocab.json` for GPT/GPT-2 (BPE vocabulary),
 - for GPT/GPT-2 (BPE vocabulary) the additional merges file: `merges.txt`.
 
+**If you save a model using these *default filenames*, you can then re-load the model and tokenizer using the `from_pretrained()` method.**
+
 Here is the recommended way of saving the model, configuration and vocabulary to an `output_dir` directory and reloading the model and tokenizer afterwards:
 
 ```python
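(The README's fenced example is cut off at the hunk boundary above. For reference, here is a minimal save/reload sketch consistent with the pytorch_pretrained_bert API of this release; it assumes a fine-tuned `model` and its `tokenizer` are already in memory, and the output directory and `num_labels` value are illustrative, not taken from the diff.)

```python
import os
import torch
from pytorch_pretrained_bert.file_utils import WEIGHTS_NAME, CONFIG_NAME
from pytorch_pretrained_bert.modeling import BertForSequenceClassification
from pytorch_pretrained_bert.tokenization import BertTokenizer

output_dir = "/tmp/my_finetuned_model/"   # illustrative path
os.makedirs(output_dir, exist_ok=True)

# Save with the default filenames so from_pretrained() can find them later.
model_to_save = model.module if hasattr(model, 'module') else model  # unwrap DataParallel/DDP
torch.save(model_to_save.state_dict(), os.path.join(output_dir, WEIGHTS_NAME))  # pytorch_model.bin
with open(os.path.join(output_dir, CONFIG_NAME), 'w') as f:                     # config.json
    f.write(model_to_save.config.to_json_string())
tokenizer.save_vocabulary(output_dir)                                           # vocab.txt

# Reload the fine-tuned model and tokenizer from that directory.
model = BertForSequenceClassification.from_pretrained(output_dir, num_labels=2)
tokenizer = BertTokenizer.from_pretrained(output_dir, do_lower_case=True)
```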
@@ -1268,6 +1271,30 @@ python run_classifier.py \
   --fp16
 ```
 
+**Distributed training**
+
+Here is an example using distributed training on 8 V100 GPUs and Bert Whole Word Masking model to reach a F1 > 93 on SQuAD:
+
+```bash
+python -m torch.distributed.launch --nproc_per_node=8 \
+ run_classifier.py \
+ --bert_model bert-large-cased-whole-word-masking \
+ --task_name MRPC \
+ --do_train \
+ --do_eval \
+ --do_lower_case \
+ --data_dir $GLUE_DIR/MRPC/ \
+ --max_seq_length 128 \
+ --train_batch_size 64 \
+ --learning_rate 2e-5 \
+ --num_train_epochs 3.0 \
+ --output_dir /tmp/mrpc_output/
+```
+
+Training with these hyper-parameters gave us the following results:
+
+```bash
+{"exact_match": 86.91579943235573, "f1": 93.1532499015869}
+```
+
 #### SQuAD
 
 This example code fine-tunes BERT on the SQuAD dataset. It runs in 24 min (with BERT-base) or 68 min (with BERT-large) on a single tesla V100 16GB.
@@ -1298,9 +1325,36 @@ python run_squad.py \
 
 Training with the previous hyper-parameters gave us the following results:
 ```bash
+python $SQUAD_DIR/evaluate-v1.1.py $SQUAD_DIR/dev-v1.1.json /tmp/debug_squad/predictions.json
 {"f1": 88.52381567990474, "exact_match": 81.22043519394512}
 ```
 
+Here is an example using distributed training on 8 V100 GPUs and Bert Whole Word Masking model to reach a F1 > 93 on SQuAD:
+
+```bash
+python -m torch.distributed.launch --nproc_per_node=8 \
+ run_squad.py \
+ --bert_model bert-large-cased-whole-word-masking \
+ --do_train \
+ --do_predict \
+ --do_lower_case \
+ --train_file $SQUAD_DIR/train-v1.1.json \
+ --predict_file $SQUAD_DIR/dev-v1.1.json \
+ --learning_rate 3e-5 \
+ --num_train_epochs 2 \
+ --max_seq_length 384 \
+ --doc_stride 128 \
+ --output_dir ../models/train_squad_large_cased_wwm/ \
+ --train_batch_size 24 \
+ --gradient_accumulation_steps 12
+```
+
+Training with these hyper-parameters gave us the following results:
+
+```bash
+python $SQUAD_DIR/evaluate-v1.1.py $SQUAD_DIR/dev-v1.1.json ../models/train_squad_large_cased_wwm/predictions.json
+{"exact_match": 86.91579943235573, "f1": 93.1532499015869}
+```
+
 #### SWAG
 
 The data for SWAG can be downloaded by cloning the following [repository](https://github.com/rowanz/swagaf)
@@ -20,8 +20,6 @@ def run_model():
     parser.add_argument('--model_name_or_path', type=str, default='bert-base-uncased',
                         help='pretrained model name or path to local checkpoint')
     parser.add_argument("--seed", type=int, default=42)
-    parser.add_argument("--batch_size", type=int, default=-1)
-    parser.add_argument('--unconditional', action='store_true', help='If true, unconditional generation.')
     args = parser.parse_args()
     print(args)
 
@@ -34,57 +32,12 @@ def run_model():
     torch.cuda.manual_seed(args.seed)
     device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
 
-    enc = GPT2Tokenizer.from_pretrained(args.model_name_or_path)
-    model = GPT2LMHeadModel.from_pretrained(args.model_name_or_path)
+    tokenizer = BertTokenizer.from_pretrained(args.model_name_or_path)
+    model = BertModel.from_pretrained(args.model_name_or_path)
     model.to(device)
     model.eval()
 
-    if args.length == -1:
-        args.length = model.config.n_ctx // 2
-    elif args.length > model.config.n_ctx:
-        raise ValueError("Can't get samples longer than window size: %s" % model.config.n_ctx)
-
-    while True:
-        context_tokens = []
-        if not args.unconditional:
-            raw_text = input("Model prompt >>> ")
-            while not raw_text:
-                print('Prompt should not be empty!')
-                raw_text = input("Model prompt >>> ")
-            context_tokens = enc.encode(raw_text)
-            generated = 0
-            for _ in range(args.nsamples // args.batch_size):
-                out = sample_sequence(
-                    model=model, length=args.length,
-                    context=context_tokens,
-                    start_token=None,
-                    batch_size=args.batch_size,
-                    temperature=args.temperature, top_k=args.top_k, device=device
-                )
-                out = out[:, len(context_tokens):].tolist()
-                for i in range(args.batch_size):
-                    generated += 1
-                    text = enc.decode(out[i])
-                    print("=" * 40 + " SAMPLE " + str(generated) + " " + "=" * 40)
-                    print(text)
-            print("=" * 80)
-        else:
-            generated = 0
-            for _ in range(args.nsamples // args.batch_size):
-                out = sample_sequence(
-                    model=model, length=args.length,
-                    context=None,
-                    start_token=enc.encoder['<|endoftext|>'],
-                    batch_size=args.batch_size,
-                    temperature=args.temperature, top_k=args.top_k, device=device
-                )
-                out = out[:,1:].tolist()
-                for i in range(args.batch_size):
-                    generated += 1
-                    text = enc.decode(out[i])
-                    print("=" * 40 + " SAMPLE " + str(generated) + " " + "=" * 40)
-                    print(text)
-            print("=" * 80)
-
 if __name__ == '__main__':
     run_model()
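(This hunk swaps the GPT-2 sampling stack for a plain BertTokenizer/BertModel pair. As a minimal sketch of what that pair returns in pytorch_pretrained_bert — the model name and example sentence below are illustrative, not part of the diff:)

```python
import torch
from pytorch_pretrained_bert.tokenization import BertTokenizer
from pytorch_pretrained_bert.modeling import BertModel

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.to(device)
model.eval()

tokens = ["[CLS]"] + tokenizer.tokenize("Hello, world!") + ["[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)], device=device)
with torch.no_grad():
    # By default BertModel returns (list of all encoder layers, pooled [CLS] output).
    encoded_layers, pooled_output = model(input_ids)
print(len(encoded_layers), encoded_layers[-1].shape, pooled_output.shape)
```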
@@ -18,550 +18,31 @@
 from __future__ import absolute_import, division, print_function
 
 import argparse
-import csv
 import logging
 import os
 import random
-import sys
+from tqdm import tqdm, trange
 
 import numpy as np
-import math
 import torch
 from torch.utils.data import (DataLoader, RandomSampler, SequentialSampler,
                               TensorDataset)
 from torch.utils.data.distributed import DistributedSampler
-from tqdm import tqdm, trange
 
 from torch.nn import CrossEntropyLoss, MSELoss
-from scipy.stats import pearsonr, spearmanr
-from sklearn.metrics import matthews_corrcoef, f1_score
 
-from pytorch_pretrained_bert.file_utils import PYTORCH_PRETRAINED_BERT_CACHE, WEIGHTS_NAME, CONFIG_NAME
-from pytorch_pretrained_bert.modeling import BertForSequenceClassification, BertConfig
+from tensorboardX import SummaryWriter
+from pytorch_pretrained_bert.file_utils import WEIGHTS_NAME, CONFIG_NAME
+from pytorch_pretrained_bert.modeling import BertForSequenceClassification
 from pytorch_pretrained_bert.tokenization import BertTokenizer
 from pytorch_pretrained_bert.optimization import BertAdam, WarmupLinearSchedule
 
+from run_classifier_dataset_utils import processors, output_modes, convert_examples_to_features, compute_metrics
 
 logger = logging.getLogger(__name__)
 
 [removed here, roughly 500 lines: the InputExample, InputFeatures and DataProcessor classes, the GLUE task processors (MrpcProcessor, MnliProcessor, MnliMismatchedProcessor, ColaProcessor, Sst2Processor, StsbProcessor, QqpProcessor, QnliProcessor, RteProcessor, WnliProcessor), the feature-conversion helpers convert_examples_to_features and _truncate_seq_pair, and the metric helpers simple_accuracy, acc_and_f1, pearson_and_spearman and compute_metrics; these definitions move verbatim into the new examples/run_classifier_dataset_utils.py file added below]
 
 def main():
     parser = argparse.ArgumentParser()
 
@@ -661,31 +142,6 @@ def main():
         ptvsd.enable_attach(address=(args.server_ip, args.server_port), redirect_output=True)
         ptvsd.wait_for_attach()
 
-    processors = {
-        "cola": ColaProcessor,
-        "mnli": MnliProcessor,
-        "mnli-mm": MnliMismatchedProcessor,
-        "mrpc": MrpcProcessor,
-        "sst-2": Sst2Processor,
-        "sts-b": StsbProcessor,
-        "qqp": QqpProcessor,
-        "qnli": QnliProcessor,
-        "rte": RteProcessor,
-        "wnli": WnliProcessor,
-    }
-
-    output_modes = {
-        "cola": "classification",
-        "mnli": "classification",
-        "mrpc": "classification",
-        "sst-2": "classification",
-        "sts-b": "regression",
-        "qqp": "classification",
-        "qnli": "classification",
-        "rte": "classification",
-        "wnli": "classification",
-    }
-
     if args.local_rank == -1 or args.no_cuda:
         device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
         n_gpu = torch.cuda.device_count()
@@ -737,30 +193,39 @@ def main():
     tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case)
 
     # Prepare model
-    cache_dir = args.cache_dir if args.cache_dir else os.path.join(str(PYTORCH_PRETRAINED_BERT_CACHE), 'distributed_{}'.format(args.local_rank))
-    model = BertForSequenceClassification.from_pretrained(args.bert_model,
-              cache_dir=cache_dir,
-              num_labels=num_labels)
+    model = BertForSequenceClassification.from_pretrained(args.bert_model, num_labels=num_labels)
     if args.fp16:
         model.half()
     model.to(device)
     if args.local_rank != -1:
-        try:
-            from apex.parallel import DistributedDataParallel as DDP
-        except ImportError:
-            raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use distributed and fp16 training.")
-
-        model = DDP(model)
+        model = torch.nn.parallel.DistributedDataParallel(model,
+                                                          device_ids=[args.local_rank],
+                                                          output_device=args.local_rank,
+                                                          find_unused_parameters=True)
     elif n_gpu > 1:
         model = torch.nn.DataParallel(model)
 
     if args.do_train:
+        if args.local_rank in [-1, 0]:
+            tb_writer = SummaryWriter()
+
         # Prepare data loader
 
         train_examples = processor.get_train_examples(args.data_dir)
-        train_features = convert_examples_to_features(
-            train_examples, label_list, args.max_seq_length, tokenizer, output_mode)
+        cached_train_features_file = args.data_dir + '_{0}_{1}_{2}'.format(
+            list(filter(None, args.bert_model.split('/'))).pop(),
+            str(args.max_seq_length),
+            str(task_name))
+        try:
+            with open(cached_train_features_file, "rb") as reader:
+                train_features = pickle.load(reader)
+        except:
+            train_features = convert_examples_to_features(
+                train_examples, label_list, args.max_seq_length, tokenizer, output_mode)
+            if args.local_rank == -1 or torch.distributed.get_rank() == 0:
+                logger.info(" Saving train features into cached file %s", cached_train_features_file)
+                with open(cached_train_features_file, "wb") as writer:
+                    pickle.dump(train_features, writer)
+
         all_input_ids = torch.tensor([f.input_ids for f in train_features], dtype=torch.long)
         all_input_mask = torch.tensor([f.input_mask for f in train_features], dtype=torch.long)
         all_segment_ids = torch.tensor([f.segment_ids for f in train_features], dtype=torch.long)
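(The hunk above is the "cache in run_classifier" part of the commit: training features are pickled to a file keyed on model name, sequence length and task, and reloaded on later runs. A standalone sketch of that pattern — the function and variable names here are illustrative, not from the repository:)

```python
import os
import pickle

def load_or_build_features(cache_file, build_fn):
    """Return features from cache_file if it exists; otherwise build, cache and return them.

    Illustrative helper: the names here are not part of the repository's API.
    """
    if os.path.exists(cache_file):
        with open(cache_file, "rb") as reader:
            return pickle.load(reader)
    features = build_fn()
    with open(cache_file, "wb") as writer:
        pickle.dump(features, writer)
    return features

# Usage mirroring the cache key used in the diff (model name, max sequence length, task name):
# cache_file = data_dir + '_{0}_{1}_{2}'.format(model_name, max_seq_length, task_name)
# train_features = load_or_build_features(cache_file, lambda: convert_examples_to_features(...))
```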
@@ -778,8 +243,6 @@ def main():
        train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=args.train_batch_size)
 
        num_train_optimization_steps = len(train_dataloader) // args.gradient_accumulation_steps * args.num_train_epochs
-       if args.local_rank != -1:
-           num_train_optimization_steps = num_train_optimization_steps // torch.distributed.get_world_size()
 
        # Prepare optimizer
 
@@ -863,6 +326,9 @@ def main():
                optimizer.step()
                optimizer.zero_grad()
                global_step += 1
+               if args.local_rank in [-1, 0]:
+                   tb_writer.add_scalar('lr', optimizer.get_lr()[0], global_step)
+                   tb_writer.add_scalar('loss', loss.item(), global_step)
 
    if args.do_train and (args.local_rank == -1 or torch.distributed.get_rank() == 0):
        # Save a trained model, configuration and tokenizer
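(For reference, the tensorboardX usage the new logging lines rely on, as a minimal sketch; the loss values are dummies, and the 'loss' tag matches the one used in the diff:)

```python
from tensorboardX import SummaryWriter

tb_writer = SummaryWriter()  # writes event files under ./runs/ by default
for global_step, loss_value in enumerate([0.9, 0.7, 0.5], start=1):  # dummy losses for illustration
    tb_writer.add_scalar('loss', loss_value, global_step)
tb_writer.close()
```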
examples/run_classifier_dataset_utils.py (new file, 571 lines)

@@ -0,0 +1,571 @@
+# coding=utf-8
+# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
+# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
+# (standard Apache License, Version 2.0 header)
+""" BERT classification fine-tuning: utilities to work with GLUE tasks """
+
+from __future__ import absolute_import, division, print_function
+
+import csv
+import logging
+import os
+import sys
+
+from scipy.stats import pearsonr, spearmanr
+from sklearn.metrics import matthews_corrcoef, f1_score
+
+logger = logging.getLogger(__name__)
+
 [the body of the new file is the code removed from run_classifier.py above, added here verbatim: the InputExample, InputFeatures and DataProcessor classes; the task processors MrpcProcessor, MnliProcessor, MnliMismatchedProcessor, ColaProcessor, Sst2Processor, StsbProcessor, QqpProcessor, QnliProcessor, RteProcessor and WnliProcessor; convert_examples_to_features and _truncate_seq_pair; and the metric helpers simple_accuracy, acc_and_f1, pearson_and_spearman and compute_metrics. The module then defines the two lookup tables that run_classifier.py now imports:]
+
+processors = {
+    "cola": ColaProcessor,
+    "mnli": MnliProcessor,
+    "mnli-mm": MnliMismatchedProcessor,
+    "mrpc": MrpcProcessor,
+    "sst-2": Sst2Processor,
+    "sts-b": StsbProcessor,
+    "qqp": QqpProcessor,
+    "qnli": QnliProcessor,
+    "rte": RteProcessor,
+    "wnli": WnliProcessor,
+}
+
+output_modes = {
+    "cola": "classification",
+    "mnli": "classification",
+    "mrpc": "classification",
+    "sst-2": "classification",
+    "sts-b": "regression",
+    "qqp": "classification",
+    "qnli": "classification",
+    "rte": "classification",
+    "wnli": "classification",
+}
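(To make the new module's interface concrete, a minimal sketch of how run_classifier.py consumes it; the data directory, sequence length and the "predictions" array are illustrative:)

```python
import numpy as np
from pytorch_pretrained_bert.tokenization import BertTokenizer
from run_classifier_dataset_utils import (processors, output_modes,
                                          convert_examples_to_features, compute_metrics)

task_name = "mrpc"
processor = processors[task_name]()
output_mode = output_modes[task_name]   # "classification" for MRPC
label_list = processor.get_labels()     # ["0", "1"]

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
eval_examples = processor.get_dev_examples("glue_data/MRPC")   # illustrative path
eval_features = convert_examples_to_features(
    eval_examples, label_list, 128, tokenizer, output_mode)

# compute_metrics expects numpy arrays of predicted and gold label ids.
labels = np.array([f.label_id for f in eval_features])
preds = labels.copy()                                          # dummy "predictions"
print(compute_metrics(task_name, preds, labels))               # e.g. {'acc': 1.0, 'f1': 1.0, 'acc_and_f1': 1.0}
```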
@@ -18,10 +18,7 @@
 from __future__ import absolute_import, division, print_function
 
 import argparse
-import collections
-import json
 import logging
-import math
 import os
 import random
 import sys
@@ -301,9 +298,6 @@ def main():
                else:
                    loss.backward()
                if (step + 1) % args.gradient_accumulation_steps == 0:
-                   if args.local_rank in [-1, 0]:
-                       tb_writer.add_scalar('lr', optimizer.get_lr()[0], global_step)
-                       tb_writer.add_scalar('loss', loss.item(), global_step)
                    if args.fp16:
                        # modify learning rate with special warm up BERT uses
                        # if args.fp16 is False, BertAdam is used and handles this automatically
@@ -313,6 +307,9 @@ def main():
                    optimizer.step()
                    optimizer.zero_grad()
                    global_step += 1
+                   if args.local_rank in [-1, 0]:
+                       tb_writer.add_scalar('lr', optimizer.get_lr()[0], global_step)
+                       tb_writer.add_scalar('loss', loss.item(), global_step)
 
    if args.do_train and (args.local_rank == -1 or torch.distributed.get_rank() == 0):
        # Save a trained model, configuration and tokenizer