<!---
Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Token classification

## PyTorch version

Fine-tuning the library models for token classification tasks such as Named Entity Recognition (NER), Part-of-speech
tagging (POS) or phrase extraction (CHUNKS). The main script `run_ner.py` leverages the 🤗 Datasets library and the Trainer API. You can easily
customize it to your needs if you need extra processing on your datasets.

It will either run on a dataset hosted on our [hub](https://huggingface.co/datasets) or with your own text files for
training and validation; you might just need to add some tweaks in the data preprocessing.

The following example fine-tunes BERT on CoNLL-2003:

```bash
python run_ner.py \
  --model_name_or_path bert-base-uncased \
  --dataset_name conll2003 \
  --output_dir /tmp/test-ner \
  --do_train \
  --do_eval
```

or you can just run the bash script `run.sh`.

To run on your own training and validation files, use the following command:

```bash
python run_ner.py \
  --model_name_or_path bert-base-uncased \
  --train_file path_to_train_file \
  --validation_file path_to_validation_file \
  --output_dir /tmp/test-ner \
  --do_train \
  --do_eval
```
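
For your own files, here is a rough sketch of what a JSON-lines training file could look like (the `tokens` and `ner_tags` column names are an assumption on my part; check how `run_ner.py` picks its text and label columns, and note that it is the file extension that tells the script JSON from CSV):

```bash
# Hypothetical train.json to pass as --train_file: one JSON object per line,
# with pre-split words and one word-level tag per word.
cat > train.json <<'EOF'
{"tokens": ["EU", "rejects", "German", "call"], "ner_tags": ["B-ORG", "O", "B-MISC", "O"]}
{"tokens": ["Peter", "Blackburn"], "ner_tags": ["B-PER", "I-PER"]}
EOF
```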

**Note:** This script only works with models that have a fast tokenizer (backed by the 🤗 Tokenizers library) as it
uses special features of those tokenizers. You can check if your favorite model has a fast tokenizer in
[this table](https://huggingface.co/transformers/index.html#bigtable); if it doesn't, you can still use the old version
of the script.
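
A quick way to check this from the command line (not part of the original instructions; it relies on the `is_fast` attribute of loaded tokenizers):

```bash
# Prints True if the model ships a fast (Rust-backed) tokenizer.
python -c "from transformers import AutoTokenizer; print(AutoTokenizer.from_pretrained('bert-base-uncased').is_fast)"
```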

## Old version of the script

You can find the old version of the PyTorch script [here](https://github.com/huggingface/transformers/blob/master/examples/legacy/token-classification/run_ner.py).

## PyTorch version, no Trainer

Based on the script [run_ner_no_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner_no_trainer.py).

Like `run_ner.py`, this script allows you to fine-tune any of the models on the [hub](https://huggingface.co/models) on a
token classification task (NER, POS or CHUNKS), using either a dataset from the hub or your own data in a CSV or a JSON file. The main difference is that this
script exposes the bare training loop, to allow you to quickly experiment and add any customization you would like.

It offers fewer options than the script with `Trainer` (for instance, you can easily change the options for the optimizer
or the dataloaders directly in the script), but it still runs in a distributed setup and on TPUs, and supports mixed precision by
means of the [🤗 `Accelerate`](https://github.com/huggingface/accelerate) library. You can use the script normally
after installing it:

```bash
pip install accelerate
```

then

```bash
export TASK_NAME=ner

python run_ner_no_trainer.py \
  --model_name_or_path bert-base-cased \
  --task_name $TASK_NAME \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/
```

You can then use your usual launchers to run it in a distributed environment, but the easiest way is to run

```bash
accelerate config
```

and reply to the questions asked. Then

```bash
accelerate test
```

that will check everything is ready for training. Finally, you can launch training with

```bash
export TASK_NAME=ner

accelerate launch run_ner_no_trainer.py \
  --model_name_or_path bert-base-cased \
  --task_name $TASK_NAME \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/
```

This command is the same and will work for:

- a CPU-only setup
- a setup with one GPU
- a distributed training with several GPUs (single or multi node)
- a training on TPUs
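
If you would rather use one of your usual launchers than `accelerate launch`, here is a sketch with PyTorch's distributed launcher on 2 GPUs (the assumption being that `--use_env` exports the rank variables that Accelerate reads from the environment):

```bash
# Sketch: launch the same script with torch.distributed.launch instead of accelerate launch.
python -m torch.distributed.launch --nproc_per_node 2 --use_env run_ner_no_trainer.py \
  --model_name_or_path bert-base-cased \
  --task_name $TASK_NAME \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/
```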

Note that this library is in alpha release, so your feedback is more than welcome if you encounter any problems using it.

## TensorFlow version

The following examples are covered in this section:

* NER on the GermEval 2014 (German NER) dataset
* Emerging and Rare Entities task: WNUT’17 (English NER) dataset

Details and results for the fine-tuning are provided by @stefan-it.

### GermEval 2014 (German NER) dataset

#### Data (Download and pre-processing steps)

Data can be obtained from the [GermEval 2014](https://sites.google.com/site/germeval2014ner/data) shared task page.

Here are the commands for downloading and pre-processing train, dev and test datasets. The original data format has four (tab-separated) columns; in a pre-processing step, only the two relevant columns (token and outer span NER annotation) are extracted:

```bash
curl -L 'https://drive.google.com/uc?export=download&id=1Jjhbal535VVz2ap4v4r_rN1UEHTdLK5P' \
| grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > train.txt.tmp
curl -L 'https://drive.google.com/uc?export=download&id=1ZfRcQThdtAR5PPRjIDtrVP7BtXSCUBbm' \
| grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > dev.txt.tmp
curl -L 'https://drive.google.com/uc?export=download&id=1u9mb7kNJHWQCWyweMDRMuTFoOHOfeBTH' \
| grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > test.txt.tmp
```
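
As an optional sanity check (not one of the original steps), every line of the `.tmp` files should now hold a token and its NER tag separated by a single space, with empty lines marking sentence boundaries:

```bash
# Peek at the first few lines of the pre-processed training split.
head -n 5 train.txt.tmp
```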

The GermEval 2014 dataset contains some strange "control character" tokens like `'\x96', '\u200e', '\x95', '\xad' or '\x80'`.
One problem with these tokens is that `BertTokenizer` returns an empty token for them, resulting in misaligned `InputExample`s.
The `preprocess.py` script located in the `scripts` folder a) filters out these tokens and b) splits longer sentences into smaller ones (once the max. subtoken length is reached).

Let's define some variables that we need for further pre-processing steps and training the model:

```bash
export MAX_LENGTH=128
export BERT_MODEL=bert-base-multilingual-cased
```

Run the pre-processing script on training, dev and test datasets:

```bash
python3 scripts/preprocess.py train.txt.tmp $BERT_MODEL $MAX_LENGTH > train.txt
python3 scripts/preprocess.py dev.txt.tmp $BERT_MODEL $MAX_LENGTH > dev.txt
python3 scripts/preprocess.py test.txt.tmp $BERT_MODEL $MAX_LENGTH > test.txt
```

The GermEval 2014 dataset has many more labels than the CoNLL-2002/2003 datasets, so a custom set of labels must be used:

```bash
cat train.txt dev.txt test.txt | cut -d " " -f 2 | grep -v "^$" | sort | uniq > labels.txt
```
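
If you want a quick look at the label set you just extracted (an extra check, not one of the original steps), count and inspect the entries; expect `B-`/`I-` prefixed tags plus `O`:

```bash
# Distinct label count and a sample of the extracted labels.
wc -l labels.txt
head labels.txt
```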

#### Prepare the run

Additional environment variables must be set:

```bash
export OUTPUT_DIR=germeval-model
export BATCH_SIZE=32
export NUM_EPOCHS=3
export SAVE_STEPS=750
export SEED=1
```