Question answering example

This folder contains the run_qa.py script, demonstrating question answering with the 🤗 Transformers library. For straightforward use cases you may be able to use this script without modification, although we have included comments in the code indicating areas you may need to adapt for your own projects.

Usage notes

Note that when contexts are long they may be split into multiple training cases, not all of which may contain the answer span.
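
The snippet below is a simplified sketch of the windowed tokenization that produces those overlapping features; the exact arguments and defaults live in run_qa.py, and the model name and window sizes here are purely illustrative.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")

# A long context is split into several overlapping features ("training cases").
encoded = tokenizer(
    "Who founded the company?",           # question
    "A very long context paragraph ...",  # context, possibly longer than max_length
    truncation="only_second",             # truncate only the context, never the question
    max_length=384,                       # illustrative window size
    stride=128,                           # illustrative overlap between consecutive windows
    return_overflowing_tokens=True,       # emit one feature per window
    return_offsets_mapping=True,          # used later to map predictions back to character spans
)
# encoded["overflow_to_sample_mapping"] records which original example each feature came from.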

As-is, the example script will train on SQuAD or any other question-answering dataset formatted the same way, and can handle user inputs as well.
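
If you bring your own data, each example should look roughly like a SQuAD record. Below is a minimal, made-up record for illustration; the field names follow the squad dataset on the Hugging Face Hub.

example = {
    "id": "0",                                  # any unique string
    "title": "Example",                         # SQuAD-style data also carries a title
    "context": "Hugging Face is based in New York City.",
    "question": "Where is Hugging Face based?",
    "answers": {
        "text": ["New York City"],
        "answer_start": [25],                   # character offset of the answer span in the context
    },
}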

Multi-GPU and TPU usage

By default, the script uses a MirroredStrategy and will use multiple GPUs effectively if they are available. TPUs can also be used by passing the name of the TPU resource with the --tpu argument. There are some issues surrounding these strategies and our models right now, which are most likely to appear in the evaluation/prediction steps. We're actively working on better support for multi-GPU and TPU training in TF, but if you encounter problems a quick workaround is to train in the multi-GPU or TPU context and then perform predictions outside of it.
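
For reference, the sketch below shows roughly how a TPU or multi-GPU strategy is set up in TensorFlow; the helper name and argument are illustrative rather than the script's actual code, which remains the source of truth.

import tensorflow as tf

def get_strategy(tpu_name=None):  # illustrative helper, not a function from run_qa.py
    if tpu_name is not None:
        # Connect to the named TPU and initialize it before building the model.
        resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=tpu_name)
        tf.config.experimental_connect_to_cluster(resolver)
        tf.tpu.experimental.initialize_tpu_system(resolver)
        return tf.distribute.TPUStrategy(resolver)
    # MirroredStrategy picks up all visible GPUs, or falls back to a single device if none are found.
    return tf.distribute.MirroredStrategy()

strategy = get_strategy()
with strategy.scope():
    # Model creation and compile() must happen inside the strategy scope.
    ...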

Memory usage and data loading

One thing to note is that all data is loaded into memory in this script. Most question answering datasets are small enough that this is not an issue, but if you have a very large dataset you will need to modify the script to handle data streaming. This is particularly challenging for TPUs, given the stricter requirements and the sheer volume of data required to keep them fed. A full explanation of all the possible pitfalls is a bit beyond this example script and README, but for more information you can see the 'Input Datasets' section of the TensorFlow TPU guide.
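
If your dataset really is too large for memory, one possible direction (not what run_qa.py does as-is) is to stream examples and feed them through a tf.data pipeline; the dataset name, fields and batch size below are illustrative.

import tensorflow as tf
from datasets import load_dataset

# Stream examples from the Hub instead of materializing the whole dataset in memory.
streamed = load_dataset("squad", split="train", streaming=True)

def example_generator():
    for example in streamed:
        # Tokenization would normally happen here (or in a .map() on the streamed dataset).
        yield example["question"], example["context"]

tf_dataset = tf.data.Dataset.from_generator(
    example_generator,
    output_signature=(
        tf.TensorSpec(shape=(), dtype=tf.string),
        tf.TensorSpec(shape=(), dtype=tf.string),
    ),
).batch(16)

Note that TPU input pipelines generally need to read from storage the TPU workers can access directly (such as Cloud Storage), so a Python generator like this is mainly useful on single-host GPU setups.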

Example command

python run_qa.py \
--model_name_or_path distilbert-base-cased \
--output_dir output \
--dataset_name squad \
--do_train \
--do_eval
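
After training, assuming the script has saved the final model to the --output_dir above, you can load it back for quick predictions. The sketch below uses the question-answering pipeline; the question and context are just examples.

from transformers import pipeline

# "output" is the --output_dir used in the training command above.
qa = pipeline("question-answering", model="output", tokenizer="output", framework="tf")
result = qa(
    question="Where is Hugging Face based?",
    context="Hugging Face is based in New York City.",
)
print(result["answer"], result["score"])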