# Examples
This folder contains actively maintained examples of the use of 🤗 Transformers, organized into different ML tasks. All examples in this folder are TensorFlow examples and are written using native Keras rather than classes like `TFTrainer`, which we now consider deprecated. If you've previously only used 🤗 Transformers via `TFTrainer`, we highly recommend taking a look at the new style - we think it's a big improvement!
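For illustration, here is a minimal sketch of what the native Keras workflow looks like with a 🤗 Transformers TF model. The checkpoint name, toy data, and hyperparameters below are placeholders rather than values taken from any particular script:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Placeholder checkpoint and toy data, purely to illustrate the workflow.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

texts = ["I loved this movie!", "This was a waste of time."]
labels = [1, 0]
encodings = tokenizer(texts, padding=True, truncation=True, return_tensors="np")
train_data = tf.data.Dataset.from_tensor_slices((dict(encodings), labels)).batch(2)

# Plain Keras compile/fit - no Trainer class involved. Recent versions of
# Transformers supply the model's own loss when compile() gets no loss argument.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5))
model.fit(train_data, epochs=1)
```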
In addition, all scripts here now support the 🤗 Datasets library - you can grab entire datasets just by changing one command-line argument!
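Under the hood, the scripts typically resolve that argument with `datasets.load_dataset`, so switching corpora only changes the name (and optional config) that gets passed in. A tiny sketch, with dataset names chosen purely as examples:

```python
from datasets import load_dataset

# Changing the dataset is just a matter of changing the name passed here.
wikitext = load_dataset("wikitext", "wikitext-2-raw-v1")
squad = load_dataset("squad")

print(wikitext["train"][0]["text"][:100])
print(squad["train"][0]["question"])
```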
## A note on code folding
Most of these examples have been formatted with `#region` blocks. In IDEs such as PyCharm and VSCode, these blocks mark named regions of code that can be folded for easier viewing. If you find any of these scripts overwhelming or difficult to follow, we highly recommend beginning with all regions folded and then examining regions one at a time!
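As a rough illustration (not taken from any specific script), the region markers look like the following, and IDEs that understand them can collapse each block down to its title line:

```python
# region Parse command-line arguments
# Argument-parsing code for the script would live here; folding this region
# hides it all behind the single "Parse command-line arguments" line.
model_name = "bert-base-cased"  # placeholder value
batch_size = 32                 # placeholder value
# endregion

# region Load and tokenize the dataset
# Data-loading and tokenization code would live here.
# endregion
```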
## The Big Table of Tasks
Here is the list of all our examples:
| Task | Example datasets |
|---|---|
| `language-modeling` | WikiText-2 |
| `multiple-choice` | SWAG |
| `question-answering` | SQuAD |
| `summarization` | XSum |
| `text-classification` | GLUE |
| `token-classification` | CoNLL NER |
| `translation` | WMT |
## Coming soon
- Colab notebooks to easily run through these scripts!