While using `run_clm.py`,[^1] I noticed that some files were being added
to my global cache, not the local cache. I set the `cache_dir` parameter
for the one call to `evaluate.load()`, which partially solved the
problem. I figured that while I was fixing the one script upstream, I
might as well fix the problem in all other example scripts that I could.
There are still some files being added to my global cache, but this
appears to be a bug in `evaluate` itself. This commit at least moves
some of the files into the local cache, which is better than before.
To create this PR, I made the following regex-based transformation:
`evaluate\.load\((.*?)\)` -> `evaluate.load($1,
cache_dir=model_args.cache_dir)` (no escaping is needed in the
replacement text). After applying that, I manually fixed
all modified files with `ruff` serving as useful guidance. During the
process, I removed one existing usage of the `cache_dir` parameter in a
script that did not have a corresponding `--cache-dir` argument
declared.
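To make the transformation concrete, here is a sketch of it using Python's `re` module. The input line is invented for illustration, and note that `re.sub` writes backreferences as `\1` where editor find-and-replace tools typically use `$1`:

```python
import re

# Non-greedy match of the arguments passed to evaluate.load(...).
# (A non-greedy match stops at the first closing parenthesis, so
# multi-line or nested calls still need the manual pass described above.)
pattern = r"evaluate\.load\((.*?)\)"
# Re-emit the captured arguments and append the cache_dir keyword.
replacement = r"evaluate.load(\1, cache_dir=model_args.cache_dir)"

line = 'metric = evaluate.load("accuracy")'
print(re.sub(pattern, replacement, line))
# prints: metric = evaluate.load("accuracy", cache_dir=model_args.cache_dir)
```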
[^1]: I specifically used `pytorch/language-modeling/run_clm.py` from
v4.34.1 of the library. For the original code, see `run_clm.py` at that tag.
# Examples

This folder contains actively maintained examples of the use of 🤗 Transformers organized into different ML tasks. All examples in this folder are TensorFlow examples and are written using native Keras rather than classes like `TFTrainer`, which we now consider deprecated. If you've previously only used 🤗 Transformers via `TFTrainer`, we highly recommend taking a look at the new style - we think it's a big improvement!

In addition, all scripts here now support the 🤗 Datasets library - you can grab entire datasets just by changing one command-line argument!
## A note on code folding

Most of these examples have been formatted with `#region` blocks. In IDEs such as PyCharm and VSCode, these blocks mark named regions of code that can be folded for easier viewing. If you find any of these scripts overwhelming or difficult to follow, we highly recommend beginning with all regions folded and then examining regions one at a time!
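For readers who have not seen the convention, a folding region is just a pair of structured comments that the IDE recognizes. The sketch below uses invented region names and placeholder bodies purely to show the shape:

```python
# region Data loading
def load_examples():
    # Stand-in for the dataset download and preprocessing logic that
    # the real scripts keep in a region like this one.
    return ["first example", "second example"]
# endregion

# region Training
def count_examples(examples):
    # Placeholder for the model-building and fitting logic.
    return len(examples)
# endregion

print(count_examples(load_examples()))  # prints 2
```

With all regions folded, the editor shows only the `# region ...` header lines, which act as a table of contents for the script.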
## The Big Table of Tasks
Here is the list of all our examples:
| Task | Example datasets |
|---|---|
| language-modeling | WikiText-2 |
| multiple-choice | SWAG |
| question-answering | SQuAD |
| summarization | XSum |
| text-classification | GLUE |
| token-classification | CoNLL NER |
| translation | WMT |
## Coming soon
- Colab notebooks to easily run through these scripts!