
Language modelling examples

This folder contains some scripts showing examples of language model pre-training with the 🤗 Transformers library. For straightforward use cases you may be able to use these scripts without modification, although we have also included comments in the code to indicate areas that you may need to adapt to your own projects. The two scripts have almost identical arguments, but they differ in the type of LM they train: a causal language model (like GPT) or a masked language model (like BERT). Masked language models generally train more quickly and perform better when fine-tuned on new tasks with a task-specific output head, such as text classification. However, their ability to generate text is weaker than that of causal language models.

Pre-training versus fine-tuning

These scripts can be used both to pre-train a language model completely from scratch and to fine-tune a language model on text from your domain of interest. To start from an existing pre-trained language model, use the --model_name_or_path argument; to train from scratch, use the --model_type argument to indicate the class of model architecture to initialize, as in the sketch below.
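For example, pre-training a masked language model from scratch might look like the following. This is a minimal sketch: the bert model type and the google-bert/bert-base-cased tokenizer are illustrative choices, and it assumes the script exposes the usual --tokenizer_name argument (a tokenizer is still needed even when the model weights are freshly initialized).

python run_mlm.py \
--model_type bert \
--tokenizer_name google-bert/bert-base-cased \
--output_dir output \
--dataset_name wikitext \
--dataset_config_name wikitext-103-raw-v1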

Multi-GPU and TPU usage

By default, these scripts use a MirroredStrategy and will use multiple GPUs effectively if they are available. TPUs can also be used by passing the name of the TPU resource with the --tpu argument.
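For instance, the masked language modeling script shown below can be pointed at a TPU with a command along these lines; my-tpu is a placeholder for the name (or address) of your TPU resource:

python run_mlm.py \
--model_name_or_path distilbert/distilbert-base-cased \
--output_dir output \
--dataset_name wikitext \
--dataset_config_name wikitext-103-raw-v1 \
--tpu my-tpu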

run_mlm.py

This script trains a masked language model.

Example command

python run_mlm.py \
--model_name_or_path distilbert/distilbert-base-cased \
--output_dir output \
--dataset_name wikitext \
--dataset_config_name wikitext-103-raw-v1

When using a custom dataset, a validation file can be passed as a separate input argument; otherwise, a (customizable) split of the training data is used for validation. See the second command below for an explicit validation file.

python run_mlm.py \
--model_name_or_path distilbert/distilbert-base-cased \
--output_dir output \
--train_file train_file_path
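If you do have a separate validation file, a command along these lines should work. This assumes the script's usual --validation_file data argument; validation_file_path is a placeholder:

python run_mlm.py \
--model_name_or_path distilbert/distilbert-base-cased \
--output_dir output \
--train_file train_file_path \
--validation_file validation_file_path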

run_clm.py

This script trains a causal language model.

Example command

python run_clm.py \
--model_name_or_path distilbert/distilgpt2 \
--output_dir output \
--dataset_name wikitext \
--dataset_config_name wikitext-103-raw-v1

When using a custom dataset, a validation file can be passed as a separate input argument; otherwise, a (customizable) split of the training data is used for validation. See the second command below for an explicit validation file.

python run_clm.py \
--model_name_or_path distilbert/distilgpt2 \
--output_dir output \
--train_file train_file_path
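As with run_mlm.py, if you have a separate validation file, a command along these lines should work. This assumes the script's usual --validation_file data argument; validation_file_path is a placeholder:

python run_clm.py \
--model_name_or_path distilbert/distilgpt2 \
--output_dir output \
--train_file train_file_path \
--validation_file validation_file_path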