Examples
We host a wide range of example scripts for multiple learning frameworks. Simply choose your favorite: TensorFlow, PyTorch or JAX/Flax.
We also have some research projects, as well as some legacy examples. Note that, unlike the main examples, these are not actively maintained and may require specific older versions of dependencies to run.
While we strive to cover as many use cases as possible, the example scripts are just that: examples. They are not expected to work out of the box on your specific problem, and you will likely need to change a few lines of code to adapt them to your needs. To help with this, most of the examples fully expose the preprocessing of the data, allowing you to tweak and edit them as required.
Before submitting a PR, please discuss on the forum or in an issue any feature you would like to add to an example. We welcome bug fixes, but since we want to keep the examples as simple as possible, we are unlikely to merge a pull request that adds functionality at the cost of readability.
Important note
To make sure you can successfully run the latest versions of the example scripts, you need to install the library from source and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
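If you don't already have a virtual environment handy, a minimal sketch of creating one with Python's standard-library `venv` module (the `.venv` path is just a common convention, not something the repo requires):

```shell
# Create a fresh virtual environment in ./.venv and activate it
python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
```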
```shell
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
```
Then cd into the example folder of your choice and run:
```shell
pip install -r requirements.txt
```
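If you want to verify that an example's pinned requirements are actually satisfied in your environment, a small standalone sketch (not part of the repo; the helper name and the simple specifier parsing are illustrative, and environment markers beyond a plain `;` split are not handled):

```python
# Sketch: report which packages from a requirements.txt are not installed,
# using the standard-library importlib.metadata.
from importlib import metadata


def missing_requirements(path):
    missing = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Drop environment markers, then version specifiers, to get the name.
            name = line.split(";")[0]
            for sep in ("==", ">=", "<=", "~=", ">", "<"):
                name = name.split(sep)[0]
            name = name.strip()
            try:
                metadata.version(name)
            except metadata.PackageNotFoundError:
                missing.append(name)
    return missing
```

Anything the function returns still needs a `pip install` before the example will run.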
To browse the examples corresponding to released versions of 🤗 Transformers, click on the line below and then on your desired version of the library:
Examples for older versions of 🤗 Transformers
- v4.21.0
- v4.20.1
- v4.19.4
- v4.18.0
- v4.17.0
- v4.16.2
- v4.15.0
- v4.14.1
- v4.13.0
- v4.12.5
- v4.11.3
- v4.10.3
- v4.9.2
- v4.8.2
- v4.7.0
- v4.6.1
- v4.5.1
- v4.4.2
- v4.3.3
- v4.2.2
- v4.1.1
- v4.0.1
- v3.5.1
- v3.4.0
- v3.3.1
- v3.2.0
- v3.1.0
- v3.0.2
- v2.11.0
- v2.10.0
- v2.9.1
- v2.8.0
- v2.7.0
- v2.6.0
- v2.5.1
- v2.4.0
- v2.3.0
- v2.2.0
- v2.1.1
- v2.0.0
- v1.2.0
- v1.1.0
- v1.0.0
Alternatively, you can switch your cloned 🤗 Transformers checkout to a specific version (for instance, v3.5.1) with
```shell
git checkout tags/v3.5.1
```
and run the example command as usual afterward.
Running the Examples on Remote Hardware with Auto-Setup
run_on_remote.py is a script that launches any example on remote self-hosted hardware, with automatic hardware and environment setup. It uses Runhouse to launch on self-hosted hardware (e.g. in your own cloud account or on an on-premise cluster), but there are other options for running remotely as well. You can easily customize the example used, the command-line arguments, the dependencies, and the type of compute hardware, then run the script to automatically launch the example.
You can refer to hardware setup for more information about hardware and dependency setup with Runhouse, or this Colab tutorial for a more in-depth walkthrough.
You can run the script with the following commands:
```shell
# First install runhouse:
pip install runhouse

# For an on-demand V100 with whichever cloud provider you have configured:
python run_on_remote.py \
    --example pytorch/text-generation/run_generation.py \
    --model_type=gpt2 \
    --model_name_or_path=openai-community/gpt2 \
    --prompt "I am a language model and"

# For a BYO (bring your own) cluster:
python run_on_remote.py --host <cluster_ip> --user <ssh_user> --key_path <ssh_key_path> \
    --example <example> <args>

# For on-demand instances:
python run_on_remote.py --instance <instance> --provider <provider> \
    --example <example> <args>
```
You can also adapt the script to your own needs.