Examples

We host a wide range of example scripts for multiple learning frameworks. Simply choose your favorite: TensorFlow, PyTorch or JAX/Flax.

We also have some research projects, as well as some legacy examples. Note that, unlike the main examples, these are not actively maintained and may require specific older versions of dependencies in order to run.

While we strive to present as many use cases as possible, the example scripts are just that: examples. It is expected that they won't work out of the box on your specific problem and that you will need to change a few lines of code to adapt them to your needs. To help you with that, most of the examples fully expose the preprocessing of the data, allowing you to tweak and edit them as required.
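
As an illustration of the kind of preprocessing these scripts expose, here is a minimal sketch loosely modeled on the text-classification examples; the checkpoint name, the "text" column, and the max_length value are placeholders rather than values taken from any specific script:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def preprocess_function(examples):
    # The example scripts typically define a function like this and pass it
    # to datasets.Dataset.map; edit it to change truncation, padding, or
    # which columns get tokenized.
    return tokenizer(examples["text"], truncation=True, max_length=128)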

Before submitting a PR, please discuss any feature you would like to add to an example on the forum or in an issue. We welcome bug fixes, but since we want to keep the examples as simple as possible, it is unlikely that we will merge a pull request that adds functionality at the cost of readability.

Important note

To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

git clone https://github.com/huggingface/transformers
cd transformers
pip install .
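
If you also plan to modify the library code itself while working on the examples, an editable install (pip install -e .) is a common alternative that makes local changes take effect without reinstalling; this is a general pip convention, not a step from these instructions.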

Then cd into the example folder of your choice and run

pip install -r requirements.txt
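
One hedged way to confirm that the source install is the one Python picks up is to print the reported version; installs from the main branch typically carry a .dev suffix (this check is a suggestion, not part of the official steps):

import transformers

# A source install from the main branch usually reports a version such as
# "4.x.0.dev0" rather than a plain release number.
print(transformers.__version__)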

To browse the examples corresponding to released versions of 🤗 Transformers, expand the list below and then click on your desired version of the library:

Examples for older versions of 🤗 Transformers

Alternatively, you can switch your cloned 🤗 Transformers to a specific version, for instance v3.5.1, with

git checkout tags/v3.5.1

and run the example command as usual afterward.
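
Because each set of examples is written against the library version it ships with, a quick sanity check after switching tags is to confirm that the installed version matches the checkout; this illustrative assertion assumes the v3.5.1 tag from above:

import transformers

# Hypothetical sanity check: the installed library should match the tag the
# examples were checked out at (v3.5.1 in this case).
assert transformers.__version__.startswith("3.5.1"), transformers.__version__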