transformers/docs/source/main_classes
Latest commit by Stas Bekman (2df34f4aba):
[trainer] deepspeed integration (#9211)
* deepspeed integration

* style

* add test

* ds wants to do its own backward
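
What this looks like in practice, as a minimal sketch (variable names hypothetical; `model_engine` is the object `deepspeed.initialize()` returns, see below):

```python
outputs = model_engine(**batch)   # forward pass through the DeepSpeed engine
loss = outputs.loss
model_engine.backward(loss)       # DeepSpeed handles loss scaling/accumulation itself
model_engine.step()               # optimizer step + lr scheduler + gradient zeroing
```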

* fp16 assert

* Update src/transformers/training_args.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* style

* for clarity extract what args are being passed to deepspeed

* introduce the concept of self.wrapped_model

* s/self.wrapped_model/self.model_wrapped/

* complete transition to self.wrapped_model / self.model
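
A simplified sketch of the idea, not the actual Trainer code:

```python
# self.model is always the bare transformers model; self.model_wrapped is
# the outermost wrapper around it (DeepSpeed engine, DDP, ...). With no
# wrapper in play, both names point at the same object.
self.model = model
self.model_wrapped = model  # reassigned to the wrapper once one is applied
```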

* fix

* doc

* give ds its own init
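
Roughly the shape of a dedicated DeepSpeed init, as a hedged sketch (the surrounding variable names are hypothetical; the `deepspeed.initialize()` call and its return tuple are the real API):

```python
import json

import deepspeed

# ds_config is the dict loaded from the json file passed via --deepspeed
with open(args.deepspeed) as f:
    ds_config = json.load(f)

model_engine, optimizer, _, lr_scheduler = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config_params=ds_config,  # accepted as a config dict by deepspeed.initialize()
)
```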

* add custom overrides, handle bs correctly
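
On the batch-size side, a plausible sketch of the mapping (the key is a real DeepSpeed config key; the one-line assignment is a simplification):

```python
# the Trainer's per-device batch size maps onto DeepSpeed's micro-batch key
ds_config["train_micro_batch_size_per_gpu"] = args.per_device_train_batch_size
```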

* fix test

* clean up model_init logic, fix small bug

* complete fix

* collapse --deepspeed_config into --deepspeed
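
With this change a single flag carries everything: `--deepspeed ds_config.json` (filename illustrative) both enables DeepSpeed and points at its config file, instead of requiring a separate `--deepspeed_config` path.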

* style

* start adding doc notes

* style

* implement hf2ds optimizer and scheduler configuration remapping
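
An illustrative sketch of the remapping target: HF Trainer args such as `--learning_rate` and `--warmup_steps` land in the DeepSpeed config's optimizer/scheduler sections (keys are real DeepSpeed config keys; values here are made up):

```python
ds_config.update({
    "optimizer": {
        "type": "AdamW",
        "params": {"lr": 3e-5, "eps": 1e-8, "weight_decay": 0.0},
    },
    "scheduler": {
        "type": "WarmupLR",
        "params": {"warmup_min_lr": 0, "warmup_max_lr": 3e-5, "warmup_num_steps": 500},
    },
})
```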

* oops

* call get_num_training_steps only when absolutely needed

* work around broken auto-formatter

* deepspeed_config arg is no longer needed - fixed in deepspeed master

* use hf's fp16 args in config
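
Roughly, for example (keys are real DeepSpeed fp16 config keys):

```python
ds_config["fp16"] = {
    "enabled": True,   # mirrors the Trainer's --fp16 flag
    "loss_scale": 0,   # 0 selects dynamic loss scaling
}
```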

* clean

* start on the docs

* rebase cleanup

* finish up --fp16

* clarify the supported stages

* big refactor thanks to discovering deepspeed.init_distributed
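
The call in question, which can replace `torch.distributed.init_process_group()` when DeepSpeed owns the distributed setup (the backend shown is an assumption, though it is the usual choice for GPUs):

```python
import deepspeed

deepspeed.init_distributed(dist_backend="nccl")  # sets up the process group
```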

* cleanup

* revert fp16 part

* add checkpoint-support
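
The engine-level API involved, sketched with a hypothetical directory name:

```python
model_engine.save_checkpoint("output_dir/checkpoint-500")
# and on resume:
load_path, client_state = model_engine.load_checkpoint("output_dir/checkpoint-500")
```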

* move more of the ds init into integrations

* extend docs

* cleanup

* unfix docs

* clean up old code

* imports

* move docs

* fix logic

* make it clear which file it's referring to

* document nodes/gpus
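
The launcher side of this: DeepSpeed's own launcher selects the topology via its real `--num_nodes`/`--num_gpus` flags, e.g. `deepspeed --num_nodes=1 --num_gpus=2 your_script.py --deepspeed ds_config.json` (script and config filenames illustrative).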

* style

* wrong format

* style

* deepspeed handles gradient clipping
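
Sketched with an illustrative value: rather than the Trainer calling `torch.nn.utils.clip_grad_norm_()` itself, the max norm is handed to DeepSpeed through its config (a real DeepSpeed config key):

```python
ds_config["gradient_clipping"] = 1.0  # mirrors --max_grad_norm
```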

* easier to read

* major doc rewrite

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* docs

* switch to AdamW optimizer

* style

* Apply suggestions from code review

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

* clarify doc

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Committed on 2021-01-12 19:05:18 -08:00
| File | Last commit | Date |
| --- | --- | --- |
| callback.rst | Copyright (#8970) | 2020-12-07 18:36:34 -05:00 |
| configuration.rst | Copyright (#8970) | 2020-12-07 18:36:34 -05:00 |
| logging.rst | Copyright (#8970) | 2020-12-07 18:36:34 -05:00 |
| model.rst | [Flax] Align FlaxBertForMaskedLM with BertForMaskedLM, implement from_pretrained, init (#9054) | 2020-12-16 13:03:32 +01:00 |
| optimizer_schedules.rst | Seq2seq trainer (#9241) | 2020-12-22 11:33:44 -05:00 |
| output.rst | Add caching mechanism to BERT, RoBERTa (#9183) | 2020-12-23 23:01:32 +05:30 |
| pipelines.rst | TableQuestionAnsweringPipeline (#9145) | 2020-12-16 12:31:50 -05:00 |
| processors.rst | Fix documentation links always pointing to master. (#9217) | 2021-01-05 06:18:48 -05:00 |
| tokenizer.rst | Copyright (#8970) | 2020-12-07 18:36:34 -05:00 |
| trainer.rst | [trainer] deepspeed integration (#9211) | 2021-01-12 19:05:18 -08:00 |