* Trainer - deprecate tokenizer for processing_class
* Extend change across Seq2Seq trainer and docs
* Add tests
* Update to FutureWarning and add deprecation version
* Add AdEMAMix optimizer
* Fix test
* Update tests/trainer/test_trainer.py
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
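Taken together, the `processing_class` and AdEMAMix commits above change how a Trainer is typically constructed. A minimal sketch, assuming a standard tokenizer/model pair (model name and output_dir are placeholders); `optim="ademamix"` follows the commit above and is assumed here to rely on a recent bitsandbytes build.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

args = TrainingArguments(
    output_dir="out",
    optim="ademamix",  # AdEMAMix optimizer added above; assumed to need a recent bitsandbytes
)

trainer = Trainer(
    model=model,
    args=args,
    processing_class=tokenizer,  # replaces the deprecated `tokenizer=` argument
)
```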
* fix galore lr display with lr schedulers
* style
* add some tests to check for displayed lrs
* fix copy-paste error for warmup steps
* standardize the default lr to be only in the optimizer
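The galore lr display commits above boil down to reading the logged learning rate from the optimizer's param groups (which the scheduler updates) rather than from a separately stored default. A minimal PyTorch sketch of that idea; `current_learning_rates` is a hypothetical helper, not Trainer API.

```python
import torch

def current_learning_rates(optimizer: torch.optim.Optimizer) -> list:
    # The value shown in logs should come from the param groups the scheduler mutates.
    return [group["lr"] for group in optimizer.param_groups]

params = [torch.nn.Parameter(torch.zeros(2, 2))]
optimizer = torch.optim.AdamW(params, lr=1e-3)  # the default lr lives only in the optimizer
scheduler = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=1.0, end_factor=0.1, total_iters=10)

for _ in range(3):
    optimizer.step()
    scheduler.step()

print(current_learning_rates(optimizer))  # reflects the scheduler's updates, not the initial 1e-3
```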
* trying out my luck with the reads
* add gather_use_object arguments
* fix name and pass the CI test for Seq2SeqTrainer
* make style
* switch it to functools
* fix typo
* add accelerate version check
* adding warning
* Update src/transformers/trainer.py
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* make style
* Update src/transformers/training_args.py
* move the check function to the initial part
* add test for eval_use_gather_object
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
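A minimal sketch of opting into the behavior added above; the flag name follows these commits, and the accelerate version check and warning mentioned above are why older accelerate installs are expected to reject or ignore it.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    eval_use_gather_object=True,  # gather arbitrary (non-tensor) objects across processes during eval/predict
)
```

With the flag off, the evaluation loop only gathers tensors; turning it on lets nested Python objects be gathered from all processes as well.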
* Implement JSON dump conversion for torch_dtype in TrainingArguments
* Add unit test for converting torch_dtype in TrainingArguments to JSON
* move unit test for converting torch_dtype into TrainerIntegrationTest class
* reformatting using ruff
* convert dict_torch_dtype_to_str to private method _dict_torch_dtype_to_str
---------
Co-authored-by: jun.4 <jun.4@kakaobrain.com>
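A small sketch of what the torch_dtype JSON handling above buys: TrainingArguments.to_json_string() can serialize arguments even when a value is a torch.dtype, by normalizing it to a plain string. The exact logic lives in the private _dict_torch_dtype_to_str helper mentioned above; the conversion shown here is only the rough idea.

```python
import torch
from transformers import TrainingArguments

args = TrainingArguments(output_dir="out")
print(args.to_json_string())  # JSON dump; any torch.dtype values are emitted as strings

# Rough idea of the normalization the private helper performs:
value = torch.float16
as_str = str(value).split(".")[-1]  # "float16" instead of an unserializable torch.dtype
print(as_str)
```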
* Introduce configured_state
* Include note on tuning
* Allow for users to have defined a state already
* Include tests
* Add note on hyperparameter tuning
* Guard a bit better
* Update src/transformers/training_args.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/training_args.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Finish rebase
* Guard carefully
* Fixup test
* Refactor
* Finish refactor
* Comment
* Update wrt feedback
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
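A sketch of the configured-state flow these commits guard: the distributed state is created before TrainingArguments, and the arguments reuse it instead of building a new one. The `use_configured_state` accelerator_config key is an assumption based on the commits' intent, and the note above means this path is not expected to mix with hyperparameter tuning.

```python
from accelerate import PartialState
from transformers import TrainingArguments

_ = PartialState()  # user-defined state, created up front (e.g. to tune process groups)

args = TrainingArguments(
    output_dir="out",
    accelerator_config={"use_configured_state": True},  # assumed key: reuse the existing state
)
```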
* Added cache clearing for GPU efficiency.
* Added batch_eval_metrics capability
* Ran make fixup
* Fixed bug
* Fixed whitespace issue
* Fixed outdated condition
* Updated docstrings with instructions for batch_eval_metrics. Updated end of dataloader logic
* Added first version of batch_eval_metrics Trainer test
* Fixed batch_eval_metrics Trainer tests for both eval and predict
* Fixed batch_eval_metrics behavior for new Trainer variables
* Fixed batch_eval_metrics Trainer tests
* Ran fixup
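A sketch of the batch_eval_metrics contract described above: with the flag on, compute_metrics is called once per evaluation batch with an extra compute_result argument, accumulates statistics, and only returns the final metric dict on the last call, so full predictions never need to be concatenated in memory. Whether the per-batch predictions arrive as tensors or arrays is left open here; the accumulation works for either.

```python
from transformers import TrainingArguments

state = {"correct": 0, "seen": 0}

def compute_metrics(eval_pred, compute_result: bool):
    logits, labels = eval_pred
    preds = logits.argmax(-1)              # works for torch tensors and numpy arrays alike
    state["correct"] += int((preds == labels).sum())
    state["seen"] += int(len(labels))
    if compute_result:                      # final call of the evaluation loop
        metrics = {"accuracy": state["correct"] / max(state["seen"], 1)}
        state["correct"] = state["seen"] = 0
        return metrics

args = TrainingArguments(output_dir="out", batch_eval_metrics=True)
# Trainer(..., args=args, compute_metrics=compute_metrics) then uses the batched path.
```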
* Trainer: load checkpoint model with multiple adapters
* Trainer._load_from_checkpoint support multiple active adapters
* PeftModel.set_adapter does not support multiple adapters yet
* Trainer._load_from_checkpoint test multiple adapters
---------
Co-authored-by: Clara Luise Pohland <clara-luise.pohland@telekom.de>
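A minimal peft sketch of the setup the checkpoint fix above targets (model and adapter names are placeholders): a base model with more than one adapter attached, so that Trainer._load_from_checkpoint has several active adapters to restore on resume.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")
lora = LoraConfig(r=8, target_modules=["c_attn"])

model = get_peft_model(base, lora, adapter_name="adapter_a")
model.add_adapter("adapter_b", lora)  # second adapter; resuming should restore both
```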
* Start rework
* Fix failing test
* Include max
* Update src/transformers/trainer.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* add functions to trainer.py to get the number of params which require grad, the optimizer group for a parameter, and the learning rates of param groups
* add tests and raise ValueError when optimizer is None
* add second layer to test and freeze its weights
* check if torch is available before running tests
* use decorator to check if torch is available
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fix test indentation
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
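A sketch of the three helpers these commits add, on a toy model with a frozen second layer (mirroring the test above). The method names follow the commits; treat the exact signatures as assumptions, and note they are expected to raise a ValueError if called before an optimizer exists.

```python
import torch
from transformers import Trainer, TrainingArguments

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Linear(4, 2))
model[1].requires_grad_(False)  # freeze the second layer's weights

trainer = Trainer(model=model, args=TrainingArguments(output_dir="out"))
trainer.create_optimizer()  # without an optimizer, the lr/group helpers have nothing to inspect

print(trainer.get_num_trainable_parameters())        # counts only params with requires_grad=True
print(trainer.get_learning_rates())                  # one learning rate per optimizer param group
print(trainer.get_optimizer_group(model[0].weight))  # the group containing this parameter
```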
* add warnings if training args differ from checkpoint args stored in trainer_state.json
* run formatting and styling
* add a test
* format and styling
---------
Co-authored-by: Jonathan Flynn <jonl.flynn@guardian.co.uk>
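An illustrative version of the warning added above, not the actual Trainer internals: on resume, values recorded in the checkpoint's trainer_state.json are compared against the current run's arguments and a warning is emitted on mismatch. The key names and storage layout here are guesses used to show the shape of the check.

```python
import json
import warnings
from pathlib import Path

def warn_if_args_changed(checkpoint_dir: str, current_args: dict, saved_key: str = "saved_args") -> None:
    state = json.loads((Path(checkpoint_dir) / "trainer_state.json").read_text())
    for name, saved_value in state.get(saved_key, {}).items():
        if name in current_args and current_args[name] != saved_value:
            warnings.warn(
                f"Training argument '{name}' changed from {saved_value!r} (checkpoint) "
                f"to {current_args[name]!r}; resuming with different arguments may change results."
            )
```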
* add galore v1
* add import
* add tests and doc
* fix doctest
* forward contrib credits from discussions
* Apply suggestions from code review
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* fix failing tests
* switch to `optim_target_modules` and clarify docs
* more clarification
* enhance lookup logic
* update a test to add peak memory
* add regex, all-linear and single string support
* add layer-wise optimization through DummyOptimizers and LRSchedulers
* forward contrib credits from discussions and original idea
* add a section about DDP not supported in layerwise
* Update src/transformers/trainer.py
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* fix self
* check only if layer_wise
* Update src/transformers/training_args.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* oops
* make use of intervals
* clarify comment
* add matching tests
* GaLoRe -> GaLore
* move to `get_scheduler`
* add note on docs
* add a warning
* adapt a bit the docs
* update docstring
* support original API
* Update docs/source/en/trainer.md
* slightly refactor
* Update docs/source/en/trainer.md
Co-authored-by: Matthew Douglas <38992547+matthewdouglas@users.noreply.github.com>
* Update src/transformers/training_args.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fix args parsing and add tests
* remove warning for regex
* fix type hint
* add note about extra args
* make `is_regex` return optional
---------
Co-authored-by: Maxime <maximegmd @users.noreply.github.com>
Co-authored-by: Wing Lian <winglian @users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: hiyouga <hiyouga@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Matthew Douglas <38992547+matthewdouglas@users.noreply.github.com>
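Putting the GaLore commits above together, usage ends up looking roughly like this (requires the galore-torch package; the hyperparameter values are arbitrary). optim_target_modules accepts exact module-name substrings, a regex, or "all-linear", and the layer-wise variant is the one documented above as not supported with DDP.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    optim="galore_adamw",                        # or "galore_adamw_layerwise" for layer-wise updates
    optim_target_modules=["attn", "mlp"],        # substrings, a regex, or "all-linear"
    optim_args="rank=128, update_proj_gap=200, scale=0.8",  # extra args forwarded to GaLore
)
```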
* move code to Trainer.evaluate to enable use of that function with multiple datasets
* test
* update doc string
* and a tip
* forgot the type
---------
Co-authored-by: Prof. Peter Schneider-Kamp <jps@ordbogen.com>
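With evaluate handling dicts directly, as described above, several datasets can be scored in one call; metric names are typically prefixed with the dict keys. The trainer and the two dataset objects below are placeholders assumed to already exist.

```python
# Placeholder names: `trainer`, `in_domain_ds`, and `out_of_domain_ds` are assumed to exist.
metrics = trainer.evaluate(
    eval_dataset={"in_domain": in_domain_ds, "out_of_domain": out_of_domain_ds}
)
print(metrics)  # e.g. keys like "eval_in_domain_loss" and "eval_out_of_domain_loss"
```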
* Fulfill request
* Add test
* Better test
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Better test
* Better test
* More comments
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>