* Set best_model_checkpoint only when ckpt exists.
Rather than setting it unconditionally without checking whether the checkpoint directory even exists, as before, the setting logic now lives inside `_save_checkpoint` and only runs when the directory exists.
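Roughly, the new guard looks like this (a simplified sketch with a hypothetical helper name, not the exact Trainer code):

```python
import os

def _maybe_record_best_checkpoint(state, run_dir):
    # Hypothetical helper mirroring the new behaviour: best_model_checkpoint is
    # only recorded once the corresponding checkpoint directory exists on disk.
    if state.best_global_step is None:
        return
    best_checkpoint_dir = os.path.join(run_dir, f"checkpoint-{state.best_global_step}")
    if os.path.isdir(best_checkpoint_dir):
        state.best_model_checkpoint = best_checkpoint_dir
```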
* Added best_global_step to TrainerState.
* Added tests for best_model_checkpoint.
* Fixed hard-coded values in test to prevent failures.
* Added helper func and removed hard-coded best_step.
* Added side effect patch generator for _eval.
* Added evaluate side effect func.
* Removed erroneous patching.
* Fixed minor bug.
* Applied Ruff.
* Fixed Ruff problem in make style.
* Used Trainer.set_initial_training_values.
* fix wandb hp search unable to resume from sweep_id
* format styles
---------
Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* Don't accidentally mutate the base_model_tp_plan
* Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Trigger tests
* Marking grad accum test as slow
* Add a flaky decorator
* Add a flaky decorator
* Use cyril's codeblock
* Don't copy() when it's None
* Use cyril's new codeblock
* make fixup
* test
* fix
* fix
* skip some and run some first
* test fsdp
* fix
* patches for generate
* test distributed
* copy
* don't test distributed loss for hpu
* require fp16 and run first
* changes from marc's PR fixing zero3
* better alternative
* return True when fp16 is supported on Gaudi without creating a bridge
* fix
* fix tested dtype in deepspeed inference test
* test
* fix
* test
* fix
* skip
* require fp16
* run first fsdp
* Apply suggestions from code review
* address comments
* address comments and refactor test
* reduce precision
* avoid doing Gaudi1-specific stuff in the generation loop
* document test_gradient_accumulation_loss_alignment_with_model_loss test a bit more
Fixed 2 issues regarding `tests/trainer/test_data_collator.py::TFDataCollatorIntegrationTest::test_all_mask_replacement`:
1. I got the error `RuntimeError: "bernoulli_tensor_cpu_p_" not implemented for 'Long'`. This happens because `mask_replacement_prob=1` is an integer, so it reaches `torch.bernoulli` as a `torch.long` value, which that function does not accept. I fixed this by manually casting the probability arguments to float in the `__post_init__` function of `DataCollatorForLanguageModeling`.
2. I also got the error `tensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute Equal as input #1(zero-based) was expected to be a int64 tensor but is a int32 tensor [Op:Equal]` from the line `tf.reduce_all((batch["input_ids"] == inputs) | (batch["input_ids"] == tokenizer.mask_token_id))` in `test_data_collator.py`. This occurs because the `inputs` variable is `tf.int32`, while `batch["input_ids"]` is expected to be `tf.int64`. Solved this by manually casting `inputs` to `tf.int64` in the test.
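Both casts, illustrated in a standalone sketch (not the library code itself):

```python
import tensorflow as tf
import torch

# (1) torch.bernoulli needs floating-point probabilities; an integer argument
# such as mask_replacement_prob=1 arrives as torch.long and raises the error
# above, so the probability is cast to float (as now done in __post_init__).
mask_replacement_prob = float(1)
mask = torch.bernoulli(torch.full((4,), mask_replacement_prob))

# (2) tf equality needs matching integer widths; the test's `inputs` is
# tf.int32 while batch["input_ids"] is tf.int64, so the test casts first.
inputs = tf.cast(tf.constant([[1, 2, 3]], dtype=tf.int32), tf.int64)
```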
* fix: prevent model access error during Optuna hyperparameter tuning
The `transformers.integrations.integration_utils.run_hp_search_optuna` function releases model memory and sets trainer.model to None after each trial. This causes an AttributeError when subsequent Trainer.train calls attempt to access the model before reinitialization. This is only an issue when `fp16_full_eval` or `bf16_full_eval` flags are enabled.
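A hedged sketch of the kind of guard this implies (simplified; names and structure are approximate, not the exact patch):

```python
import torch

def maybe_cast_model_for_full_eval(trainer, args):
    # run_hp_search_optuna releases the model (trainer.model = None) after each
    # trial, so skip the dtype cast until model_init has re-created the model.
    if trainer.model is None:
        return
    if args.fp16_full_eval:
        trainer.model = trainer.model.to(dtype=torch.float16)
    elif args.bf16_full_eval:
        trainer.model = trainer.model.to(dtype=torch.bfloat16)
```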
* Update src/transformers/trainer.py
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* fix: prevent second save at the end of training
* fix: prevent second save at the end of training
* test: added test for no duplicate save on epoch save strategy
* fix: removed TrainerControl
* chore: style formatting
---------
Co-authored-by: JaktensTid <jaktenstid1@gmail.com>
* Add implementation for DataCollatorForMultipleChoice based on docs.
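A hedged usage sketch of the exported collator (the feature layout follows the docs implementation; the checkpoint name and token ids are just illustrative):

```python
from transformers import AutoTokenizer, DataCollatorForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)

features = [
    {
        # one example with two answer choices, already tokenized per choice
        "input_ids": [[101, 2023, 102], [101, 2008, 102]],
        "attention_mask": [[1, 1, 1], [1, 1, 1]],
        "label": 0,
    }
]
batch = collator(features)  # padded tensors of shape (batch_size, num_choices, seq_len)
```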
* Add DataCollatorForMultipleChoice to import structure.
* Remove custom DataCollatorForMultipleChoice implementations from example scripts.
* Remove custom implementations of DataCollatorForMultipleChoice from docs in English, Spanish, Japanese and Korean.
* Refactor torch version of DataCollatorForMultipleChoice to be more easily understandable.
* Apply suggested changes and run make fixup.
* fix copies, style and fixup
* add missing documentation
* nits
* fix docstring
* style
* nits
* isort
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
* Adding option to save/reload scaler
* Removing duplicate variable
* Adding save/reload test
* Small fixes on deterministic algorithm call
* Moving LLM test to another file to isolate its environment
* Moving back to old file and using subprocess to run test isolated
* Reverting back accidental change
* Reverting back accidental change
* add RAdamScheduleFree optimizer
* revert schedulefree version to the minimum requirement
* refine is_schedulefree_available so that it can take min_version
* refine documents
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* layernorm_decay_fix
* W293 fix
* ruff format fix
* black format
* ruff format
* erase last layer
* add test_get_parameter_names_rmsnorm
* rmsnorm fix
* use torch.testing.assert_close instead to get more details about errors in CIs
* fix
* style
* test_all
* revert for IBert
* fixes and updates
* more image processing fixes
* more image processors
* fix mamba and co
* style
* less strict
* ok I won't be strict
* skip and be done
* up
* DataCollatorForLanguageModeling class was updated with new parameters that provide more control over token masking and replacing
* DataCollatorForLanguageModeling class was updated with new parameters that provide more control over token masking and replacing
* Addressed review comments, modified the docstring and added a test for DataCollatorForLanguageModeling
* update codecarbon
* replace directly-specified-test-dirs with tmp_dir
* pass tmp_dir to all get_regression_trainer
* test_trainer.py: Use tmp_dir consistently for all output_dir arguments
* fix some with...as tmp_dir blocks
* reflect the comments to improve test_trainer.py
* refresh .gitignore
* Updated docstring for _determine_best_metric.
* Updated docstring for metric_for_best_model.
* Added test case for save strategy.
* Updated incorrect test case.
* Changed eval_strategy to match save_strategy.
* Separated test cases for metric.
* Allow load_best_model when save_strategy == "best".
* Updated docstring for metric_for_best_model.
* fix GA bugs and add unit test
* narrow down model loss unit test diff gap
* format code to make ruff happy
* send num_items_in_batch argument to decoder
* fix GA loss bug in BertLMHeadModel
* use TinyStories-33M to narrow down diff gap
* format code
* missing .config
* avoid adding extra args
---------
Co-authored-by: kangsheng <kangsheng@meituan.com>
* Remove FSDP wrapping from sub-models.
* solve conflict trainer.py
* make fixup
* add unit test for fsdp_auto_wrap_policy when using auto_find_batch_size
* put back extract_model_from_parallel
* use transformers unwrap_model
* kinda works
* update
* add tests
* update
* use special tokens in processors
* typo
* fix copies
* fix
* fix moshi after rebase
* update
* fix tests
* update
* Update docs/source/en/main_classes/tokenizer.md
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* update docs
* test for load time adding tokens
* fix some more tests which are now fetched better
* one more fix
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update trainer for easier handling of accumulate + proper reporting
* test
* Fixup tests
* Full fix
* Fix style
* rm comment
* Fix tests
* Minimize test + remove py 311 check
* Unused import
* Forward contrib credits from discussions
* Fix reported metrics
* Refactor, good as it's going to get
* rm pad tok id check
* object detection and audio are being annoying
* Fin
* Fin x2
---------
Co-authored-by: Gyanateet Dutta <Ryukijano@users.noreply.github.com>
* Add _determine_best_metric and new saving logic.
1. The logic to determine the best metric was separated out from
`_save_checkpoint`.
2. In `_maybe_log_save_evaluate`, whether or not a new best metric was
achieved is determined after each evaluation, and if the save strategy
is "best" then the TrainerControl is updated accordingly (a simplified sketch follows below).
* Added SaveStrategy.
Same as IntervalStrategy, but with a new attribute called BEST.
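Roughly (a hedged sketch of the values; the real class derives from the library's string-enum base):

```python
from enum import Enum

class SaveStrategy(str, Enum):
    NO = "no"
    STEPS = "steps"
    EPOCH = "epoch"
    BEST = "best"
```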
* IntervalStrategy -> SaveStrategy
* IntervalStratgy -> SaveStrategy for save_strat.
* Interval -> Save in docstring.
* Updated docstring for save_strategy.
* Added SaveStrategy and made according changes.
`save_strategy` previously followed `IntervalStrategy` but now follows
`SaveStrategy`.
Changes were made accordingly to the code and the docstring.
* Changes from `make fixup`.
* Removed redundant metrics argument.
* Added new test_save_best_checkpoint test.
1. Checks for both cases where `metric_for_best_model` is explicitly
provided and when it's not provided.
2. The first case should have two checkpoints saved, whereas the second
should have three saved.
* Changed should_training_end saving logic.
The Trainer saves a checkpoint at the end of training by default as
long as `save_strategy != SaveStrategy.NO`. This condition was modified
to also exclude `SaveStrategy.BEST`, because it would be counterintuitive
to ask for only the best checkpoint to be saved and still have the last
one saved as well.
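In effect, the end-of-training save condition becomes (hedged sketch, using the enum's string values):

```python
def should_save_at_end(save_strategy: str) -> bool:
    # Previously: save at the end whenever save_strategy != "no".
    # Now "best" is also excluded, so only the best checkpoint is written.
    return save_strategy not in ("no", "best")
```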
* `args.metric_for_best_model` defaults to loss.
* Undo metric_for_best_model update.
* Remove checking metric_for_best_model.
* Added test cases for loss and no metric.
* Added error for metric and changed default best_metric.
* Removed unused import.
* `new_best_metric` -> `is_new_best_metric`
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Applied `is_new_best_metric` to all.
Changes were made for consistency and also to fix a potential bug.
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* exclude fsdp from delay_optimizer_creation
* add test case for trainer: FSDP mode and fp8 as mixed precision
* rearrange imports
* ruff formatted
* adapt _init_fsdp to fp8
* use _init_fsdp only when resume_from_checkpoint
* In the case of FSDP, self.layer will be a CheckpointWrapper, which has no len() method
* delete _init_fsdp
* solve conflict
* fix conflict
* make fixup
* bookmark
* Bookmark
* Bookmark
* Actually implement
* Pass in kwarg explicitly
* Adjust for if we do or don't have labels
* Bookmark fix for od
* bookmark
* Fin
* closer
* Negate accelerate grad accum div
* Fixup not training long enough
* Add in compute_loss to take full model output
* Document
* compute_loss -> compute_loss_fn
* Add a test
* Refactor
* Refactor
* Uncomment tests
* Update tests/trainer/test_trainer.py
Co-authored-by: Daniel Han <danielhanchen@gmail.com>
---------
Co-authored-by: Daniel Han <danielhanchen@gmail.com>
* Fix FSDP Initialization for resume training
* Added init_fsdp function to work with dummy values
* Fix FSDP initialization for resuming training
* Added CUDA decorator for tests
* Added torch_gpu decorator to FSDP tests
* Fixup for failing code quality tests
* Default synced_gpus to True when using FullyShardedDataParallel
Fixes #30228
Related:
* https://github.com/pytorch/pytorch/issues/100069
* https://github.com/pytorch/pytorch/issues/123962
Similar to DeepSpeed ZeRO Stage 3, when using FSDP with multiple GPUs and differently sized data per rank, the ranks reach different synchronization points at the same time, leading to deadlock.
To avoid this, we can automatically set synced_gpus to True if we detect that a PreTrainedModel is being managed by FSDP, using _is_fsdp_managed_module, which was added in PyTorch 2.0.0 for torch.compile: https://github.com/pytorch/pytorch/blob/v2.0.0/torch/distributed/fsdp/_dynamo_utils.py
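A hedged sketch of the detection, assuming torch's private helper from the link above:

```python
from torch.distributed.fsdp._dynamo_utils import _is_fsdp_managed_module

def should_sync_gpus(model) -> bool:
    # Mirror the DeepSpeed ZeRO-3 behaviour: default synced_gpus to True as soon
    # as any submodule is FSDP-managed, so every rank keeps entering generate()
    # together instead of deadlocking on unevenly sized per-rank data.
    return any(_is_fsdp_managed_module(module) for module in model.modules())
```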
* Remove test file
* ruff formatting
* ruff format
* Update copyright year
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Add test for FSDP-wrapped model generation
Before #33483, these tests would have hung for 10 minutes before crashing due to a timeout error
* Ruff format
* Move argparse import
* Remove barrier
I think this might cause more problems if one of the workers was killed
* Move import into function to decrease load time
https://github.com/huggingface/transformers/pull/33483#discussion_r1787972735
* Add test for accelerate and Trainer
https://github.com/huggingface/transformers/pull/33483#discussion_r1790309675
* Refactor imports
* Ruff format
* Use nullcontext
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Trainer - deprecate tokenizer for processing_class
* Extend change across Seq2Seq trainer and docs
* Add tests
* Update to FutureWarning and add deprecation version
* Add AdEMAMix optimizer
* Fix test
* Update tests/trainer/test_trainer.py
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* add DataCollatorBatchFlattening
* Update data_collator.py
* change name
* new FA2 flow if position_ids is provided
* add comments
* minor fix
* minor fix data collator
* add test cases for models
* add test case for data collator
* remove extra code
* formatting for ruff check and check_repo.py
* ruff format
ruff format tests src utils
* custom_init_isort.py