Zach Mueller
aa1b09c5d1
Change logic for logging in the examples ( #24956 )
Change logic
2023-07-20 12:30:10 -04:00
Sylvain Gugger
e9ad51306f
4.32.0.dev0
2023-07-17 13:30:44 -04:00
Ethan
f7d80cb3d2
Fix step bugs in no_trainer examples ( #24197 )
Fix step bugs in no trainer + load checkpoint + grad acc
2023-06-12 11:49:55 -04:00
Sylvain Gugger
ba695c1efd
v4.31.0.dev0
2023-06-07 16:49:00 -04:00
Zachary Mueller
072188d638
Act on deprecations in Accelerate no_trainer examples ( #24053 )
Act on deprecation
2023-06-06 13:04:38 -04:00
Zachary Mueller
b191d7db44
Update all no_trainer with skip_first_batches ( #23664 )
2023-05-22 14:49:31 -04:00
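A minimal sketch of the resume pattern this commit rolls out (names like `resume_step` are assumptions for illustration; the real scripts derive it from the checkpoint path):

```python
from accelerate import Accelerator
from torch.utils.data import DataLoader

accelerator = Accelerator()
train_dataloader = accelerator.prepare(DataLoader(list(range(100)), batch_size=10))

resume_step = 3  # assumed: batches already consumed in the interrupted epoch
# Fast-forward the dataloader so training resumes at the right batch instead of
# replaying the start of the epoch.
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
for step, batch in enumerate(active_dataloader):
    pass  # training step goes here
```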
Boda Sadallah
a7920065f2
fix bug in group_texts function that was inserting short batches ( #23429 )
* fix bug in group_texts function that was inserting short batches
* fully exclude short batches and return empty dict instead
* fix style
2023-05-18 14:22:30 -04:00
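For context, a sketch of the grouping logic after this fix, modeled on the `group_texts` helper in `run_clm.py` (`block_size` is an assumed value and the handling of the empty case may differ slightly from the merged change):

```python
from itertools import chain

block_size = 1024  # assumed; run_clm derives this from the tokenizer/model config

def group_texts(examples):
    # Concatenate all tokenized texts, then cut the result into block_size chunks.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # Always floor to a multiple of block_size: if fewer than block_size tokens
    # remain, total_length becomes 0 and empty lists come back instead of a
    # short batch.
    total_length = (total_length // block_size) * block_size
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
```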
Sylvain Gugger
a0c0a78233
v4.30.0.dev0
2023-05-09 14:59:38 -04:00
Sylvain Gugger
888c4a2ae0
v4.29.0.dev0
2023-04-12 20:04:29 -04:00
Wang, Yi
4ccaf268fb
add low_cpu_mem_usage option in run_clm.py example which will benefit… ( #22288 )
* add low_cpu_mem_usage option in run_clm.py example which will benefit LLM loading
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* update all the example and README under language-modeling
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-03-22 10:42:39 +00:00
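What the new option does in practice, sketched with a small stand-in checkpoint (`gpt2` is just an example model):

```python
from transformers import AutoModelForCausalLM

# With low_cpu_mem_usage=True the model is first created with empty weights and
# tensors are only materialized as the checkpoint is loaded, so peak host RAM
# stays close to one copy of the weights instead of roughly two.
model = AutoModelForCausalLM.from_pretrained("gpt2", low_cpu_mem_usage=True)
```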
Sylvain Gugger
ebdb185bef
v4.28.0.dev0
2023-03-14 13:49:10 -04:00
Sylvain Gugger
b19d64d852
Respect documentation on passive log level ( #21700 )
* Respect documentation on passive log level
* Fix test and set log level in examples
* Add doc
2023-02-22 09:39:18 +01:00
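Roughly what "set log level in examples" looks like in the PyTorch example scripts (a sketch; the real scripts do this inside `main()` with the parsed `TrainingArguments`):

```python
import logging

import datasets
import transformers
from transformers import TrainingArguments

logger = logging.getLogger(__name__)
training_args = TrainingArguments(output_dir="out", log_level="passive")  # assumed args

# Each process picks its own level (main process vs. replicas) from the training
# args instead of relying on the library's default verbosity.
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
transformers.utils.logging.enable_default_handler()
transformers.utils.logging.enable_explicit_format()
```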
Aaron Gokaslan
5e8c8eb5ba
Apply ruff flake8-comprehensions ( #21694 )
2023-02-22 09:14:54 +01:00
Sylvain Gugger
6f79d26442
Update quality tooling for formatting ( #21480 )
* Result of black 23.1
* Update target to Python 3.7
* Switch flake8 to ruff
* Configure isort
* Configure isort
* Apply isort with line limit
* Put the right black version
* adapt black in check copies
* Fix copies
2023-02-06 18:10:56 -05:00
Stas Bekman
3b9a1dc132
[examples] improve block_size warning message ( #21463 )
2023-02-06 08:36:12 -08:00
Quentin Lhoest
074d6b75fd
Simplify column_names in run_clm/mlm ( #21382 )
* simplify column_names in run_clm
* simplify column_names in run_mlm
* minor
2023-01-31 15:23:47 +01:00
Stas Bekman
98d88b23f5
[run_(clm|mlm).py examples] add streaming dataset support ( #21343 )
* [run_clm example] add streaming dataset support
* unrefactor kwargs
* fix
* fix
* require datasets>=2.0.0
* port to mlm
2023-01-30 14:01:35 -08:00
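A sketch of what the new streaming support boils down to on the datasets side (the dataset name is chosen purely for illustration):

```python
from datasets import load_dataset

# streaming=True returns IterableDatasets: examples are read on the fly rather
# than downloading and tokenizing the whole corpus up front.
raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1", streaming=True)
for example in raw_datasets["train"].take(3):
    print(example["text"])
```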
Sylvain Gugger
7119bb052a
v4.27.0.dev0
2023-01-23 16:52:35 -05:00
Mostafa Elhoushi
5603f78fc4
Add scikit-learn dependency to train language-modeling examples ( #21229 )
2023-01-23 09:54:45 -05:00
Sylvain Gugger
05e72aa0c4
Adapt repository creation to latest hf_hub ( #21158 )
* Adapt repository creation to latest hf_hub
* Update all examples
* Fix other tests, add Flax examples
* Address review comments
2023-01-18 11:14:00 -05:00
Wang, Yi
9c9fe89f84
[run_clm example] add torch_dtype option for model load. ( #20971 )
* [run_clm example] add torch_dtype option for model load.
For the BLOOM 175B model, peak inference memory drops by about 350GB, since the BLOOM weights on the model hub are stored in bfloat16.
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* add other type in option
* fix style
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-01-03 09:33:11 -05:00
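A sketch of the loading pattern the option enables (using a small BLOOM checkpoint as a stand-in for the 175B model):

```python
import torch
from transformers import AutoModelForCausalLM

# Load the weights in the dtype they are stored in (bfloat16 for BLOOM) instead
# of upcasting everything to float32, which roughly halves inference memory.
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m",
    torch_dtype=torch.bfloat16,  # or torch_dtype="auto" to follow the checkpoint config
)
```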
Sylvain Gugger
60d1f31bb0
v4.26.0.dev0
2022-12-01 16:19:33 -05:00
Jiahao Li
9681f052a1
Fix result saving errors of pytorch examples ( #20276 )
2022-11-16 09:51:04 -05:00
Zachary Mueller
822ae69c1b
Update reqs to include min gather_for_metrics Accelerate version ( #20242 )
* Update reqs to include min gather_for_metrics Accelerate version
* Other reqs
2022-11-15 13:28:00 -05:00
Muhammad Sakib Khan Inan
777b1bfe62
New logging support for the "Trainer" class (ClearML logger) ( #20184 )
* Init Update
* ClearML Callbacks integration
* update corrections
* args reporting updated
* {'tensorboard': False, 'pytorch': False}
* ClearML Tests added
* add clearml
* output_uri=True in Task.init
* reformatted integrations.py
* reformatted and fixed
* IF-ELSE statement issue on "has_clearml" resolved
* Add clearml in main callback docs
* Add additional clearml documentation
* Update src/transformers/integrations.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Accept suggestion
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Accept suggestion
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Small change in comments
* Make style clearml
* Accept suggestion
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Victor Sonck <victor.sonck@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-11-15 10:08:59 -05:00
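From the user side, enabling the new logger is a matter of the `report_to` argument (a minimal sketch; assumes the `clearml` package is installed and configured):

```python
from transformers import TrainingArguments

# With the ClearML callback registered, "clearml" becomes a valid report_to target.
training_args = TrainingArguments(
    output_dir="out",
    report_to=["clearml"],  # metrics and args are reported to a ClearML task
)
```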
Sylvain Gugger
06886d5a68
Only resize embeddings when necessary ( #20043 )
* Only resize embeddings when necessary
* Add comment
2022-11-03 12:05:04 -04:00
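The change amounts to guarding the resize (a sketch based on the language-modeling examples; `gpt2` is just an example checkpoint):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Only grow the embedding matrix when the tokenizer really has more tokens than
# the model; an unconditional resize would shrink embeddings that are padded
# beyond the vocabulary size on purpose.
embedding_size = model.get_input_embeddings().weight.shape[0]
if len(tokenizer) > embedding_size:
    model.resize_token_embeddings(len(tokenizer))
```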
Sylvain Gugger
c3a93d8d82
v4.25.0.dev0
2022-10-31 21:48:40 -04:00
Lysandre
10100979ed
Dev version
2022-10-10 17:25:40 -04:00
Santiago Castro
06f341de4f
Add a missing space in a script arg documentation ( #19113 )
2022-09-20 21:43:32 +02:00
Lysandre
16913b3c92
Dev version
2022-09-14 14:58:20 -04:00
Rahul A R
00fc9217d1
Fixed bug which caused overwrite_cache to always be True ( #19000 )
* fixed bug which caused overwrite_cache to always be True (#18967 ).
* reformatting changes
2022-09-13 11:29:48 -04:00
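The likely pitfall behind this bug (an assumption, not a quote of the original code): declaring a boolean flag with `type=bool` makes argparse call `bool()` on the raw string, and any non-empty string is truthy. A sketch of the failure and the usual fix:

```python
import argparse

parser = argparse.ArgumentParser()
# Buggy pattern: bool("False") is True, so the option can never be turned off.
parser.add_argument("--overwrite_cache_buggy", type=bool, default=False)
# Usual fix: a store_true flag that stays False unless explicitly passed.
parser.add_argument("--overwrite_cache", action="store_true")

args = parser.parse_args(["--overwrite_cache_buggy", "False"])
print(args.overwrite_cache_buggy)  # True -- the string "False" is truthy
print(args.overwrite_cache)        # False
```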
Nicholas Broad
4f299b2446
Accelerator end training ( #18910 )
* add accelerator.end_training()
Some trackers need this to end their runs.
* fixup and quality
* add space
* add space again ?!?
2022-09-07 07:46:26 -04:00
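A minimal sketch of where the call goes (the tracker choice and project name are assumptions):

```python
from accelerate import Accelerator

accelerator = Accelerator(log_with="tensorboard", project_dir="logs")
accelerator.init_trackers("example_project")

# ... training loop ...

# Tell every configured tracker the run is over so it can flush and close cleanly.
accelerator.end_training()
```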
Sylvain Gugger
c61f116b63
Tie weights after preparing the model in run_clm ( #18855 )
2022-09-01 12:06:56 -04:00
Rahul A R
e9442440fc
streamlining 'checkpointing_steps' parsing ( #18755 )
2022-08-25 11:00:38 -04:00
Atharva Ingle
d90a36d192
remove check for main process for trackers initialization ( #18706 )
2022-08-22 11:16:27 -04:00
Atharva Ingle
e54a1b49aa
model.tie_weights() should be applied after accelerator.prepare() ( #18676 )
* `model.tie_weights()` should be applied after `accelerator.prepare`
Weight tying should be done after the model has been moved to the XLA device, as mentioned in the PyTorch/XLA troubleshooting guide [here](https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#xla-tensor-quirks )
* format code
2022-08-18 13:46:57 -04:00
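The resulting ordering, sketched with a small example model (`gpt2` stands in for any checkpoint with tied embeddings):

```python
from accelerate import Accelerator
from transformers import AutoModelForCausalLM

accelerator = Accelerator()
model = AutoModelForCausalLM.from_pretrained("gpt2")

# prepare() may move the model (e.g. to an XLA/TPU device), which can break the
# sharing between input and output embeddings, so the weights are re-tied only
# after the model has been placed.
model = accelerator.prepare(model)
model.tie_weights()
```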
Zachary Mueller
358fc18613
Add evaluate to examples requirements ( #18666 )
2022-08-18 10:57:39 -04:00
zhoutang776
25e651a2de
Update run_translation_no_trainer.py ( #18637 )
* Update run_translation_no_trainer.py
Found an error in selecting the `no_decay` parameters, plus some small fixes for when the user continues training from a checkpoint
* fixes the `no_decay` and `resume_step` issues
1. change the `no_decay` list
2. if the user continues training their model from a provided checkpoint, `resume_step` will not be initialized properly when `args.gradient_accumulation_steps != 1`
2022-08-16 13:25:57 -04:00
Rasmus Arpe Fogh Jensen
a765b68aa6
Update no_trainer.py scripts to include accelerate gradient accumulation wrapper ( #18473 )
* Added accelerate gradient accumulation wrapper to run_image_classification_no_trainer.py example script
* make fixup changes
* PR comments
* changed input to Accelerator based on PR comment, ran make fixup
* Added comment explaining the sync_gradients statement
* Fixed lr scheduler max steps
* Changed run_clm_no_trainer.py script to use accelerate gradient accum wrapper
* Fixed all scripts except wav2vec2 pretraining to use accelerate gradient accum wrapper
* Added accelerate gradient accum wrapper for wav2vec2_pretraining_no_trainer.py script
* make fixup and lr_scheduler step inserted back into run_qa_beam_search_no_trainer.py
* removed changes to run_wav2vec2_pretraining_no_trainer.py script and fixed using wrong constant in qa_beam_search_no_trainer.py script
2022-08-08 15:52:47 -04:00
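A self-contained sketch of the wrapper pattern these scripts moved to (tiny model and data for illustration):

```python
import torch
from accelerate import Accelerator
from torch.utils.data import DataLoader

accelerator = Accelerator(gradient_accumulation_steps=4)  # assumed value

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
dataloader = DataLoader(torch.randn(64, 10), batch_size=8)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for batch in dataloader:
    # accumulate() only syncs gradients and steps every
    # gradient_accumulation_steps batches, replacing the manual
    # `step % gradient_accumulation_steps == 0` bookkeeping.
    with accelerator.accumulate(model):
        loss = model(batch).pow(2).mean()
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```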
Julien Chaumond
9129fd0377
transformers-cli login => huggingface-cli login ( #18490 )
* zero chance anyone's using that constant no?
* `transformers-cli login` => `huggingface-cli login`
* `transformers-cli repo create` => `huggingface-cli repo create`
* `make style`
2022-08-06 09:42:55 +02:00
Ritik Nandwal
3db4378bd7
Update no trainer scripts for language modeling and image classification examples ( #18443 )
* Update no_trainer script for image-classification
* Update no_trainer scripts for language-modeling examples
* Remove unused variable
* Removing truncation from losses array for language modeling examples
2022-08-03 08:33:18 -04:00
Sylvain Gugger
941d233153
Fix ROUGE add example check and update README ( #18398 )
* Fix ROUGE add example check and update README
* Stay consistent in values
2022-08-01 11:14:49 -04:00
atturaioe
1f84399171
Migrate metric to Evaluate in Pytorch examples ( #18369 )
* Migrate metric to Evaluate in pytorch examples
* Remove unused imports
2022-08-01 07:40:25 -04:00
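The migration is essentially a one-line swap in each script (sketch; the accuracy metric needs scikit-learn installed):

```python
import evaluate

# Before: metric = datasets.load_metric("accuracy")
metric = evaluate.load("accuracy")
metric.add_batch(predictions=[0, 1, 1], references=[0, 1, 0])
print(metric.compute())  # {'accuracy': 0.666...}
```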
Lysandre
c89a592e87
Dev version
2022-07-27 17:13:57 +02:00
Zachary Mueller
75259b44bf
Properly calculate the total train iterations and recalculate num epochs in no_trainer scripts ( #17856 )
2022-06-23 15:46:01 -04:00
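Roughly the bookkeeping the no_trainer scripts settled on (a sketch with assumed values):

```python
import math

# Assumed values for illustration.
num_batches_per_epoch = 1000      # len(train_dataloader)
gradient_accumulation_steps = 4
num_train_epochs = 3
max_train_steps = None            # may also be passed on the command line

num_update_steps_per_epoch = math.ceil(num_batches_per_epoch / gradient_accumulation_steps)
if max_train_steps is None:
    max_train_steps = num_train_epochs * num_update_steps_per_epoch
# Recalculate the epoch count from the (possibly user-provided) step budget so
# the progress bar and LR scheduler agree on the real number of updates.
num_train_epochs = math.ceil(max_train_steps / num_update_steps_per_epoch)
```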
Sylvain Gugger
7c6ec195ad
v4.21.0.dev0
2022-06-16 12:20:53 -04:00
Sylvain Gugger
3cab90279f
Add examples telemetry ( #17552 )
* Add examples telemetry
* Alternative approach
* Add to all other examples
* Add to templates as well
* Put framework separately
* Same for TensorFlow
2022-06-07 11:57:52 -04:00
Sourab Mangrulkar
d156898f3b
Improve no_trainer examples ( #17449 )
* improve no-trainer examples
* Trigger CI
* adding comment to clarify tracker init on main process
* Trigger CI
* Trigger CI
* Trigger CI
2022-05-28 00:06:31 +05:30
Sylvain Gugger
afe5d42d8d
Black preview ( #17217 )
* Black preview
* Fixup too!
* Fix check copies
* Use the same version as the CI
* Bump black
2022-05-12 16:25:55 -04:00
Lysandre Debut
5294fa12ee
Dev version
2022-05-12 11:04:23 -04:00