Commit Graph

381 Commits

Author SHA1 Message Date
Zach Mueller
01ab39b65f
Load state in else (#25318)
* Load else

* New approach

* Propagate
2023-08-08 05:41:00 -04:00
Phuc Van Phan
5fe36970e5
Add more information in the help parser on train_file and validation_file (#25324)
chore: adding new doc on train and val
2023-08-07 17:56:13 +02:00
Jackmin801
145109382a
Allow trust_remote_code in example scripts (#25248)
* pytorch examples

* pytorch mim no trainer

* cookiecutter

* flax examples

* missed line in pytorch run_glue

* tensorflow examples

* tensorflow run_clip

* tensorflow run_mlm

* tensorflow run_ner

* tensorflow run_clm

* pytorch example from_configs

* pytorch no trainer examples

* Revert "tensorflow run_clip"

This reverts commit 261f86ac1f.

* fix: duplicated argument
2023-08-07 16:32:25 +02:00
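A hedged sketch of how an example script can plumb `trust_remote_code` through to `from_pretrained`; the `ModelArguments` field below mirrors the examples' style but is illustrative, not the exact diff:

```python
from dataclasses import dataclass, field

from transformers import AutoConfig, AutoModelForSequenceClassification


@dataclass
class ModelArguments:
    model_name_or_path: str = field(default="bert-base-uncased")
    trust_remote_code: bool = field(
        default=False,
        metadata={"help": "Whether to allow custom code from the Hub to run when loading the model."},
    )


model_args = ModelArguments()
config = AutoConfig.from_pretrained(
    model_args.model_name_or_path, trust_remote_code=model_args.trust_remote_code
)
model = AutoModelForSequenceClassification.from_pretrained(
    model_args.model_name_or_path, config=config, trust_remote_code=model_args.trust_remote_code
)
```
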
Yih-Dar
149cb0cce2
Add token argument in example scripts (#25172)
* fix

* fix

* fix

* fix

* fix

* fix

* fix

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-02 11:17:31 +02:00
Yih-Dar
d53b8ad780
Update use_auth_token -> token in example scripts (#25167)
* pytorch examples

* tensorflow examples

* flax examples

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-07-28 15:33:45 +02:00
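A minimal sketch of the rename, assuming the standard `from_pretrained` signature; `hub_token` is a placeholder:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

hub_token = None  # a "hf_..." access token, or True to reuse the cached login

# before: from_pretrained(..., use_auth_token=hub_token)  # now deprecated
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", token=hub_token)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", token=hub_token)
```
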
Alan Ji
afa96fffdf
make run_generation more generic for other devices (#25133)
* make run_generation more generic for other devices

* use Accelerate to support any device type it supports.

* make style

* fix error usage of accelerator.prepare_model

* use `PartialState` to make sure everything is running on the right device

---------

Co-authored-by: statelesshz <jihuazhong1@huawei.com>
2023-07-28 08:20:10 -04:00
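A hedged sketch of the `PartialState` approach the commit message describes; the model and prompt are illustrative:

```python
from accelerate import PartialState
from transformers import AutoModelForCausalLM, AutoTokenizer

distributed_state = PartialState()  # resolves the right device (CUDA, MPS, CPU, ...) automatically

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to(distributed_state.device)

inputs = tokenizer("Hello, my dog is", return_tensors="pt").to(distributed_state.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
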
Lucain
6232c380f2
Fix .push_to_hub and cleanup get_full_repo_name usage (#25120)
* Fix .push_to_hub and cleanup get_full_repo_name usage

* Do not rely on Python bool conversion magic

* request changes
2023-07-28 11:40:08 +02:00
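A sketch of what the cleanup can look like when done with `huggingface_hub` directly; whether the scripts use exactly these calls is an assumption, and the repo name and folder path are placeholders:

```python
from huggingface_hub import HfApi

api = HfApi()
# create_repo returns the canonical repo_id, so get_full_repo_name is no longer needed
repo_id = api.create_repo("my-finetuned-model", exist_ok=True).repo_id
api.upload_folder(repo_id=repo_id, folder_path="output_dir", commit_message="End of training")
```
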
Alan Ji
c879318cc5
replace per_gpu_eval_batch_size with per_device_eval_batch_size in readme of multiple-choice task (#25078)
replace `per_gpu_eval_batch_size` with `per_device_eval_batch_size`
in readme of multiple-choice
2023-07-25 08:11:56 -04:00
Zach Mueller
aa1b09c5d1
Change logic for logging in the examples (#24956)
Change logic
2023-07-20 12:30:10 -04:00
statelesshz
37d8611ac9
replace no_cuda with use_cpu in test_pytorch_examples (#24944)
* replace no_cuda with use_cpu in test_pytorch_examples

* remove code that is never used

* fix style
2023-07-20 07:09:04 -04:00
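Sketch of the rename in `TrainingArguments` (the output directory is illustrative):

```python
from transformers import TrainingArguments

# before: TrainingArguments(output_dir="out", no_cuda=True)  # deprecated
args = TrainingArguments(output_dir="out", use_cpu=True)  # force CPU even if an accelerator is present
```
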
ranchlai
8fd8c8e49e
Add multi-label text classification support to pytorch example (#24770)
* Add text classification example

* set the problem type and finetuning task

* ruff reformatted

* fix bug for unsetting label_to_id for regression

* update README.md

* fixed finetuning task

* update comment

* check if label exists in feature before removing

* add useful logging
2023-07-20 07:02:44 -04:00
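A minimal, self-contained sketch of the multi-label setup the example enables; the checkpoint and label count are illustrative. Labels are multi-hot float vectors, and `problem_type="multi_label_classification"` switches the loss to `BCEWithLogitsLoss`:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

num_labels = 3
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=num_labels,
    problem_type="multi_label_classification",
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer("transformers can tag several topics at once", return_tensors="pt")
labels = torch.tensor([[1.0, 0.0, 1.0]])  # multi-hot targets, one column per label
outputs = model(**inputs, labels=labels)
print(outputs.loss)
```
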
Hwijeen Ahn
dd49404a89
check if eval dataset is dict (#24877)
* check if eval dataset is dict

* formatting
2023-07-18 13:33:41 -04:00
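A hedged sketch of the check, assuming a `Trainer`-based example; `evaluate_all` is a hypothetical helper, not the script's actual function:

```python
def evaluate_all(trainer, eval_dataset):
    """Evaluate a single dataset or a dict of named eval datasets."""
    if isinstance(eval_dataset, dict):
        metrics = {}
        for name, ds in eval_dataset.items():
            metrics.update(trainer.evaluate(eval_dataset=ds, metric_key_prefix=f"eval_{name}"))
        return metrics
    return trainer.evaluate(eval_dataset=eval_dataset)
```
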
Sylvain Gugger
e9ad51306f
4.32.0.dev0 2023-07-17 13:30:44 -04:00
Xiaoli Wang
239ace152b
Fix TypeError: Object of type int64 is not JSON serializable (#24340)
* Fix TypeError: Object of type int64 is not JSON serializable

* Convert numpy.float64 and numpy.int64 to float and int for json serialization

* Black reformatted examples/pytorch/token-classification/run_ner_no_trainer.py

* make style
2023-06-27 12:15:49 +01:00
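Sketch of the fix: cast NumPy scalar types to built-ins before `json.dump`; the helper and file names are illustrative:

```python
import json

import numpy as np


def make_json_serializable(metrics):
    # numpy.int64 / numpy.float64 are not JSON serializable; convert them to int / float
    return {
        k: int(v) if isinstance(v, np.integer) else float(v) if isinstance(v, np.floating) else v
        for k, v in metrics.items()
    }


results = {"eval_accuracy": np.float64(0.91), "epoch": np.int64(3)}
with open("all_results.json", "w") as f:
    json.dump(make_json_serializable(results), f, indent=4)
```
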
Patrick von Platen
1609a436ec
Add MMS CTC Fine-Tuning (#24281)
* Add mms ctc fine tuning

* make style

* More fixes that are needed

* make fix-copies

* make draft for README

* add new file

* move to new file

* make style

* make style

* add quick test

* make style

* make style
2023-06-15 01:10:27 +02:00
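A minimal sketch of loading an MMS checkpoint with a language adapter, in the spirit of this fine-tuning example; the checkpoint id follows the public MMS documentation and the language code is illustrative:

```python
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"
target_lang = "tur"  # illustrative adapter language

processor = AutoProcessor.from_pretrained(model_id, target_lang=target_lang)
model = Wav2Vec2ForCTC.from_pretrained(
    model_id,
    target_lang=target_lang,
    ignore_mismatched_sizes=True,  # the per-language adapter head is resized on load
)
```
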
Ethan
f7d80cb3d2
Fix steps bugs in no trainer examples (#24197)
Fix step bugs in no trainer + load checkpoint + grad acc
2023-06-12 11:49:55 -04:00
Sylvain Gugger
ba695c1efd
v4.31.0.dev0 2023-06-07 16:49:00 -04:00
Zachary Mueller
cbf6bc2350
Oops, missed one (#24054)
Oops
2023-06-06 13:30:19 -04:00
Zachary Mueller
072188d638
Act on deprecations in Accelerate no_trainer examples (#24053)
Act on deprecation
2023-06-06 13:04:38 -04:00
Sylvain Gugger
3ff443a6d9
Re-enable squad test (#23912)
* Re-enable squad test

* [all-test]

* [all-test] Fix all test command

* Fix the all-test
2023-05-31 13:44:26 -04:00
Sylvain Gugger
00f6ba0e7e
Skip failing test for now 2023-05-31 06:31:33 -04:00
Sylvain Gugger
6e4bc67099
Revamp test selection for the example tests (#23737)
* Revamp test selection for the example tests

* Rename old XLA test and fake modif in run_glue

* Fixes

* Fake Trainer modif

* Remove fake modifs
2023-05-25 09:38:21 -04:00
Wang, Yi
33687a3f61
add GPTJ/bloom/llama/opt into model list and enhance the jit support (#23291)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-05-24 10:57:56 +01:00
Zachary Mueller
b191d7db44
Update all no_trainer with skip_first_batches (#23664) 2023-05-22 14:49:31 -04:00
Boda Sadallah
a7920065f2
fix bug in group_texts function that was inserting short batches (#23429)
* fix bug in group_texts function that was inserting short batches

* fully exclude short batches and return empty dict instead

* fix style
2023-05-18 14:22:30 -04:00
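Sketch of the grouping logic after the fix, assuming the `run_clm`-style `group_texts`; with the unconditional floor, a batch shorter than `block_size` yields an empty result instead of a short chunk (the `block_size` value is illustrative):

```python
from itertools import chain

block_size = 1024  # illustrative value


def group_texts(examples):
    # Concatenate all texts, then split into chunks of exactly block_size tokens.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # Drop the remainder; anything shorter than block_size is excluded entirely.
    total_length = (total_length // block_size) * block_size
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
```
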
Zachary Mueller
8a58809312
Fix translation no_trainer (#23407)
* Fix translation
2023-05-16 13:10:42 -04:00
Yih-Dar
d51296d9c2
skip test_run_squad_no_trainer for now (#23302)
skip

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-05-11 19:26:48 +02:00
Hari
5d02e6bd20
Convert numpy arrays to lists before saving the evaluation metrics as json (#23268)
* convert numpy array to list before writing to json

per_category_iou and per_category_accuracy are ndarrays in the eval_metrics

* code reformatted with make style
2023-05-11 08:54:23 -04:00
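Sketch of the conversion; the helper and file names are illustrative:

```python
import json

import numpy as np


def sanitize_metrics(metrics):
    # ndarray-valued metrics (e.g. per_category_iou) break json.dump; lists do not
    return {k: v.tolist() if isinstance(v, np.ndarray) else v for k, v in metrics.items()}


eval_metrics = {"mean_iou": 0.42, "per_category_iou": np.array([0.3, 0.5, 0.6])}
with open("eval_results.json", "w") as f:
    json.dump(sanitize_metrics(eval_metrics), f, indent=2)
```
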
Maria Khalusova
91f4c84a19
CTC example: updated trainer parameters to save tokenizer (#23243)
trainer parameters changed to save tokenizer in addition to feature_extractor
2023-05-10 07:45:10 -04:00
Sylvain Gugger
a0c0a78233
v4.30.0.dev0 2023-05-09 14:59:38 -04:00
Sebastian
1a8f61110e
fix: Update run_qa.py to work with deepset/germanquad (#23225)
Call str on id to make sure any ints are converted into the expected format for squad datasets
2023-05-09 09:20:10 -04:00
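A hedged sketch of the idea; `format_for_squad` is a hypothetical helper, not the script's code:

```python
def format_for_squad(example, prediction_text):
    ex_id = str(example["id"])  # deepset/germanquad stores int ids; str() matches the squad format
    prediction = {"id": ex_id, "prediction_text": prediction_text}
    reference = {"id": ex_id, "answers": example["answers"]}
    return prediction, reference


example = {"id": 40369, "answers": {"text": ["Berlin"], "answer_start": [0]}}
print(format_for_squad(example, "Berlin"))
```
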
Ashwin Mathur
fc6c8b0eaa
Add no_trainer scripts to pre-train Vision Transformers (#23156)
* Add run_mim_no_trainer.py draft from #20412

Add parse_args method and copy over other dependencies

Add Method call for sending telemetry

Initialize Accelerator

Make one log on every process

Set seed and Handle repository creation

Initialize dataset and Set validation split

Create Config

Adapt Config

Update Config

Create Feature Extractor

Create model

Set column names

Create transforms

Create mask generator

Create method to preprocess images

Shuffle datasets if needed and set transforms

Create Dataloaders

Add optimizer

Add learning rate scheduler

Prepare everything with our accelerator

Tie weights for TPU training

Recalculate training steps and training epochs

Set accelerator checkpointing steps

Initialize trackers and store configuration

Set total batch size

Fix typo: mlm -> mim

Log info at the start of training

Load in the weights and states from previous save

update the progress_bar if load from checkpoint

Define train loop

Add evaluation loop to training

Add to parse_args method

Push repo to hub

Save accelerator state

End training and save model and feature extractor

Remove unused imports

Fix trailing whitespace

* Update code based on comments, Rename feature_extractor to image_processor

* Fix linting

* Add argument for learning rate

* Add argument for setting number of training epochs

* Remove incorrect logger argument

* Convert max_train_steps to int for tqdm

---------

Co-authored-by: Saad Mahmud <shuvro.mahmud79@gmail.com>
2023-05-05 13:22:49 -04:00
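A heavily simplified sketch of the `no_trainer` pattern the new script follows, with a dummy model and dataset standing in for the real masked-image-modeling pipeline:

```python
import torch
from accelerate import Accelerator
from torch.utils.data import DataLoader, TensorDataset

accelerator = Accelerator()

model = torch.nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
dataloader = DataLoader(TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,))), batch_size=8)

# Accelerate handles device placement and distributed wrapping
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for epoch in range(2):
    model.train()
    for features, labels in dataloader:
        loss = torch.nn.functional.cross_entropy(model(features), labels)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()

accelerator.wait_for_everyone()
accelerator.save_state("checkpoint")  # the real script also saves the model and image processor
```
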
Robert Stone
b6933d76d2
Tidy Pytorch GLUE benchmark example (#23134)
Migration to Evaluate for metrics is not quite complete
2023-05-03 15:50:41 -04:00
regisss
bcedd0a471
Save the tokenizer and image preprocessor after training a model with the contrastive image-text example (#23035)
Save tokenizer and image preprocessor
2023-05-02 09:23:16 -04:00
Sylvain Gugger
888c4a2ae0
v4.29.0.dev0 2023-04-12 20:04:29 -04:00
Sylvain Gugger
1b1867d86b
Replace -100s in predictions by the pad token (#22693)
* Replace -100s in predictions by the pad token

* Style

* Try to catch them all
2023-04-11 09:32:20 -04:00
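Sketch of the fix: `-100` is the loss-masking value, not a real token id, so it is swapped for the pad token before decoding (the checkpoint and array are illustrative):

```python
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")

preds = np.array([[71, 12, -100, -100]])
preds = np.where(preds != -100, preds, tokenizer.pad_token_id)  # replace masking value with pad token
decoded = tokenizer.batch_decode(preds, skip_special_tokens=True)
print(decoded)
```
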
Mikel Penagarikano
d5239bab5b
Sync preprocesses before loading the processor at run_speech_recognition_ctc.py (#21926)
* Update run_speech_recognition_ctc.py

Make sure all processes wait until data is saved before loading the processor from the output_dir

* Make sure all processes wait until data is saved before loading the processor from the output_dir

* Update run_speech_recognition_ctc.py

* Update run_speech_recognition_seq2seq.py
2023-04-05 09:36:04 -04:00
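One way to enforce that ordering, sketched with `TrainingArguments.main_process_first`; whether the script uses exactly this helper is an assumption, and `training_args`, `vocab_file` and `vocab_dict` are placeholders for objects the script builds:

```python
import json
import os

from transformers import AutoTokenizer

with training_args.main_process_first(desc="vocabulary creation"):
    if not os.path.isfile(vocab_file):  # the main process runs first and writes the file
        with open(vocab_file, "w") as f:
            json.dump(vocab_dict, f)

# every process has passed the block above, so output_dir is fully written before loading
tokenizer = AutoTokenizer.from_pretrained(training_args.output_dir)
```
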
Maziyar Panahi
98268b2e76
Add id2label and label2id to model's config in run_xnli (#22558)
Add id2label and label2id to config in run_xnli
2023-04-04 09:28:57 -04:00
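Sketch of the change: store the label mappings in the config so downstream pipelines report class names instead of `LABEL_0`/`LABEL_1` (the label list and checkpoint are illustrative):

```python
from transformers import AutoConfig

label_list = ["entailment", "neutral", "contradiction"]
config = AutoConfig.from_pretrained("bert-base-multilingual-cased", num_labels=len(label_list))
config.id2label = {i: label for i, label in enumerate(label_list)}
config.label2id = {label: i for i, label in enumerate(label_list)}
```
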
Sabine
173193ccd0
Update Neptune docs (#22452) 2023-03-29 13:15:38 -04:00
Sylvain
ef28df0572 Fix quality due to ruff release 2023-03-22 20:45:08 -04:00
Connor Henderson
8e6c34b390
fix: Allow only test_file in pytorch and flax summarization (#22293)
allow only test_file in pytorch and flax summarization
2023-03-22 10:46:56 +00:00
Wang, Yi
4ccaf268fb
add low_cpu_mem_usage option in run_clm.py example which will benefit… (#22288)
* add low_cpu_mem_usage option in run_clm.py example which will benefit LLM loading

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* update all the examples and READMEs under language-modeling

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-03-22 10:42:39 +00:00
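Sketch of the new option; the checkpoint is illustrative. `low_cpu_mem_usage` avoids materializing a full randomly-initialized copy of the model before the checkpoint weights are loaded, which helps with large language models:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2", low_cpu_mem_usage=True)
```
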
jiqing-feng
8472a224fb
Enable traced model for text-generation task (#22265) 2023-03-22 10:19:26 +00:00
Sylvain Gugger
ebdb185bef
v4.28.0.dev0 2023-03-14 13:49:10 -04:00
bofeng huang
6192549c1f
[examples/speech-recognition] Add SpecAugment to run_speech_recognition_seq2seq.py (#21942)
* Add specaugment to run_speech_recognition_seq2seq.py

* Remove useless argument: text_column

* Fix quality

* Update return_attention_mask condition

* Update specaugment arguments only for whisper models

* Remove SpecAugment arguments from ModelArguments, only leave default values for simplicity

* Apply suggestions from code review

Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>

* Update apply_spec_augment only for whisper models

* Apply suggestions from code review

Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>

* Rename return_attention_mask to forward_attention_mask to avoid confusion with wav2vec2 models

---------

Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
2023-03-08 17:59:31 +01:00
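A hedged sketch of turning SpecAugment on for a Whisper checkpoint; the probabilities are illustrative, and the config attributes shown exist on `WhisperConfig`:

```python
from transformers import WhisperConfig, WhisperForConditionalGeneration

config = WhisperConfig.from_pretrained("openai/whisper-small")
config.apply_spec_augment = True
config.mask_time_prob = 0.05
config.mask_feature_prob = 0.05

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small", config=config)
# With SpecAugment enabled, the example also forwards the attention mask to the model
# (forward_attention_mask) so that masking respects padding.
```
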
bofeng huang
3c0ce60855
[examples/summarization] deal with max_length and num_beams (#21740)
* Override the decoding parameters of Seq2SeqTrainer

* Fix quality

* Fix max_length parameter

* Fix quality

* Remove redundant parameter max_length

* Separate the preprocess of train and validation to use different max_target_length
2023-02-27 08:18:14 +01:00
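Sketch of overriding the decoding parameters at evaluation time; `trainer`, `training_args` and `data_args` stand for the objects the summarization script builds, and `Seq2SeqTrainer.evaluate` accepts `max_length` and `num_beams` generation kwargs:

```python
max_length = (
    training_args.generation_max_length
    if training_args.generation_max_length is not None
    else data_args.val_max_target_length
)
num_beams = data_args.num_beams if data_args.num_beams is not None else training_args.generation_num_beams
metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix="eval")
```
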
Sanchit Gandhi
13489248fa
[Examples] Generalise run audio classification for log-mel models (#21756)
* [Examples] Generalise run audio classification for log-mel models

* batch feature extractor

* make style
2023-02-24 09:19:07 +01:00
Sylvain Gugger
b19d64d852
Respect documentation on passive log level (#21700)
* Respect documentation on passive log level

* Fix test and set log level in examples

* Add doc
2023-02-22 09:39:18 +01:00
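Sketch of the passive-log-level pattern the examples use: derive the level from `TrainingArguments` instead of hard-coding it (the function name is illustrative):

```python
import logging

import datasets
import transformers

logger = logging.getLogger(__name__)


def setup_logging(training_args):
    log_level = training_args.get_process_log_level()  # respects log_level / log_level_replica
    logger.setLevel(log_level)
    datasets.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.enable_default_handler()
    transformers.utils.logging.enable_explicit_format()
```
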
Aaron Gokaslan
5e8c8eb5ba
Apply ruff flake8-comprehensions (#21694) 2023-02-22 09:14:54 +01:00
regisss
751f17aa48
Fix typos in contrastive-image-text example README (#21665) 2023-02-16 09:10:25 -05:00