Commit Graph

533 Commits

Author SHA1 Message Date
Klaus Hipp
39fa400969
Fix input data file extension in examples (#28741) 2024-01-29 10:06:31 +00:00
Steven Liu
abe0289e6d
[docs] Fix datasets in guides (#28715)
* change datasets

* fix
2024-01-26 09:29:07 -08:00
bofeng huang
deb2b59073
Fix lr_scheduler in no_trainer training scripts (#27872)
* Fix lr_scheduler

* Fix lr scheduler
2024-01-22 14:22:18 +00:00
jheitmann
f0acf7b6d8
Fix id2label assignment in run_classification.py (#28590) 2024-01-22 11:31:31 +00:00
Amy Roberts
b2748a6efd v4.38.dev.0 2024-01-19 10:43:28 +00:00
Yoach Lacombe
772307be76
Making CTC training example more general (#28582)
* add w2v2bert compatibility

* Update examples/pytorch/speech-recognition/run_speech_recognition_ctc.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-01-18 17:01:49 +00:00
Yoach Lacombe
d2cdefb9ec
Add new meta w2v2-conformer BERT-like model (#28165)
* first commit

* correct default value for non-causal

* update config and modeling code

* update converting checkpoint

* clean modeling and fix tests

* make style

* add new config parameters to docstring

* fix copied from statements

* Apply suggestions from code review

Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>

* make position_embeddings_type docstrings clearer

* clean converting script

* remove function not used

* clean modeling file

* apply suggestion for test file + add convert script to not_doctested

* modify tests according to review - cleaner logic and more tests

* Apply nit suggestions from code review

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* add checker of valid position embeddings type

* instantiate new layer norm layer with the right eps

* fix freeze_feature_encoder since it can be None in some cases

* add test same output in convert script

* restore wav2vec2conformer and add new model

* create processor and FE + clean

* add new model code

* fix convert script and set default config parameters

* correct model id paths

* make style

* make fix-copies and cleaning files

* fix copied from statements

* complete .md and fix copies

* clean convert script argument defaults

* fix config parameters docstrings

* fix config docstring

* add copied from and enrich FE tests

* fix copied from and repo-consistency

* add autotokenizer

* make test input length shorter and change docstring code

* fix docstrings and copied from

* add add_adapter to ASR training example

* make testing of adapters more robust

* adapt to multi adapter layers

* refactor input_values->input_features and remove w2v2-bert feature extractor

* remove pretraining model

* remove deprecated features and useless lines

* add copied from and ignore statements to modeling tests

* remove pretraining model #2

* change import in convert script

* change default in convert script

* update readme and remove useless line

* Update tests/models/wav2vec2_bert/test_processor_wav2vec2_bert.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* refactor BERT to Bert for consistency

* remove useless ignore copy statement

* add persistent to buffer in rotary

* add eps in LayerNorm init and remove copied from

* add adapter activation parameters and add copied from statements

* Fix copied statements and add unittest.skip reasons

* add copied statement in test_processor

* refactor processor

* make style

* replace numpy random by torch rand

* remove expected output CTC

* improve converting script with processor class

* Apply suggestions from code review

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* remove gumbel class

* remove tests related to previously deleted class

* Update src/transformers/models/wav2vec2_bert/configuration_wav2vec2_bert.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* correct typos

* remove unused parameters

* update processor to take both text and audio

* update checkpoints

* update expected output and add ctc expected output

* add label_attention_mask

* replace pt with np in processor tests

* fix typo

* revert to behaviour with labels_attention_mask

---------

Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-01-18 13:37:34 +00:00
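For context, a minimal usage sketch of the model added in this PR. The checkpoint name `facebook/w2v-bert-2.0` and the `input_features` input name (renamed from `input_values` in this PR) are assumptions based on the release, not part of the commit itself.

```python
import torch
from transformers import AutoFeatureExtractor, AutoModel

# Assumed public checkpoint for the new Wav2Vec2-BERT model; adjust if it has moved.
checkpoint = "facebook/w2v-bert-2.0"
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

# One second of dummy 16 kHz audio; the extractor produces `input_features`.
audio = torch.randn(16000).numpy()
inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)
```
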
regisss
0cdcd7a2b3
Remove task arg in load_dataset in image-classification example (#28408)
* Remove `task` arg in `load_dataset` in image-classification example

* Manage case where "train" is not in dataset

* Add new args to manage image and label column names

* Similar to audio-classification example

* Fix README

* Update tests
2024-01-16 08:04:08 +01:00
Timothy Cronin
ff86bc364d
improve dev setup comments and hints (#28495)
* improve dev setup comments and hints

* fix tests for new dev setup hints
2024-01-15 18:36:40 +00:00
Yih-Dar
64bdbd888c
Don't set finetuned_from if it is a local path (#28482)
* fix

* fix

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-01-15 11:38:20 +01:00
Joao Gante
4fb3d3a0f6
TF: purge TFTrainer (#28483) 2024-01-12 16:56:34 +00:00
Alex Hedges
95091e1582
Set cache_dir for evaluate.load() in example scripts (#28422)
While using `run_clm.py`,[^1] I noticed that some files were being added
to my global cache, not the local cache. I set the `cache_dir` parameter
for the one call to `evaluate.load()`, which partially solved the
problem. I figured that while I was fixing the one script upstream, I
might as well fix the problem in all other example scripts that I could.

There are still some files being added to my global cache, but this
appears to be a bug in `evaluate` itself. This commit at least moves
some of the files into the local cache, which is better than before.

To create this PR, I made the following regex-based transformation:
`evaluate\.load\((.*?)\)` -> `evaluate\.load\($1,
cache_dir=model_args.cache_dir\)`. After using that, I manually fixed
all modified files with `ruff` serving as useful guidance. During the
process, I removed one existing usage of the `cache_dir` parameter in a
script that did not have a corresponding `--cache-dir` argument
declared.

[^1]: I specifically used `pytorch/language-modeling/run_clm.py` from
v4.34.1 of the library. For the original code, see the following URL:
acc394c4f5/examples/pytorch/language-modeling/run_clm.py.
2024-01-11 15:38:44 +01:00
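A minimal sketch of the pattern this PR applies throughout the examples; the `"accuracy"` metric and the local cache directory are placeholders (the scripts pass `model_args.cache_dir`).

```python
import evaluate

cache_dir = "./cache"  # in the example scripts this comes from --cache_dir (model_args.cache_dir)

# Before: evaluate.load("accuracy") wrote to the global evaluate cache.
# After: the metric files are stored in the user-specified cache directory.
metric = evaluate.load("accuracy", cache_dir=cache_dir)
```
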
Lysandre
3ed3e3190c Dev version 2023-12-13 18:29:31 +01:00
Adam Louly
4850aaba6f
fix no sequence length models error (#27522)
* fix no sequence length models error

* block size check

---------

Co-authored-by: Adam Louly <adamlouly@microsoft.com@orttrainingdev9.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2023-12-11 18:01:26 +00:00
Dave Berenbaum
fe41647afc
uses dvclive_test mode in examples/pytorch/test_accelerate_examples.py (#27763) 2023-11-30 14:52:03 +01:00
Peter Pan
ce31508134
docs: replace torch.distributed.run by torchrun (#27528)
* docs: replace torch.distributed.run by torchrun

 `transformers` now officially supports PyTorch >= 1.10.
 The entrypoint `torchrun` is present from 1.10 onwards.

Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>

* Update src/transformers/trainer.py

with @ArthurZucker's suggestion

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

---------

Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2023-11-27 16:26:33 +00:00
Mathias Nielsen
f31af3927f
[examples] Fix loading jsonl with load_dataset in run_translation example (#26924)
* Renamed variable extension to builder_name

* If builder name is jsonl, change to json to align with load_dataset

* Apply suggestions from code review

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

---------

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
2023-11-20 15:45:42 +01:00
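A small sketch of the renamed-extension logic described above, assuming local `train.jsonl`/`valid.jsonl` files; variable names follow the commit messages rather than the script verbatim.

```python
from datasets import load_dataset

data_files = {"train": "train.jsonl", "validation": "valid.jsonl"}
extension = data_files["train"].split(".")[-1]  # "jsonl"
# datasets has no "jsonl" builder, so map the extension to the "json" builder.
builder_name = "json" if extension == "jsonl" else extension
raw_datasets = load_dataset(builder_name, data_files=data_files)
```
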
V.Prasanna kumar
ffbcfc0166
Broken links fixed related to datasets docs (#27569)
fixed the broken links belonging to the datasets library in transformers
2023-11-17 13:44:09 -08:00
Matt
2e72bbab2c
Incorrect setting for num_beams in translation and summarization examples (#27519)
* Remove the torch main_process_first context manager from TF examples

* Correctly set num_beams=1 in our examples, and add a guard in GenerationConfig.validate()

* Update src/transformers/generation/configuration_utils.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-11-15 18:18:54 +00:00
Adam Louly
e6522e49a7
Fixing the failure of models without max_position_embeddings attribute. (#27499)
fix max pos issue

Co-authored-by: Adam Louly <adamlouly@microsoft.com@orttrainingdev9.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2023-11-15 18:16:42 +00:00
Zach Mueller
a85ea4b19a
Fix wav2vec2 params (#27515)
Fix test
2023-11-15 09:24:03 -05:00
Yih-Dar
c8b6052ff6
Final fix of the accelerate installation issue (#27408)
* fix

* [test-all] commit

* fix

* [test-all] commit

* [test-all] commit

* fix

* fix

* fix

* fix

* fix

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-11-09 18:52:29 +01:00
Dave Berenbaum
791ec370d1
Adds dvclive callback (#27352)
* dvclive trainer callback

* style fixes

* dvclive link fixes
2023-11-09 12:19:31 +00:00
Zach Mueller
e9adb0c9cf
Change thresh in test (#27378)
Change thresh
2023-11-09 04:44:36 -05:00
Zach Mueller
845aa832b7
Remove unused param from example script tests (#27354)
Unused param
2023-11-08 09:07:32 -05:00
Zach Mueller
efa57cb234
Fix example tests from failing (#27353)
* Fix example tests from failing

* CHange thresh
2023-11-08 07:45:21 -05:00
Hz, Ji
b6dbfee0a2
moving example of benchmarking to legacy dir (#27337)
move example of benchmarking to legacy
2023-11-08 09:27:37 +01:00
Lysandre
bc78fd1274 Dev version 2023-11-02 18:15:36 +01:00
Dong-geon Lee
25e6e9418c
Unify warning styles for better readability (#27184) 2023-10-31 18:12:14 +00:00
Hz, Ji
cd19b19378
make tests of pytorch_example device agnostic (#27081) 2023-10-30 14:56:41 +00:00
Gema Parreño
722e936491
[Typo fix] flag config in WANDB (#27130)
typo fix flag config
2023-10-29 18:22:26 +00:00
Lucain
66b088faf0
Provide alternative when warning on use_auth_token (#27105) 2023-10-27 14:32:54 +02:00
Michal Jamroz
e2d6d5ce57
Normalize only if needed (#26049)
* Normalize only if needed

* Update examples/pytorch/image-classification/run_image_classification.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* if else in one line

* within block

* one more place, sorry for mess

* import order

* Update examples/pytorch/image-classification/run_image_classification.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update examples/pytorch/image-classification/run_image_classification_no_trainer.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-10-24 13:32:03 +01:00
YQ
f71c9ccf59
fix logit-to-multi-hot conversion in example (#26936)
* fix logit to multi-hot conversion

* add comments

* typo
2023-10-23 12:33:05 +02:00
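A sketch of one common logit-to-multi-hot conversion for multi-label classification (per-label sigmoid thresholded at 0.5); the exact threshold used in `run_classification.py` may differ, and the logits below are made up.

```python
import torch

logits = torch.tensor([[1.2, -0.3, 0.8, -2.0],
                       [-0.5, 2.1, -1.0, 0.4]])

# Multi-label: threshold each label's sigmoid probability instead of taking argmax.
multi_hot = (torch.sigmoid(logits) > 0.5).int()
print(multi_hot)  # tensor([[1, 0, 1, 0], [0, 1, 0, 1]])
```
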
Tom Aarsen
40ea9ab2a1
Add many missing spaces in adjacent strings (#26751)
Add missing spaces in adjacent strings
2023-10-12 10:28:40 +02:00
Zach Mueller
1d6a84749b
Fix checkpoint path in no_trainer scripts (#26733)
checkpoint path
2023-10-11 16:16:27 +02:00
jheitmann
3eceaa3637
Fix source_prefix default value (#26654) 2023-10-10 20:49:10 +02:00
Phuc Van Phan
6015f91a5a
refactor: change default block_size (#26229)
* refactor: change default block_size

* fix: return tf to origin

* fix: change files to origin

* rebase

* rebase

* rebase

* rebase

* rebase

* rebase

* rebase

* rebase

* refactor: add min block_size to files

* reformat: add min block_size for run_clm tf
2023-10-04 15:31:38 +01:00
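A self-contained sketch of the clamping behaviour the change aims at: never exceed the tokenizer's maximum length and cap the default. The function name and the 1024 cap are illustrative, not the script's exact code.

```python
def resolve_block_size(requested, model_max_length, default_cap=1024):
    """Pick a block size that the model/tokenizer can actually handle."""
    if requested is None:
        return min(model_max_length, default_cap)
    return min(requested, model_max_length)

print(resolve_block_size(None, 2048))   # 1024 (default, capped)
print(resolve_block_size(4096, 2048))   # 2048 (clamped to the model maximum)
```
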
Lysandre
bd6205919a v4.35.0.dev0 2023-10-03 16:54:37 +02:00
Phuc Van Phan
ba47efbfe4
docs: change assert to raise and some small docs (#26232)
* docs: change assert to raise and some small docs

* docs: add rule and some document

* fix: fix bug

* fix: fix bug

* chore: revert logging

* chore: revert
2023-09-28 10:14:17 +02:00
Phuc Van Phan
4fb64e285a
chore: correct update_step and correct gradient_accumulation_steps (#26068) 2023-09-12 18:31:23 +01:00
Phuc Van Phan
5af2c62696
docs: add space to docs (#26067)
* docs: add space to docs

* docs: remove redundant space
2023-09-11 22:03:26 +01:00
Phuc Van Phan
9cebae64ad
docs: update link huggingface map (#26077) 2023-09-11 12:57:04 +01:00
Joao Gante
9a70d6e56f
Trainer: delegate default generation values to generation_config (#25987) 2023-09-05 14:47:00 +01:00
Susnato Dhar
404ff8fc17
Fix typo (#25966)
* Update feature_extraction_clap.py

* changed all occurrences of `lenght` to `length`
2023-09-05 10:12:25 +02:00
Lysandre
d8e13b3e04 v4.34.dev.0 2023-09-04 15:12:11 -04:00
Zach Mueller
be0e189bd3
Revert frozen training arguments (#25903)
* Revert frozen training arguments

* TODO
2023-09-01 11:24:12 -04:00
Phuc Van Phan
656e17f6f7
correct resume training steps number in progress bar (#25691)
feat: correct the resumed step count when updating the progress bar
2023-08-23 20:09:14 +02:00
Sylvain Gugger
5c67682b16
v4.33.0.dev0 2023-08-21 07:07:04 -04:00
Zach Mueller
ca51499248
Make training args fully immutable (#25435)
* Make training args fully immutable

* Working tests, PyTorch

* In test_trainer

* during testing

* Use proper dataclass way

* Fix test

* Another one

* Fix tf

* Lingering slow

* Exception

* Clean
2023-08-15 11:47:47 -04:00
Gema Parreño
0acf56224b
Update run_translation.py broken link in PyTorch example (#25461)
* Update run_translation.py

Fixed link

* Update run_translation.py
2023-08-11 15:41:24 +02:00
Yih-Dar
9c7b744795
Fix missing usage of token (#25382)
* add missing tokens

* fix

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-08 16:27:24 +02:00
Zach Mueller
01ab39b65f
Load state in else (#25318)
* Load else

* New approach

* Propagate
2023-08-08 05:41:00 -04:00
Phuc Van Phan
5fe36970e5
Adding more information in help parser on train_file and validation_file (#25324)
chore: adding new doc on train and val
2023-08-07 17:56:13 +02:00
Jackmin801
145109382a
Allow trust_remote_code in example scripts (#25248)
* pytorch examples

* pytorch mim no trainer

* cookiecutter

* flax examples

* missed line in pytorch run_glue

* tensorflow examples

* tensorflow run_clip

* tensorflow run_mlm

* tensorflow run_ner

* tensorflow run_clm

* pytorch example from_configs

* pytorch no trainer examples

* Revert "tensorflow run_clip"

This reverts commit 261f86ac1f.

* fix: duplicated argument
2023-08-07 16:32:25 +02:00
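A sketch of the flag this PR threads through the example scripts; the repository ids are placeholders, and `trust_remote_code=True` should only be set for Hub repos whose code you have inspected.

```python
from datasets import load_dataset
from transformers import AutoConfig, AutoModelForSequenceClassification

# Placeholder ids for a dataset/model that ship custom code on the Hub.
dataset = load_dataset("some-user/custom-dataset", trust_remote_code=True)
config = AutoConfig.from_pretrained("some-user/custom-model", trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "some-user/custom-model", config=config, trust_remote_code=True
)
```
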
Yih-Dar
149cb0cce2
Add token argument in example scripts (#25172)
* fix

* fix

* fix

* fix

* fix

* fix

* fix

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-02 11:17:31 +02:00
Yih-Dar
d53b8ad780
Update use_auth_token -> token in example scripts (#25167)
* pytorch examples

* tensorflow examples

* flax examples

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-07-28 15:33:45 +02:00
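The two PRs above replace `use_auth_token` with `token` across the example scripts; a short sketch of the new call, where the token string is a placeholder.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

token = True  # use the locally saved login, or pass an explicit "hf_..." token string
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", token=token)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", token=token)
```
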
Alan Ji
afa96fffdf
make run_generation more generic for other devices (#25133)
* make run_generation more generic for other devices

* use Accelerate to support any device type it supports.

* make style

* fix error usage of accelerator.prepare_model

* use `PartialState` to make sure everything is running on the right device

---------

Co-authored-by: statelesshz <jihuazhong1@huawei.com>
2023-07-28 08:20:10 -04:00
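A minimal sketch of the `PartialState`-based device selection mentioned in the commit: it resolves whatever accelerator Accelerate detects (CUDA, MPS, XPU, NPU, or CPU) without hard-coded `torch.cuda` checks.

```python
import torch
from accelerate import PartialState

distributed_state = PartialState()
device = distributed_state.device  # e.g. cuda:0, mps, or cpu depending on the machine

x = torch.ones(2, 2, device=device)
print(device, x.device)
```
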
Lucain
6232c380f2
Fix .push_to_hub and cleanup get_full_repo_name usage (#25120)
* Fix .push_to_hub and cleanup get_full_repo_name usage

* Do not rely on Python bool conversion magic

* request changes
2023-07-28 11:40:08 +02:00
Alan Ji
c879318cc5
replace per_gpu_eval_batch_size with per_device_eval_batch_size in readme of multiple-choice task (#25078)
replace `per_gpu_eval_batch_size` with `per_device_eval_batch_size`
in readme of multiple-choice
2023-07-25 08:11:56 -04:00
Zach Mueller
aa1b09c5d1
Change logic for logging in the examples (#24956)
Change logic
2023-07-20 12:30:10 -04:00
statelesshz
37d8611ac9
replace no_cuda with use_cpu in test_pytorch_examples (#24944)
* replace no_cuda with use_cpu in test_pytorch_examples

* remove code that is never used

* fix style
2023-07-20 07:09:04 -04:00
ranchlai
8fd8c8e49e
Add multi-label text classification support to pytorch example (#24770)
* Add text classification example

* set the problem type and finetuning task

* ruff reformated

* fix bug for unsetting label_to_id for regression

* update README.md

* fixed finetuning task

* update comment

* check if label exists in feature before removing

* add useful logging
2023-07-20 07:02:44 -04:00
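A sketch of the config switch behind multi-label support: setting `problem_type` makes the sequence-classification head use a per-label `BCEWithLogitsLoss`. The checkpoint and label count are placeholders.

```python
from transformers import AutoConfig, AutoModelForSequenceClassification

config = AutoConfig.from_pretrained(
    "bert-base-uncased",
    num_labels=4,
    problem_type="multi_label_classification",  # per-label sigmoid + BCEWithLogitsLoss
)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", config=config)
```
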
Hwijeen Ahn
dd49404a89
check if eval dataset is dict (#24877)
* check if eval dataset is dict

* formatting
2023-07-18 13:33:41 -04:00
Sylvain Gugger
e9ad51306f
4.32.0.dev0 2023-07-17 13:30:44 -04:00
Xiaoli Wang
239ace152b
Fix TypeError: Object of type int64 is not JSON serializable (#24340)
* Fix TypeError: Object of type int64 is not JSON serializable

* Convert numpy.float64 and numpy.int64 to float and int for json serialization

* Black reformatted examples/pytorch/token-classification/run_ner_no_trainer.py

* make style
2023-06-27 12:15:49 +01:00
Patrick von Platen
1609a436ec
Add MMS CTC Fine-Tuning (#24281)
* Add mms ctc fine tuning

* make style

* More fixes that are needed

* make fix-copies

* make draft for README

* add new file

* move to new file

* make style

* make style

* add quick test

* make style

* make style
2023-06-15 01:10:27 +02:00
Ethan
f7d80cb3d2
Fix steps bugs in no trainer examples (#24197)
Fix step bugs in no trainer + load checkpoint + grad acc
2023-06-12 11:49:55 -04:00
Sylvain Gugger
ba695c1efd
v4.31.0.dev0 2023-06-07 16:49:00 -04:00
Zachary Mueller
cbf6bc2350
Oops, missed one (#24054)
Oops
2023-06-06 13:30:19 -04:00
Zachary Mueller
072188d638
Act on deprecations in Accelerate no_trainer examples (#24053)
Act on deprecation
2023-06-06 13:04:38 -04:00
Sylvain Gugger
3ff443a6d9
Re-enable squad test (#23912)
* Re-enable squad test

* [all-test]

* [all-test] Fix all test command

* Fix the all-test
2023-05-31 13:44:26 -04:00
Sylvain Gugger
00f6ba0e7e
Skip failing test for now 2023-05-31 06:31:33 -04:00
Sylvain Gugger
6e4bc67099
Revamp test selection for the example tests (#23737)
* Revamp test selection for the example tests

* Rename old XLA test and fake modif in run_glue

* Fixes

* Fake Trainer modif

* Remove fake modifs
2023-05-25 09:38:21 -04:00
Wang, Yi
33687a3f61
add GPTJ/bloom/llama/opt into model list and enhance the jit support (#23291)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-05-24 10:57:56 +01:00
Zachary Mueller
b191d7db44
Update all no_trainer with skip_first_batches (#23664) 2023-05-22 14:49:31 -04:00
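A sketch of the resume pattern the no_trainer scripts adopt here, assuming the number of already-seen batches has been recovered from the checkpoint name.

```python
from accelerate import Accelerator
from torch.utils.data import DataLoader

accelerator = Accelerator()
train_dataloader = accelerator.prepare(DataLoader(list(range(100)), batch_size=10))

resume_step = 3  # placeholder; the scripts derive this from the saved checkpoint
# Skip the batches already consumed in the interrupted epoch before resuming training.
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
for batch in active_dataloader:
    pass  # training step
```
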
Boda Sadallah
a7920065f2
fix bug in group_texts function, that was inserting short batches (#23429)
* fix bug in group_texts function, that was inserting short batches

* fully exclude short batches and return empty dict instead

* fix style
2023-05-18 14:22:30 -04:00
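A simplified sketch of the remainder-dropping behaviour this fix enforces in `group_texts` (language-modeling examples); names and the tiny block size are illustrative.

```python
def group_texts(examples, block_size=4):
    # Concatenate all sequences, cut into fixed-size blocks, and drop the short
    # remainder so no under-length batch is produced.
    concatenated = {k: sum(examples[k], []) for k in examples}
    total_length = len(next(iter(concatenated.values())))
    total_length = (total_length // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }

print(group_texts({"input_ids": [[1, 2, 3], [4, 5, 6, 7, 8, 9, 10]]}))
# {'input_ids': [[1, 2, 3, 4], [5, 6, 7, 8]]} -- the trailing [9, 10] is dropped
```
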
Zachary Mueller
8a58809312
Fix translation no_trainer (#23407)
* Fix translation
2023-05-16 13:10:42 -04:00
Yih-Dar
d51296d9c2
skip test_run_squad_no_trainer for now (#23302)
skip

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-05-11 19:26:48 +02:00
Hari
5d02e6bd20
Convert numpy arrays to lists before saving the evaluation metrics as json (#23268)
* convert numpy array to list before writing to json

per_category_iou and per_category_accuracy are ndarrays in the eval_metrics

* code reformatted with make style
2023-05-11 08:54:23 -04:00
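A sketch of the serialization fix, with made-up metric values: ndarray entries are converted to plain Python lists (and NumPy scalars to floats) before `json.dump`.

```python
import json
import numpy as np

eval_metrics = {
    "mean_iou": np.float64(0.42),
    "per_category_iou": np.array([0.3, 0.5]),  # ndarray values are not JSON serializable
}

serializable = {
    k: v.tolist() if isinstance(v, np.ndarray) else float(v) for k, v in eval_metrics.items()
}
print(json.dumps(serializable))  # {"mean_iou": 0.42, "per_category_iou": [0.3, 0.5]}
```
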
Maria Khalusova
91f4c84a19
CTC example: updated trainer parameters to save tokenizer (#23243)
trainer parameters changed to save tokenizer in addition to feature_extractor
2023-05-10 07:45:10 -04:00
Sylvain Gugger
a0c0a78233
v4.30.0.dev0 2023-05-09 14:59:38 -04:00
Sebastian
1a8f61110e
fix: Update run_qa.py to work with deepset/germanquad (#23225)
Call str on id to make sure any ints are converted into the expected format for squad datasets
2023-05-09 09:20:10 -04:00
Ashwin Mathur
fc6c8b0eaa
Add no_trainer scripts to pre-train Vision Transformers (#23156)
* Add run_mim_no_trainer.py draft from #20412

Add parse_args method and copy over other dependencies

Add Method call for sending telemetry

Initialize Accelerator

Make one log on every process

Set seed and Handle repository creation

Initialize dataset and Set validation split

Create Config

Adapt Config

Update Config

Create Feature Extractor

Create model

Set column names

Create transforms

Create mask generator

Create method to preprocess images

Shuffle datasets if needed and set transforms

Create Dataloaders

Add optimizer

Add learning rate scheduler

Prepare everything with our accelerator

Tie weights for TPU training

Recalculate training steps and training epochs

Set accelerator checkpointing steps

Initialize trackers and store configuration

Set total batch size

Fix typo: mlm -> mim

Log info at the start of training

Load in the weights and states from previous save

update the progress_bar if load from checkpoint

Define train loop

Add evaluation loop to training

Add to parse_args method

Push repo to hub

Save accelerator state

End training and save model and feature extractor

Remove unused imports

Fix trailing whitespace

* Update code based on comments, Rename feature_extractor to image_processor

* Fix linting

* Add argument for learning rate

* Add argument for setting number of training epochs

* Remove incorrect logger argument

* Convert max_train_steps to int for tqdm

---------

Co-authored-by: Saad Mahmud <shuvro.mahmud79@gmail.com>
2023-05-05 13:22:49 -04:00
Robert Stone
b6933d76d2
Tidy Pytorch GLUE benchmark example (#23134)
Migration to Evaluate for metric is not quite complete
2023-05-03 15:50:41 -04:00
regisss
bcedd0a471
Save the tokenizer and image preprocessor after training a model with the contrastive image-text example (#23035)
Save tokenizer and image preprocessor
2023-05-02 09:23:16 -04:00
Sylvain Gugger
888c4a2ae0
v4.29.0.dev0 2023-04-12 20:04:29 -04:00
Sylvain Gugger
1b1867d86b
Replace -100s in predictions by the pad token (#22693)
* Replace -100s in predictions by the pad token

* Style

* Try to catch them all
2023-04-11 09:32:20 -04:00
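A sketch of the replacement applied before decoding predictions; the pad token id and the prediction array are placeholders (the scripts use `tokenizer.pad_token_id`).

```python
import numpy as np

pad_token_id = 0  # placeholder; the scripts use tokenizer.pad_token_id
preds = np.array([[15, 27, -100, -100],
                  [33, -100, -100, -100]])

# -100 is the label ignore index, not a valid token id, so swap it out before batch_decode.
preds = np.where(preds != -100, preds, pad_token_id)
print(preds)
```
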
Mikel Penagarikano
d5239bab5b
Sync preprocesses before loading the processor at run_speech_recognition_ctc.py (#21926)
* Update run_speech_recognition_ctc.py

Make sure all processes wait until data is saved before loading the processor from the output_dir

* Make sure all processes wait until data is saved before loading the processor from the output_dir

* Update run_speech_recognition_ctc.py

* Update run_speech_recognition_seq2seq.py
2023-04-05 09:36:04 -04:00
Maziyar Panahi
98268b2e76
Add id2label and label2id to model's config in run_xnli (#22558)
Add id2label and label2id to config in run_xnli
2023-04-04 09:28:57 -04:00
Sabine
173193ccd0
Update Neptune docs (#22452) 2023-03-29 13:15:38 -04:00
Sylvain
ef28df0572 Fix quality due to ruff release 2023-03-22 20:45:08 -04:00
Connor Henderson
8e6c34b390
fix: Allow only test_file in pytorch and flax summarization (#22293)
allow only test_file in pytorch and flax summarization
2023-03-22 10:46:56 +00:00
Wang, Yi
4ccaf268fb
add low_cpu_mem_usage option in run_clm.py example which will benefit… (#22288)
* add low_cpu_mem_usage option in run_clm.py example which will benefit LLM loading

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* update all the example and README under language-modeling

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-03-22 10:42:39 +00:00
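A sketch of the new option; the small BLOOM checkpoint stands in for the 175B model the commit message targets.

```python
from transformers import AutoModelForCausalLM

# low_cpu_mem_usage avoids materializing a full random-weight copy before loading
# the checkpoint, keeping peak host RAM close to a single copy of the weights.
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", low_cpu_mem_usage=True)
```
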
jiqing-feng
8472a224fb
Enable traced model for text-generation task (#22265) 2023-03-22 10:19:26 +00:00
Sylvain Gugger
ebdb185bef
v4.28.0.dev0 2023-03-14 13:49:10 -04:00
bofeng huang
6192549c1f
[examples/speech-recognition] Add SpecAugment to run_speech_recognition_seq2seq.py (#21942)
* Add specaugment to run_speech_recognition_seq2seq.py

* Remove useless argument: text_column

* Fix quality

* Update return_attention_mask condition

* Update specaugment arguments only for whisper models

* Remove SpecAugment arguments from ModelArguments, only leave default values for simplicity

* Apply suggestions from code review

Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>

* Update apply_spec_augment only for whisper models

* Apply suggestions from code review

Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>

* Rename return_attention_mask to forward_attention_mask to avoid confusion with wav2vec2 models

---------

Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
2023-03-08 17:59:31 +01:00
bofeng huang
3c0ce60855
[examples/summarization] deal with max_length and num_beams (#21740)
* Override the decoding parameters of Seq2SeqTrainer

* Fix quality

* Fix max_length parameter

* Fix quality

* Remove redundant parameter max_length

* Separate the preprocess of train and validation to use different max_target_length
2023-02-27 08:18:14 +01:00
Sanchit Gandhi
13489248fa
[Examples] Generalise run audio classification for log-mel models (#21756)
* [Examples] Generalise run audio classification for log-mel models

* batch feature extractor

* make style
2023-02-24 09:19:07 +01:00
Sylvain Gugger
b19d64d852
Respect documentation on passive log level (#21700)
* Respect documentation on passive log level

* Fix test and set log level in examples

* Add doc
2023-02-22 09:39:18 +01:00
Aaron Gokaslan
5e8c8eb5ba
Apply ruff flake8-comprehensions (#21694) 2023-02-22 09:14:54 +01:00
regisss
751f17aa48
Fix typos in contrastive-image-text example README (#21665) 2023-02-16 09:10:25 -05:00
Warren Green
fd5320bb57
Add missing argument to run_clip.py (#21588)
steventk-g
c88b11c591
Add _mp_fn to run_mae.py for XLA testing (#21551)
Update run_mae.py
2023-02-10 09:53:55 -05:00
lee1jun
b31cee6727
fix typo in run_speech_recognition_ctc.py (#21528)
Update run_speech_recognition_ctc.py

There should be a `# limitations under the License` line at the end of the license header section.
2023-02-09 09:46:40 -05:00
Stefan Schweter
d3046dad80
[Doc] Minor URL fixes in PyTorch Text Classification Readme (#21511)
docs: fix some references in PyTorch text classification readme
2023-02-08 09:39:52 -05:00
Jeroen Van Der Donckt
bbe98ea9c3
🖊️ fix typo in pytorch semantic segmentation readme (#21492) 2023-02-07 09:39:24 -05:00
Sylvain Gugger
6f79d26442
Update quality tooling for formatting (#21480)
* Result of black 23.1

* Update target to Python 3.7

* Switch flake8 to ruff

* Configure isort

* Configure isort

* Apply isort with line limit

* Put the right black version

* adapt black in check copies

* Fix copies
2023-02-06 18:10:56 -05:00
Stas Bekman
3b9a1dc132
[examples] improve block_size warning message (#21463) 2023-02-06 08:36:12 -08:00
Quentin Lhoest
074d6b75fd
Simplify column_names in run_clm/mlm (#21382)
* simplify column_names in run_clm

* simplify column_names in run_mlm

* minor
2023-01-31 15:23:47 +01:00
Stas Bekman
98d88b23f5
[run_(clm|mlm).py examples] add streaming dataset support (#21343)
* [run_clm example] add streaming dataset support

* unrefactor kwargs

* fix

* fix

* require datasets>=2.0.0

* port to mlm
2023-01-30 14:01:35 -08:00
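A sketch of the `--streaming` behaviour added to `run_clm`/`run_mlm`, using a public dataset as a stand-in.

```python
from datasets import load_dataset

# streaming=True iterates lazily over the data instead of downloading and caching it first.
streamed = load_dataset("wikitext", "wikitext-2-raw-v1", split="train", streaming=True)
print(next(iter(streamed)))
```
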
Sylvain Gugger
7119bb052a
v4.27.0.dev0 2023-01-23 16:52:35 -05:00
Mostafa Elhoushi
5603f78fc4
Add scikit-learn dependency to train langage-modeling (#21229) 2023-01-23 09:54:45 -05:00
amyeroberts
4bc18e7a83
Update examples with image processors (#21155)
* Update examples to use image processors

* Small fixes

* Resolve conflicts
2023-01-19 15:14:58 +00:00
Sylvain Gugger
05e72aa0c4
Adapt repository creation to latest hf_hub (#21158)
* Adapt repository creation to latest hf_hub

* Update all examples

* Fix other tests, add Flax examples

* Address review comments
2023-01-18 11:14:00 -05:00
Observer46
ff8dcb5efa
Fix arguments passed to predict function in QA Seq2seq training script (#21026)
fix args passed to predict function
2023-01-06 07:19:42 -05:00
Roy Hvaara
35a7052b61
[NumPy] Remove references to deprecated NumPy type aliases (#21022)
[NumPy] Remove references to deprecated NumPy type aliases.

This change replaces references to a number of deprecated NumPy type aliases (np.bool, np.int, np.float, np.complex, np.object, np.str) with their recommended replacement (bool, int, float, complex, object, str).

NumPy 1.24 drops the deprecated aliases, so we must remove uses before updating NumPy.

Co-authored-by: Peter Hawkins <phawkins@google.com>

Co-authored-by: Peter Hawkins <phawkins@google.com>
2023-01-05 13:02:10 -05:00
Magnus Pierrau
1d21471c78
Added mask_time_prob and mask_time_length arguments to wav2vec2 pretraining script (#20985)
Added mask_time_prob and mask_time_length arguments to wav2vec2 pretraining script and readme - new branch
2023-01-05 16:24:55 +00:00
Wang, Yi
9c9fe89f84
[run_clm example] add torch_dtype option for model load. (#20971)
* [run_clm example] add torch_dtype option for model load.
For the BLOOM 175B model, peak memory is reduced by about 350 GB for inference; the BLOOM weights on the model hub are stored in bfloat16.

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* add other type in option

* fix style

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-01-03 09:33:11 -05:00
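A sketch of the `--torch_dtype` option; the small BLOOM checkpoint is a placeholder for the 175B model mentioned in the commit.

```python
import torch
from transformers import AutoModelForCausalLM

# Load weights directly in bfloat16 (the dtype the BLOOM checkpoints are stored in),
# roughly halving peak memory versus first materializing float32 tensors.
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", torch_dtype=torch.bfloat16)
```
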
Márton Makrai
3830b3f74a
Fixes typo in the help text for --max_length (#20883) 2022-12-24 02:07:06 -05:00
NielsRogge
d87e381f93
[Examples] Update big table (#20845)
Update big table

Co-authored-by: Niels Rogge <nielsrogge@Nielss-MBP.localdomain>
2022-12-21 11:34:31 +01:00
Emmanuel Schmidbauer
0526a075c5
run_speech_recognition_seq2seq.py: add cache_dir param to dataset (#20540) 2022-12-07 18:23:16 +00:00
Francisco Kurucz
f821bea0ad
Fix link to speech encoder decoder model in speech recognition readme (#20633) 2022-12-06 15:46:41 -05:00
Wang, Yi
ae06bce888
exclude jit time from the speed metric calculation of evaluation and prediction (#20553)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2022-12-06 07:37:01 -05:00
Sylvain Gugger
60d1f31bb0
v4.26.0.dev0 2022-12-01 16:19:33 -05:00
Wang, Yi
d752337baa
QnA example: add speed metric (#20522) 2022-12-01 12:04:19 -05:00
Zachary Mueller
9d1ef009b8
Fix flakey test with seed (#20318) 2022-11-18 11:33:25 -05:00
Sanchit Gandhi
c29a2f7c9c
[ASR Examples] Update README for Whisper (#20230)
* [ASR Examples] Update README for seq2seq

* add language info

* add training results

* re-word
2022-11-18 11:24:25 +00:00
Zachary Mueller
441811ecd7
Fix summarization script (#20286) 2022-11-16 15:57:07 -05:00
Jiahao Li
9681f052a1
Fix result saving errors of pytorch examples (#20276) 2022-11-16 09:51:04 -05:00
Zachary Mueller
822ae69c1b
Update reqs to include min gather_for_metrics Accelerate version (#20242)
* Update reqs to include min gather_for_metrics Accelerate version

* Other reqs
2022-11-15 13:28:00 -05:00
Muhammad Sakib Khan Inan
777b1bfe62
New logging support to "Trainer" Class (ClearML Logger) (#20184)
* Init Update

* ClearML Callbacks integration

* update corrections

* args reporting updated

* {'tensorboard': False, 'pytorch': False}

* ClearML Tests added

* add clearml

* output_uri=True in Task.init

* reformatted integrations.py

* reformatted and fixed

* IF-ELSE statement issue on "has_clearml" resolved

* Add clearml in main callback docs

* Add additional clearml documentation

* Update src/transformers/integrations.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Accept suggestion

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Accept suggestion

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Small change in comments

* Make style clearml

* Accept suggestion

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Victor Sonck <victor.sonck@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-11-15 10:08:59 -05:00
Yih-Dar
cf7b98b807
Fix run_clip.py (#20234)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-11-15 15:45:21 +01:00
Ming Liu
36b063ed4f
Update README.md (#20188)
There is a typo in the original hyperlink.

Below is the original version:
Based on the script [`run_translation_no_trainer.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/**run_translationn_no_trainer.py**).
2022-11-14 12:53:02 -05:00
Sanchit Gandhi
af1a7c8ca3
[Examples] Generalise Seq2Seq ASR to handle Whisper (#19519)
* merge conflicts

* bos and eos in datacollator

* (temp) hardcode removal of attention mask

* freeze encoder

* actually freeze encoder

* set max length / num beams according to gen kwargs

* (temp) fix tests

* don't pop attn mask

* override return attention mask config from Hub

* Hub configs updated 🤗

* final fixes

* update type annotations

* backward comp
2022-11-14 17:45:46 +00:00
bhuang
3502c202f9
Update README.md (#20063) 2022-11-04 08:56:54 -04:00
Sylvain Gugger
06886d5a68
Only resize embeddings when necessary (#20043)
* Only resize embeddings when necessary

* Add comment
2022-11-03 12:05:04 -04:00
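A sketch of the conditional-resize pattern this commit introduces; `gpt2` is only a stand-in checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Only grow the embedding matrix when the tokenizer actually has more tokens;
# unconditional resizing can silently change the vocabulary size.
embedding_size = model.get_input_embeddings().weight.shape[0]
if len(tokenizer) > embedding_size:
    model.resize_token_embeddings(len(tokenizer))
```
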
amyeroberts
a6b7759880
Add Image Processors (#19796)
* Add CLIP image processor

* Crop size as dict too

* Update warning

* Actually use logger this time

* Normalize doesn't change dtype of input

* Add perceiver image processor

* Tidy up

* Add DPT image processor

* Add Vilt image processor

* Tidy up

* Add poolformer image processor

* Tidy up

* Add LayoutLM v2 and v3 imsge processors

* Tidy up

* Add Flava image processor

* Tidy up

* Add deit image processor

* Tidy up

* Add ConvNext image processor

* Tidy up

* Add levit image processor

* Add segformer image processor

* Add in post processing

* Fix up

* Add ImageGPT image processor

* Fixup

* Add mobilevit image processor

* Tidy up

* Add postprocessing

* Fixup

* Add VideoMAE image processor

* Tidy up

* Add ImageGPT image processor

* Fixup

* Add ViT image processor

* Tidy up

* Add beit image processor

* Add mobilevit image processor

* Tidy up

* Add postprocessing

* Fixup

* Fix up

* Fix flava and remove tree module

* Fix image classification pipeline failing tests

* Update feature extractor in trainer scripts

* Update pad_if_smaller to accept tuple and int size

* Update for image segmentation pipeline

* Update src/transformers/models/perceiver/image_processing_perceiver.py

Co-authored-by: Alara Dirik <8944735+alaradirik@users.noreply.github.com>

* Update src/transformers/image_processing_utils.py

Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

* Update src/transformers/models/beit/image_processing_beit.py

Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

* PR comments - docstrings; remove accidentally added resize; var names

* Update docstrings

* Add exception if size is not in the right format

* Fix exception check

* Fix up

* Use shortest_edge in tuple in script

Co-authored-by: Alara Dirik <8944735+alaradirik@users.noreply.github.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
2022-11-02 11:57:36 +00:00
Sylvain Gugger
c3a93d8d82
v4.25.0.dev0 2022-10-31 21:48:40 -04:00
Sanchit Gandhi
f38a145418
[ASR] Update 'tasks' for model card (#19986) 2022-10-31 16:50:17 +00:00
regisss
5d2d51a0fb
Fix LR (#19875) 2022-10-26 08:35:53 -04:00
GMFTBY
71786b10c5
Add the state-of-the-art contrastive search decoding method to generation_utils.py (#19477)
* add: the contrastive search for generation_utils

* add: testing scripts for contrastive search under examples/text-generation

* update the quality of codes

* revise the docstring; make the generation_contrastive_search.py scripts;

* revise the examples/pytorch/text-generation/run_generation_contrastive_search.py to the auto-APIs format

* revise the necessary documents

* fix: revise the docstring of generation_contrastive_search.py

* Fix the code indentation

* fix: revise the nits and examples in contrastive_search docstring.

* fix the copyright

* delete generation_contrastive_search.py

* revise the logic in contrastive_search

* update the integration test and the docstring

* run the tests over

* add the slow decorator to the contrastive_search integration test

* add more test

* do the style, quality, consistency checks
2022-10-19 10:17:46 +01:00
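Contrastive search is driven through the regular `generate()` API; a sketch with `gpt2` as a stand-in checkpoint and commonly cited `penalty_alpha`/`top_k` values.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("DeepMind Company is", return_tensors="pt")
# penalty_alpha > 0 together with top_k > 1 switches generate() to contrastive search.
outputs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
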
amyeroberts
31ec424b3d
Add decorator to flaky test (#19674) 2022-10-18 18:51:37 +01:00
Yifan Yang
94d7c3ba44
[Examples] make default preprocessing_num_workers=1 (#19684)
* [Examples] make default preprocessing_num_workers=1

* [Examples] revert changes in research projects
2022-10-17 14:17:01 -04:00
Sanchit Gandhi
eefcecaa35
[Examples] Fix typos in run speech recognition seq2seq (#19514) 2022-10-12 15:33:22 +01:00
FilipposVentirozos
4ed0fa3676
Fix pytorch seq2seq qa (#19258)
* fixed typo for SQuAD

* Fixed the preprocess_validation_function so the labels reflect the remaining truncated instances

* Rolled back the trainer_seq2seq_qa.py for UnboundLocalError: local variable 'metrics' referenced before assignment

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-12 08:33:44 -04:00
regisss
bb2cfd1824
Add multi-node conditions in trainer_qa.py and trainer_seq2seq.py (#19502)
* Add multi-node conditions in trainer_qa.py and trainer_seq2seq.py

* Code improvement
2022-10-11 22:48:56 -04:00
Lysandre
10100979ed Dev version 2022-10-10 17:25:40 -04:00
wei zhao
7d5ce6802e
Fix typo in image-classification/README.md (#19424)
Fix link typo of the following content.
PyTorch version, Trainer
PyTorch version, no Trainer
2022-10-10 09:16:58 -04:00
ddobokki
fa4bcd5274
edit: cast attention_mask to long in DataCollatorCTCWithPadding (#19369)
* edit: casting attention_mask to long in DataCollatorCTCWithPadding

* edit: casting attention_mask to long in DataCollatorCTCWithPadding
2022-10-07 10:05:48 -04:00
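A sketch of the cast inside the CTC data collator; the float mask below is illustrative (some feature extractors return the attention mask as floats, which downstream code expects as integers).

```python
import torch

batch = {"attention_mask": torch.tensor([[1.0, 1.0, 0.0]])}  # e.g. returned as float32
if "attention_mask" in batch:
    batch["attention_mask"] = batch["attention_mask"].to(torch.long)
print(batch["attention_mask"].dtype)  # torch.int64
```
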