Commit Graph

13628 Commits

Author SHA1 Message Date
JB (Don)
5ea2595ecd
Add warning for missing attention mask when pad tokens are detected (#25345)
* Add attention mask and pad token warning to many of the models

* Remove changes under examples/research_projects

These files are not maintained by HF.

* Skip the warning check during torch.fx or JIT tracing

* Switch ordering for the warning and input shape assignment

This ordering is a little cleaner for some of the cases.

* Add missing line break in one of the files
2023-08-08 10:49:21 +02:00
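
A minimal sketch of the check described in #25345, written here as a standalone helper; the function name, tensors, and pad token id below are illustrative rather than the exact code added to the models.

```python
import warnings

import torch

def warn_if_padding_and_no_attention_mask(input_ids, attention_mask, pad_token_id):
    # Illustrative helper: warn when pad tokens appear in the input but no
    # attention mask was supplied, since the model may then attend to padding.
    if attention_mask is not None or pad_token_id is None:
        return
    if torch.jit.is_tracing() or torch.jit.is_scripting():
        return  # skip the data-dependent check while tracing/scripting
    if (input_ids == pad_token_id).any():
        warnings.warn(
            "Padding tokens detected in `input_ids` but no `attention_mask` was passed; "
            "results may be unexpected. Pass an attention mask to silence this warning."
        )

input_ids = torch.tensor([[101, 2023, 2003, 0, 0]])  # 0 stands in for the pad token id
warn_if_padding_and_no_attention_mask(input_ids, attention_mask=None, pad_token_id=0)
```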
Yih-Dar
6ea3ee3cd2
Fix test_model_parallelism (#25359)
* fix

* fix

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-08 10:48:45 +02:00
Matthew Hoffman
d4bd33cc9f
Register ModelOutput subclasses as supported torch.utils._pytree nodes (#25358)
* Register ModelOutput subclasses as supported torch.utils._pytree nodes

Fixes #25357 where DDP with static_graph=True does not sync gradients when calling backward() over tensors contained in ModelOutput subclasses

* Add test for torch pytree ModelOutput serialization and deserialization
2023-08-08 08:12:11 +02:00
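
A rough sketch of what registering a ModelOutput subclass as a pytree node looks like with the private torch.utils._pytree API of that era; the flatten/unflatten helpers are illustrative, and the actual change registers subclasses automatically rather than one class at a time.

```python
import torch.utils._pytree as pytree

from transformers.modeling_outputs import BaseModelOutput

def _flatten(output):
    # Return the child tensors plus the context (field names) needed to rebuild.
    return list(output.values()), list(output.keys())

def _unflatten(values, keys):
    return BaseModelOutput(**dict(zip(keys, values)))

pytree._register_pytree_node(BaseModelOutput, _flatten, _unflatten)
```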
David Reguera
a23ac36f8c
[DOCS] Add descriptive docstring to MinNewTokensLength (#25196)
* Add descriptive docstring to MinNewTokensLength

It addresses https://github.com/huggingface/transformers/issues/24783

* Refine the differences between `min_length` and `min_new_tokens`

* Remove extra line

* Remove extra arguments in generate

* Add a missing space

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Run the linter

* Add clarification comments

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-08-08 08:09:17 +02:00
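
In the spirit of the clarified docstring, a hedged illustration of the difference: `min_length` counts the prompt plus generated tokens, while `min_new_tokens` counts only newly generated tokens. The gpt2 checkpoint and prompt are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("A number sequence: 1, 2, 3", return_tensors="pt")

# EOS cannot be produced until the *total* length (prompt included) reaches 30.
out_total = model.generate(**inputs, min_length=30, max_length=40)

# EOS cannot be produced until at least 30 *new* tokens have been generated.
out_new = model.generate(**inputs, min_new_tokens=30, max_new_tokens=40)
```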
Pedro Lira
080a97119c
Add mask2former fp16 support (#25093)
* Add mask2former fp16 support

* Clear consistency/quality issues

* Fix consistency/quality (2)

* Add integration test for mask2former (fp16 case)

* Fix code quality

* Add integration test for maskformer (fp16 case)

* Add integration test for oneformer (fp16 case)

* Remove slow decorator from fp16 tests

* Fix lint

* Remove usage of full inference and value checks for fp16

* Temporarily comment slow for {mask, mask2, one}former

* Add fp16 support to oneformer

* Revert "Temporarily comment slow for {mask, mask2, one}former"

This reverts commit e5371edabd.

* Remove dtype conversion noop
2023-08-07 20:07:29 +01:00
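
A short sketch of the half-precision path this PR enables; the checkpoint name and input shape are illustrative, and a CUDA device is assumed.

```python
import torch

from transformers import Mask2FormerForUniversalSegmentation

model = Mask2FormerForUniversalSegmentation.from_pretrained(
    "facebook/mask2former-swin-tiny-coco-instance", torch_dtype=torch.float16
).to("cuda")

pixel_values = torch.randn(1, 3, 384, 384, device="cuda", dtype=torch.float16)
with torch.no_grad():
    outputs = model(pixel_values=pixel_values)
```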
Merve Noyan
5ee9693a1c
Docs: Added benchmarks for torch.compile() for vision models (#24748)
* added benchmarks for compile

* Update docs/source/en/perf_torch_compile.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/perf_torch_compile.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/perf_torch_compile.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/perf_torch_compile.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/perf_torch_compile.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/perf_torch_compile.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/perf_torch_compile.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/perf_torch_compile.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/perf_torch_compile.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update docs/source/en/perf_torch_compile.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update docs/source/en/perf_torch_compile.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* added more models

* added more models fr

* added visualizations

* minor fix

* Update docs/source/en/perf_torch_compile.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/perf_torch_compile.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update docs/source/en/perf_torch_compile.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Added links to models and put charts side by side

* Added batch comparisons

* Added more comparisons

* Fix table

* Added link to wheel

* Update perf_torch_compile.md

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-08-07 17:18:43 +01:00
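
A minimal version of the setup those benchmarks describe: compile a vision model and measure inference. The ViT checkpoint is a convenient placeholder and a CUDA device is assumed.

```python
import torch
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224").to("cuda")
compiled_model = torch.compile(model)

pixel_values = torch.randn(1, 3, 224, 224, device="cuda")
with torch.no_grad():
    _ = compiled_model(pixel_values=pixel_values)          # first call pays the compilation cost
    logits = compiled_model(pixel_values=pixel_values).logits  # subsequent calls are faster
```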
Rishab26
676247fd6b
[DOCS] Add NoRepeatNGramLogitsProcessor Example for LogitsProcessor class (#25186)
* Add Description And Example to Docstring

* make style corrections

* make style

* Doc Style Consistent With HF

* Apply make style

* Modify Docstring

* Edit Type in Docstring

* Feedback Incorporated

* Edit Docstring

* make style

* Post Review Changes

* Review Feedback Incorporated

* Styling

* Formatting

* make style

* pep8
2023-08-07 17:02:14 +01:00
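
A hedged example of the behavior the added docstring documents: blocking repeated n-grams during generation. The checkpoint and prompt are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("I love cats. I love", return_tensors="pt")

# Any bigram that already occurred in the sequence is blocked from recurring.
out = model.generate(**inputs, no_repeat_ngram_size=2, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```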
Phuc Van Phan
5fe36970e5
Adding more information in help parser on train_file and validation_file (#25324)
chore: adding new doc on train and val
2023-08-07 17:56:13 +02:00
Sylvain Gugger
baf1daa58e
Migrate Trainer from Repository to upload_folder (#25095)
* First draft

* Deal with progress bars

* Update src/transformers/utils/hub.py

Co-authored-by: Lucain <lucainp@gmail.com>

* Address review comments

* Forgot one

* Pin hf_hub

* Add argument for push all and fix tests

* Fix tests

* Address review comments

---------

Co-authored-by: Lucain <lucainp@gmail.com>
2023-08-07 17:47:22 +02:00
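
A hedged sketch of the huggingface_hub primitive the Trainer now pushes with, replacing the git-clone-based Repository workflow; the repo id and local path are placeholders.

```python
from huggingface_hub import upload_folder

upload_folder(
    repo_id="my-username/my-finetuned-model",  # placeholder repo
    folder_path="./output_dir",                # local checkpoint directory
    commit_message="Upload checkpoint",
)
```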
Yih-Dar
c177606fb4
Fix more offload edge cases (#25342)
* fix

* fix

* fix

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-07 17:45:41 +02:00
Joao Gante
7d65697da7
Generate: remove Marian hack (#25294)
Remove Marian hack
2023-08-07 15:38:24 +01:00
Jackmin801
145109382a
Allow trust_remote_code in example scripts (#25248)
* pytorch examples

* pytorch mim no trainer

* cookiecutter

* flax examples

* missed line in pytorch run_glue

* tensorflow examples

* tensorflow run_clip

* tensorflow run_mlm

* tensorflow run_ner

* tensorflow run_clm

* pytorch example from_configs

* pytorch no trainer examples

* Revert "tensorflow run_clip"

This reverts commit 261f86ac1f.

* fix: duplicated argument
2023-08-07 16:32:25 +02:00
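
What the new script flag ultimately forwards to, shown with placeholder repo names: opting in to executing modeling code hosted in the model repository.

```python
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("some-org/custom-model", trust_remote_code=True)
model = AutoModel.from_pretrained("some-org/custom-model", trust_remote_code=True)
```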
calpt
65001cb1c8
Loosen output shape restrictions on GPT-style models (#25188)
* Loosen output shape restrictions on GPT-style models

* Use more self-explanatory variables

* Revert "Use more self-explanatory variables"

This reverts commit 5fd9ab3911.
2023-08-07 16:31:15 +02:00
oobabooga
d6bfba76be
Generalize CFG to allow for positive prompts (#25339)
* Generalize CFG to allow for positive prompts

* Add documentation, fix the correct class
2023-08-07 16:25:15 +02:00
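
The guidance combination at the heart of CFG, sketched standalone; with this generalization the second branch can carry its own positive or negative prompt rather than being strictly unconditional. Shapes and the scale value are illustrative.

```python
import torch

def cfg_combine(cond_logits, other_logits, guidance_scale=1.5):
    # guidance_scale > 1 pushes the distribution toward the conditional prompt
    # and away from whatever the second branch was conditioned on.
    return other_logits + guidance_scale * (cond_logits - other_logits)

cond = torch.randn(1, 50257)   # logits computed with the full prompt
other = torch.randn(1, 50257)  # logits computed with the alternative/negative prompt
scores = cfg_combine(cond, other)
```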
Yih-Dar
b0f23036f1
Update TF pin in docker image (#25343)
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-07 12:32:34 +02:00
Injin Paek
b9da44bd3e
🌐 [i18n-KO] Translated perf_infer_gpu_one.md to Korean (#24978)
* docs: ko: perf_infer_gpu_one

* feat: chatgpt draft

* fix: manual edits

* fix: manual edits

* fix: resolve suggestions

Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com>
Co-authored-by: TaeYupNoh <107118671+TaeYupNoh@users.noreply.github.com>

* fix: resolve suggestions

* fix: resolve suggestions

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>

---------

Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com>
Co-authored-by: TaeYupNoh <107118671+TaeYupNoh@users.noreply.github.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
2023-08-07 08:37:29 +02:00
Guillaume "Vermeille" Sanchez
d533465150
add CFG for .generate() (#24654) 2023-08-06 20:15:24 +01:00
mariecwhite
a6e6b1c622
Remove jnp.DeviceArray since it is deprecated. (#24875)
* Remove jnp.DeviceArray since it is deprecated.

* Replace all instances of jnp.DeviceArray with jax.Array

* Update src/transformers/models/bert/modeling_flax_bert.py

---------

Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
2023-08-04 18:36:57 +01:00
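
The substitution this commit performs, in miniature: the removed jnp.DeviceArray alias is replaced by the unified jax.Array type.

```python
import jax
import jax.numpy as jnp

x = jnp.ones((2, 2))
assert isinstance(x, jax.Array)  # jax.Array is the replacement for jnp.DeviceArray
```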
Sanchit Gandhi
fdd81aea12
[Whisper] Better error message for outdated generation config (#25298) 2023-08-04 15:53:57 +01:00
Sylvain Gugger
fdaef3368b
Document toc check and doctest check scripts (#25319)
* Clean doc toc check and make doctest list better

* Add to Makefile
2023-08-04 16:24:04 +02:00
Yih-Dar
ce6d153a53
Allow bark to have a tiny model (#25290)
* temp

* update

* update

* update

* small dim

* small dim

* small dim

* fix

* update

* fix

* fix

* fix

* fix

* fix

* fix

* fix

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-04 15:13:14 +02:00
Sylvain Gugger
f0fd73a2de
Document check copies (#25291)
* Document check copies better and add tests

* Include header in check for copies

* Manual fixes

* Try autofix

* Fixes

* Clean tests

* Finalize doc

* Remove debug print

* More fixes
2023-08-04 14:56:29 +02:00
Sylvain Gugger
29f04002e6
Deal with nested configs better in base class (#25237)
* Deal better with nested configs

* Fixes

* More fixes

* Fix last test

* Clean up existing configs

* Remove hack in MPT Config

* Update src/transformers/configuration_utils.py

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>

* Fix setting a nested config via dict in the kwargs

* Adapt common test

* Add test for nested config load with dict

---------

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
2023-08-04 14:56:09 +02:00
Sylvain Gugger
aeb5a08abd
Add offline mode for agents (#25226)
* Add offline mode for agents

* Disable second check too
2023-08-04 14:55:58 +02:00
Joao Gante
bff4313b37
Generate: get generation mode as an enum (#25292) 2023-08-04 13:35:10 +01:00
Sylvain Gugger
fab1a0aa82
Give more memory in test_disk_offload (#25315) 2023-08-04 14:10:31 +02:00
Peter Law
67683095a6
Move usage of deprecated logging.warn to logging.warning (#25310)
The former spelling is deprecated and has been discouraged for a
while. The latter spelling seems to be more common in this project
anyway, so this change ought to be safe.

Fixes https://github.com/huggingface/transformers/issues/25283
2023-08-04 12:42:05 +01:00
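
The rename in miniature; behavior is unchanged, since warn() has long been a deprecated alias of warning().

```python
import logging

logger = logging.getLogger(__name__)
logger.warning("use logging.warning")  # preferred spelling
# logger.warn("...")                   # deprecated alias, now removed from the codebase
```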
Victor Geislinger
641adca558
Fix typo: Roberta -> RoBERTa (#25302) 2023-08-03 14:17:30 -07:00
Howard Huang
33da2db5ea
[small] llama2.md typo (#25295)
`groupe` -> `grouped`
2023-08-03 14:17:06 -07:00
Sanchit Gandhi
66c240f3c9
[JAX] Bump min version (#25286)
* [JAX] Bump min version

* make fixup
2023-08-03 16:05:02 +01:00
Roland Szabo
d114a6b71f
Add timeout parameter to load_image function (#25184)
* Add timeout parameter to load_image function.

* Remove line.

* Reformat code

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Add parameter to docs.

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-08-03 15:51:54 +01:00
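
A hedged usage example of the new parameter; the URL is a placeholder and the default timeout value is not assumed here.

```python
from transformers.image_utils import load_image

# Bound how long the HTTP fetch for a remote image may take before raising.
image = load_image("https://example.com/cat.png", timeout=5.0)
```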
Yoach Lacombe
6d3f9c1e2e
add generate method to SpeechT5ForTextToSpeech (#25233)
* add generate method to SpeechT5ForTextToSpeech

* update speecht5forTTS docstrings

* Remove defaults to None in generate docstrings

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-08-03 14:12:07 +01:00
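
A hedged sketch of the new entry point, mirroring the existing generate_speech usage; the checkpoints are the usual demo ones and the zero speaker embedding is a placeholder for a real x-vector.

```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, world.", return_tensors="pt")
speaker_embeddings = torch.zeros(1, 512)  # placeholder speaker embedding
speech = model.generate(inputs["input_ids"], speaker_embeddings=speaker_embeddings, vocoder=vocoder)
```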
Yoach Lacombe
8455346c5c
Update bark doc (#25234)
* add mention to optimization in Bark docs

* add offload mention in docs

* Apply suggestions from code review

Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>

* Update bark docs.

* Update bark.md

---------

Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
2023-08-03 14:08:39 +01:00
Joao Gante
a8817371c9
Docs: separate generate section (#25235)
Separate generate doc section
2023-08-03 13:51:56 +01:00
amyeroberts
30409af6e1
Update InstructBLIP & Align values after rescale update (#25209)
* Update InstructBLIP values
Note: the tests are not independent. Running the test independently produces different logits compared to running all the integration tests

* Update test values after rescale update

* Remove left over commented out code

* Revert to previous rescaling logic

* Update rescale tests
2023-08-03 11:01:10 +01:00
Tom Aarsen
15082a9dc6
Docs: Update list of report_to logging integrations in docstring (#25281)
* Update list of logging integrations in docstring

Also update type hint

* Also add 'flyte' to report_to callback list

* Revert 'report_to' type hint update

Due to CLI breaking
2023-08-03 11:34:45 +02:00
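
The option whose documented choices this commit refreshes; the integrations listed below are examples, not the full set.

```python
from transformers import TrainingArguments

args = TrainingArguments(output_dir="out", report_to=["tensorboard", "wandb"])
```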
Yih-Dar
2bd7a27a67
CI with pytest_num_workers=8 for torch/tf jobs (#25274)
n8

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-02 22:00:32 +02:00
Yih-Dar
bd90cda9a6
CI with num_hidden_layers=2 🚀🚀🚀 (#25266)
* CI with layers=2

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-02 20:22:36 +02:00
Patrick von Platen
b28ebb2655
[MMS] Fix mms (#25267)
* [MMS] Fix mms

* [MMS] Fix mms

* fix mms loading

* Apply suggestions from code review

* make style

* Update tests/models/wav2vec2/test_modeling_wav2vec2.py
2023-08-02 18:11:15 +02:00
Kevin Lloyd Bernal
ad8321512d
recommend DeepSpeed's Argument Parsing documentation (#25268) 2023-08-02 11:48:39 -04:00
heuristicwave
bef02fd6b9
🌐 [i18n-KO] Translated perf_infer_gpu_many.md to Korean (#24943)
* doc: ko: perf_infer_gpu_many.mdx

* feat: chatgpt draft

* fix: manual edits

* Update docs/source/ko/perf_infer_gpu_many.md

Co-authored-by: Jungnerd <46880056+jungnerd@users.noreply.github.com>

---------

Co-authored-by: Jungnerd <46880056+jungnerd@users.noreply.github.com>
2023-08-02 16:06:35 +02:00
Yih-Dar
8edd0da960
Remove pytest_options={"rA": None} in CI (#25263)
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-02 14:53:05 +02:00
Euan Ong
1baeed5bdf
Fix return_dict_in_generate bug in InstructBlip generate function (#25246)
Fix bug in InstructBlip generate function

Previously, the postprocessing conducted on generated sequences in InstructBlip's generate function assumed these sequences were tensors (i.e. that `return_dict_in_generate == False`).

This commit checks whether the result of the call to the wrapped language model `generate()` is a tensor, and if not attempts to postprocess the sequence attribute of the returned results object.
2023-08-02 13:43:54 +01:00
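
A rough standalone sketch of the described check: only touch `.sequences` when generate() returned a structured output rather than a raw tensor. The helper name and prefix handling are illustrative.

```python
import torch

def postprocess_generation(outputs, prefix_ids):
    # Illustrative postprocessing: prepend prefix ids to the generated ids.
    if isinstance(outputs, torch.Tensor):
        # return_dict_in_generate=False: outputs are the token ids themselves
        return torch.cat([prefix_ids, outputs], dim=-1)
    # return_dict_in_generate=True: a ModelOutput holding a .sequences tensor
    outputs.sequences = torch.cat([prefix_ids, outputs.sequences], dim=-1)
    return outputs
```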
Ashish Thomas Chempolil
eec0d84e6a
[DOCS] Add example and modified docs of EtaLogitsWarper (#25125)
* added example and modified docs for EtaLogitsWarper

* make style

* fixed styling issue on 544

* removed error info and added set_seed

* Update src/transformers/generation/logits_process.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/generation/logits_process.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* updated the results

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-08-02 11:55:56 +01:00
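
A hedged example in the spirit of the added docstring: eta sampling is enabled through the `eta_cutoff` generation argument together with sampling. Checkpoint, prompt, and cutoff value are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

set_seed(0)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("A number sequence: 1, 2, 3", return_tensors="pt")

# Eta sampling only applies when sampling is enabled.
out = model.generate(**inputs, do_sample=True, eta_cutoff=0.001, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```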
Yupeng Jia
8021c684ec
Fix some bugs for two stage training of deformable detr (#25045)
* Update modeling_deformable_detr.py

Fix bugs for two stage training

* Update modeling_deformable_detr.py

* Add test_two_stage_training to DeformableDetrModelTest

---------

Co-authored-by: yupeng.jia <yupeng.jia@momenta.ai>
2023-08-02 11:30:36 +01:00
amyeroberts
1b35409768
Update rescale tests - cast to float after rescaling to reflect #25229 (#25259)
Rescale tests - cast to float after rescaling to reflect #25229
2023-08-02 11:29:55 +01:00
Sourab Mangrulkar
904e7e0f3c
resolving zero3 init when using accelerate config with Trainer (#25227)
* resolving zero3 init when using accelerate config with Trainer

* refactor

* fix

* fix import
2023-08-02 15:07:27 +05:30
Yih-Dar
149cb0cce2
Add token argument in example scripts (#25172)
* fix

* fix

* fix

* fix

* fix

* fix

* fix

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-02 11:17:31 +02:00
YQ
c6a8768dab
add pathname and line number to logging formatter in debug mode (#25203)
* add pathname and lineno to logging formatter in debug mode

* use TRANSFORMERS_VERBOSITY="detail" to print pathname and lineno
2023-08-02 09:44:43 +01:00
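
A sketch of what the more detailed mode surfaces; setting the environment variable before transformers is imported is assumed, and the exact format string is illustrative.

```python
import logging
import os

os.environ["TRANSFORMERS_VERBOSITY"] = "detail"  # must be set before transformers is imported

# Roughly the extra fields the detailed mode adds: file path and line number.
formatter = logging.Formatter("[%(levelname)s|%(pathname)s:%(lineno)d] %(asctime)s >> %(message)s")
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logging.getLogger("transformers").addHandler(handler)
```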
YQ
2230d149f0
fix get_keys_to_not_convert() to return correct modules for full precision inference (#25105)
* add test for `get_keys_to_not_convert`

* add minimum patch to keep mpt lm_head from 8bit quantization

* add revision to
2023-08-02 04:21:52 -04:00