Commit Graph

15708 Commits

Yih-Dar
fbb41cd420
consistent job / pytest report / artifact name correspondence (#30392)
* better names

* run better names

* update

* update

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-04-24 22:32:42 +02:00
Zach Mueller
6ad9c8f743
Non-blocking support for torch DataLoaders (#30465)
* Non blocking support

* Check for optimization

* Doc
2024-04-24 16:24:23 -04:00
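The "non-blocking" here refers to PyTorch's asynchronous host-to-device copies. A minimal sketch of the pattern, assuming a CUDA device and a pinned-memory source tensor (both required for the copy to actually overlap with compute):

```python
import torch

# Non-blocking host-to-device transfer: the copy can overlap with GPU work,
# but only if the source tensor lives in pinned (page-locked) host memory.
batch = torch.randn(32, 128).pin_memory()

device = "cuda" if torch.cuda.is_available() else "cpu"
# On CPU targets, non_blocking=True silently degrades to a synchronous copy.
batch = batch.to(device, non_blocking=True)
```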
Zach Mueller
5c57463bde
Enable fp16 on CPU (#30459)
* Check removing flag for torch

* LLM oops

* Getting there...

* More discoveries

* Change

* Clean up and prettify

* Logic check

* Not
2024-04-24 15:38:52 -04:00
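For context, half-precision compute on CPU goes through torch's autocast. A hedged sketch (recent torch versions accept float16 for CPU autocast, older ones only bfloat16):

```python
import torch

model = torch.nn.Linear(16, 16)
x = torch.randn(4, 16)

# fp16 autocast on CPU; older torch releases accept only bfloat16 here.
with torch.autocast(device_type="cpu", dtype=torch.float16):
    y = model(x)

print(y.dtype)  # torch.float16 inside the autocast region
```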
jeffhataws
d1d94d798f
Neuron: When save_safetensor=False, no need to move model to CPU (#29703)
save_safetensor=True has been the default since release 4.35.0, which then
required the TPU hotfix https://github.com/huggingface/transformers/pull/27799
(issue https://github.com/huggingface/transformers/issues/27578).
However, when the flag save_safetensor is set to False (compatibility mode),
moving the model to CPU causes too many graphs to be generated
during checkpointing (https://github.com/huggingface/transformers/issues/28438).
This PR disables moving the model to CPU when save_safetensor=False.
2024-04-24 18:22:08 +01:00
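A sketch of the guard the message describes. The function and argument names below are illustrative, not the actual Trainer code:

```python
# Illustrative sketch: only move the model off the XLA/Neuron device when
# safetensors serialization actually needs real CPU storage, avoiding the
# extra graph compilations the issue above reports.
def save_checkpoint(model, output_dir, save_safetensors: bool):
    if save_safetensors:
        model = model.to("cpu")
    model.save_pretrained(output_dir, safe_serialization=save_safetensors)
```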
Arthur
661190b44d
[research_project] Most of the security issues come from this requirements.txt (#29977)
update most of the decision transformers research project
2024-04-24 17:56:45 +02:00
Yih-Dar
d0d430f14a
Fix wrong indent in utils/check_if_new_model_added.py (#30456)
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-04-24 17:44:12 +02:00
Gustavo de Rosa
c9693db2fc
Phi-3 (#30423)
* chore(root): Initial commit of Phi-3 files.

* fix(root): Fixes Phi-3 missing on readme.

* fix(root): Ensures files are consistent.

* fix(phi3): Fixes unit tests.

* fix(tests): Fixes style of phi-3 test file.

* chore(tests): Adds integration tests for Phi-3.

* fix(phi3): Removes additional flash-attention usage, e.g., swiglu and rmsnorm.

* fix(phi3): Fixes incorrect docstrings.

* fix(phi3): Fixes docstring typos.

* fix(phi3): Adds support for Su and Yarn embeddings.

* fix(phi3): Improves according to the first batch of reviews.

* fix(phi3): Uses up_states instead of y in Phi3MLP.

* fix(phi3): Uses gemma rotary embedding to support torch.compile.

* fix(phi3): Improves how rotary embedding classes are defined.

* fix(phi3): Fixes inv_freq not being re-computed for extended RoPE.

* fix(phi3): Adds last suggestions to modeling file.

* fix(phi3): Splits inv_freq calculation in two lines.
2024-04-24 17:32:09 +02:00
Yih-Dar
42fed15c81
Add paths filter to reduce the chance of the workflow being triggered unnecessarily (#30453)
* trigger

* remove the last job

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-04-24 16:58:54 +02:00
Eduardo Pacheco
d26c14139c
[SegGPT] Fix loss calculation (#30421)
* Fixed main train issues

* Added loss test

* Update src/transformers/models/seggpt/modeling_seggpt.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Added missing labels arg in SegGptModel forward

* Fixed typo

* Added slow test to test loss calculation

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-04-24 15:24:34 +01:00
Marc Sun
37fa1f654f
fix jamba slow forward for multi-gpu (#30418)
* fix jamba slow forward for multi-gpu

* remove comm

* oops

* style
2024-04-24 14:19:08 +02:00
Anton Vlasjuk
5d64ae9d75
fix uncaught init of the linear layer in CLIP's/SigLIP's image classification models (#30435)
* fix clip's/siglip's _init_weights to reflect linear layers in "for image classification"

* trigger slow tests
2024-04-24 13:03:30 +01:00
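An illustrative sketch (not the exact transformers code) of the kind of `_init_weights` branch such a fix adds: the image-classification head is a plain `nn.Linear`, so without an explicit case it keeps torch's default initialization:

```python
import torch.nn as nn

def _init_weights(module, initializer_range: float = 0.02):
    # Cover plain linear layers such as the classification head.
    if isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=initializer_range)
        if module.bias is not None:
            module.bias.data.zero_()
```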
Fanli Lin
16c8e176f9
[tests] make test device-agnostic (#30444)
* make device-agnostic

* clean code
2024-04-24 11:21:27 +01:00
Arthur
9a4a119c10
[Llava] + CIs fix red cis and llava integration tests (#30440)
* nit

* nit and fmt skip

* fixup

* Update src/transformers/convert_slow_tokenizer.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* set to true

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-04-24 10:51:35 +02:00
Pavel Iakubovskii
767e351840
Fix YOLOS image processor resizing (#30436)
* Add test for square image that fails

* Fix for square images

* Extend test cases

* Fix resizing in tests

* Style fixup
2024-04-24 09:50:17 +01:00
Arthur
89c510d842
Add llama3 (#30334)
* nuke

* add co-author

* add co-author

* update card

* fixup and fix copies to please our ci

* nit fixup

* super small nits

* remove tokenizer_path from call to `write_model`

* always safe serialize by default

---------

Co-authored-by: pcuenca <pcuenca@users.noreply.github.com>
Co-authored-by: xenova <xenova@users.noreply.github.com>
2024-04-24 10:11:19 +02:00
Yih-Dar
fc34f842cc
New model PR needs green (slow tests) CI (#30341)
* You should not pass

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2024-04-24 09:52:55 +02:00
Lysandre Debut
c6bba94040
Remove mentions of models in the READMEs and link to the documentation page in which they are featured. (#30420)
* READMEs

* READMEs v2
2024-04-24 09:38:31 +02:00
Lysandre Debut
d4e92f1a21
Remove add-new-model in favor of add-new-model-like (#30424)
* Remove add-new-model in favor of add-new-model-like

* nits
2024-04-24 09:38:18 +02:00
Lysandre Debut
0eb8fbcdac
Remove task guides auto-update in favor of links towards task pages (#30429) 2024-04-24 09:38:10 +02:00
Arthur
e34da3ee3c
[LlamaTokenizerFast] Refactor default llama (#28881)
* push legacy to fast as well

* super strange

* Update src/transformers/convert_slow_tokenizer.py

* make sure we are BC

* fix Llama test

* nit

* revert

* more test

* style

* update

* small update w.r.t. tokenizers

* nit

* don't split

* lol

* add a test for `add_prefix_space=False`

* fix gemma tokenizer as well

* update

* fix gemma

* nicer failures

* fixup

* update

* fix the example for legacy = False

* use `huggyllama/llama-7b` for the PR doctest

* nit

* use from_slow

* fix llama
2024-04-23 23:12:59 +02:00
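A hedged sketch of the knobs this refactor touches; `legacy`, `add_prefix_space`, and `from_slow` are the kwargs named in the commits, and the checkpoint is the one the PR mentions for its doctest:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "huggyllama/llama-7b",
    legacy=False,            # use the fixed post-processing behaviour
    add_prefix_space=False,  # do not prepend a space before tokenizing
    from_slow=True,          # rebuild the fast tokenizer from the slow one
)
print(tok.tokenize("Hello world"))
```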
Jiewen Tan
12c39e5693
Fix use_cache for xla fsdp (#30353)
* Fix use_cache for xla fsdp

* Fix linters
2024-04-23 18:01:35 +01:00
Steven Basart
b8b1e442e3
Rename torch.run to torchrun (#30405)
torch.run does not exist anywhere as far as I can tell.
2024-04-23 09:04:17 -07:00
Matt
696ededd2b
Remove old TF port docs (#30426)
* Remove old TF port guide

* repo-consistency

* Remove some translations as well for consistency

* Remove some translations as well for consistency
2024-04-23 16:06:20 +01:00
Yih-Dar
416fdbad7a
Fix LayoutLMv2 init issue and doctest (#30278)
* fix

* try suggestion

* update

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-04-23 15:33:17 +02:00
Younes Belkada
d179b9dc78
FIX: re-add bnb on docker image (#30427)
Update Dockerfile
2024-04-23 15:32:54 +02:00
Pedro Cuenca
4b63d0139e
Make EosTokenCriteria compatible with mps (#30376) 2024-04-23 15:23:52 +02:00
Wing Lian
57fc00f36c
fix for itemsize => element_size() for torch backwards compat (#30133)
* fix for itemsize => element_size() for torch backwards compat

* improve handling of element counting

* Update src/transformers/modeling_utils.py

* fixup

* Update src/transformers/modeling_utils.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

---------

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Younes Belkada <younesbelkada@gmail.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-04-23 15:00:28 +02:00
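The underlying compatibility issue: `Tensor.itemsize` only exists in newer torch releases, while `Tensor.element_size()` has long been available. A minimal sketch:

```python
import torch

t = torch.zeros(4, dtype=torch.float16)

bytes_per_elem = t.element_size()  # portable across old and new torch
# bytes_per_elem = t.itemsize      # AttributeError on older torch releases
print(bytes_per_elem)              # 2 for float16
```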
Raushan Turganbay
77b59dce9f
Fix on "cache position" for assisted generation (#30068)
* clean commit history I hope

* get kv seq length correctly

* PR suggestions

* Update src/transformers/testing_utils.py

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* add comment

* give gpt bigcode its own overridden method

* remove code

---------

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
2024-04-23 16:23:36 +05:00
Joao Gante
31921d8d5e
Jax: scipy version pin (#30402)
scipy pin for jax
2024-04-23 10:42:17 +01:00
Fanli Lin
2d61823fa2
[tests] add require_torch_sdpa for test that needs sdpa support (#30408)
* add cuda flag

* check for sdpa

* add bitsandbytes
2024-04-23 10:39:38 +01:00
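The pattern this PR applies, sketched below with the real `transformers.testing_utils` decorators (the test name and body are illustrative): gate the test on SDPA availability rather than on CUDA alone, so it is skipped cleanly where SDPA is absent.

```python
from transformers.testing_utils import require_torch_gpu, require_torch_sdpa

@require_torch_sdpa
@require_torch_gpu
def test_sdpa_generation():
    ...  # body elided; the decorators carry the skip logic
```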
Nick Doiron
04ac3245e4
fix: link to HF repo/tree/revision when a file is missing (#30406)
fix: link to HF repo tree when a file is missing
2024-04-23 10:05:57 +01:00
Russell Klopfer
179ab098da
remove redundant logging from longformer (#30365) 2024-04-23 09:57:03 +01:00
Eduardo Pacheco
c651ea982b
[Grounding DINO] Add support for cross-attention in GroundingDinoMultiHeadAttention (#30364)
* Added cross attention support

* Fixed dtypes

* Fixed assumption

* Moved to decoder
2024-04-23 09:56:14 +01:00
Raushan Turganbay
408453b464
Add inputs embeds in generation (#30269)
* Add inputs embeds in generation

* always scale embeds

* fix-copies

* fix failing test

* fix copies once more

* remove embeds for models with scaling

* second try to revert

* codestyle
2024-04-23 13:14:48 +05:00
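A hedged sketch of generating from `inputs_embeds` instead of `input_ids`, the path this PR extends to more models (the checkpoint name is illustrative). Note that when only `inputs_embeds` is passed, the returned sequence contains just the newly generated tokens:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("Hello", return_tensors="pt").input_ids
embeds = model.get_input_embeddings()(ids)  # look up embeddings manually

out = model.generate(inputs_embeds=embeds, max_new_tokens=10)
print(tok.decode(out[0]))
```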
Arthur
6c1295a0d8
show -rs to show skip reasons (#30318) 2024-04-23 08:05:42 +02:00
Steven Liu
e74d793a3c
[docs] LLM inference (#29791)
* first draft

* feedback

* static cache snippet

* feedback

* feedback
2024-04-22 12:41:51 -07:00
zhong zhuang
b4c18a830a
[FEAT]: EETQ quantizer support (#30262)
* [FEAT]: EETQ quantizer support

* Update quantization.md

* Update docs/source/en/main_classes/quantization.md

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update docs/source/en/quantization.md

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update docs/source/en/quantization.md

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/transformers/integrations/__init__.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/transformers/integrations/__init__.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/transformers/integrations/eetq.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/transformers/integrations/eetq.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/transformers/integrations/eetq.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update tests/quantization/eetq_integration/test_eetq.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/transformers/quantizers/auto.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/transformers/quantizers/auto.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/transformers/quantizers/auto.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/transformers/quantizers/quantizer_eetq.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update tests/quantization/eetq_integration/test_eetq.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/transformers/quantizers/quantizer_eetq.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update tests/quantization/eetq_integration/test_eetq.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update tests/quantization/eetq_integration/test_eetq.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* [FEAT]: EETQ quantizer support

* [FEAT]: EETQ quantizer support

* remove whitespaces

* update quantization.md

* style

* Update docs/source/en/quantization.md

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>

* add copyright

* Update quantization.md

* Update docs/source/en/quantization.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update docs/source/en/quantization.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Address the comments by amyeroberts

* style

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Marc Sun <marc@huggingface.co>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-04-22 20:38:58 +01:00
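Usage of the API this PR adds, per its documentation changes; it requires the `eetq` package and a CUDA GPU, and the checkpoint name is illustrative:

```python
from transformers import AutoModelForCausalLM, EetqConfig

quant_config = EetqConfig("int8")  # weight-only int8 quantization
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",
    quantization_config=quant_config,
    device_map="auto",
)
```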
Kamil Akesbi
569743f510
Add sdpa and fa2 to the Wav2vec2 family (#30121)
* add sdpa to wav2vec.
Co-authored-by: kamilakesbi <kamil@huggingface.co>
Co-authored-by: jp1924 <jp42maru@gmail.com>

* add fa2 to wav2vec2

* add tests

* fix attention_mask compatibility with fa2

* minor dtype fix

* replace fa2 slow test

* fix fa2 slow test

* apply code review + add fa2 batch test

* add sdpa and fa2 to hubert

* sdpa and fa2 to data2vec_audio

* sdpa and fa2 to Sew

* sdpa to unispeech + unispeech sat

* small fix

* attention mask in tests

Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>

* add_speedup_benchmark_to_doc

---------

Co-authored-by: kamil@huggingface.co <kamil.akesbi@gmail.com>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
2024-04-22 18:30:38 +01:00
Younes Belkada
367a0dbd53
FIX / PEFT: Pass device correctly to peft (#30397)
pass device correctly to peft
2024-04-22 18:13:19 +02:00
Pavel Iakubovskii
13b3b90ab1
Fix DETA save_pretrained (#30326)
* Add class_embed to tied weights for DETA

* Fix test_tied_weights_keys for DETA model

* Replace error raise with assert statement
2024-04-22 17:11:13 +01:00
Joao Gante
6c7335e053
Jamba: fix left-padding test (#30389)
fix test
2024-04-22 17:02:55 +01:00
hoshi-hiyouga
f3b3533e19
Fix layerwise GaLore optimizer hard to converge with warmup scheduler (#30372)
Update optimization.py
2024-04-22 17:00:26 +01:00
Matt
0d84901cb7
Terminator strings for generate() (#28932)
* stash commit (will discard all of this)

* stash commit

* First commit - needs a lot of testing!

* Add a test

* Fix imports and make the tests actually test something

* Tests pass!

* Rearrange test

* Add comments (but it's still a bit confusing)

* Stop storing the tokenizer

* Comment fixup

* Fix for input_ids with a single sequence

* Update tests to test single sequences

* make fixup

* Fix incorrect use of isin()

* Expand tests to catch more cases

* Expand tests to catch more cases

* make fixup

* Fix length calculation and update tests

* Handle Ġ as a space replacement too

* Update src/transformers/generation/stopping_criteria.py

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Add optimizations from Joao's suggestion

* Remove TODO

* Update src/transformers/generation/stopping_criteria.py

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update tests/generation/test_stopping_criteria.py

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* make fixup

* Rename some variables and remove some debugging clauses for clarity

* Add tests for the sub-methods

* Clarify one test slightly

* Add stop_strings to GenerationConfig

* generate() supports stop_string arg, asks for tokenizer if not provided

* make fixup

* Cleanup code and rename variables for clarity

* Update tokenizer error

* Update tokenizer passing, handle generation on GPU

* Slightly more explanation cleanup

* More comment cleanup

* Factor out the token cleanup so it's more obvious what we're doing, and we can change it later

* Careful with that cleanup!

* Cleanup + optimizations to _get_matching_positions

* More minor performance tweaks

* Implement caching and eliminate some expensive ops (startup time: 200ms -> 9ms)

* Remove the pin_memory call

* Parallelize across all stop strings!

* Quick fix for tensor devices

* Update embeddings test for the new format

* Fix test imports

* Manual patching for BERT-like tokenizers

* Return a bool vector instead of a single True/False

* Better comment

* Better comment

* Add tests from @zucchini-nlp

* Amy's list creation nit

* tok_list -> token_list

* Push a big expanded docstring (should we put it somewhere else?)

* Expand docstrings

* Docstring fixups

* Rebase

* make fixup

* Make a properly general method for figuring out token strings

* Fix naming throughout the functions

* Move cache, refactor, fix tests

* Add comment

* Remove finished TODO

* Remove finished TODO

* make fixup

* Update src/transformers/generation/stopping_criteria.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update and shorten docstring

* Update tests to be shorter/clearer and test specific cases

---------

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-04-22 14:13:04 +01:00
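The user-facing result of this PR, sketched with an illustrative checkpoint: generation halts when any stop string appears in the decoded output, and the tokenizer must be passed so the strings can be matched against token sequences:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of France is", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=50,
    stop_strings=["\n\n", "###"],  # halt on whichever appears first
    tokenizer=tok,                 # needed to match strings to tokens
)
print(tok.decode(out[0]))
```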
Matt
0e9d44d7a1
Update docstrings for text generation pipeline (#30343)
* Update docstrings for text generation pipeline

* Fix docstring arg

* Update docstring to explain chat mode

* Fix doctests

* Fix doctests
2024-04-22 14:01:30 +01:00
Arthur
2d92db8458
Llama family, fix use_cache=False generation (#30380)
* nit to make sure cache positions are not sliced

* fix other models

* nit

* style
2024-04-22 14:42:57 +02:00
Howard Liberty
f16caf44bb
Add FSDP config for CPU RAM efficient loading through accelerate (#30002)
* Add FSDP config for CPU RAM efficient loading

* Style fix

* Update src/transformers/training_args.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/transformers/training_args.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Add sync_module_states and cpu_ram_efficient_loading validation logic

* Update src/transformers/training_args.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Style

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-04-22 13:15:28 +01:00
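A hedged sketch of the resulting configuration; per the PR's validation logic, `cpu_ram_efficient_loading` requires `sync_module_states` so that rank 0 can broadcast the weights it loaded:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    fsdp="full_shard auto_wrap",
    fsdp_config={
        "cpu_ram_efficient_loading": True,  # only rank 0 loads full weights
        "sync_module_states": True,         # rank 0 broadcasts to the rest
    },
)
```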
Raushan Turganbay
9138935784
GenerationConfig: warn if pad token is negative (#30187)
* warn if pad token is negative

* Update src/transformers/generation/configuration_utils.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/generation/configuration_utils.py

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update src/transformers/generation/configuration_utils.py

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
2024-04-22 11:31:38 +01:00
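An illustrative sketch (not the exact transformers code) of the check this PR adds: a negative pad_token_id cannot index an embedding matrix, so it is better to warn at config time than to fail later inside generate():

```python
import warnings

def validate_pad_token(pad_token_id):
    if pad_token_id is not None and pad_token_id < 0:
        warnings.warn(
            f"`pad_token_id` should be positive but got {pad_token_id}. "
            "This will cause errors when batch generating with padding."
        )
```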
Jacky Lee
8b02bb6e74
Enable multi-device for more models (#30379)
* feat: support for vitmatte

* feat: support for vivit

* feat: support for beit

* feat: support for blip :D

* feat: support for data2vec
2024-04-22 10:57:27 +01:00
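"Multi-device" here means the listed models gain `_no_split_modules`, which is what allows `device_map="auto"` to shard them across available GPUs. A hedged sketch with an illustrative checkpoint:

```python
from transformers import AutoModel

# Requires the accelerate package; layers are placed across visible devices.
model = AutoModel.from_pretrained(
    "google/vivit-b-16x2-kinetics400",
    device_map="auto",
)
```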
Merve Noyan
b20b017949
Nits for model docs (#29795)
* Update llava_next.md

* Update seggpt.md
2024-04-22 10:41:03 +01:00
NielsRogge
8c12690cec
[Grounding DINO] Add resources (#30232)
* Add resources

* Address comments

* Apply suggestions from code review

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-04-19 21:03:07 +02:00