Commit Graph

19383 Commits

Guang Yang
9470c00042
Llama3 and Llama2 are ExecuTorch compatible (#34101)
Llama3_1b and Llama2_7b are ExecuTorch compatible

Co-authored-by: Guang Yang <guangyang@fb.com>
2024-10-17 17:33:19 +02:00
Name
7f5088503f
removes decord (#33987)
* removes decord dependency

optimize

np

Revert "optimize"

This reverts commit faa136b51ec4ec5858e5b0ae40eb7ef89a88b475.

helpers as documentation

pydoc

missing keys

* make fixup

* require_av

---------

Co-authored-by: ad <hi@arnaudiaz.com>
2024-10-17 17:27:34 +02:00
Sebastian Schoennenbeck
f2846ad2b7
Fix for tokenizer.apply_chat_template with continue_final_message=True (#34214)
* Strip final message

* Do full strip instead of rstrip

* Retrigger CI

---------

Co-authored-by: Matt <rocketknight1@gmail.com>
2024-10-17 15:45:07 +01:00
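A minimal usage sketch of the `continue_final_message` flag fixed above (checkpoint name is illustrative; any chat-templated tokenizer works):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")  # illustrative checkpoint
messages = [
    {"role": "user", "content": "Write a haiku about autumn."},
    {"role": "assistant", "content": "Crisp leaves drift down"},  # partial reply to continue
]
# continue_final_message=True renders the template without closing the last
# turn, so generation picks up mid-message instead of starting a new one.
prompt = tok.apply_chat_template(messages, continue_final_message=True, tokenize=False)
```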
Christopher McGirr
b57c7bce21
fix(Wav2Vec2ForCTC): torch export (#34023)
* fix(Wav2Vec2ForCTC): torch export

Resolves the issue described in #34022 by implementing the
masking of the hidden states using an elementwise multiplication
rather than indexing with assignment.

The torch.export functionality seems to mark the tensor as frozen
even though the update is legal.

This change is a workaround for now to allow the export of the
model as a FxGraph. Further investigation is required to find
the real solution in pytorch.

* [run-slow] hubert, unispeech, unispeech_sat, wav2vec2
2024-10-17 15:41:55 +01:00
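A simplified sketch of the workaround described above (shapes are illustrative, not the actual Wav2Vec2 code):

```python
import torch

hidden_states = torch.randn(2, 50, 768)  # (batch, frames, dim)
mask = torch.rand(2, 50) > 0.2           # True where a frame is kept

# In-place indexed assignment like `hidden_states[~mask] = 0.0` makes
# torch.export treat the tensor as frozen; an elementwise multiply applies
# the same masking without mutation and traces cleanly.
hidden_states = hidden_states * mask.unsqueeze(-1).to(hidden_states.dtype)
```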
Yih-Dar
fce1fcfe71
Ping team members for new failed tests in daily CI (#34171)
* ping

* fix

* fix

* fix

* remove runner

* update members

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-10-17 16:11:52 +02:00
Amos You
aa3e35ac67
Fix warning message for fp32_cpu_offloading in bitsandbytes configs (#34079)
* change cpu offload warning for fp8 quantization

* change cpu offload warning for fp4 quantization

* change cpu offload variable name for fp8 and fp4 quantization
2024-10-17 15:11:33 +02:00
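The reworded warning sits behind the fp32 CPU-offload path of the bitsandbytes configs; a hedged usage sketch (checkpoint is illustrative):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# llm_int8_enable_fp32_cpu_offload keeps selected modules in fp32 on CPU
# while the rest of the model is quantized on GPU; the warning touched in
# this PR fires when this flag interacts with the device map.
config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_enable_fp32_cpu_offload=True)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b",  # illustrative checkpoint
    quantization_config=config,
    device_map="auto",
)
```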
larin92
6d2b203339
Update trainer._get_eval_sampler() to support group_by_length arg (#33514)
Update 'trainer._get_eval_sampler()' to support 'group_by_length' argument

Trainer didn't support grouping by length for evaluation, which made evaluation slow with 'eval_batch_size' > 1.

The updated 'trainer._get_eval_sampler()' method is based on 'trainer._get_train_sampler()'.
2024-10-17 14:43:29 +02:00
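The user-facing knob involved, as a sketch:

```python
from transformers import TrainingArguments

# group_by_length batches samples of similar length to minimise padding;
# after this change the eval sampler honours it too, so evaluation with
# eval_batch_size > 1 no longer pays for worst-case padding.
args = TrainingArguments(
    output_dir="out",
    group_by_length=True,
    per_device_eval_batch_size=8,
)
```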
Marc Sun
3f06f95ebe
Revert "Fix FSDP resume Initialization issue" (#34193)
Revert "Fix FSDP resume Initialization issue (#34032)"

This reverts commit 4de1bdbf63.
2024-10-16 15:25:18 -04:00
Reza Rahemtola
3a10c6192b
Avoid using torch's Tensor or PIL's Image in chat template utils if not available (#34165)
* fix(utils): Avoid using torch Tensor or PIL Image if not available

* Trigger CI

---------

Co-authored-by: Matt <rocketknight1@gmail.com>
2024-10-16 16:01:18 +01:00
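The guarded-import pattern at play, sketched (not the exact patched code):

```python
from transformers.utils import is_torch_available, is_vision_available

# Only touch torch.Tensor / PIL.Image when the backing library exists, so
# the chat template utils import cleanly in minimal environments.
if is_torch_available():
    import torch
if is_vision_available():
    from PIL import Image

def describe(obj) -> str:
    if is_torch_available() and isinstance(obj, torch.Tensor):
        return f"tensor of shape {tuple(obj.shape)}"
    if is_vision_available() and isinstance(obj, Image.Image):
        return f"image of size {obj.size}"
    return type(obj).__name__
```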
Yoni Gozlan
bd5dc10fd2
Fix wrong name for llava onevision and qwen2_vl in tokenization auto (#34177)
* nit fix wrong llava onevision name in tokenization auto

* add qwen2_vl and fix style
2024-10-16 16:48:52 +02:00
steveepreston
cc7d8b87e1
Revert accelerate error caused by 46d09af (#34197)
Revert `accelerate` bug
2024-10-16 16:13:41 +02:00
alpertunga-bile
98bad9c6d6
[fix] fix token healing tests and usage errors (#33931)
* auto-gptq requirement is removed & model is changed & tokenizer pad token is assigned

* values func is changed with extensions & sequence key value bug is fixed

* map key value check is added in ExtensionsTree

* empty trimmed_ids bug is fixed

* tail_id IndexError is fixed

* empty trimmed_ids bug fix is updated for failed test

* too much specific case for specific tokenizer is removed

* input_ids check is updated

* require auto-gptq import is removed

* key error check is changed with empty list check

* empty input_ids check is added

* empty trimmed_ids fix is checked with numel function

* usage change comments are added

* test changes are commented

* comment style and quality bugs are fixed

* test comment style and quality bug is fixed
2024-10-16 14:22:55 +02:00
Yoach Lacombe
9ba021ea75
Moshi integration (#33624)
* clean mimi commit

* some nits suggestions from Arthur

* make fixup

* first moshi WIP

* converting weights working + configuration + generation configuration

* finalize converting script - still missing tokenizer and FE and processor

* fix saving model w/o default config

* working generation

* use GenerationMixin instead of inheriting

* add delay pattern mask

* fix right order: moshi codes then user codes

* unconditional inputs + generation config

* get rid of MoshiGenerationConfig

* blank user inputs

* update convert script: fix conversion, add tokenizer, feature extractor and bf16

* add and correct Auto classes

* update modeling code, configuration and tests

* make fixup

* fix some copies

* WIP: add integration tests

* add dummy objects

* propose better readability and code organisation

* update tokenization tests

* update docstrings, eval and modeling

* add .md

* make fixup

* add MoshiForConditionalGeneration to ignore Auto

* revert mimi changes

* re

* further fix

* Update moshi.md

* correct md formatting

* move prepare causal mask to class

* fix copies

* fix depth decoder causal

* fix and correct some tests

* make style and update .md

* correct config checkpoint

* Update tests/models/moshi/test_tokenization_moshi.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update tests/models/moshi/test_tokenization_moshi.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* make style

* Update src/transformers/models/moshi/__init__.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* fixup

* change firm in copyrights

* update config with nested dict

* replace einsum

* make style

* change split to True

* add back split=False

* remove tests in convert

* Update tests/models/moshi/test_modeling_moshi.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* add default config repo + add model to FA2 docstrings

* remove logits float

* fix some tokenization tests and ignore some others

* make style tokenization tests

* update modeling with sliding window + update modeling tests

* [run-slow] moshi

* remove prepare for generation from CausalLM

* isort

* remove copied from

* ignore offload tests

* update causal mask and prepare 4D mask aligned with recent changes

* further test refine + add back prepare_inputs_for_generation for depth decoder

* correct conditional use of prepare mask

* update slow integration tests

* fix multi-device forward

* remove previous solution to device_map

* save_load is flaky

* fix generate multi-devices

* fix device

* move tensor to int

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: Marc Sun <marc@huggingface.co>
2024-10-16 11:21:49 +02:00
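The "delay pattern mask" mentioned in the commits above staggers the audio codebooks so codebook k is emitted k steps later than codebook 0 (the MusicGen-style scheme). A toy illustration of the staggering only, not Moshi's actual implementation:

```python
import torch

num_codebooks, seq_len, pad_id = 4, 8, -1
codes = torch.arange(num_codebooks * seq_len).view(num_codebooks, seq_len)

# Shift codebook k right by k steps; freed positions hold the pad id.
delayed = torch.full((num_codebooks, seq_len + num_codebooks - 1), pad_id)
for k in range(num_codebooks):
    delayed[k, k : k + seq_len] = codes[k]
```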
Raushan Turganbay
d087165db0
IDEFICS: support inputs embeds (#34043)
* support embeds

* use cache from config

* style...

* fix tests after rebase
2024-10-16 09:25:26 +02:00
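The pattern this enables for IDEFICS, shown with a small text-only model for brevity (the IDEFICS call is analogous):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tok("The quick brown fox", return_tensors="pt").input_ids
# Feed precomputed embeddings instead of token ids; useful for soft
# prompting or manipulating embeddings before generation.
inputs_embeds = model.get_input_embeddings()(input_ids)
out = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=10)
```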
Chulhwa (Evan) Han
9d6998c759
🌐 [i18n-KO] Translated blip-2.md to Korean (#33516)
* docs: ko: model_doc/blip-2

* feat: nmt draft

* Apply suggestions from code review

Co-authored-by: Jiwook Han <33192762+mreraser@users.noreply.github.com>

* Update docs/source/ko/model_doc/blip-2.md

Co-authored-by: Yijun Lee <119404328+yijun-lee@users.noreply.github.com>

---------

Co-authored-by: Jiwook Han <33192762+mreraser@users.noreply.github.com>
Co-authored-by: Yijun Lee <119404328+yijun-lee@users.noreply.github.com>
2024-10-15 11:21:22 -07:00
Yijun Lee
554ed5d1e0
🌐 [i18n-KO] Translated trainer_utils.md to Korean (#33817)
* docs: ko: trainer_utils.md

* feat: nmt draft

* fix: manual edits

* fix: resolve suggestions

Co-authored-by: Woojun Jung <46880056+jungnerd@users.noreply.github.com>

---------

Co-authored-by: Woojun Jung <46880056+jungnerd@users.noreply.github.com>
2024-10-15 11:21:05 -07:00
Yijun Lee
8c33cf4eec
🌐 [i18n-KO] Translated gemma2.md to Korean (#33937)
* docs: ko: gemma2.md

* feat: nmt draft

* fix: manual edits

* fix: resolve suggestions
2024-10-15 11:20:46 -07:00
Jiwook Han
67acb0b123
🌐 [i18n-KO] Translated vivit.md to Korean (#33935)
* docs: ko: model_doc/vivit.md

* feat: nmt draft

* fix: manual edits

* fix: manual edits
2024-10-15 10:31:44 -07:00
laurentd-lunit
0f49deacbf
[feat] LlavaNext add feature size check to avoid CUDA Runtime Error (#33608)
* [feat] add feature size check to avoid CUDA Runtime Error

* [minor] add error handling to all llava models

* [minor] avoid nested if else

* [minor] add error message to Qwen2-vl and chameleon

* [fix] token dimension for check

* [minor] add feature dim check for videos too

* [fix] dimension check

* [fix] test reference values

---------

Co-authored-by: Raushan Turganbay <raushan@huggingface.co>
2024-10-15 16:19:18 +02:00
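Roughly the shape of the guard added across the llava-family models (names and message are illustrative):

```python
import torch

def check_image_features(input_ids: torch.Tensor, image_token_index: int,
                         image_features: torch.Tensor) -> None:
    # Fail early with a Python ValueError instead of letting a mismatched
    # scatter of image features crash inside a CUDA kernel.
    n_image_tokens = (input_ids == image_token_index).sum().item()
    n_image_features = image_features.shape[0] * image_features.shape[1]
    if n_image_tokens != n_image_features:
        raise ValueError(
            f"Image features and image tokens do not match: "
            f"tokens: {n_image_tokens}, features: {n_image_features}"
        )
```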
Marc Sun
d00f1ca860
Fix optuna ddp hp search (#34073) 2024-10-15 15:42:07 +02:00
Yoni Gozlan
65442718c4
Add support for inheritance from class with different suffix in modular (#34077)
* add support for different suffix in modular

* add dummy example, pull new changes for modular

* nit: lines order change
2024-10-15 14:55:09 +02:00
Joao Gante
d314ce70bf
Generate: move logits to same device as input_ids (#34076)
tmp commit
2024-10-15 14:32:09 +02:00
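The one-line idea, sketched:

```python
import torch

def align_logits(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    # With device_map-sharded models the lm_head may live on a different GPU
    # than the inputs; sampling ops expect both on the same device.
    return logits.to(input_ids.device)
```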
Subhalingam D
5ee9e786d1
Fix default behaviour in TextClassificationPipeline for regression problem type (#34066)
* update code

* update docstrings

* update tests
2024-10-15 13:06:20 +01:00
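The user-visible behaviour, sketched (model name is a placeholder): for a checkpoint whose config.problem_type is "regression", the pipeline should return the raw score rather than a softmax/sigmoid probability, which corresponds to function_to_apply="none":

```python
from transformers import pipeline

# "my-org/my-regression-model" stands in for any single-output regression
# checkpoint; function_to_apply="none" returns the raw score unchanged.
pipe = pipeline(
    "text-classification",
    model="my-org/my-regression-model",
    function_to_apply="none",
)
```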
Shikhar Mishra
4de1bdbf63
Fix FSDP resume Initialization issue (#34032)
* Fix FSDP Initialization for resume training

* Added init_fsdp function to work with dummy values

* Fix FSDP initialization for resuming training

* Added CUDA decorator for tests

* Added torch_gpu decorator to FSDP tests

* Fixup for failing code quality tests
2024-10-15 13:48:10 +02:00
Prakarsh Kaushik
293e6271c6
Add sdpa for Vivit (#33757)
* chore:add sdpa to vivit

* fix:failing slow test_inference_interpolate_pos_encoding (failing on main branch too)

* chore:fix nits

* ci:fix repo consistency failure

* chore:add info and benchmark to model doc

* [run_slow] vivit

* chore:revert interpolation test fix for new issue

* [run_slow] vivit

* [run_slow] vivit

* [run_slow] vivit

* chore:add fallback for output_attentions being True

* [run_slow] vivit

* style:make fixup

* [run_slow] vivit
2024-10-15 11:27:54 +02:00
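Opting into the new attention path (the commit notes an eager fallback when output_attentions=True):

```python
from transformers import VivitForVideoClassification

model = VivitForVideoClassification.from_pretrained(
    "google/vivit-b-16x2-kinetics400",
    attn_implementation="sdpa",  # scaled-dot-product attention added by this PR
)
```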
Raushan Turganbay
23874f5948
Idefics: enable generation tests (#34062)
* add idefics

* conflicts after merging main

* enable tests but need to fix some

* fix tests

* no print

* fix/skip some slow tests

* continue not skip

* rebasing broke something; this is the fix
2024-10-15 11:17:14 +02:00
Victor Muštar
dd4216b766
Update README.md with Enterprise Hub (#34150) 2024-10-15 10:45:22 +02:00
Arthur
fa3f2db5c7
Add documentation for docker (#33156)
* initial commit

* nit
2024-10-14 11:58:45 +02:00
Lysandre Debut
5114c9b9e9
Specify that users should be careful with their own files (#34153)
* Informative

* style
2024-10-14 11:40:39 +02:00
Diogo Miguel Silva
013d3ac2b5
Fixed error message in mllama (#34106) 2024-10-14 10:30:35 +02:00
Vladislav Bronzov
cb5ca3265f
Add GGUF for starcoder2 (#34094)
* add starcoder2 arch support for gguf

* fix q6 test
2024-10-14 10:22:49 +02:00
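Loading a starcoder2 GGUF file through transformers, sketched (repo and filename are assumptions, for illustration only):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "second-state/StarCoder2-3B-GGUF",    # assumed repo, for illustration
    gguf_file="starcoder2-3b-Q6_K.gguf",  # assumed filename
)
```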
PengWeixuan
4c439173df
Fix a typo (#34148)
Correct a typo

"If you want you tokenizer..."->"If you want your tokenizer...."
2024-10-14 10:15:25 +02:00
Anton Vlasjuk
7434c0ed21
Mistral-related models for QnA (#34045)
* mistral qna start

* mixtral qna

* oops

* qwen2 qna

* qwen2moe qna

* add missing input embed methods

* add 'Copied from' to all methods; can't copy directly from llama due to the prefix

* make top level copied from
2024-10-14 08:53:32 +02:00
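Usage mirrors the other *ForQuestionAnswering heads (checkpoint illustrative; this loads the base weights with a fresh QA head):

```python
from transformers import AutoTokenizer, MistralForQuestionAnswering

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = MistralForQuestionAnswering.from_pretrained("mistralai/Mistral-7B-v0.1")

inputs = tok("Who wrote it?", "It was written by Ada.", return_tensors="pt")
outputs = model(**inputs)  # start_logits / end_logits for span extraction
```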
Joao Gante
37ea04013b
Generate: Fix modern llm generate calls with synced_gpus (#34095) 2024-10-12 16:45:52 +01:00
Luc Georges
617b21273a
fix(ci): benchmarks dashboard was failing due to missing quotations (#34100) 2024-10-11 19:52:06 +02:00
Luc Georges
144852fb6b
refactor: benchmarks (#33896)
* refactor: benchmarks

Based on a discussion with @LysandreJik & @ArthurZucker, the goal of
this PR is to improve transformers' benchmark system.

This is a WIP; for the moment the infrastructure required to make things
work is not ready. The PR description will be updated when it is.

* feat: add db init in benchmarks CI

* fix: pg_config is missing in runner

* fix: add psql to the runner

* fix: connect info from env vars + PR comments

* refactor: set database as env var

* fix: invalid working directory

* fix: `commit_msg` -> `commit_message`

* fix: git marking checked out repo as unsafe

* feat: add logging

* fix: invalid device

* feat: update grafana dashboard for prod grafana

* feat: add `commit_id` to header table

* feat: commit latest version of dashboard

* feat: move measurements into json field

* feat: remove drop table migration queries

* fix: `torch.arrange` -> `torch.arange`

* fix: add missing `s` to `cache_position` positional argument

* fix: change model

* revert: `cache_positions` -> `cache_position`

* fix: set device for `StaticCache`

* fix: set `StaticCache` dtype

* feat: limit max cache len

* fix script

* raise error on failure!

* not try catch

* try to skip generate compilation

* update

* update docker image!

* update

* update again!

* update

* updates

* ???

* ??

* use `torch.cuda.synchronize()`

* fix json

* nits

* fix

* fixed!

* f**k

* feat: add TTNT panels

* feat: add try except

---------

Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
2024-10-11 18:03:29 +02:00
Yih-Dar
80bee7b114
Avoid many test failures for LlavaNextVideoForConditionalGeneration (#34070)
* skip

* [run-slow] llava_next_video

* skip

* [run-slow] video_llava, llava_next_video

* skip

* [run-slow] llava_next_video

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-10-11 17:41:50 +02:00
Joao Gante
37ac078535
Generate: move prepare_inputs_for_generation in encoder-decoder llms (#34048) 2024-10-11 16:11:18 +01:00
Raushan Turganbay
fd70464fa7
Fix flaky tests (#34069)
* fix mllama only

* allow image token index
2024-10-11 14:41:46 +01:00
Dmytro Mishkin
3a24ba82ad
Fix NaNs in cost_matrix for mask2former (#34074)
Fix NaNs in cost_matrix

Sometimes that happens :(
2024-10-11 15:35:55 +02:00
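A common guard for NaNs in an assignment cost matrix, sketched (the PR's exact replacement value may differ):

```python
import torch

cost_matrix = torch.tensor([[0.5, float("nan")], [1.2, 0.3]])
# scipy's linear_sum_assignment rejects NaN entries, so replace them with a
# large finite cost before running the Hungarian matcher.
cost_matrix = torch.nan_to_num(cost_matrix, nan=1e6)
```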
Yih-Dar
7b06473b8f
avoid many failures for ImageGPT (#34071)
* skip

* [run-slow] imagegpt

* skip

* [run-slow] imagegpt

* [run-slow] imagegpt,video_llava

* skip

* [run-slow] imagegpt,video_llava

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-10-11 15:24:01 +02:00
Lucain
1c66be8062
Fix PushToHubMixin when pushing to a PR revision (#34090) 2024-10-11 15:06:15 +02:00
Lysandre Debut
409dd2d19c
Fix failing conversion (#34010)
* Fix

* Tests

* Typo

* Typo
2024-10-11 14:59:23 +02:00
Yoach Lacombe
9dca0c9116
Fix DAC slow tests (#34088)
* Fix DAC slow tests and fix decode

* [run-slow] dac
2024-10-11 14:43:03 +02:00
Lysandre Debut
f052e94bcc
Fix flax failures (#33912)
* Few fixes here and there

* Remove typos

* Remove typos
2024-10-11 14:38:35 +02:00
Joao Gante
e878eaa9fc
Tests: upcast logits to float() (#34042)
upcast
2024-10-11 11:51:49 +01:00
Yih-Dar
4b9bfd32f0
Update SSH workflow file (#34084)
* fix

* fix

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-10-11 10:53:12 +02:00
Raushan Turganbay
be9aeba581
Idefics: fix position ids (#33907)
* fix position ids

* fix labels also

* fix copies

* oops, not that one

* dont deprecate
2024-10-11 10:28:34 +02:00
Guang Yang
7d97cca8dd
Generate using exported model and enable gemma2-2b in ExecuTorch (#33707)
* Generate using exported model and enable gemma2-2b in ExecuTorch

* [run_slow] gemma, gemma2

* truncate expected output message

* Bump required torch version to support gemma2 export

* [run_slow] gemma, gemma2

---------

Co-authored-by: Guang Yang <guangyang@fb.com>
2024-10-11 10:16:31 +02:00
Matthew Hoffman
70b07d97cf
Default synced_gpus to True when using FullyShardedDataParallel (#33483)
* Default synced_gpus to True when using FullyShardedDataParallel

Fixes #30228

Related:

* https://github.com/pytorch/pytorch/issues/100069
* https://github.com/pytorch/pytorch/issues/123962

Similar to DeepSpeed ZeRO Stage 3, when using FSDP with multiple GPUs and differently sized data per rank, the ranks reach different synchronization points at the same time, leading to deadlock.

To avoid this, we can automatically set synced_gpus to True if we detect that a PreTrainedModel is being managed by FSDP using _is_fsdp_managed_module, which was added in 2.0.0 for torch.compile: https://github.com/pytorch/pytorch/blob/v2.0.0/torch/distributed/fsdp/_dynamo_utils.py

* Remove test file

* ruff formatting

* ruff format

* Update copyright year

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Add test for FSDP-wrapped model generation

Before #33483, these tests would have hung for 10 minutes before crashing due to a timeout error

* Ruff format

* Move argparse import

* Remove barrier

I think this might cause more problems if one of the workers was killed

* Move import into function to decrease load time

https://github.com/huggingface/transformers/pull/33483#discussion_r1787972735

* Add test for accelerate and Trainer

https://github.com/huggingface/transformers/pull/33483#discussion_r1790309675

* Refactor imports

* Ruff format

* Use nullcontext

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2024-10-10 14:09:04 -04:00
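A sketch of the detection described above; _is_fsdp_managed_module is a private torch API, per the commit, so the import path is an assumption tied to torch >= 2.0:

```python
import torch

def default_synced_gpus(model: torch.nn.Module) -> bool:
    # torch >= 2.0 tags FSDP-managed modules for torch.compile; if the model
    # is FSDP-managed, generate() should default synced_gpus=True so ranks
    # that finish early keep stepping collectives and nobody deadlocks.
    from torch.distributed.fsdp._dynamo_utils import _is_fsdp_managed_module
    return _is_fsdp_managed_module(model)
```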