Joao Gante
62c7ea0201
CI: avoid human error, automatically infer generative models ( #33212 )
...
* tmp commit
* move tests to the right class
* remove ALL all_generative_model_classes = ...
* skip tf roberta
* skip InstructBlipForConditionalGenerationDecoderOnlyTest
* videollava
* reduce diff
* reduce diff
* remove on vlms
* fix a few more
* manual rebase bits
* more manual rebase
* remove all manual generative model class test entries
* fix up to ernie
* a few more removals
* handle remaining cases
* recurrent gemma
* it's better here
* make fixup
* tf idefics is broken
* tf bert + generate is broken
* don't touch tf :()
* don't touch tf :(
* make fixup
* better comments for test skips
* revert tf changes
* remove empty line removal
* one more
* missing one
2025-02-13 16:27:11 +01:00
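The change above replaces hand-maintained `all_generative_model_classes` lists in the test mixins with automatic inference. A minimal sketch of the underlying idea, assuming `PreTrainedModel.can_generate()` is the signal used (the actual test-suite wiring in #33212 may differ):

```python
from transformers import BertForSequenceClassification, BertLMHeadModel

# Classes a model's test suite already declares for general testing.
all_model_classes = (BertLMHeadModel, BertForSequenceClassification)

# Instead of a second, hand-written `all_generative_model_classes` tuple
# (easy to forget or get wrong), keep only the classes that report
# generation support themselves.
all_generative_model_classes = tuple(
    cls for cls in all_model_classes if cls.can_generate()
)

print(all_generative_model_classes)  # expected: only BertLMHeadModel remains
```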
Yaswanth Gali
7aee036e54
Iterative generation using input embeds and past_key_values ( #35890 )
...
* Iterative generation using input embeds
* ruff fix
* Added test case
* Updated comment
* ♻️ Refactored test case
* Skip test for these models
* Continue generation using input embeds and cache
* Skip generate_continue_from_embeds test
* Refactor `prepare_inputs_for_generation` func
* Continue generation using input embeds and cache
* Modular changes fix
* Overwrite 'prepare_inputs_for_generation' function
2025-02-06 11:06:05 +01:00
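A hedged sketch of the iterative pattern this entry enables: generate once from `inputs_embeds`, keep the returned cache, then continue from the full-sequence embeddings plus `past_key_values`. The checkpoint is only illustrative, and the exact call pattern of the test added in #35890 may differ:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative decoder-only checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = tokenizer("The quick brown fox", return_tensors="pt")
prompt_embeds = model.get_input_embeddings()(prompt.input_ids)

# Step 1: generate a few tokens from embeddings only and keep the KV cache.
first = model.generate(
    inputs_embeds=prompt_embeds,
    attention_mask=prompt.attention_mask,
    max_new_tokens=4,
    return_dict_in_generate=True,
)

# When generation starts from embeddings, `sequences` holds only the new tokens.
new_embeds = model.get_input_embeddings()(first.sequences)
full_embeds = torch.cat([prompt_embeds, new_embeds], dim=1)

# Step 2: continue generation from the full-sequence embeddings plus the cache.
second = model.generate(
    inputs_embeds=full_embeds,
    past_key_values=first.past_key_values,
    max_new_tokens=4,
)
print(second.shape)  # only the newly generated tokens of the second round
```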
Arthur
b912f5ee43
use torch.testing.assert_close instead to get more details about errors in CIs ( #35659 )
...
* use torch.testing.assert_close instead to get more details about errors in CIs
* fix
* style
* test_all
* revert for IBert
* fixes and updates
* more image processing fixes
* more image processors
* fix mamba and co
* style
* less strict
* ok I won't be strict
* skip and be done
* up
2025-01-24 16:55:28 +01:00
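For context, the kind of swap this entry makes (illustrative, not code taken from the diff):

```python
import torch

expected = torch.tensor([1.0, 2.0, 3.0])
actual = torch.tensor([1.0, 2.0, 3.0 + 1e-6])

# Before: a failing boolean assert only says "False", which is hard to debug in CI logs.
assert torch.allclose(actual, expected, atol=1e-5)

# After: on failure, assert_close reports the number of mismatched elements and the
# greatest absolute/relative differences alongside the tolerances that were used.
torch.testing.assert_close(actual, expected, rtol=1e-5, atol=1e-5)
```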
Yih-Dar
05de764e9c
Au revoir, PyTorch 1 ( #35358 )
...
* fix
* fix
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-12-20 14:36:31 +01:00
Yih-Dar
ab98f0b0a1
avoid calling gc.collect and cuda.empty_cache ( #34514 )
...
* update
* update
* update
* update
* update
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-10-31 16:36:13 +01:00
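The calls named in the title are the usual per-test cleanup pattern sketched below; avoiding these direct calls in individual tests is what the entry refers to (this is not the helper the test suite actually uses):

```python
import gc
import torch

def manual_cleanup():
    # Force a full garbage-collection pass, then release cached CUDA blocks back
    # to the driver. Repeating this after every test adds overhead, which is the
    # direct-call pattern #34514 moves away from.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```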
Joao Gante
a7734238ff
Generation tests: update imagegpt input name, remove unused functions ( #33663 )
2024-09-24 16:40:48 +01:00
amyeroberts
1de7dc7403
Skip tests properly ( #31308 )
...
* Skip tests properly
* [test_all]
* Add 'reason' as kwarg for skipTest
* [test_all] Fix up
* [test_all]
2024-06-26 21:59:08 +01:00
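The pattern being fixed: tests that bail out with a bare `return` look like passes instead of skips. A small illustration of the proper alternatives, including the `reason` keyword mentioned in the commits:

```python
import unittest

class ExampleTest(unittest.TestCase):
    # Not so good: an early `return` silently "passes" instead of being reported as a skip.
    def test_silently_skipped(self):
        return

    # Better: decorator-level skip with an explicit reason.
    @unittest.skip(reason="Feature not supported on this backend")
    def test_skipped_with_decorator(self):
        ...

    # Better: runtime skip with an explicit reason keyword.
    def test_skipped_at_runtime(self):
        self.skipTest(reason="Requires an accelerator")
```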
Yih-Dar
bd90cda9a6
CI with num_hidden_layers=2 🚀 🚀 🚀 ( #25266 )
...
* CI with layers=2
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-02 20:22:36 +02:00
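The idea behind this one: the common tests only need enough depth to exercise the code paths, so two hidden layers (plus small widths) make CI far cheaper. A hedged sketch with hypothetical sizes, not the exact values used by the testers:

```python
from transformers import BertConfig, BertForSequenceClassification

# Two layers and small widths keep instantiation and forward passes fast while
# still covering the same code paths as a full-size model.
tiny_config = BertConfig(
    num_hidden_layers=2,
    hidden_size=32,
    num_attention_heads=4,
    intermediate_size=37,
)
model = BertForSequenceClassification(tiny_config)
print(sum(p.numel() for p in model.parameters()))  # ~1M parameters instead of ~110M
```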
Yih-Dar
cf561d7cf1
Add torch >=1.12 requirement for Tapas ( #24251 )
...
* fix
* fix
* fix
* Update src/transformers/models/tapas/modeling_tapas.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-06-13 19:19:40 +02:00
Joao Gante
918a06e25d
Generate: add test to check KV format ( #23403 )
...
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-05-16 19:28:19 +01:00
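A hedged sketch of what a KV-format check can look like for a decoder-only model; the actual test added in #23403 runs across many architectures and is necessarily more general than this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer("hello there", return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, use_cache=True)

# Depending on the library version the cache is a Cache object or the legacy
# tuple of per-layer (key, value) pairs; normalize to the legacy layout here.
cache = out.past_key_values
if hasattr(cache, "to_legacy_cache"):
    cache = cache.to_legacy_cache()

batch, seq_len = inputs.input_ids.shape
num_heads = model.config.num_attention_heads
head_dim = model.config.hidden_size // num_heads

assert len(cache) == model.config.num_hidden_layers
for key, value in cache:
    assert key.shape == (batch, num_heads, seq_len, head_dim)
    assert value.shape == (batch, num_heads, seq_len, head_dim)
print("cache format looks as expected")
```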
Yih-Dar
975159bb61
Update tiny models and a few fixes ( #22928 )
...
* run_check_tiny_models
* update summary
* update mixin
* update pipeline_model_mapping
* update pipeline_model_mapping
* Update for gpt_bigcode
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-04-24 14:45:22 +02:00
Joel Lamy-Poirier
e0921c6b53
Add GPTBigCode model (Optimized GPT2 with MQA from Santacoder & BigCode) ( #22575 )
...
* Add model with cli tool
* Remove unwanted stuff
* Add new code
* Remove inference runner
* Style
* Fix checks
* Test updates
* make fixup
* fix docs
* fix doc
* fix test
* hopefully fix pipeline tests
* refactor
* fix CIs
* add comment
* rename to `GPTBigCodeForCausalLM`
* correct readme
* make fixup + docs
* make fixup
* fixes
* fixes
* Remove pruning
* Remove import
* Doc updates
* More pruning removal
* Combine copies
* Single MQA implementation, remove kv cache pre-allocation and padding
* Update doc
* Revert refactor to match gpt2 style
* Merge back key and value caches, fix some type hints
* Update doc
* Fix position ids with padding (PR 21080)
* Add conversion script temporarily
* Update conversion script
* Remove checkpoint conversion
* New model
* Fix MQA test
* Fix copies
* try fix tests
* FIX TEST!!
* remove `DoubleHeadsModel`
* add MQA tests
* add slow tests
* clean up
* add CPU checker
* final fixes
* fixes
- fix GPU issue
- fixed slow tests
- skip disk offload
* fix final issue
* Simplify and comment baddbmm fix
* Remove unnecessary code
* Transpose tweaks
* Use beta=1 on cpu, improve tests
---------
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
2023-04-10 10:57:21 +02:00
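GPTBigCode's defining feature, per the entry above, is multi-query attention (MQA): all query heads share a single key/value head, which shrinks the KV cache by a factor of the head count. A toy sketch of the attention math only (causal masking, caching, and the baddbmm path mentioned in the commits are omitted; this is not the model's actual implementation):

```python
import torch
import torch.nn.functional as F

def multi_query_attention(hidden, w_q, w_kv, num_heads):
    """Minimal MQA: many query heads, one shared key/value head."""
    batch, seq_len, hidden_size = hidden.shape
    head_dim = hidden_size // num_heads

    q = (hidden @ w_q).view(batch, seq_len, num_heads, head_dim).transpose(1, 2)  # [b, h, s, d]
    kv = hidden @ w_kv                       # [b, s, 2 * d] -> a single key/value head
    k, v = kv.split(head_dim, dim=-1)        # each [b, s, d]
    k = k.unsqueeze(1)                       # [b, 1, s, d], broadcast over query heads
    v = v.unsqueeze(1)

    scores = q @ k.transpose(-1, -2) / head_dim**0.5   # [b, h, s, s]
    weights = F.softmax(scores, dim=-1)
    out = weights @ v                                   # [b, h, s, d]
    return out.transpose(1, 2).reshape(batch, seq_len, hidden_size)

# Tiny smoke test with hypothetical sizes.
b, s, h, heads = 2, 5, 64, 8
x = torch.randn(b, s, h)
out = multi_query_attention(x, torch.randn(h, h), torch.randn(h, 2 * (h // heads)), heads)
print(out.shape)  # torch.Size([2, 5, 64])
```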