transformers/tests
Pablo Montalvo caa0ff0bf1
Add fuyu model (#26911)
* initial commit

* add processor, add fuyu naming

* add draft processor

* fix processor

* remove dropout to fix loading of weights

* add image processing fixes from Pedro

* fix

* fix processor

* add basic processing fuyu test

* add documentation and TODO

* address comments, add tests, add doc

* replace assert with torch asserts

* add Mixins and fix tests

* clean imports

* add model tester, clean imports

* fix embedding test

* add updated tests from pre-release model

* Processor: return input_ids used for inference

* separate processing and model tests

* relax test tolerance for embeddings

* add test for logit comparison

* make sure fuyu image processor is imported in the init

* fix formatting

* more formatting issues

* and more

* fixups

* remove some stuff

* nits

* update init

* remove the fuyu file

* Update integration test with release model

* Update conversion script.

The projection is not used, as confirmed by the authors.

* improve generation

* Remove duplicate function

* Trickle down patches to model call

* processing fuyu updates

* remove things

* fix prepare_inputs_for_generation to fix generate()

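The `prepare_inputs_for_generation` fix above follows a common decoder-only pattern: once cached `past_key_values` exist, only the newest token should be fed to the model on each step. A minimal sketch of that idea, with plain Python lists standing in for tensors (this is an illustration of the pattern, not the actual Fuyu implementation):

```python
# Sketch of the cache-aware input preparation pattern: with a populated
# KV cache, earlier tokens are already encoded, so only the last token
# of each sequence is passed on the next forward call.
def prepare_inputs_for_generation(input_ids, past_key_values=None, **kwargs):
    if past_key_values is not None:
        # Keep only the most recent token per sequence.
        input_ids = [seq[-1:] for seq in input_ids]
    return {"input_ids": input_ids, "past_key_values": past_key_values, **kwargs}
```

Without a cache the full prompt is returned unchanged; with one, each sequence collapses to its final token.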
* remove model_input

* update

* add generation tests

* nits

* draft leverage automodel and autoconfig

* nits

* fix dtype patch

* address comments, update READMEs and doc, include tests

* add working processing test, remove refs to subsequences

* add tests, remove Sequence classification

* processing

* update

* update the conversion script

* more processing cleanup

* safe import

* take out ModelTesterMixin for early release

* more cleanup

* more cleanup

* more cleanup

* and more

* register a buffer

* nits

* add postprocessing of generate output

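The "postprocessing of generate output" step reflects how decoder-only models like Fuyu return the prompt tokens followed by the newly generated ones, so decoding usually slices the prompt off first. A minimal sketch with made-up token ids (the helper name is illustrative, not the library's API):

```python
# Drop the prompt prefix from each generated sequence before decoding.
def strip_prompt(generated_ids, prompt_len):
    """Return only the newly generated tokens of each sequence."""
    return [seq[prompt_len:] for seq in generated_ids]


batch = [[101, 7, 8, 9, 42, 43], [101, 7, 8, 9, 55, 56]]
new_tokens = strip_prompt(batch, prompt_len=4)
print(new_tokens)  # [[42, 43], [55, 56]]
```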
* nits

* updates

* add one working test

* fix test

* make fixup work

* fixup

* Arthur's updates

* nits

* update

* update

* fix processor

* update tests

* pass more fixups

* fix

* nits

* don't import torch

* skip fuyu config for now

* fixup done

* fixup

* update

* oops

* nits

* Use input embeddings

* no buffer

* update

* styling processing fuyu

* fix test

* update licence

* protect torch import

* fixup and update not doctested

* kwargs should be passed

* updates

* update the imports in the test

* protect import

* protecting imports

* protect imports in type checking

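The several "protect import" commits above refer to the convention of guarding optional heavy dependencies so the package still imports when they are absent, with type-only imports deferred behind `TYPE_CHECKING`. A sketch of that pattern, assuming a helper in the spirit of transformers' `is_torch_available` (the real implementation differs in detail):

```python
# Import-protection pattern: probe for the optional dependency without
# importing it, and keep type-only imports out of the runtime path.
import importlib.util
from typing import TYPE_CHECKING


def is_torch_available() -> bool:
    """Return True if torch can be imported, without importing it."""
    return importlib.util.find_spec("torch") is not None


if TYPE_CHECKING:
    # Evaluated only by static type checkers, never at runtime.
    import torch  # noqa: F401

if is_torch_available():
    import torch  # safe: the module is known to be installed
else:
    torch = None  # callers are expected to check is_torch_available() first
```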
* add testing decorators

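The "testing decorators" and later "move requires_backend to functions" commits point at skip-if-backend-missing helpers. A hedged sketch of the idea using only the standard library (the actual helpers live in `transformers.testing_utils` and are more elaborate):

```python
# Skip-decorator pattern: a test is skipped unless its backend is importable.
import importlib.util
import unittest


def require_backend(name: str):
    """Skip the decorated test unless the named package is importable."""
    available = importlib.util.find_spec(name) is not None
    return unittest.skipUnless(available, f"test requires {name}")


class ExampleTest(unittest.TestCase):
    @require_backend("torch")
    def test_needs_torch(self):
        import torch  # only runs when torch is installed
        self.assertTrue(torch.is_tensor(torch.zeros(1)))
```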
* protect top level import structure

* fix typo

* fix check init

* move requires_backend to functions

* Imports

* Protect types

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: ArthurZucker <arthur.zucker@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: Lysandre <lysandre@huggingface.co>
2023-10-18 15:24:11 -07:00
benchmark
bettertransformer Fixed malapropism error (#26660) 2023-10-09 11:04:57 +02:00
deepspeed fix the deepspeed tests (#26021) 2023-09-13 10:26:53 +05:30
extended remove SharedDDP as it is deprecated (#25702) 2023-10-06 16:03:11 +02:00
fixtures [WIP] add SpeechT5 model (#18922) 2023-02-03 12:43:46 -05:00
fsdp Skip TrainerIntegrationFSDP::test_basic_run_with_cpu_offload if torch < 2.1 (#26764) 2023-10-12 18:22:09 +02:00
generation [Assistant Generation] Improve Encoder Decoder (#26701) 2023-10-11 15:52:20 +02:00
models Add fuyu model (#26911) 2023-10-18 15:24:11 -07:00
optimization Make schedulers picklable by making lr_lambda fns global (#21768) 2023-03-02 12:08:43 -05:00
peft_integration [PEFT] Final fixes (#26559) 2023-10-03 14:53:09 +02:00
pipelines Add many missing spaces in adjacent strings (#26751) 2023-10-12 10:28:40 +02:00
quantization 🚨🚨🚨 [Quantization] Store the original dtype in the config as a private attribute 🚨🚨🚨 (#26761) 2023-10-16 19:56:53 +02:00
repo_utils Docstring check (#26052) 2023-10-04 15:13:37 +02:00
sagemaker Add many missing spaces in adjacent strings (#26751) 2023-10-12 10:28:40 +02:00
tokenization [Tokenizer] Fix slow and fast serialization (#26570) 2023-10-18 16:30:53 +02:00
tools Add support for for loops in python interpreter (#24429) 2023-06-26 09:58:14 -04:00
trainer enable optuna multi-objectives feature (#25969) 2023-09-12 18:01:22 +01:00
utils Fix failing MusicgenTest .test_pipeline_text_to_audio (#26586) 2023-10-06 15:53:59 +02:00
__init__.py
test_backbone_common.py [AutoBackbone] Add test (#26094) 2023-09-18 23:47:54 +02:00
test_configuration_common.py Deal with nested configs better in base class (#25237) 2023-08-04 14:56:09 +02:00
test_configuration_utils.py Deal with nested configs better in base class (#25237) 2023-08-04 14:56:09 +02:00
test_feature_extraction_common.py Split common test from core tests (#24284) 2023-06-15 07:30:24 -04:00
test_feature_extraction_utils.py Split common test from core tests (#24284) 2023-06-15 07:30:24 -04:00
test_image_processing_common.py Input data format (#25464) 2023-08-16 17:45:02 +01:00
test_image_processing_utils.py Run hub tests (#24807) 2023-07-13 15:25:45 -04:00
test_image_transforms.py Add input_data_format argument, image transforms (#25462) 2023-08-11 15:09:31 +01:00
test_modeling_common.py [FA-2] Final fix for FA2 dtype (#26846) 2023-10-18 19:48:55 +02:00
test_modeling_flax_common.py Split common test from core tests (#24284) 2023-06-15 07:30:24 -04:00
test_modeling_flax_utils.py Split common test from core tests (#24284) 2023-06-15 07:30:24 -04:00
test_modeling_tf_common.py Skip test_onnx_runtime_optimize for now (#25560) 2023-08-17 11:23:16 +02:00
test_modeling_tf_utils.py Split common test from core tests (#24284) 2023-06-15 07:30:24 -04:00
test_modeling_utils.py skip flaky hub tests (#26594) 2023-10-04 17:47:55 +02:00
test_pipeline_mixin.py Emergency PR to skip conversational tests to fix CI (#26906) 2023-10-18 15:33:43 +01:00
test_sequence_feature_extraction_common.py Fix typo (#25966) 2023-09-05 10:12:25 +02:00
test_tokenization_common.py [Tokenizer] Fix slow and fast serialization (#26570) 2023-10-18 16:30:53 +02:00
test_tokenization_utils.py Split common test from core tests (#24284) 2023-06-15 07:30:24 -04:00