Arthur
4f27ee936a
[Mamba doc] Post merge updates (#29472)
...
* post merge update
* nit
* oups
2024-03-11 09:46:24 +01:00
Fanli Lin
3f6973db06
[tests] use the correct n_gpu in TrainerIntegrationTest::test_train_and_eval_dataloaders for XPU (#29307)
...
* fix n_gpu
* fix style
2024-03-08 10:52:25 -05:00
Jonatan Kłosko
608fa5496c
Make sliding window size inclusive in eager attention (#29519)
...
* Make sliding window size inclusive in eager attention
* Fix tests
2024-03-08 12:53:17 +00:00
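The inclusive-window semantics from #29519 can be sketched in plain Python (an illustrative sketch only — `sliding_window_mask` is a hypothetical helper, not the library's tensor-based implementation):

```python
def sliding_window_mask(seq_len: int, window: int) -> list[list[bool]]:
    """True where query position i may attend key position j.

    Causal sliding-window attention: j <= i and i - j <= window.
    The window bound is *inclusive*, i.e. key j = i - window is still visible.
    """
    return [
        [j <= i and i - j <= window for j in range(seq_len)]
        for i in range(seq_len)
    ]

mask = sliding_window_mask(seq_len=5, window=2)
# row 4 attends keys 2, 3, 4 — key 2 == 4 - window is included (inclusive bound)
```

With an exclusive bound (`i - j < window`) row 4 would only see keys 3 and 4, which is the off-by-one the commit title refers to.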
Fanli Lin
1ea3ad1aec
[tests] use torch_device instead of auto for model testing (#29531)
...
* use torch_device
* skip for XPU
* Update tests/generation/test_utils.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-03-08 11:21:43 +00:00
Wang, Yi
8ee1d47203
fix image-to-text batch incorrect output issue (#29342)
...
* fix image-to-text batch incorrect output issue
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* add ci test
Signed-off-by: Wang, Yi <yi.a.wang@intel.com>
* update ci test
Signed-off-by: Wang, Yi <yi.a.wang@intel.com>
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Signed-off-by: Wang, Yi <yi.a.wang@intel.com>
2024-03-08 11:11:10 +00:00
Fanli Lin
8e589c83b6
[tests] add the missing require_sacremoses decorator (#29504)
...
* add sacremoses check
* fix style
* for FlaubertTokenizer
* HerbertTokenizer fix
* add typeHint
* Update src/transformers/testing_utils.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* make less skipped
* make quality
* remove import
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-03-08 10:13:54 +00:00
Joao Gante
bc764f4263
Generate: left-padding test, revisited (#29515)
...
* left-padding test revisited
* Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2024-03-08 10:06:46 +00:00
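The property exercised by the revisited left-padding test — decoder-only generation expects variable-length rows padded on the left, with a matching attention mask — can be sketched as follows (`left_pad` is a hypothetical illustration, not a Transformers API):

```python
def left_pad(batch: list[list[int]], pad_id: int = 0):
    """Left-pad variable-length token-id rows and build the matching attention mask."""
    width = max(len(row) for row in batch)
    ids, mask = [], []
    for row in batch:
        pad = width - len(row)
        ids.append([pad_id] * pad + row)     # padding goes on the LEFT
        mask.append([0] * pad + [1] * len(row))
    return ids, mask

ids, mask = left_pad([[1, 2], [3, 4, 5]])
# ids  -> [[0, 1, 2], [3, 4, 5]]
# mask -> [[0, 1, 1], [1, 1, 1]]
```

Left padding keeps every row's last real token in the final position, so greedy continuation of a padded row should match the unpadded row — which is essentially the invariant such a test asserts.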
Nick DeGroot
b338a6c3b8
Fix VisionEncoderDecoder Positional Arg (#29497)
...
* 🐛 Fix vision encoder decoder positional arg
* ✅ Add test for VisionEncoderDecoder with LayoutLMv3 encoder
---------
Co-authored-by: Nick DeGroot <1966472+nickthegroot@users.noreply.github.com>
2024-03-07 20:45:51 +00:00
amyeroberts
4ed9ae623d
test_generation_config_is_loaded_with_model - fall back to pytorch model for now (#29521)
...
* Fall back to pytorch model for now
* Fix up
2024-03-07 17:30:28 +00:00
Raushan Turganbay
923733c22b
Flava multimodal add attention mask (#29446)
...
* flava multimodal add attn mask
* make style
* check mask is not None
2024-03-07 12:45:47 +01:00
Lysandre Debut
f6133d767a
Revert "Automatic safetensors conversion when lacking these files (#2… (#29507)
...
Revert "Automatic safetensors conversion when lacking these files (#29390)"
This reverts commit a69cbf4e64.
2024-03-07 12:12:41 +01:00
Joao Gante
ffe60fdcd6
v4.39 deprecations 🧼 (#29492)
2024-03-07 10:44:43 +00:00
regisss
979fccc90f
Enable BLIP for auto VQA (#29499)
...
* Enable BLIP for auto VQA
* Make style
* Add VQA to BLIP pipeline tests
2024-03-07 10:28:01 +01:00
Joao Gante
700d48fb2d
Generate: get generation mode from the generation config instance 🧼 (#29441)
2024-03-06 11:18:35 +00:00
Joao Gante
41f7b7ae4b
Generate: add tests for caches with pad_to_multiple_of (#29462)
2024-03-06 10:57:04 +00:00
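The sizing rule behind a `pad_to_multiple_of` cache is simple rounding up to the next multiple; a minimal sketch (the helper name is assumed, not taken from the library):

```python
def pad_to_multiple(length: int, multiple: int) -> int:
    """Smallest multiple of `multiple` that is >= length.

    Caches padded this way grow in fixed-size steps, which keeps tensor
    shapes stable across decoding steps (useful e.g. for compiled graphs).
    """
    return ((length + multiple - 1) // multiple) * multiple

pad_to_multiple(13, 8)   # -> 16
pad_to_multiple(16, 8)   # -> 16 (already aligned, no extra padding)
```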
Fanli Lin
00bf44270f
[FIX] offload_weight() takes from 3 to 4 positional arguments but 5 were given (#29457)
...
* use require_torch_gpu
* enable on XPU
* fix
2024-03-06 03:58:42 +01:00
Lysandre Debut
a69cbf4e64
Automatic safetensors conversion when lacking these files (#29390)
...
* Automatic safetensors conversion when lacking these files
* Remove debug
* Thread name
* Typo
* Ensure that raises do not affect the main thread
2024-03-05 13:37:55 +01:00
Arthur
fb1c62e973
[Add Mamba] Adds support for the Mamba models (#28094)
...
* initial-commit
* start cleaning
* small nits
* small nits
* current updates
* add kernels
* small refactoring little step
* add comments
* styling
* nit
* nits
* Style
* Small changes
* Push dummy mambda simple slow
* nit
* Use original names
* Use original names and remove norm
* Updates for inference params
* Style nd updates
* nits
* Match logits
* Add a test
* Add expected generated text
* nits doc, imports and styling
* style
* oups
* dont install kernels, invite users to install the required kernels
* let use use the original packages
* styling
* nits
* fix some copieds
* update doc
* fix-copies
* styling done
* nits
* fix import check
* run but wrong cuda ress
* mamba CUDA works :)
* fix the fast path
* config naming nits
* conversion script is not required at this stage
* finish fixing the fast path: generation make sense now!
* nit
* Let's start working on the CIs
* style
* better style
* more nits
* test nit
* quick fix for now
* nits
* nit
* nit
* nit
* nits
* update test rest
* fixup
* update test
* nit
* some fixes
* nits
* update test values
* fix styling
* nit
* support peft
* integrations tests require torchg
* also add slow markers
* styling
* chose forward wisely
* nits
* update tests
* fix gradient checkpointing
* fixup
* nit
* fix doc
* check copies
* fix the docstring
* fix some more tests
* style
* fix beam search
* add init schene
* update
* nit
* fix
* fixup the doc
* fix the doc
* fixup
* tentative update but slow is no longer good
* nit
* should we always use float32?
* nits
* revert wrong changes
* res in float32
* cleanup
* skip fmt for now
* update generation values
* update test values running original model
* fixup
* update tests + rename inference_params to cache_params + make sure training does not use cache_params
* small nits
* more nits
* fix final CIs
* style
* nit doc
* I hope final doc nits
* nit
* 🫠
* final touch!
* fix torch import
* Apply suggestions from code review
Co-authored-by: Lysandre Debut <hi@lysand.re>
* Apply suggestions from code review
* fix fix and fix
* fix base model prefix!
* nit
* Update src/transformers/models/mamba/__init__.py
* Update docs/source/en/model_doc/mamba.md
Co-authored-by: Lysandre Debut <hi@lysand.re>
* nit
---------
Co-authored-by: Lysandre Debut <hi@lysand.re>
2024-03-05 20:01:06 +09:00
Arthur
4d892b7297
[Udop imports] Processor tests were not run. (#29456)
...
* fix udop imports
* sort imports
2024-03-05 11:01:08 +01:00
Arthur
57d007b912
Revert-commit 0d52f9f582 (#29455)
...
* style
* revert with RP
* nit
* exact revert
2024-03-05 10:39:42 +01:00
Arthur Zucker
0d52f9f582
more fix
2024-03-05 18:27:25 +09:00
Arthur
132852203a
[UdopTokenizer] Fix post merge imports (#29451)
...
* update
* ...
* nits
* arf
* 🧼
* beat the last guy
* style everyone
2024-03-05 09:42:52 +01:00
Fanli Lin
fa7f3cf336
[tests] enable test_pipeline_accelerate_top_p on XPU (#29309)
...
* use torch_device
* Update tests/pipelines/test_pipelines_text_generation.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* fix style
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2024-03-05 09:16:05 +01:00
Ilyas Moutawwakil
4fc708f98c
Exllama kernels support for AWQ models (#28634)
...
* added exllama kernels support for awq models
* doc
* style
* Update src/transformers/modeling_utils.py
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* refactor
* moved exllama post init to after device dispatching
* bump autoawq version
* added exllama test
* style
* configurable exllama kernels
* copy exllama_config from gptq
* moved exllama version check to post init
* moved to quantization dockerfile
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-03-05 03:22:48 +01:00
NielsRogge
836921fdeb
Add UDOP (#22940)
...
* First draft
* More improvements
* More improvements
* More fixes
* Fix copies
* More improvements
* More fixes
* More improvements
* Convert checkpoint
* More improvements, set up tests
* Fix more tests
* Add UdopModel
* More improvements
* Fix equivalence test
* More fixes
* Redesign model
* Extend conversion script
* Use real inputs for conversion script
* Add image processor
* Improve conversion script
* Add UdopTokenizer
* Add fast tokenizer
* Add converter
* Update README's
* Add processor
* Add fully fledged tokenizer
* Add fast tokenizer
* Use processor in conversion script
* Add tokenizer tests
* Fix one more test
* Fix more tests
* Fix tokenizer tests
* Enable fast tokenizer tests
* Fix more tests
* Fix additional_special_tokens of fast tokenizer
* Fix tokenizer tests
* Fix more tests
* Fix equivalence test
* Rename image to pixel_values
* Rename seg_data to bbox
* More renamings
* Remove vis_special_token
* More improvements
* Add docs
* Fix copied from
* Update slow tokenizer
* Update fast tokenizer design
* Make text input optional
* Add first draft of processor tests
* Fix more processor tests
* Fix decoder_start_token_id
* Fix test_initialization
* Add integration test
* More improvements
* Improve processor, add test
* Add more copied from
* Add more copied from
* Add more copied from
* Add more copied from
* Remove print statement
* Update README and auto mapping
* Delete files
* Delete another file
* Remove code
* Fix test
* Fix docs
* Remove asserts
* Add doc tests
* Include UDOP in exotic model tests
* Add expected tesseract decodings
* Add sentencepiece
* Use same design as T5
* Add UdopEncoderModel
* Add UdopEncoderModel to tests
* More fixes
* Fix fast tokenizer
* Fix one more test
* Remove parallelisable attribute
* Fix copies
* Remove legacy file
* Copy from T5Tokenizer
* Fix rebase
* More fixes, copy from T5
* More fixes
* Fix init
* Use ArthurZ/udop for tests
* Make all model tests pass
* Remove UdopForConditionalGeneration from auto mapping
* Fix more tests
* fixups
* more fixups
* fix the tokenizers
* remove un-necessary changes
* nits
* nits
* replace truncate_sequences_boxes with truncate_sequences for fix-copies
* nit current path
* add a test for input ids
* ids that we should get taken from c9f7a32f57
* nits converting
* nits
* apply ruff
* nits
* nits
* style
* fix slow order of addition
* fix udop fast range as well
* fixup
* nits
* Add docstrings
* Fix gradient checkpointing
* Update code examples
* Skip tests
* Update integration test
* Address comment
* Make fixup
* Remove extra ids from tokenizer
* Skip test
* Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update year
* Address comment
* Address more comments
* Address comments
* Add copied from
* Update CI
* Rename script
* Update model id
* Add AddedToken, skip tests
* Update CI
* Fix doc tests
* Do not use Tesseract for the doc tests
* Remove kwargs
* Add original inputs
* Update casting
* Fix doc test
* Update question
* Update question
* Use LayoutLMv3ImageProcessor
* Update organization
* Improve docs
* Update forward signature
* Make images optional
* Remove deprecated device argument
* Add comment, add add_prefix_space
* More improvements
* Remove kwargs
---------
Co-authored-by: ArthurZucker <arthur.zucker@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2024-03-04 18:49:02 +01:00
Donggeun Yu
ed74d97871
DeformableDETR support bfloat16 (#29232)
...
* Update ms_deform_attn_cuda.cu
* Update ms_deform_attn_cuda.cuh
* Update modeling_deformable_detr.py
* Update src/transformers/models/deformable_detr/modeling_deformable_detr.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update modeling_deformable_detr.py
* python utils/check_copies.py --fix_and_overwrite
* Fix dtype missmatch error
* Update test_modeling_deformable_detr.py
* Update test_modeling_deformable_detr.py
* Update modeling_deformable_detr.py
* Update modeling_deformable_detr.py
* Support DeformableDETR with bfloat16
* Add test code
* Use AT_DISPATCH_FLOATING_TYPES_AND2
Use AT_DISPATCH_FLOATING_TYPES_AND2
* Update tests/models/deformable_detr/test_modeling_deformable_detr.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/deformable_detr/test_modeling_deformable_detr.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Fix not found require_torch_bf16 function
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-03-04 14:18:09 +00:00
Zach Mueller
1681a6d452
🚨 Fully revert atomic checkpointing 🚨 (#29370)
...
Fully revert atomic checkpointing
2024-03-04 06:17:42 -05:00
Nick DeGroot
8ef9862864
Fix OneFormer post_process_instance_segmentation for panoptic tasks (#29304)
...
* 🐛 Fix oneformer instance post processing when using panoptic task type
* ✅ Add unit test for oneformer instance post processing panoptic bug
---------
Co-authored-by: Nick DeGroot <1966472+nickthegroot@users.noreply.github.com>
2024-03-04 11:04:49 +00:00
Fanli Lin
aade711d1e
[tests] enable automatic speech recognition pipeline tests on XPU (#29308)
...
* use require_torch_gpu
* enable on XPU
2024-03-04 08:24:38 +01:00
Zach Mueller
1a7c117df9
Fix deprecated arg issue (#29372)
...
* Fix deprecated arg issue
* Trainer check too
* Check for dict or dataclass
* Simplify, make config always AcceleratorConfig
* Upstream to Trainer
2024-03-01 12:00:29 -05:00
Marc Sun
cec773345a
Fix llama + gemma accelerate tests (#29380)
2024-03-01 10:32:36 -05:00
amyeroberts
f1b1379f37
[YOLOS] Fix - return padded annotations (#29300)
...
* Fix yolos processing
* Add back slow marker - protects for pycocotools in slow
* Slow decorator goes above copied from header
2024-03-01 09:42:13 +00:00
Sanchit Gandhi
0a0a279e99
🚨 🚨 [Whisper Tok] Update integration test (#29368)
...
* [Whisper Tok] Update integration test
* make style
2024-03-01 09:22:31 +00:00
Younes Belkada
50db7ca4e8
FIX [quantization / ESM] Fix ESM 8bit / 4bit with bitsandbytes (#29329)
...
* fix ESM 8bit
* Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* fixup
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2024-03-01 03:01:53 +01:00
Yih-Dar
44fe1a1cc4
Avoid using unnecessary get_values(MODEL_MAPPING) (#29362)
...
* more fixes
* more fixes
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-02-29 17:19:17 +08:00
Younes Belkada
b647acdb53
FIX [CI] require_read_token in the llama FA2 test (#29361)
...
Update test_modeling_llama.py
2024-02-29 04:49:01 +01:00
Younes Belkada
8d8ac9c2df
FIX [CI]: Fix failing tests for peft integration (#29330)
...
fix failing tests for peft integration
2024-02-29 03:56:16 +01:00
Younes Belkada
1aee9afd1c
FIX [CI / starcoder2] Change starcoder2 path to correct one for slow tests (#29359)
...
change starcoder2 path to correct one
2024-02-29 03:52:13 +01:00
Arthur
8a8a0a4ae0
[Llama ROPE] Fix torch export but also slow downs in forward (#29198)
...
* remove control flow
* update gptneox
* update ....
* nits
* Actually let's just break. Otherwise we are silently failing which imo is not optimal
* version BC
* fix tests
* fix eager causal
* nit
* add a test
* style
* nits
* nits
* more nits for the test
* update and fix
* make sure cuda graphs are not skipped
* read token is needed for meta llama
* update!
* fiixup
* compile test should be slow
* fix thet fix copies
* stle 🫠
2024-02-28 10:45:53 +01:00
Younes Belkada
ad00c482c7
FIX [Gemma / CI] Make sure our runners have access to the model (#29242)
...
* pu hf token in gemma tests
* update suggestion
* add to flax
* revert
* fix
* fixup
* forward contrib credits from discussion
---------
Co-authored-by: ArthurZucker <ArthurZucker@users.noreply.github.com>
2024-02-28 06:25:23 +01:00
RaymondLi0
63caa370e6
Starcoder2 model - bis (#29215)
...
* Copy model
* changes
* misc
* fixes
* add embed and residual dropout (#30)
* misc
* remove rms norm and gated MLP
* remove copied mentions where its not a copy anymore
* remove unused _shape
* copied from mistral instead
* fix copies
* fix copies
* add not doctested
* fix
* fix copyright
* Update docs/source/en/model_doc/starcoder2.md
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/models/starcoder2/configuration_starcoder2.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/models/starcoder2/configuration_starcoder2.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* fix doc
* revert some changes
* add fa2 tests
* fix styling nit
* fix
* push dummy docs
---------
Co-authored-by: Joel Lamy-Poirier <joel.lamy-poirier@servicenow.com>
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2024-02-28 01:24:34 +01:00
Raushan Turganbay
ddf7ac4237
Token level timestamps for long-form generation in Whisper (#29148)
2024-02-27 18:15:26 +00:00
Andrei Panferov
e3fc90ae68
Cleaner Cache dtype and device extraction for CUDA graph generation for quantizers compatibility (#29079)
...
* input_layernorm as the beacon of hope
* cleaner dtype extraction
* AQLM + CUDA graph test
* is available check
* shorter text test
2024-02-27 09:32:39 +01:00
FredericOdermatt
871ba71dfa
GenerationConfig validate both constraints and force_words_ids (#29163)
...
GenerationConfig validate both options for constrained decoding: constraints and force_words_ids
2024-02-27 01:43:52 +01:00
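The idea behind validating *both* constrained-decoding options — not just one of them — can be sketched with a toy config (class and error message are hypothetical; they mirror the shape of the check, not the exact Transformers code):

```python
class ToyGenerationConfig:
    """Minimal stand-in for a generation config with constrained decoding."""

    def __init__(self, num_beams=1, constraints=None, force_words_ids=None):
        self.num_beams = num_beams
        self.constraints = constraints
        self.force_words_ids = force_words_ids

    def validate(self):
        # Check BOTH constrained-decoding options; constrained decoding
        # only makes sense with beam search.
        for name in ("constraints", "force_words_ids"):
            if getattr(self, name) is not None and self.num_beams <= 1:
                raise ValueError(f"`{name}` requires beam search (set num_beams > 1)")

ToyGenerationConfig(num_beams=4, force_words_ids=[[1, 2]]).validate()  # passes
```

Before the fix described by the title, only one of the two attributes was covered, so an invalid `force_words_ids` setup could slip through validation.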
Eduardo Pacheco
3fcfbe7549
Adding SegGPT (#27735)
...
* First commit
* Improvements
* More improvements
* Converted original checkpoint to HF checkpoint
* Fix style
* Fixed forward
* More improvements
* More improvements
* Update src/transformers/models/seggpt/modeling_seggpt.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Remove asserts
* Remove unnecessary attributes
* Changed model name to camel case
* Improve forward doc
* Improve tests
* More improvements
* Fix copies
* Fix doc
* Make SegGptImageProcessor more flexible
* Added few-shot test
* Fix style
* Update READMEs and docs
* Update READMEs
* Make inputs required
* Add SegGptForImageSegmentation
* Make tests pass
* Rename to out_indicies
* Update src/transformers/models/seggpt/image_processing_seggpt.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/seggpt/image_processing_seggpt.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Fixed naming convention
* Copying SegGptMlp from modeling_sam.py
* Some minor improvements
* Remove mlp_ratio
* Fix docstrings
* Fixed docstring match
* Objects defined before use
* Storing only patch_size and beta for SegGptLoss
* removed _prepare_inputs method
* Removed modified from headers
* Renamed to output_indicies
* Removed unnecessary einsums
* Update tests/models/seggpt/test_modeling_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/seggpt/test_modeling_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/seggpt/test_modeling_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/image_processing_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/image_processing_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/image_processing_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/modeling_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/modeling_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Fixing issues
* Raise error as soon as possible
* More fixes
* Fix merge
* Added palette to SegGptImageProcessor
* Fixed typo
* Fixed shape typo
* Added permute before doing palette to class mapping
* Fixed style
* Fixed and added tests
* Fixed docstrings
* Matching SegFormer API for post_processing_semantic_segmentation
* Fixed copies
* Fixed SegGptImageProcessor to handle both binary and RGB masks
* Updated docstrings of SegGptImageProcessor
* Update src/transformers/models/seggpt/image_processing_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/model_doc/seggpt.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/configuration_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/convert_seggpt_to_hf.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/image_processing_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/modeling_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/image_processing_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/image_processing_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/image_processing_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/modeling_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/seggpt/test_image_processing_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/seggpt/test_modeling_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/modeling_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/modeling_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/modeling_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Object definitions above & fix style
* Renamed output_indices to intermediate_feature_indices
* Removed unnecessary check on bool_masked_pos
* Loss first in the outputs
* Added validation for do_normalize
* Improved SegGptImageProcessor and added new tests
* Added comment
* Added docstrings to SegGptLoss
* Reimplemented ensemble condition logic in SegGptEncoder
* Update src/transformers/models/seggpt/__init__.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/seggpt/modeling_seggpt.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/seggpt/convert_seggpt_to_hf.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/seggpt/configuration_seggpt.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Updated docstrings to use post_process_semantic_segmentation
* Fixed typo on docstrings
* moved pixel values test to test_image_processing_seggpt
* Addressed comments
* Update src/transformers/models/seggpt/configuration_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/image_processing_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/configuration_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/seggpt/modeling_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Updated docstrings for SegGptLoss
* Address comments
* Added SegGpt example to model docs
* Update src/transformers/models/seggpt/modeling_seggpt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* moved patchify and unpatchify
* Rename checkpoint
* Renamed intermediate_features to intermediate_hidden_states for consistency
* Update src/transformers/models/seggpt/configuration_seggpt.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Replaced post_process_masks for post_process_semantic_segmentation in the docs
---------
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Niels <niels.rogge1@gmail.com>
Co-authored-by: Eduardo Pacheco <eduardo.pacheco@limehome.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-02-26 18:17:19 +00:00
Raushan Turganbay
8f2f0f0f85
Track each row separately for stopping criteria (#29116)
2024-02-26 16:06:16 +00:00
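Per-row stopping means each sequence in the batch keeps its own "unfinished" flag and generation only halts once every flag is cleared; a minimal sketch (function name assumed for illustration):

```python
def update_unfinished(unfinished: list[bool], per_row_done: list[bool]) -> list[bool]:
    """Clear the flag for rows whose stopping criterion fired this step.

    A row that finishes stays finished; generation continues while any
    row is still unfinished (any(unfinished) is True).
    """
    return [u and not d for u, d in zip(unfinished, per_row_done)]

unfinished = [True, True, True]
unfinished = update_unfinished(unfinished, [False, True, False])
# -> [True, False, True]; keep generating while any(unfinished)
```

A batch-level criterion, by contrast, would stop (or keep going) for all rows at once, which is the behaviour this change moves away from.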
Merve Noyan
7c4995f93d
Add feature extraction mapping for automatic metadata update (#28944)
...
* add feature extraction mapping
* added prefix
* ruff check
* minor fix
* Update modeling_auto.py
* fix typo
* remove prefix to make variable public/importable
* Update src/transformers/models/auto/modeling_auto.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fixes
* addressed comments
* nit
* fix-copies
* remove from tests
* this should fix
* Update tests/models/convnextv2/test_modeling_convnextv2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* nits
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-02-26 10:35:37 +00:00
Matt
371b572e55
Allow remote code repo names to contain "." (#29175)
...
* stash commit
* stash commit
* It works!
* Remove unnecessary change
* We don't actually need the cache_dir!
* Update docstring
* Add test
* Add test with custom cache dir too
* Update model repo path
2024-02-23 12:46:31 +00:00
Sanchit Gandhi
2a9b1f80c4
[Gemma] Fix eager attention (#29187)
...
* fix modelling code
* add tests
* fix tests
* add some logit tests
* style
* fix fix
2024-02-22 01:07:52 +01:00
Arthur
594c1277b2
[gemma] Adds support for Gemma 💎 (#29167)
...
* inital commit
* update
* update conversion checkpoint
* update conversion script
* nits
* some fixes
* nits
* merge
* fix permute
* nits
* fix
* nits
* nits
* nits
* fix rope
* fix both rope
* nites
* style
* make sure flax works
* fix flax init code
* fix foward
* nits
* print flax generation out
* current code
* nits
* SIIIIIIIIIIIIIIIIIII
* update
* add new tokenizer
* correct fast tokenizer
* fix conversion
* more comments
* fix modeling and conversion
* nits and nits
* nits testing
* add some tokenization tests
* add some edge cases
* add slow tests and fix them
* fixup
* fix copies for modeling
* fix copies
* add 7B slow tests
* fix
* fix
* fix tests
* make tokenizer cis go green
* styling
* last tokenizer nits
* update jax tests
* fix flax for 7b
* add jit testing 🤗
* cleanups
* isolated nit, inv_freq for rotary_emb.inv_freq
* propagate to jax
* Apply suggestions from code review
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* adjust test
* fix conversion script
* change name
* correct file names
* update conversion script
* Fix bos and eos token ids in the model configuration (#3)
* update modelling
* update conversion script
* add static cache for gemma
* fix sdpa generate
* fix batched
* multiple fixes
* fix FA2
* final fix
* Rename a few missing strings and filenames (#4)
* merge with upstream main
* fix copies
* fix copies
* fix fixup
* fix fixup
* fix
* fix
* final tests
* fix fx gemma tests
* fix fx bf16/fp16 tests
* update slow fx tests
* fx slow tests: one logits, one generation
* move jit test standalone
* Apply suggestions from code review
* nits
* tokenizer updates
* more tokenization updates: custom GemmaSentencepieceExtrator
* style
* Update src/transformers/cache_utils.py
* Update src/transformers/models/gemma/__init__.py
* Update tests/models/gemma/test_modeling_flax_gemma.py
* small nits
* style
* update tokenization test
* fix the rotary embedding
* with style
* fix slow tests
* WARNING this commit might be very important for precisions
* Update tests/models/gemma/test_modeling_flax_gemma.py
* Update src/transformers/models/gemma/configuration_gemma.py
Co-authored-by: Lysandre Debut <hi@lysand.re>
* Update src/transformers/models/gemma/modeling_flax_gemma.py
Co-authored-by: Lysandre Debut <hi@lysand.re>
* small nits here and there!
* forgotten nit
* remove on the fly computation of inv_freq
* revert previous change, let's be safe and for now re-compute freq cis to make sure it's in float
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/transformers/models/gemma/convert_gemma_weights_to_hf.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/transformers/models/gemma/convert_gemma_weights_to_hf.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_flax_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_tokenization_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_tokenization_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_tokenization_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_tokenization_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* nit conversion script link
* fix some tests
* add not doctest and pr doctest
* repo consistency
* fix last CIs 🚀
* update all readmes
---------
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: sanchit-gandhi <sanchit@huggingface.co>
Co-authored-by: Lysandre Debut <hi@lysand.re>
2024-02-21 14:21:28 +01:00