* fix for ragged list
* unpin numba
* make style
* np.object -> object
* propagate changes to tokenizer as well
* np.long -> "long"
* revert tokenization changes
* check with tokenization changes
* list/tuple logic
* catch numpy
* catch else case
* clean up
* up
* better check
* trigger ci
* Empty commit to trigger CI
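For context, a minimal sketch of the NumPy 1.24 removals the commits above work around (illustrative only, not the repo's code):
```python
import numpy as np

# np.object was deprecated in NumPy 1.20 and removed in 1.24: ragged
# (non-rectangular) lists now need the builtin `object` dtype spelled out.
ragged = [[1, 2, 3], [4, 5]]
arr = np.asarray(ragged, dtype=object)  # was: dtype=np.object

# np.long is gone too; the string alias keeps working across versions.
ids = np.asarray([1, 2, 3]).astype("long")
```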
* mixed precision support via accelerate
* fix issues
* fix for the sharded ddp case
* fix flax and tf failing tests
* refactor the place to create `Accelerator` object

* move ddp prep to accelerate
* fix 😅
* resolving comments
* move fsdp handling to accelerate
* fixes
* fix saving
* shift torch dynamo handling to accelerate
* shift deepspeed integration and save & load utils to accelerate
* fix accelerate launcher support
* oops
* fix 🐛
* save ckpt fix
* Trigger CI
* nasty 🐛😅
* as deepspeed needs grad_acc fixes, transfer grad_acc to accelerate
* make tests happy
* quality ✨
* loss tracked needs to account for grad_acc
* fixing the deepspeed tests
* quality ✨
* 😅😅😅
* tests 😡
* quality ✨
* Trigger CI
* resolve comments and fix the issue with the previous merge from branch
* Trigger CI
* accelerate took over deepspeed integration
---------
Co-authored-by: Stas Bekman <stas@stason.org>
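For reference, a minimal sketch of the Accelerate primitives the Trainer is moved onto above; this is illustrative, not the Trainer internals, and fp16 assumes a GPU:
```python
import torch
from accelerate import Accelerator

# One object now owns mixed precision, DDP/FSDP/DeepSpeed wrapping, and
# gradient accumulation.
accelerator = Accelerator(mixed_precision="fp16", gradient_accumulation_steps=4)

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
# prepare() applies the right wrapper for the launch configuration, keeping
# the training loop itself framework-agnostic.
model, optimizer = accelerator.prepare(model, optimizer)

for batch in [torch.randn(8, 10) for _ in range(4)]:
    with accelerator.accumulate(model):  # handles grad_acc step boundaries
        loss = model(batch.to(accelerator.device)).sum()
        accelerator.backward(loss)  # replaces loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```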
* Add tf code for efficientformer
* Fix return dict bug - return last hidden state after last stage
* Fix corresponding return dict bug
* Override test tol
* Change default values of training to False
* Set training to default False X3
* Rm axis from ln
* Set init in dense projection
* Rm debug stuff
* Make style; all tests pass.
* Modify year to 2023
* Fix attention biases code
* Update the shape list logic
* Add a batch norm eps config
* Remove extra comments in test files
* Add conditional attn and hidden states return for serving output
* Change channel dim checking logic
* Add exception for the with-teacher model in training mode
* Revert layer count for now
* Add layer count for conditional layer naming
* Transpose for conv happens only in main layer
* Make tests smaller
* Make style
* Update doc
* Rm from_pt
* Change to actual expect image class label
* Remove stray print in tests
* Update image processor test
* Remove the old serving output logic
* Make style
* Make style
* Complete test
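A hedged usage sketch for the TF EfficientFormer port added above; the checkpoint name is the PT one and, per the "Rm from_pt" commit, TF weights are assumed to be on the Hub:
```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, TFEfficientFormerForImageClassification

ckpt = "snap-research/efficientformer-l1-300"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = TFEfficientFormerForImageClassification.from_pretrained(ckpt)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs, training=False).logits  # training now defaults to False
print(model.config.id2label[int(logits.numpy().argmax())])
```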
* Let's try autodetecting serving sigs
* Don't clobber existing sigs
* Change shapes for multiplechoice models
* Make default dummy inputs smarter too
* Fix missing f-string
* Let's YOLO a serving output too
* Read __class__.__name__ properly
* Don't just pass naked lists in there and expect it to be okay
* Code cleanup
* Update default serving sig
* Clearer error messages
* Further updates to the default serving output
* make fixup
* Update the serving output a bit more
* Cleanups and renames, raise errors appropriately when we can't infer inputs
* More renames
* we're building in a functional context again, yolo
* import DUMMY_INPUTS from the right place
* import DUMMY_INPUTS from the right place
* Support cross-attention in the dummies
* Support cross-attention in the dummies
* Complete removal of dummy/serving overrides in BERT
* Complete removal of dummy/serving overrides in RoBERTa
* Obliterate lots and lots of serving sig and dummy overrides
* merge type hint changes
* Fix for token_type_ids with vocab_size 1
* Add missing property decorator
* Fix T5 and hopefully some models that take conv inputs
* More signature pruning
* Fix T5's signature
* Fix Wav2Vec2 signature
* Fix LongformerForMultipleChoice input signature
* Fix BLIP and LED
* Better default serving output error handling
* Fix BART dummies
* Fix dummies for cross-attention, esp encoder-decoder models
* Fix visionencoderdecoder signature
* Fix BLIP serving output
* Small tweak to BART dummies
* Cleanup the ugly parameter inspection line that I used in a few places
* committed a breakpoint again
* Move the text_dims check
* Remove blip_text serving_output
* Add decoder_input_ids to the default input sig
* Remove all the manual overrides for encoder-decoder model signatures
* Tweak longformer/led input sigs
* Tweak default serving output
* output.keys() -> output
* make fixup
* Rework TF type hints to use | None instead of Optional[] for tf.Tensor
* Rework TF type hints to use | None instead of Optional[] for tf.Tensor
* Don't forget the imports
* Add the imports to tests too
* make fixup
* Refactor tests that depended on get_type_hints
* Better test refactor
* Fix an old hidden bug in the test_keras_fit input creation code
* Fix for the Deit tests
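What the `| None` rework looks like in practice (a sketch, not a real model's signature); the `__future__` import defers annotation evaluation so the new syntax stays valid on Python < 3.10, which is also why the tests above could no longer rely on `get_type_hints`:
```python
from __future__ import annotations  # must come before everything else

import tensorflow as tf


def call(
    input_ids: tf.Tensor | None = None,  # was: Optional[tf.Tensor]
    attention_mask: tf.Tensor | None = None,
    training: bool = False,
) -> tf.Tensor | None:
    ...
```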
* Added lion and paged optimizers and made original tests pass.
* Added tests for paged and lion optimizers.
* Added and fixed optimizer tests.
* Style and quality checks.
---------
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
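A hedged sketch of how the new optimizers are selected; the option strings are assumed from the `optim` enum and need a recent `bitsandbytes`:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    optim="paged_adamw_8bit",  # also e.g. "lion_8bit", "lion_32bit", "paged_lion_8bit"
)
```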
* Added lion and paged optimizers and made original tests pass.
* Added tests for paged and lion optimizers.
* Added and fixed optimizer tests.
* Style and quality checks.
* Initial draft. Some tests fail.
* Fixed dtype bug.
* Fixed bug caused by torch_dtype='auto'.
* All test green for 8-bit and 4-bit layers.
* Added fix for fp32 layer norms and bf16 compute in LLaMA.
* Initial draft. Some tests fail.
* Fixed dtype bug.
* Fixed bug caused by torch_dtype='auto'.
* All test green for 8-bit and 4-bit layers.
* Added lion and paged optimizers and made original tests pass.
* Added tests for paged and lion optimizers.
* Added and fixed optimizer tests.
* Style and quality checks.
* Fixing issues for PR #23479.
* Added fix for fp32 layer norms and bf16 compute in LLaMA.
* Reverted variable name change.
* Initial draft. Some tests fail.
* Fixed dtype bug.
* Fixed bug caused by torch_dtype='auto'.
* All test green for 8-bit and 4-bit layers.
* Added lion and paged optimizers and made original tests pass.
* Added tests for paged and lion optimizers.
* Added and fixed optimizer tests.
* Style and quality checks.
* Added missing tests.
* Fixup changes.
* Added fixup changes.
* Missed some variables to rename.
* revert trainer tests
* revert test trainer
* another revert
* fix tests and safety checkers
* protect import
* simplify a bit
* Update src/transformers/trainer.py
* few fixes
* add warning
* replace with `load_in_kbit = load_in_4bit or load_in_8bit`
* fix test
* fix tests
* this time fix tests
* safety checker
* add docs
* revert torch_dtype
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* multiple fixes
* update docs
* version checks and multiple fixes
* replace `is_loaded_in_kbit`
* replace `load_in_kbit`
* change methods names
* better checks
* oops
* oops
* address final comments
---------
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
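A minimal sketch of the 4-bit loading path these commits add, assuming a CUDA machine with `bitsandbytes` installed; `load_in_4bit` mirrors the existing `load_in_8bit` flag, and the two are unified internally by the "kbit" checks renamed above:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # bf16 compute; layer norms stay fp32
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # any causal LM checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
```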
* First commit
* Add auto-translation with GPT-4
* make fixup
* Add a functional layernorm for TF
* Add all the auxiliary imports etc.
* Add the extra processor and tests
* rebase to main
* Add all the needed fixes to the GPT code
* make fixup
* Make convolutions channels-last so they run on CPU
* make fixup
* Fix final issues
* Fix other models affected by test change
* Clarify comment on the sparse_prompt_embeddings check
* Refactor functional_layernorm, use shape_list in place of .shape in some places
* Remove deprecated torch-alike code
* Update tests/models/sam/test_modeling_tf_sam.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/sam/test_modeling_tf_sam.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Refactor processor with common methods and separated private methods
* make fixup
* Quietly delete the file that didn't do anything (sorry Sylvain)
* Refactor the processor tests into one file
* make fixup
* Clean up some unnecessary indirection
* Fix TF mask postprocessing
* Add more processor equivalence tests
* Refactor generate_crop_boxes to use framework-neutral np code
* Make the serving output correctly conditional
* Fix error message line length
* Use dict keys rather than indices internally in both TF and PT SAM call/forward
* Return dicts internally in the call/forward methods
* Revert changes to common tests and just override check_pt_tf_outputs
* Revert changes to other model tests
* Clarify comments for functional layernorm
* Add missing transpose from PT code
* Removed unused copied from in PT code
* Remove overrides for tests that don't exist in TF
* Fix transpose and update tests for PT and TF to check pred_masks
* Add training flag
* Update tests to use TF checkpoints
* Update index.mdx
* Add missing cross-test decorator
* Remove optional extra asterisks
* Revert return_dict changes in PT code
* Update src/transformers/models/sam/modeling_tf_sam.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Remove None return annotations on init methods
* Update tests/models/sam/test_processor_sam.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Fix input_boxes shapes
* make fixup
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
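A hedged usage sketch for the TF SAM port; the tests above compare `pred_masks` across PT and TF, so that is the field shown here:
```python
import requests
from PIL import Image
from transformers import SamProcessor, TFSamModel

processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
model = TFSamModel.from_pretrained("facebook/sam-vit-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# One image, one prompt point (x, y); the input_points/input_boxes shapes
# were among the final fixes above.
inputs = processor(image, input_points=[[[450, 600]]], return_tensors="tf")
outputs = model(**inputs)
masks = outputs.pred_masks
```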
* initial working additions
* clean and rename, add cond stripping initial prompt to decode
* cleanup, edit create_initial_prompt_ids, add tests
* repo consistency, flip order of conditional
* fix error, move the processor fn to the tokenizer
* repo consistency, update test ids to corresponding tokenizer
* use convert_tokens_to_ids not get_vocab...
* use actual conditional in generate
* make style
* initial address comments
* initial working add new params to pipeline
* first draft of sequential generation for condition_on_previous_text
* add/update tests, make compatible with timestamps
* make compatible with diff. input kwargs and max length
* add None check
* add temperature check
* flip temp check operand
* refocusing to prev pr scope
* remove the params too
* make style
* edits, move max length incorporating prompt to whisper
* address comments
* remove asr pipeline prompt decoding, fix indexing
* address comments (more tests, validate prompt)
* un-comment out tests (from debug)
* remove old comment
* address comments
* fix typo
* remove timestamp token from test
* make style
* cleanup
* copy method to fast tokenizer, set max_new_tokens for test
* prompt_ids type just pt
* address Amy's comments
* make style
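A hedged sketch of the prompting API these commits land: `get_prompt_ids` lives on the (slow and fast) tokenizer and is exposed via the processor, `generate` folds the prompt into its max-length bookkeeping, and decoding strips it again:
```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Per the "prompt_ids type just pt" commit, these come back as torch tensors.
prompt_ids = processor.get_prompt_ids("Mr. Quilter")

# input_features = processor(audio, sampling_rate=16_000, return_tensors="pt").input_features
# predicted_ids = model.generate(input_features, prompt_ids=prompt_ids)
# text = processor.batch_decode(predicted_ids, skip_special_tokens=True)
```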
* Remove nestedness in tool config
* Really do it
* Use remote tools descriptions
* Work
* Clean up eval
* Changes
* Tools
* Tools
* tool
* Fix everything
* Use last result/assign for evaluation
* Prompt
* Remove hardcoded selection
* Evaluation for chat agents
* correct some spelling
* Small fixes
* Change summarization model (#23172)
* Fix link displayed
* Update description of the tool
* Fixes in chat prompt
* Custom tools, custom prompt
* Tool clean up
* save_pretrained and push_to_hub for tool
* Fix init
* Tests
* Fix tests
* Tool save/from_hub/push_to_hub and tool->load_tool
* Clean push_to_hub and add app file
* Custom inference API for endpoints too
* Clean up
* old remote tool and new remote tool
* Make a requirements
* return_code adds tool creation
* Avoid redundancy between global variables
* Remote tools can be loaded
* Tests
* Text summarization tests
* Quality
* Properly mark tests
* Test the python interpreter
* And the CI shall be green.
* fix loading of additional tools
* Work on RemoteTool and fix tests
* General clean up
* Guard imports
* Fix tools
* docs: Fix broken link in 'How to add a model...' (#23216)
fix link
* Get default endpoint from the Hub
* Add guide
* Simplify tool config
* Docs
* Some fixes
* Docs
* Docs
* Docs
* Fix code returned by agent
* Try this
* Match args with signature in remote tool
* Should fix python interpreter for Python 3.8
* Fix push_to_hub for tools
* Other fixes to push_to_hub
* Add API doc page
* Docs
* Docs
* Custom tools
* Pin tensorflow-probability (#23220)
* Pin tensorflow-probability
* [all-test]
* [all-test] Fix syntax for bash
* PoC for some chaining API
* Text to speech
* Took some liberties
* Rename
* Basic python interpreter
* Add agents
* Quality
* Add translation tool
* temp
* GenQA + LID + S2T
* Quality + word missing in translation
* Add open assistance, support f-strings in evaluate
* captioning + s2t fixes
* Style
* Refactor descriptions and remove chain
* Support errors and rename OpenAssistantAgent
* Add setup
* Deal with typos + example of inference API
* Some rename + README
* Fixes
* Update prompt
* Unwanted change
* Make sure everyone has a default
* One prompt to rule them all.
* SD
* Description
* Clean up remote tools
* More remote tools
* Add option to return code and update doc
* Image segmentation
* ControlNet
* Gradio demo
* Diffusers protection
* Lib protection
* ControlNet description
* Cleanup
* Style
* Remove accelerate and try to be reproducible
* No randomness
* Make Basic optional in token
* Clean description
* Better prompts
* Fix args eval in interpreter
* Add tool wrapper
* Tool on the Hub
* Style post-rebase
* Big refactor of descriptions, batch generation and evaluation for agents
* Make problems easier - interface to debug
* More problems, add python primitives
* Back to one prompt
* Remove dict for translation
* Be consistent
* Add prompts
* New version of the agent
* Evaluate new agents
* New endpoints agents
* Make all tools a dict variable
* Typo
* Add problems
* Add to big prompt
* Harmonize
* Add tools
* New evaluation
* Add more tools
* Build prompt with tools descriptions
* Tools on the Hub
* Let's chat!
* Cleanup
* Temporary bs4 safeguard
* Cache agents and clean up
* Blank init
* Fix evaluation for agents
* New format for tools on the Hub
* Add method to reset state
* Remove nestedness in tool config
* Really do it
* Use remote tools descriptions
* Work
* Clean up eval
* Changes
* Tools
* Tools
* tool
* Fix everything
* Use last result/assign for evaluation
* Prompt
* Remove hardcoded selection
* Evaluation for chat agents
* correct some spelling
* Small fixes
* Change summarization model (#23172)
* Fix link displayed
* Update description of the tool
* Fixes in chat prompt
* Custom tools, custom prompt
* Tool clean up
* save_pretrained and push_to_hub for tool
* Fix init
* Tests
* Fix tests
* Tool save/from_hub/push_to_hub and tool->load_tool
* Clean push_to_hub and add app file
* Custom inference API for endpoints too
* Clean up
* old remote tool and new remote tool
* Make a requirements
* return_code adds tool creation
* Avoid redundancy between global variables
* Remote tools can be loaded
* Tests
* Text summarization tests
* Quality
* Properly mark tests
* Test the python interpreter
* And the CI shall be green.
* Work on RemoteTool and fix tests
* fix loading of additional tools
* General clean up
* Guard imports
* Fix tools
* Get default endpoint from the Hub
* Simplify tool config
* Add guide
* Docs
* Some fixes
* Docs
* Docs
* Fix code returned by agent
* Try this
* Docs
* Match args with signature in remote tool
* Should fix python interpreter for Python 3.8
* Fix push_to_hub for tools
* Other fixes to push_to_hub
* Add API doc page
* Fixes
* Doc fixes
* Docs
* Fix audio
* Custom tools
* Audio fix
* Improve custom tools docstring
* Docstrings
* Trigger CI
* More docstrings
* More docstrings
* Improve custom tools
* Fix for remote tools
* Style
* Fix repo consistency
* Quality
* Tip
* Cleanup on doc
* Cleanup toc
* Add disclaimer for starcoder vs openai
* Remove disclaimer
* Small fixed in the prompts
* 4.29
* Update src/transformers/tools/agents.py
Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
* Complete documentation
* Small fixes
* Agent evaluation
* Note about gradio-tools & LC
* Clean up agents and prompt
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Note about gradio-tools & LC
* Add copyrights and address review comments
* Quality
* Add all language codes
* Add remote tool tests
* Move custom prompts to other docs
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* TTS tests
* Quality
---------
Co-authored-by: Lysandre <hi@lyand.re>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Philipp Schmid <32632186+philschmid@users.noreply.github.com>
Co-authored-by: Connor Henderson <connor.henderson@talkiatry.com>
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
Co-authored-by: Lysandre <lysandre@huggingface.co>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
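A hedged sketch of the resulting Agents API, assuming an inference endpoint is reachable; StarCoder is the open model named in the disclaimer commits above:
```python
from transformers import HfAgent, load_tool

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

# run() builds one prompt from the tool descriptions, generates Python, and
# executes it with the restricted interpreter tested above:
# agent.run("Translate the following text to French", text="Hello")

# Tools are also usable standalone, including custom ones from the Hub:
summarizer = load_tool("summarization")
# summary = summarizer("A very long article ...")
```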
* First draft of RWKV-4
* Add support for generate
* Style post-rebase
* Properly use state
* Write doc
* Fix doc
* More math
* Add model to README, dummies and clean config
* Fix init
* multiple fixes:
- fix common tests
- fix configuration default values
- add CI test for checking state computation
- fix some CI tests
* correct tokenizer
* some tweaks
- fix config docstring
- fix failing tests
* fix CI tests
- add output_attention / output_hidden_states
- override test_initialization
- fix failing CIs
* fix conversion script
- fix sharded case
- add new arguments
* add slow tests + more fixes on conversion script
* add another test
* final fixes
* change single name variable
* add mock attention mask for pipeline to work
* correct eos token id
* fix nits
* add checkpoints
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* add `tie_word_embeddings` in docstring
* change tensor name
* fix final nits
* Trigger CI
---------
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
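A hedged usage sketch for RWKV-4; the checkpoint name is assumed from the converted weights referenced above:
```python
from transformers import AutoTokenizer, RwkvForCausalLM

tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-4-169m-pile")

inputs = tokenizer("In a shocking finding,", return_tensors="pt")
# RWKV threads a recurrent `state` through forward passes instead of a
# transformer KV cache; generate() manages that state internally.
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0]))
```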
* first draft - gives index error in question_answering.py
* maturing
* no labels
* pipeline should know about QA
* fixing checks
* formatting
* fixed docstring
* initial commit
* formatting
* adding the class to many places
* towards less unhappy checks
* nearly there
* and gpt neox for qa
* use right model
* forgot this one
* base_model_prefix is "gpt_neox" for GPTNeoX* models
* unnecessary stuff
* Update src/transformers/models/gpt_neox/modeling_gpt_neox.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* format
* Update src/transformers/models/gpt_neox/modeling_gpt_neox.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* removed gpt2 stuff
---------
Co-authored-by: Prof. Peter Schneider-Kamp <jps@ordbogen.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
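A hedged sketch of the new QA head (the same pattern repeats for GPT-Neo just below); the head is randomly initialized until fine-tuned, and any GPT-NeoX checkpoint works:
```python
from transformers import AutoTokenizer, GPTNeoXForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
model = GPTNeoXForQuestionAnswering.from_pretrained("EleutherAI/pythia-70m")

inputs = tokenizer("Who wrote it?", "Jane wrote it.", return_tensors="pt")
outputs = model(**inputs)
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
```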
* first draft - gives index error in question_answering.py
* maturing
* no labels
* pipeline should know about QA
* fixing checks
* formatting
* fixed docstring
* initial commit
* formatting
* adding the class to many places
* towards less unhappy checks
* nearly there
* Update src/transformers/models/gpt_neo/modeling_gpt_neo.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* avoid error
* moving to device of start/end_logits
---------
Co-authored-by: Prof. Peter Schneider-Kamp <jps@ordbogen.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* first draft - gives index error in question_answering.py
* maturing
* no labels
* pipeline should know about QA
* fixing checks
* formatting
* fixed docstring
* make sure legacy code executes
* comment
* like this
---------
Co-authored-by: Prof. Peter Schneider-Kamp <jps@ordbogen.com>
* Add Trainer support for ReduceLROnPlateau
Fixes #16503
* Remove training argument and add default instance
---------
Co-authored-by: mmeloux <maxime.meloux@loria.fr>
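A hedged sketch of selecting the new scheduler; plateau detection needs an eval metric, so evaluation must run:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    lr_scheduler_type="reduce_lr_on_plateau",
    evaluation_strategy="epoch",  # the plateau check reads the eval loss
)
```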
Adds FocalNet by Microsoft to transformers
---------
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
Co-authored-by: alaradirik <alaradirik@gmail.com>
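A hedged usage sketch for FocalNet; the checkpoint name is assumed from Microsoft's release:
```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, FocalNetForImageClassification

processor = AutoImageProcessor.from_pretrained("microsoft/focalnet-tiny")
model = FocalNetForImageClassification.from_pretrained("microsoft/focalnet-tiny")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```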
* initial work
* Add other classes
* Refactor code
* Move warning and fix dynamic pipeline
* Issue warning when necessary
* Add test
* Do not skip auto tests
* Fix failing tests
* Refactor and address review comments
* Address review comments
* wrong argument name
* append eos_token_id
* all tokenizers need mask and ctc_blank tokens
* remove reduction factor from feature extractor
* add proper TTS loss
* did shifting the wrong way around
* mask out padded portions
* remove logits again (don't really need it)
* fix unit tests
* fixup
* pad also returns the decoder attention mask, since that's useful to have
* clean up feature extractor logic
* pad can handle TTS task too
* remove stop_labels from loss calculation
* simplify logic
* fixup
* do -100 masking properly
* small STFT optimization (calculate mel filterbanks only once)
* replace torchaudio fbanks with audio_utils
* remove torchaudio dependency
* simplify & speed up the STFT
* don't serialize window and mel filters
* output cross attentions when generating speech
* add guided attention loss
* fix failing test
* Update src/transformers/models/speecht5/feature_extraction_speecht5.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Update src/transformers/models/speecht5/modeling_speecht5.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* change type annotation of attention_mask to LongTensor
* extract loss into class
* remove unused frame_signal_scale argument
* use config object in loss class
* fix type annotations in doc comments
* change optional to just bool
* implement missing tokenizer method
* add deprecation warning
* Update src/transformers/models/speecht5/feature_extraction_speecht5.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/speecht5/feature_extraction_speecht5.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* add deprecation warning for stop_labels
---------
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
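A hedged sketch of the TTS path the loss work above trains; `generate_speech` returns a mel spectrogram (or a waveform when a vocoder is passed), and a zero x-vector stands in for a real speaker embedding:
```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")

inputs = processor(text="Hello world", return_tensors="pt")
speaker_embeddings = torch.zeros(1, 512)  # placeholder x-vector
spectrogram = model.generate_speech(inputs["input_ids"], speaker_embeddings)
```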
* Add model to doc tests
* Remove generate and replace by prepare_inputs_for_generation
* More fixes
* Remove print statements
* Update integration tests
* Fix generate
* Remove model from auto mapping
* Use auto processor
* Fix integration tests
* Fix test
* Add inference code snippet
* Remove is_encoder_decoder
* Update docs
* Remove notebook link
* Fix docstrings for TFBLIP
* Fix missing line in TF port!
* Use values from torch tests now other bugs fixed
* Use values from torch tests now other bugs fixed
* Fix doctest string
* resolve conflicts
* rebase and make style
* test
* test
* test
* rebase and make style
* rebase and make style
* tests
* tests
* rewrite some functions
* rebase and make style
* fix load_tf_weights_in_cpmant
* reformat some unrelated files
* upgrade quality
* fix some bugs & docstring
* add models and tests
* solve conflicts
* resolve conflicts
* resolve conflicts
* resolve conflicts
* resolve conflicts
* tests
* resolve conflicts
* resolve conflicts
* fix load_tf_weights_in_cpmant
* reformat some unrelated files
* upgrade quality
* fix some bugs & docstring
* save resolution
* make style
* delete redefinition code
* reformat function
* reformat
* resolve conflicts
* resolve conflicts
* resolve conflicts
* resolve conflicts
* resolve conflicts
* tests
* resolve conflicts
* resolve conflicts
* fix load_tf_weights_in_cpmant
* reformat some unrelated files
* upgrade quality
* resolve conflicts
* resolve conflicts
* resolve conflicts
* resolve conflicts
* resolve conflicts
* fix load_tf_weights_in_cpmant
* reformat some unrelated files
* upgrade quality
* resolve conflicts
* make style
* fix bugs and refactor
* modify docstrings and make style
* unify import format in __init__.py
* fix import-altclp bug
* fix copies to update index.md
* fix unused config parameters
* fix unused config parameters
* fix unused config parameters
* update README_ja.md
* dummy commit for unit test
* fix attention mask
* add CPMAntTokenizer&-Fast to auto-mapping
* drop redundant changes in README_ko
* fix defaults in docstring
* fix use_cache and some docstring
* add missing args in tokenizer
* modify tester inheritance
* add is_jieba_available
* fix some bugs
* make style and fix-copies
* add doctests
* skip integration tests
* add is_jieba_available
* fix bugs in common tests
* adjust docstrings and make style
* add argument docstring
* adjust code to some specifications
* make style and fix-copies
* add fast tokenization test
* dummy commit for unit test
* dummy commit for unit test
* dummy commit for unit test
* normalize some comments and names
* Bert->CPMAnt
* camel names and drop redundant codes
* make style and fix-copies
* add CpmTokenizerFast _import_structure
* drop cpmanttokenizerfast in model_doc
* fix some problems
* fix CPMAnt tokenization for common test
* make style and fixup
* fix copies and fixup
* fix bugs in tokenization test
* dummy commit for connection failure in unittest
* fix copies
* drop trailing comma
* fix decorator in tests
* dummy commit for connection failure in unittest
---------
Co-authored-by: Gong Baitao <gongbaitao11@gmail.com>
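A hedged usage sketch for CPM-Ant; it needs `jieba` (hence the new `is_jieba_available` guard), and the checkpoint name is assumed from OpenBMB's release:
```python
from transformers import CpmAntTokenizer, CpmAntForCausalLM

tokenizer = CpmAntTokenizer.from_pretrained("openbmb/cpm-ant-10b")
model = CpmAntForCausalLM.from_pretrained("openbmb/cpm-ant-10b")  # 10B params

inputs = tokenizer("今天天气真好，", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```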
* Add out_indices to backbones, deprecate out_features
* Update - can specify both out_features and out_indices but not both
* Add backbone mixin tests
* Test tidy up
* Add test_backbone for convnext
* Remove redefinition of method
* Update for Dinat and Nat backbones
* Update tests
* Smarter indexing
* Add checks on config creation for backbone
* PR comments
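A hedged sketch of the resulting backbone API (stage names assumed): `out_indices` and `out_features` are interchangeable, and passing both is only allowed when they agree:
```python
from transformers import ConvNextConfig

config_by_index = ConvNextConfig(out_indices=[1, 2, 3])
config_by_name = ConvNextConfig(out_features=["stage1", "stage2", "stage3"])
```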
* Adding Llama FastTokenizer support.
- Requires the `tokenizers` version from https://github.com/huggingface/tokenizers/pull/1183
- Only support byte_fallback for llama, raise otherwise (safety net).
- Lots of questions are special tokens
How to test:
```python
from transformers.convert_slow_tokenizer import convert_slow_tokenizer
from transformers import AutoTokenizer
from tokenizers import Tokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingface/llama-7b")

if False:
    new_tokenizer = Tokenizer.from_file("tok.json")
else:
    new_tokenizer = convert_slow_tokenizer(tokenizer)
new_tokenizer.save("tok.json")

strings = [
    "This is a test",
    "生活的真谛是",
    "生活的真谛是[MASK]。",
    # XXX: This one is problematic because of special tokens
    # "<s> Something something",
]

for string in strings:
    encoded = tokenizer(string)["input_ids"]
    encoded2 = new_tokenizer.encode(string).ids
    assert encoded == encoded2, f"{encoded} != {encoded2}"
    decoded = tokenizer.decode(encoded)
    decoded2 = new_tokenizer.decode(encoded2)
    assert decoded.strip() == decoded2, f"{repr(decoded)} != {repr(decoded2)}"
```
The converter + some test script.
The test script.
Tmp save.
Adding Fast tokenizer + tests.
Adding the tokenization tests.
Correct combination.
Small fix.
Fixing tests.
Fixing with latest update.
Rebased.
fix copies + normalized added tokens + copies.
Adding doc.
TMP.
Doc + split files.
Doc.
Versions + try import.
Fix Camembert + warnings -> Error.
Fix by ArthurZucker.
Not a decorator.
* Fixing comments.
* Adding more to docstring.
* Doc rewriting.
* Fix inverted conditional in TF common test!
* Make the same change in the PT tests file
* Make sure hidden states for GPT2 have the same output shape in PT/TF
* Minor fix to PT implementation of token classification loss
* Skip loss equivalence test for TFHubert because it keeps overflowing to inf
* Compute LM loss for TF the (weird) way it's computed in PT
* Skip loss equivalence test for Wav2Vec2 for the same reason as Hubert
* Fix - don't try to access the hidden states property when output is a tuple
* Initial commit
* more stash commit
* Yet another stash commit
* yet more stash commit
* Mostly working except for docs / repo consistency
* Stop importing model list from torch file
* Add TF BLIP models to docs
* Add auto classes
* Move get_text_features and get_image_features
* Update src/transformers/models/blip/modeling_tf_blip.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/blip/modeling_tf_blip.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/blip/modeling_tf_blip.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/blip/modeling_tf_blip_text.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/blip/modeling_tf_blip.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/blip/modeling_tf_blip.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/blip/modeling_tf_blip.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/blip/modeling_tf_blip.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/blip/modeling_tf_blip.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/blip/test_modeling_tf_blip.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/blip/test_modeling_tf_blip.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/blip/modeling_tf_blip.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/blip/modeling_tf_blip.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update tests/models/blip/test_modeling_tf_blip_text.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/blip/modeling_tf_blip_text.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/blip/modeling_tf_blip.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Use channels_last convolutions in TF (better performance + compatibility)
* Remove _shape function
* Move multi-line statement to one line in PT + TF
* Specify tf.keras.layers instead of importing from it
* Remove test_gradient_checkpointing and empty test_training methods
* move some multi-line statements to one line
* Update docstring for generate
* Remove pruned heads set
* Remove self.seq_len_dim
* Fixed issues with loss computation, should resolve some tests. Also ensured that the PT version follows the config for output_attentions and output_hidden_states
* ensure original model follows config in more cases
* Skip the same cross-attention tests in the PT tests - didn't realize we did it twice!
* Add training args throughout the models and layers
* make fixup
* Fix docstring for inputs_embeds
* Add docstring for is_decoder
* Add docstrings to text models
* Remove redundant computation
* Add unpack_inputs / keras_serializable
* Add modeling_tf_blip to doctests
* Add config classes for keras serialization
* Changes to allow model porting with pt-to-tf
* Quick fix to decoder head and test tweaks
* Revert an issue with masking the embeddings outputs
* Allow missing keys in some equivalence tests (for unused layers)
* Add tf-pt equivalence tests back in
* Update src/transformers/models/blip/modeling_tf_blip.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/blip/modeling_tf_blip_text.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/blip/modeling_tf_blip_text.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* make fixup
* Refactor invert_attention_mask out into tf_utils
* Re-enable cross-tests on the PT side too
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
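A hedged captioning sketch for the TF BLIP port above:
```python
import requests
from PIL import Image
from transformers import BlipProcessor, TFBlipForConditionalGeneration

ckpt = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(ckpt)
model = TFBlipForConditionalGeneration.from_pretrained(ckpt)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

inputs = processor(images=image, return_tensors="tf")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```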
* fix RoFormerEncoder position embedding when generating as decoder
* make fixup
* add test case to check generate with past key values
* remove duplicating code
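A hedged regression sketch for the fix above: generating as a decoder with `use_cache=True` exercises the past-key-values position offset that was wrong:
```python
from transformers import RoFormerTokenizer, RoFormerForCausalLM

tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = RoFormerForCausalLM.from_pretrained("junnyu/roformer_chinese_base", is_decoder=True)

inputs = tokenizer("今天天气非常好。", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=8, use_cache=True)
print(tokenizer.decode(output[0]))
```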