* Keras callback to push to hub each epoch, or after N steps (usage sketch below)
* Reworked the callback to use Repository
* Use an Enum for save_strategy
* Style pass
* Correct type for tokenizer
* Update src/transformers/keras_callbacks.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/keras_callbacks.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/keras_callbacks.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/keras_callbacks.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/keras_callbacks.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/keras_callbacks.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Adding print message to the final upload
* Adding print message to the final upload
* Change how we wait for the last process to finish
* is_done is a property, not a method, derp
* Docstrings and documentation
* Style pass
* Style edit
* Docstring reformat
* Docstring rewrite
* Replacing print with internal logger
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
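The Keras callback added in the commits above pushes checkpoints to the Hub each epoch or every N steps. A minimal sketch of how it is typically wired into `model.fit()`, assuming the callback is exposed from `keras_callbacks.py` as `PushToHubCallback` and that `train_dataset` / `val_dataset` are placeholder `tf.data.Dataset` objects; check the merged file for the exact signature:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
from transformers.keras_callbacks import PushToHubCallback

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

# Push to the Hub at the end of every epoch; save_strategy="steps" together with
# save_steps=N would push after every N batches instead (values of the save_strategy enum).
push_callback = PushToHubCallback(
    output_dir="./push-test",   # local working directory mirroring the Hub repo
    save_strategy="epoch",
    tokenizer=tokenizer,        # tokenizer is uploaded alongside the weights
)

# train_dataset / val_dataset are placeholders for your own tf.data pipelines.
model.fit(train_dataset, validation_data=val_dataset, callbacks=[push_callback])
```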
* Make gradient_checkpointing a training argument
* Update src/transformers/modeling_utils.py
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* Update src/transformers/configuration_utils.py
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* Fix tests
* Style
* document Gradient Checkpointing as a performance feature
* Small rename
* PoC for not using the config
* Adapt BC to new PoC
* Forgot to save
* Rollout changes to all other models
* Fix typo
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas@stason.org>
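The gradient-checkpointing commits above turn the feature into a training-time switch instead of a config attribute. A minimal sketch of the two ways to enable it after this change; the method name and training argument are taken from the commit titles, so verify them against the merged API:

```python
from transformers import AutoModelForSequenceClassification, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")

# Either flip it directly on the model...
model.gradient_checkpointing_enable()

# ...or let the Trainer handle it through a training argument.
args = TrainingArguments(
    output_dir="out",
    gradient_checkpointing=True,  # trades extra compute for lower memory during backprop
)
```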
* beit-flax
* updated FLAX_BEIT_MLM_DOCSTRING
* removed bool_masked_pos from classification
* updated Copyright
* code refactoring: x -> embeddings
* updated test: rm from_pt
* Update docs/source/model_doc/beit.rst
* model code dtype updates and
other changes according to review
* relative_position_bias: revert to the PyTorch design
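For the Flax BEiT port above, here is a hedged usage sketch; the checkpoint name and the use of `BeitFeatureExtractor` are assumptions carried over from the existing PyTorch BEiT docs:

```python
# Hedged sketch: image classification with the new Flax BEiT classes.
from PIL import Image
import requests
from transformers import BeitFeatureExtractor, FlaxBeitForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = BeitFeatureExtractor.from_pretrained("microsoft/beit-base-patch16-224")
model = FlaxBeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224")

inputs = feature_extractor(images=image, return_tensors="np")  # Flax models consume numpy arrays
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```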
* Init FNet
* Update config
* Fix config
* Update model classes
* Update tokenizers to use sentencepiece
* Fix errors in model
* Fix defaults in config
* Remove position embedding type completely
* Fix typo and take only real numbers
* Fix type vocab size in configuration
* Add projection layer to embeddings
* Fix position ids bug in embeddings
* Add minor changes
* Add conversion script and remove CausalLM vestiges
* Fix conversion script
* Fix conversion script
* Remove CausalLM Test
* Update checkpoint names to dummy checkpoints
* Add tokenizer mapping
* Fix modeling file and corresponding tests
* Add tokenization test file
* Add PreTraining model test
* Make style and quality
* Make tokenization base tests work
* Update docs
* Add FastTokenizer tests
* Fix fast tokenizer special tokens
* Fix style and quality
* Remove load_tf_weights vestiges
* Add FNet to main README
* Fix configuration example indentation
* Comment tokenization slow test
* Fix style
* Add changes from review
* Fix style
* Remove bos and eos tokens from tokenizers
* Add tokenizer slow test, TPU transforms, NSP
* Add scipy check
* Add scipy availability check to test
* Fix tokenizer and use correct inputs
* Remove remaining TODOs
* Fix tests
* Fix tests
* Comment Fourier Test
* Uncomment Fourier Test
* Change to google checkpoint
* Add changes from review
* Fix activation function
* Fix model integration test
* Add more integration tests
* Add comparison steps to MLM integration test
* Fix style
* Add masked tokenization fix
* Improve mask tokenization fix
* Fix index docs
* Add changes from review
* Fix issue
* Fix failing import in test
* some more fixes
* correct fast tokenizer
* finalize
* make style
* Remove additional tokenization logic
* Set do_lower_case to False
* Allow keeping accents
* Fix tokenization test
* Fix FNet Tokenizer Fast
* fix tests
* make style
* Add tips to FNet docs
Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
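A hedged sketch of loading the new FNet model; `google/fnet-base` is the Google checkpoint the commits above appear to switch to (verify the exact name on the Hub), and scipy must be installed for the Fourier transform path, hence the availability check added above:

```python
import torch
from transformers import FNetForMaskedLM, FNetTokenizer

tokenizer = FNetTokenizer.from_pretrained("google/fnet-base")
model = FNetForMaskedLM.from_pretrained("google/fnet-base")

inputs = tokenizer("Hello, I'm a [MASK] model.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring token at the mask position.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(-1)
print(tokenizer.decode(predicted_id))
```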
* Enabling dataset iteration on pipelines.
Unifying parameters under a `set_parameters` function.
Small fix.
Last fixes after rebase
Remove print.
Fixing text2text `generate_kwargs`
No more `self.max_length`.
Fixing TF-only conversational.
Consistency in start/stop index over TF/PT.
Speeding up drastically on TF (nasty bug where max_length would increase a ton).
Adding a test for support for non-fast tokenizers.
Fixing GPU usage on zero-shot.
Fix working on TF.
Update src/transformers/pipelines/base.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Update src/transformers/pipelines/base.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Small cleanup.
Remove all asserts + simple format.
* Fixing audio-classification for large PR.
* Overly explicit null checking.
* Encapsulating GPU/CPU pytorch manipulation directly within `base.py`.
* Removed internal state for parameters of the pipeline.
Instead of implicitly overriding internal state, we moved to real named arguments on every `preprocess`, `_forward`, and `postprocess` function.
`_sanitize_parameters` is now used to split all kwargs of both `__init__` and `__call__` into the three kinds of named parameters (see the sketch after this PR's commits).
* Move import warnings.
* Small fixes.
* Quality.
* Another small fix, using the CI to debug faster.
* Last fixes.
* Last fix.
* Small cleanup of tensor moving.
* Use `is not None` checks.
* Adding a bunch of docs + an iteration test.
* Fixing doc style.
* KeyDataset = None guard.
* Removing the CUDA test for pipelines (was testing).
* Even more simple iteration test.
* Correct import.
* Long day.
* Fixes in docs.
* [WIP] migrating object detection.
* Fixed the target_size bug.
* Fixup.
* Bad variable name.
* Fixing `ensure_on_device` so it respects the original ModelOutput.
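A minimal sketch of the pipeline contract described above: no implicit parameter state, with `_sanitize_parameters` splitting every kwarg from `__init__` and `__call__` into per-stage dicts. The class and parameter names below are illustrative, not an existing pipeline:

```python
from transformers import Pipeline

class MyPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        # Route every kwarg to exactly one of the three stages.
        preprocess_kwargs, forward_kwargs, postprocess_kwargs = {}, {}, {}
        if "max_length" in kwargs:
            preprocess_kwargs["max_length"] = kwargs["max_length"]
        if "top_k" in kwargs:
            postprocess_kwargs["top_k"] = kwargs["top_k"]
        return preprocess_kwargs, forward_kwargs, postprocess_kwargs

    def preprocess(self, inputs, max_length=None):
        return self.tokenizer(
            inputs, truncation=True, max_length=max_length, return_tensors=self.framework
        )

    def _forward(self, model_inputs):
        return self.model(**model_inputs)

    def postprocess(self, model_outputs, top_k=1):
        # Assuming PyTorch tensors here; a real pipeline handles both frameworks.
        return model_outputs.logits.argmax(-1)
```

It is instantiated like any pipeline, with parameters accepted either at init or at call time, e.g. `MyPipeline(model=model, tokenizer=tokenizer)(texts, top_k=3)`. With this design, dataset iteration is just `for output in pipe(dataset): ...`; the `KeyDataset` helper mentioned above wraps a single column of a `datasets` dataset so it can be streamed the same way (check the pipelines module for its exact import path).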
* [docs] update dead quickstart link on reusing past for GPT2
The dead link has been replaced by two links to the forward and call methods of the GPT2 class, for torch and tensorflow respectively.
* [docs] fix formatting for gpt2 page update
* refactor GPT Config to allow dynamic properties
* make attribute_map a class attribute
* remove old code
* update unit test to test config: Add test for common properties setter
* update unit test to test config: Add test for common properties passed as parameters to __init__
* update to black code format
* Allow setters to be undefined for certain config classes
* update config classes to implement attribute_map
* bugfix lxmert config - id2labels was not defined when num_labels was set
* update broken configs - add attribute_maps
* update bart config
* update black codestyle
* update documentation on common config attributes
* update GPTJ config to new attribute map
* update docs on common attributes
* gptj config: add max_position_embeddings
* gptj config: format with black
* update speech to text 2 config
* format doc file to max_len 119
* update config template
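A short sketch of the `attribute_map` pattern introduced above, using GPT-2 as the example: common attribute names are redirected to the model-specific ones, and the new setter support keeps both in sync without a hand-written property per attribute.

```python
from transformers import GPT2Config

config = GPT2Config(n_embd=512)
assert config.hidden_size == 512   # common name is read through the attribute map
config.hidden_size = 1024          # setters go through the map as well
assert config.n_embd == 1024

# A config class opts in by declaring the mapping as a class attribute, e.g.
# (hypothetical example):
# class MyConfig(PretrainedConfig):
#     attribute_map = {"hidden_size": "dim", "num_attention_heads": "n_heads"}
```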
* [docs] Update perplexity.rst to use negative log likelihood
Model `forward` returns the negative log likelihood. The document correctly defines and calculates perplexity, but the description and variable names are inconsistent, which might cause confusion.
* [docs] restyle perplexity.rst
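The relationship the perplexity doc fix above relies on, as a small sketch: the loss returned by a causal LM's `forward` (when labels are passed) is the mean per-token negative log-likelihood, and perplexity is simply its exponential.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

encodings = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    # loss == mean negative log-likelihood per token
    neg_log_likelihood = model(**encodings, labels=encodings.input_ids).loss

perplexity = torch.exp(neg_log_likelihood)
print(perplexity.item())
```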
* fix_torch_device_generate_test
* remove @
* up
* correct some bugs
* correct model
* finish speech2text extension
* up
* up
* up
* up
* Update utils/custom_init_isort.py
* up
* up
* update with tokenizer
* correct old tok
* correct old tok
* fix bug
* up
* up
* add more tests
* up
* fix docs
* up
* fix some more tests
* add better config
* correct some more things
* fix tests
* improve docs
* Apply suggestions from code review
* Apply suggestions from code review
* final fixes
* finalize
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* apply suggestions from Lysandre and Sylvain
* apply Nico's suggestions
* upload everything
* finish
Co-authored-by: Patrick von Platen <patrick@huggingface.co>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Add the audio classification pipeline
* Remove autoconfig exception
* Mark ffmpeg test as slow
* Rearrange pipeline tests
* Add small test
* Replace asserts with ValueError
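A hedged usage sketch for the new pipeline; `superb/wav2vec2-base-superb-ks` is an assumed example checkpoint, and passing a file path relies on ffmpeg for decoding, hence the slow ffmpeg test above:

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="superb/wav2vec2-base-superb-ks")

# Accepts a filename (decoded via ffmpeg) or a raw numpy waveform.
predictions = classifier("sample.wav")
for pred in predictions:
    print(pred["label"], pred["score"])
```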
* Adding a TF variant of the DataCollatorForTokenClassification to get feedback
* Added a Numpy variant and a post_init check to fail early if a missing import is found
* Fixed call to Numpy variant
* Added a couple more of the collators
* Update src/transformers/data/data_collator.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Fixes, style pass, finished DataCollatorForSeq2Seq
* Added all the LanguageModeling DataCollators, except SOP and PermutationLanguageModeling
* Adding DataCollatorForPermutationLanguageModeling
* Style pass
* Add missing `__call__` for PLM
* Remove `post_init` checks for frameworks because the imports inside them were making us fail code quality checks
* Remove unused imports
* First attempt at some TF tests
* A second attempt to make any of those tests actually work
* TF tests, round three
* TF tests, round four
* TF tests, round five
* TF tests, all enabled!
* Style pass
* Merging tests into `test_data_collator.py`
* Merging tests into `test_data_collator.py`
* Fixing up test imports
* Fixing up test imports
* Trying shuffling the conditionals around
* Commenting out non-functional old tests
* Completed all tests for all three frameworks
* Style pass
* Fixed test typo
* Style pass
* Move standard `__call__` method to mixin
* Rearranged imports for `test_data_collator`
* Fix data collator typo "torch" -> "pt"
* Fixed the most embarrassingly obvious bug
* Update src/transformers/data/data_collator.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Renaming mixin
* Updating docs
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Dalton Walker <dalton_walker@icloud.com>
Co-authored-by: Andrew Romans <andrew.romans@hotmail.com>
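A hedged sketch of the framework-agnostic collators the commits above build up to: one class serving PyTorch, TensorFlow and NumPy, selected through `return_tensors` ("pt", "tf" or "np" — note the "torch" -> "pt" fix above). The toy features below are illustrative:

```python
from transformers import AutoTokenizer, DataCollatorForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
collator = DataCollatorForTokenClassification(tokenizer, return_tensors="tf")

features = [
    {"input_ids": [101, 7592, 102], "labels": [0, 1, 0]},
    {"input_ids": [101, 7592, 2088, 1012, 102], "labels": [0, 1, 1, 0, 0]},
]
batch = collator(features)   # padded tf.Tensors; labels padded with -100
print(batch["input_ids"].shape, batch["labels"].shape)
```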