* Adding TF translation example
* Fixes and style pass for TF translation example
* Remove unused postprocess_text copied from run_summarization
* Adding README
* Review fixes
* Move changes to model.config to after we've initialized the model
* fix_torch_device_generate_test
* remove @
* correct greedy search
* save intermediate
* add final logits bias
* correct
* up
* add more tests
* fix another bug
* finish tests
* finish marian tests
* up
Co-authored-by: Patrick von Platen <patrick@huggingface.co>
* change deprecation to be easily actionable
* Update src/transformers/tokenization_utils_base.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* rework as suggested
* combine into one warning
* fix format
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Add option to load a pretrained model with mismatched shapes
* Fail at loading when mismatched shapes in Flax
* Fix tests
* Update src/transformers/modeling_flax_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Address review comments
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
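A hedged usage sketch of the new option (the flag name `ignore_mismatched_sizes` and the checkpoint used are assumptions for illustration):

```python
from transformers import AutoModelForSequenceClassification

# Sketch only: assumes the option is exposed as `ignore_mismatched_sizes`.
# Loading a checkpoint whose head has a different shape (here, a different
# number of labels) would normally fail; with the flag set, the mismatched
# weights are skipped and newly initialized instead.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=10,
    ignore_mismatched_sizes=True,
)
```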
* Wrong model is used, should be character instead of subword
In the original Google repo for CANINE there was a mix-up in the model names in the README.md, which was fixed 2 weeks ago. Since this Transformers model was created before that fix, the wrong name probably carried over into this example.
s = subword, c = character
* canine.rst style fix
* Update docs/source/model_doc/canine.rst
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Styling canine.rst
* Added links to model cards.
* Fixed links to model cards.
Co-authored-by: Jeroen Steggink <978411+jsteggink@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* README Translation for Chinese (Simplified)
* update link
* h3->h4
* html refactor
* update model list
* fix
* Add a translation guide
* format
* update
* typo
* Refine wording
* Pass model_kwargs when loading a model in pipeline
* Add test for model_kwargs parameter of pipeline()
* Rewrite test to not download model
* Fix failing style checks
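A minimal usage sketch of the `model_kwargs` pass-through (the specific kwarg shown, `cache_dir`, is just an illustrative assumption):

```python
from transformers import pipeline

# Sketch: `model_kwargs` is forwarded to the model's `from_pretrained` call,
# so loading options can be set when building a pipeline.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    model_kwargs={"cache_dir": "./my_cache"},
)
print(classifier("Passing model_kwargs through pipeline() works."))
```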
* Base test
* More test
* Fix mistake
* Add a docstring change
* Add doc ignore
* Simplify logic for unk token in Unigram tokenizers
* Remove changes from other branch
* This will reduce the "Already borrowed" error:
Original issue: https://github.com/huggingface/tokenizers/issues/537
The original issue is caused by transformers calling mutable functions
on the Rust tokenizers many times.
Rust needs to guarantee that only one agent has a mutable reference
to memory at a given time (for many reasons that don't need explaining
here). Usually, the Rust compiler can verify this property at compile time.
Unfortunately, Python cannot provide that guarantee, so PyO3, the
bridge between Rust and Python used by `tokenizers`, replaces the
compile-time guarantee with a runtime one: if multiple agents try
to hold mutable borrows at the same time, the runtime fails with
"Already borrowed".
The fix proposed here in transformers is simply to reduce the number
of calls that actually need mutable borrows. By reducing them,
we reduce the risk of running into the "Already borrowed" error.
The caveat is that we now add a call to read the current configuration of the
`_tokenizer`, so the worst case is 2 calls instead of 1, and the best case is
1 call plus a Python comparison of a dict (which should be negligible).
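A rough Python sketch of the pattern described above (names and structure are simplified and partly hypothetical, not the exact code):

```python
def set_truncation(tokenizer, max_length, strategy="longest_first"):
    """Only touch the mutable Rust setter when the requested configuration
    differs from the current one, reducing the number of mutable borrows
    and thus the chance of hitting "Already borrowed"."""
    desired = {"max_length": max_length, "strategy": strategy}
    current = tokenizer._tokenizer.truncation  # read-only borrow (or None)
    # Worst case: one extra read plus a dict comparison; best case: the
    # mutable `enable_truncation` call is skipped entirely.
    if current is None or any(current.get(k) != v for k, v in desired.items()):
        tokenizer._tokenizer.enable_truncation(max_length, strategy=strategy)
```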
* Adding a test.
* trivial error :(.
* Update tests/test_tokenization_fast.py
Co-authored-by: SaulLu <55560583+SaulLu@users.noreply.github.com>
* Adding reference to original issues in the tests.
* Update the tests with fast tokenizer.
Co-authored-by: SaulLu <55560583+SaulLu@users.noreply.github.com>
* [model.from_pretrained] raise exception early on failed load
Currently, if the `load` of pretrained weights fails in `from_pretrained`, we first print a whole bunch of success messages and then fail; this PR raises the exception first to avoid all the misleading messages.
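A hedged sketch of the reordering (a simplified, hypothetical shape of the loading code, not the actual `from_pretrained` body):

```python
import logging

logger = logging.getLogger(__name__)

def report_load_result(model_name, error_msgs, missing_keys, unexpected_keys):
    # Sketch: surface hard load errors first so the exception is not buried
    # under the informational messages about initialized or unused weights.
    if error_msgs:
        raise RuntimeError(
            f"Error(s) in loading state_dict for {model_name}:\n\t"
            + "\n\t".join(error_msgs)
        )
    if missing_keys:
        logger.warning(f"Some weights of {model_name} were newly initialized: {missing_keys}")
    if unexpected_keys:
        logger.warning(f"Some checkpoint weights were not used by {model_name}: {unexpected_keys}")
```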
* style
Co-authored-by: Suraj Patil <surajp815@gmail.com>