* kinda works
* update
* add tests
* update
* use special tokens in processors
* typo
* fix copies
* fix
* fix moshi after rebase
* update
* fix tests
* update
* Update docs/source/en/main_classes/tokenizer.md
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* update docs
* test for load time adding tokens
* fix some more tests which are now fetched better
* one more fix
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* _decode signature change and quick return
* added bunch of decoding tests
* signature match and return
* added tests for decoding
* merged decoding test
* more tests for special tokens
* cosmetics
* fixed param
* ran ruff on the file
* refinement for single special tokens
* added test for single special tokens
* slight change to test name
Co-authored-by: Ita Zaporozhets <31893021+itazap@users.noreply.github.com>
* minor change test name for skip tokens
Co-authored-by: Ita Zaporozhets <31893021+itazap@users.noreply.github.com>
* killed already defined var
Co-authored-by: Ita Zaporozhets <31893021+itazap@users.noreply.github.com>
* minor update with vars
Co-authored-by: Ita Zaporozhets <31893021+itazap@users.noreply.github.com>
* killed already defined var once more
Co-authored-by: Ita Zaporozhets <31893021+itazap@users.noreply.github.com>
---------
Co-authored-by: Ita Zaporozhets <31893021+itazap@users.noreply.github.com>
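The skip-special-tokens behavior exercised by the decoding tests above can be sketched with a toy decoder (illustrative only; the real `_decode` signature in transformers differs, and all names here are assumptions):

```python
def toy_decode(ids, id_to_token, special_ids, skip_special_tokens=False):
    # Map ids back to tokens, optionally dropping special tokens
    # such as <pad> or </s> from the decoded string.
    tokens = [
        id_to_token[i]
        for i in ids
        if not (skip_special_tokens and i in special_ids)
    ]
    return " ".join(tokens)
```

With `skip_special_tokens=True` the padding and end-of-sequence tokens are filtered out before joining; with the default `False` they appear verbatim in the output.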
* test(tokenizers): add a test showing conflict with sentencepiece
This is because the protobuf C++ implementation uses a single global pool
for all registered descriptors, so if two different generated files register
descriptors with the same names, they end up conflicting.
* fix(tokenizers): mitigate sentencepiece/protobuf conflict
When sentencepiece is available, use its generated protobuf module instead
of the internally vendored one.
* chore(style): fix with ruff
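The mitigation described in these commits can be sketched as a small import helper (a sketch only; the function name and the vendored fallback path are assumptions, not the transformers API):

```python
def import_protobuf_model():
    """Prefer the protobuf module generated by sentencepiece itself.

    Importing a second, vendored copy of sentencepiece_model_pb2 would try
    to register the same descriptor names into protobuf's global C
    descriptor pool and conflict, so reuse sentencepiece's own copy
    whenever the package is installed.
    """
    try:
        # sentencepiece ships its own generated descriptors.
        from sentencepiece import sentencepiece_model_pb2 as model_pb2
    except ImportError:
        # Placeholder for an internally vendored fallback, e.g.:
        # from transformers.utils import sentencepiece_model_pb2 as model_pb2
        model_pb2 = None
    return model_pb2
```

Because only one generated module is ever imported, only one set of descriptors is registered in the global pool, which avoids the conflict the test above demonstrates.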
* cache total_vocab_size (vocab_size + number of user-added tokens) to speed up len(tokenizer)
* update total_vocab_size when added_tokens_decoder is set
* add test for len(tokenizer)
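The caching these commits describe can be illustrated with a toy tokenizer (a minimal sketch, not the transformers implementation; the class and attribute names beyond `total_vocab_size` and `added_tokens_decoder` are illustrative):

```python
class ToyTokenizer:
    """Cache the total vocab size so len(tokenizer) is O(1)
    instead of recomputing the merged vocab on every call."""

    def __init__(self, vocab_size):
        self.vocab_size = vocab_size
        self.added_tokens_decoder = {}  # id -> added token string
        self._update_total_vocab_size()

    def _update_total_vocab_size(self):
        # total size = base vocab + user-added tokens
        self.total_vocab_size = self.vocab_size + len(self.added_tokens_decoder)

    def add_tokens(self, tokens):
        for tok in tokens:
            new_id = self.total_vocab_size  # next free id
            self.added_tokens_decoder[new_id] = tok
            self._update_total_vocab_size()

    def __len__(self):
        return self.total_vocab_size
```

The cached value is refreshed in every code path that mutates the added tokens, which is why the commit also updates it when `added_tokens_decoder` is assigned directly.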
* Result of black 23.1
* Update target to Python 3.7
* Switch flake8 to ruff
* Configure isort
* Configure isort
* Apply isort with line limit
* Put the right black version
* adapt black in check copies
* Fix copies