* preserve type of `additional_special_tokens` in `special_token_map`
* format
* Update src/transformers/tokenization_utils_base.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Base test
* More test
* Fix mistake
* Add a docstring change
* Add doc ignore
* Add changes
* Add recursive dep search
* Add recursive dep search
* save
* Finalize test mapping
* Fix bug
* Print prettier
* Ignore comments and empty lines
* Make script runnable from anywhere
* Need dev install
* Like that
* Adapt
* Add as artifact
* Try on torch tests
* Fix yaml error
* Install GitPython
* Apply everywhere
* Be more defensive
* Revert to all tests if something is wrong
* Install GitPython
* Test if there are tests before launching.
* Fixes
* Fixes
* Fixes
* Fixes
* Bash syntax is horrible
* Be less stupid
* Try differently
* Typo
* Typo
* Typo
* Style
* Better name
* Escape quotes
* Ignore black unhelpful re-formatting
* Not a docstring
* Deal with inits in dependency map
* Run all tests once PR is merged.
* Add last job
* Apply suggestions from code review
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* Stronger dependencies gather
* Ignore empty lines too!
* Clean up
* Fix quality
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* fix_torch_device_generate_test
* remove @
* correct greedy search
* save intermediate
* add final logits bias
* correct
* up
* add more tests
* fix another bug
* finish tests
* finish marian tests
* up
Co-authored-by: Patrick von Platen <patrick@huggingface.co>
* Add option to load a pretrained model with mismatched shapes
* Fail at loading when mismatched shapes in Flax
* Fix tests
* Update src/transformers/modeling_flax_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Address review comments
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
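A minimal usage sketch of the option added here, assuming the `ignore_mismatched_sizes` flag of `from_pretrained`; the checkpoint and label count are placeholders:

```python
from transformers import AutoModelForSequenceClassification

# Re-initialize any weight whose shape differs from the checkpoint instead of raising.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",
    num_labels=10,                 # differs from the checkpoint's 2-label head
    ignore_mismatched_sizes=True,
)
```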
* Pass model_kwargs when loading a model in pipeline
* Add test for model_kwargs parameter of pipeline()
* Rewrite test to not download model
* Fix failing style checks
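A hedged sketch of the new `model_kwargs` parameter of `pipeline()`; the checkpoint and cache path are placeholders:

```python
from transformers import pipeline

# model_kwargs is forwarded to the underlying from_pretrained call.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    model_kwargs={"cache_dir": "/tmp/hf-cache"},
)
print(classifier("This works nicely."))
```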
* This will reduce "Already borrowed error":
Original issue https://github.com/huggingface/tokenizers/issues/537
The original issue is caused by transformers calling mutable functions
on the rust tokenizers many times.
Rust needs to guarantee that only 1 agent has a mutable reference
to memory at a given time (for many reasons which don't need explaining
here). Usually, the rust compiler can guarantee that this property is
true at compile time.
Unfortunately, Python cannot provide that guarantee, so PyO3, the
bridge between rust and python used by `tokenizers`, trades the
compile-time guarantee for a dynamic one: if multiple agents try
to hold mutable borrows at the same time, the runtime will
yell with "Already borrowed".
The proposed fix here in transformers is simply to reduce the actual
number of calls that really need mutable borrows. By reducing them,
we reduce the risk of running into the "Already borrowed" error.
The caveat is now we add a call to read the current configuration of the
`_tokenizer`, so worst case we have 2 calls instead of 1, and best case
we simply have 1 + a Python comparison of a dict (should be negligible).
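A minimal illustration of that pattern, assuming the backend `tokenizers.Tokenizer` object; the helper name and the compared keys are a sketch, not the exact code used in transformers:

```python
def apply_truncation(backend_tokenizer, max_length, stride=0, strategy="longest_first"):
    """Only mutate the Rust tokenizer when the requested settings actually change."""
    wanted = {"max_length": max_length, "stride": stride, "strategy": strategy}
    current = backend_tokenizer.truncation  # read-only access, no mutable borrow
    # Plain Python dict comparison; the mutable borrow happens only on a real change.
    if current is None or {k: current.get(k) for k in wanted} != wanted:
        backend_tokenizer.enable_truncation(**wanted)
```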
* Adding a test.
* trivial error :(.
* Update tests/test_tokenization_fast.py
Co-authored-by: SaulLu <55560583+SaulLu@users.noreply.github.com>
* Adding reference to original issues in the tests.
* Update the tests with fast tokenizer.
Co-authored-by: SaulLu <55560583+SaulLu@users.noreply.github.com>
* Fixing the pipeline optimization by rescaling the logits first.
* Add test for target equivalence
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
* Laying down building stone for more flexible ONNX export capabilities
* Ability to provide a map of config key to override before exporting.
* Makes it possible to export BART with/without past keys.
* Supports simple mathematical syntax for OnnxVariable.repeated
* Effectively apply value override from onnx config for model
* Supports export with additional features such as with-past for seq2seq
* Store the output path directly in the args for uniform usage across.
* Make BART_ONNX_CONFIG_* constants and fix imports.
* Support BERT model.
* Use tokenizer for more flexibility in defining the inputs of a model.
* Add TODO as reminder to provide the batch/sequence_length as CLI args
* Enable optimizations to be done on the model.
* Enable GPT2 + past
* Improve model validation with outputs containing nested structures
* Enable Roberta
* Enable Albert
* Albert requires opset >= 12
* BERT-like models requires opset >= 12
* Remove double printing.
* Enable XLM-Roberta
* Enable DistilBERT
* Disable optimization by default
* Fix missing setattr when applying optimizer_features
* Add value field to OnnxVariable to define constant input (not from tokenizers)
* Add T5 support.
* Simplify model type retrieval
* Example exporting token_classification pipeline for DistilBERT.
* Refactoring to package `transformers.onnx`
* Solve circular dependency & __main__
* Remove unnecessary imports in `__init__`
* Licences
* Use @Narsil's suggestion to forward the model's configuration to the ONNXConfig to avoid interpolation.
* Onnx export v2 fixes (#12388)
* Tiny fixes
Remove `convert_pytorch` from onnxruntime-less runtimes
Correct reference to model
* Style
* Fix Copied from
* LongFormer ONNX config.
* Removed optimizations
* Remove bad merge replicas.
* Remove unused constants.
* Remove some deleted constants from imports.
* Fix unittest to remove usage of PyTorch model for onnx.utils.
* Fix distilbert export
* Enable ONNX export test for supported model.
* Style.
* Fix lint.
* Enable all supported default models.
* GPT2 only has one output
* Fix bad property name when overriding config.
* Added unittests and docstrings.
* Disable with_past tests for now.
* Enable outputs validation for default export.
* Remove graph opt lvls.
* Last commit with on-going past commented.
* Style.
* Disabled `with_past` for now
* Remove unused imports.
* Remove framework argument
* Remove TFPreTrainedModel reference
* Add documentation
* Add onnxruntime tests to CircleCI
* Add test
* Rename `convert_pytorch` to `export`
* Use OrderedDict for dummy inputs
* WIP Wav2Vec2
* Revert "WIP Wav2Vec2"
This reverts commit f665efb04c92525c3530e589029f0ae7afdf603e.
* Style
* Use OrderedDict for I/O
* Style.
* Specify OrderedDict documentation.
* Style :)
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
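A hedged sketch of driving the refactored export from Python, assuming the `export` entry point of the `transformers.onnx` package and a per-model `DistilBertOnnxConfig`; the checkpoint, output path and exact signature are placeholders that may differ between releases:

```python
from pathlib import Path

from transformers import AutoModel, AutoTokenizer
from transformers.models.distilbert import DistilBertOnnxConfig
from transformers.onnx import export

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

# The OnnxConfig describes the dummy inputs/outputs (as OrderedDicts) for the export.
onnx_config = DistilBertOnnxConfig(model.config)
onnx_inputs, onnx_outputs = export(
    tokenizer, model, onnx_config, onnx_config.default_onnx_opset, Path("model.onnx")
)
```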
* Adding support for `pipeline("automatic-speech-recognition")`.
- Ugly `"config"` choice for AutoModel. It would be great to have the
possibility to have something like `AutoModelFor` that would implement
the same logic (Load the config, check Architectures and load the first
one)
* Remove `model_id` was not needed in the end.
* Rebased !
* Remove old code.
* Rename `nlp`.
* Copy BART to MBart and rename some stuff
* Add copy statements pointing to FlaxBart
* Update/add some common files
* Update shift_tokens_rigth + fix imports
* Fix shift_tokens_right method according to MBart implementation
* Update shift_tokens_right in tests accordingly
* Fix the import issue and update docs file
* make style quality
* Do some minor changes according to patil-suraj suggestions
* Change the order of normalization layer and attention
* Add some copy statements
* Update generate method and add integration test for mBart
* Make a few updates after a review
Besides, add `lang_code_to_id` to MBartTokenizerFast
* fix-copies; make style quality
* Apply suggestions from code review
* Apply suggestions from code review
* Apply suggestions from code review
* fix output type, style
* add copied from
* resolve conflicts
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* First pass
* More progress
* Add support for local attention
* More improvements
* More improvements
* Conversion script working
* Add CanineTokenizer
* Make style & quality
* First draft of integration test
* Remove decoder test
* Improve tests
* Add documentation
* Mostly docs improvements
* Add CanineTokenizer tests
* Fix most tests on GPU, improve upsampling projection
* Address most comments by @dhgarrette
* Remove decoder logic
* Improve Canine tests, improve docs of CanineConfig
* All tokenizer tests passing
* Make fix-copies and fix tokenizer tests
* Fix test_model_outputs_equivalence test
* Apply suggestions from @sgugger's review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Address some more comments
* Add support for hidden_states and attentions of shallow encoders
* Define custom CanineModelOutputWithPooling, tests pass
* Make conversion script work for Canine-c too
* Fix tokenizer tests
* Remove file
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* [WIP] Easily train a new fast tokenizer from a given one
* Fix test
* Roll out to other tokenizers and add tests
* Fix bug with unk id and add emoji to test
* Really use something different in test
* Implement special tokens map
* Map special tokens in the Transformers tokenizers
* Fix test
* Make test more robust
* Fix test for BPE
* More robust map and test
Co-authored-by SaulLu
* Test file
* Stronger tests
Co-authored-by: SaulLu <lucilesaul.com@gmail.com>
* Map unk token for Wordpiece and address review comment
* Fix lowercase test and address review comment
* Fix all tests
* Simplify test
* Fix tests for realsies
* Easily train a new fast tokenizer from a given one - tackle the special tokens format (str or AddedToken) (#12420)
* Propose change in tests regarding lower case
* add new test for special tokens types
* put back the test part about decoding
* add feature: the AddedToken is re-build with the different mapped content
* Address review comment: simplify AddedToken building
Co-authored-by: sgugger <sylvain.gugger@gmail.com>
* Update src/transformers/tokenization_utils_fast.py
Co-authored-by: sgugger <sylvain.gugger@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: SaulLu <lucilesaul.com@gmail.com>
Co-authored-by: SaulLu <55560583+SaulLu@users.noreply.github.com>
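A hedged sketch of the feature, assuming the `train_new_from_iterator` method added to fast tokenizers; the corpus and vocabulary size are illustrative:

```python
from transformers import AutoTokenizer

old_tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Any iterator of text batches works; this toy corpus also covers the emoji case above.
corpus = [["some domain-specific text 🤗"], ["more text for the new vocabulary"]]
new_tokenizer = old_tokenizer.train_new_from_iterator(corpus, vocab_size=8000)
new_tokenizer.save_pretrained("my-new-tokenizer")
```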
* Clean push to hub API
* Create working dir if it does not exist
* Different tweak
* New API + all models + test Flax
* Adds the Trainer clean up
* Update src/transformers/file_utils.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Address review comments
* (nit) output types
* No need to set clone_from when folder exists
* Update src/transformers/trainer.py
Co-authored-by: Julien Chaumond <julien@huggingface.co>
* Add generated_from_trainer tag
* Update to new version
* Fixes
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Julien Chaumond <julien@huggingface.co>
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
* copy pytorch-t5
* init
* boom boom
* forward pass same
* make generation work
* add more tests
* make test work
* finish normal tests
* make fix-copies
* finish quality
* correct slow example
* correct slow test
* version table
* upload models
* Update tests/test_modeling_flax_t5.py
* correct incorrectly deleted line
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Patrick von Platen <patrick@huggingface.co>
* Add output args to greedy search
* Fix critical typo + make style quality
* Handle generate_beam_search
* Add dict_specific tests and fix the placement of encoder outputs
* Add specific outputs
* Update doc
* Fix typo
* Adjust handling encoder_outputs + Fix generating for T5
* Fix generate for RAG
* Fix handling output_attentions when target_mapping is not None
Take care of situations when target_mapping is provided,
as there is then a 2-tuple of attentions per layer.
Change from:
    if inputs["output_attentions"]:
        attentions = tuple(tf.transpose(t, perm=(2, 3, 0, 1)) for t in attentions)
to:
    if inputs["output_attentions"]:
        if inputs["target_mapping"] is not None:
            # when target_mapping is provided, there is a 2-tuple of attentions per layer
            attentions = tuple(
                tuple(tf.transpose(attn_stream, perm=(2, 3, 0, 1)) for attn_stream in t)
                for t in attentions
            )
        else:
            attentions = tuple(tf.transpose(t, perm=(2, 3, 0, 1)) for t in attentions)
* Rename kwargs to model_kwargs
* make style quality
* Move imports in test_modeling_tf_common.py
Move ModelOutput-related imports in test_modeling_tf_common.py
into the `is_tf_available():` statement.
* Rewrite nested if-statements
* Fix added tests
* Optimizing away the `fill-mask` pipeline.
- Don't send anything to the tokenizer unless needed. Vocab check is
much faster
- Keep BC by sending data to the tokenizer when needed. Users who handle the warning messages will see the performance benefits again
- Make `targets` and `top_k` work together better: `top_k` cannot be
higher than `len(targets)` but can still be smaller.
- Actually simplify the `target_ids` in case of duplicates (it can happen
because we're parsing raw strings)
- Removed useless code to fail on empty strings. It only worked if the empty
string was in first position, moved to ignoring them instead.
- Changed the related tests accordingly, as they would only fail correctly
with the incorrect value in first position
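A hedged usage sketch of the optimized `fill-mask` pipeline with `targets` and `top_k`; the checkpoint and sentence are placeholders:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="distilroberta-base")

# top_k cannot exceed len(targets); duplicate targets collapse to a single id
# because raw target strings are resolved against the vocabulary.
outputs = fill_mask(
    "Paris is the <mask> of France.",
    targets=["capital", "capital", "center"],
    top_k=5,
)
for prediction in outputs:
    print(prediction["token_str"], prediction["score"])
```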
* Make tests compatible for 2 different vocabs... (at the price of a
warning).
Co-authored-by: @EtaoinWu
* ValueError working globally
* Update src/transformers/pipelines/fill_mask.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* `tokenizer.vocab` -> `tokenizer.get_vocab()` for more compatibility +
fallback.
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* boom boom
* remove flax clip example
* allow loading head model with base model weights
* add test
* fix imports
* disable save, load test for clip
* add test_save_load_to_base
* AutoTokenizer: infer the class from the tokenizer config if possible
* Add tests
* Update src/transformers/models/auto/tokenization_auto.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [WIP] Add TFWav2Vec2Model
Work in progress for adding a tensorflow version of Wav2Vec2
* feedback changes
* small fix
* Test Feedback Round 1
* Add SpecAugment and CTC Loss
* correct spec augment mask creation
* docstring and correct copyright
* correct bugs
* remove bogus file
* finish tests correction
* del unnecessary layers
* Update src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
* correct final bug
* Feedback Changes
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* feature for tokenizer without slow/legacy version
* format
* modify common test
* add tests
* add PreTrainedTokenizerFast to AutoTokenizer
* format
* change tokenizer common test in order to be able to run test without a slow version
* update tokenizer fast test in order to use `rust_tokenizer_class` attribute instead of `tokenizer_class`
* add AutoTokenizer test
* replace `if self.tokenizer_class is not None` with `if self.tokenizer_class is None`
* remove obsolete change in comment
* Update src/transformers/tokenization_utils_base.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Update src/transformers/tokenization_utils_fast.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* change `get_main_tokenizer` into `get_tokenizers`
* clarify `get_tokenizers` method
* homogenize with `test_slow_tokenizer` and `test_rust_tokenizer`
* add `test_rust_tokenizer = False` to tokenizer which don't define a fast version
* `test_rust_tokenizer = False` for BertJapaneseTokenizer
* `test_rust_tokenizer = False` for BertJapaneseCharacterTokenizationTest
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Start working on FlaxBart
* Create modeling_flax_bart.py
* Write FlaxBartAttention
* Add FlaxBartEncoderLayer
* Add FlaxBartDecoderLayer and some typing
* Add helper function for FlaxBart
* shift_tokens_right
* _make_causal_mask
* _expand_mask
* Add PositionalEmbedding and fix init_std naming
* Add FlaxBartPretrainedModel
* Add FlaxBartEncoder
* Add FlaxBartEncoder
* Add FlaxBartEncoder among modules to be imported
* YET WE CANNOT INITIALIZE THAT!! :(
* Make BartEncoder working
Change BartEncoder to instance of nn.Module so far
* Add FlaxBartDecoder
* Add FlaxBartModel
* TODO to make model run -> Prepare model inputs
* Resolve padding
* Add FlaxBartModel
* Add FlaxBartModel into importable modules
* Remove FlaxBartEncoder and FlaxBartDecoder from importable modules
* make style; not properly working
* make style; make quality not pass due to some import I left
* Remove TODO for padding_idx in nn.Embed so far
* Add FlaxBartForConditionalGeneration
* Incorporate Flax model output classes, i.e. return_dict
* Add another models and incorporate use_cache arg
* Add FlaxBartForSequenceClassification and FlaxBartForQuestionAnswering
* Incorporate use_cache arg from PyTorch implementation
* Add all necessary Flax output utils
* Add FlaxBartForCausalLM; not working yet
* Add minor improvements; still lacks some functionality
* Update docs, src and tests
* Add support of FlaxBart to docs/source
* Fix some bugs in FlaxBart source code
* Add some necessary tests for FlaxBart models - jit_compilation not passing
* Fix tests and add test_head_masking
* Fix tests for @jax.jit computation
* Add test_head_masking
* Migrate FlaxBart tests from jax.numpy to numpy
* Remove FlaxBartForCausalLM
* Clean repo
* fix bart model weight structure
* Fix FlaxBartForSequenceClassification
Slicing cannot be used under jit, therefore selecting the sentence
representation from hidden_states had to be changed.
* Allow FlaxBartForSequenceClassification for testing pt_flax equivalence
* Allow testing for FlaxBartForQA for pt_flax equivalence
* Add a comment to FlaxBartForSequenceClassification + change noise from 1e-3 to 1e-6
* remove past_key_values
* remove inputs_embeds and make input_ids required
* add position ids
* re-write attention layer
* fix dataclass
* fix pos embeds and attention output
* fix pos embeds
* expose encode method
* expose decode method
* move docstring to top
* add cache for causal attn layer
* remove head masking for now
* s2s greedy search first pass
* boom boom
* fix typos
* fix greedy generate for bart
* use encoder, decoder layers instead of num_hidden_layers
* handle encoder_outputs
* cleanup
* simplify decoding
* more clean-up
* typos
* Change header + add {decoder_,}position_ids into 2 models
* add BartConfig
* fix existing tests
* add encode, decode methods
* Fix shift_tokens_right for JIT compilation + clarify one condition
* fix decode
* encoder => encode
* simplify generate
* add tests for encode and decode
* style
* add tests for cache
* fix equivalence tests
* sample generate now works with seq2seq
* generation tests
* initialize dense layers
* docstring and cleanup
* quality
* remove get/set input_embeddings
* address Patricks suggestions
* decode for every model, remove encoder_outputs from call
* update tests accordingly
* decode returns only decoder outputs and logits
* fix arguments
* doc encode, decode methods
* correct base_model_prefix
* fix test for seq classif model
* fix docs
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Fix megatron_gpt2 attention block's causal mask.
* compatibility with checkpoints created with recent versions of Megatron-LM
* added integration test for the released Megatron-GPT2 model
* code style changes
* added option to megatron conversion script to read from config file
Co-authored-by: Guido Novati <gnovati@nvidia.com>
* adding vit for flax
* added test for Flax-vit and some bug-fixes
* overrode methods where variable changes were necessary for flax_vit test
* added FlaxViTForImageClassification for test
* Update src/transformers/models/vit/modeling_flax_vit.py
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* made changes suggested in PR
* Adding jax-vit models for autoimport
* swapping num_channels and height,width dimension
* fixing the docstring for torch-like inputs for VIT
* add model to main init
* add docs
* doc, fix-copies
* docstrings
* small test fixes
* fix docs
* fix docstr
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* style
Co-authored-by: jayendra <jayendra@infocusp.in>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix a condition in test_generate_with_head_masking
* Fix usage of head_mask in bigbird_pegasus
* Fix head masking for speech2text
* Resolve copy mismatch + drop unwanted print statement
* Fix the condition
* Squash all commits of modeling_detr_v7 branch into one
* Improve docs
* Fix tests
* Style
* Improve docs some more and fix most tests
* Fix slow tests of ViT, DeiT and DETR
* Improve replacement of batch norm
* Restructure timm backbone forward
* Make DetrForSegmentation support any timm backbone
* Fix name of output
* Address most comments by @LysandreJik
* Give better names for variables
* Conditional imports + timm in setup.py
* Address additional comments by @sgugger
* Make style, add require_timm and require_vision to tests
* Remove train_backbone attribute of DetrConfig, add methods to freeze/unfreeze backbone
* Add png files to fixtures
* Fix type hint
* Add timm to workflows
* Add `BatchNorm2d` to the weight initialization
* Fix retain_grad test
* Replace model checkpoints by Facebook namespace
* Fix name of checkpoint in test
* Add user-friendly message when scipy is not available
* Address most comments by @patrickvonplaten
* Remove return_intermediate_layers attribute of DetrConfig and simplify Joiner
* Better initialization
* Scipy is necessary to get sklearn metrics
* Rename TimmBackbone to DetrTimmConvEncoder and rename DetrJoiner to DetrConvModel
* Make style
* Improve docs and add 2 community notebooks
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
* add test_vocab_size for sentencepiece tok.
* add test_get_vocab for sentencepiece tok.
* add test_convert_token_and_id for sentencepiece tok.
* add test_tokenize_and_convert_tokens_to_string for all tok.
* improve test_tokenize_and_convert_tokens_to_string for sp. tok.
* add common tokenizer integration tests
- for albert
- for barthez
* add tokenizer integration tests to bert gen.
* add most tokenizer integration tests
* fix camembert tokenizer integration test
* add tokenizer integration test to marian
* add tokenizer integration test to reformer
* add typing and doc to tokenizer_integration_test_util
* fix tokenizer integration test of reformer
* improve test_sentencepiece_tokenize_and_convert_tokens_to_string
* empty commit to trigger CI
* fix tokenizer integration test of reformer
* remove code not needed anymore
* empty commit to trigger CI
* empty commit to trigger CI
* initial
* code quality test
* code quality
* added test functions in test_modeling_rag.py and test_retrieval_rag.py to test end2end retriever
* minor change in test_modeling_rag
* fixed tests
* Update examples/research_projects/rag-end2end-retriever/README.md
typo corrected as suggested by lhoestq
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
* Update examples/research_projects/rag-end2end-retriever/finetune_rag.py
type change suggested by lhoestq
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
* Update src/transformers/models/rag/retrieval_rag.py
Adding this change as mentioned by lhoestq.
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
* completed the minor changes suggested by the reviewers
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
* Added logic to return attention from flax-bert model and added test cases to check that
* Added new line at the end of file to test_modeling_flax_common.py
* fixing code style
* Fixing Roberta and Electra models too, since they copy from Bert
* Added temporary hack to not run test_attention_outputs for FlaxGPT2
* Returning attention weights from GPT2 and changed the tests accordingly.
* last fixes
* bump flax dependency
Co-authored-by: jayendra <jayendra@infocusp.in>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Adding new argument `max_new_tokens` for generate.
This is a proposal to add a new argument `max_new_tokens` to `generate`.
This includes a `MaxNewTokensCriteria` that enables callers that don't
know the prompt's token length ahead of time (like pipeline callers) to manage
the length of their generated output more easily.
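A short usage sketch of the proposed argument; the checkpoint and prompt are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello, my name is", return_tensors="pt")

# Caps only the freshly generated tokens, regardless of how long the prompt is.
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```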
* Adding a test for the user warning when both `max_length` and
`max_new_tokens` are used together.
* Removed redundant `no_grad`.
* Fix Bart
* Fix Blenderbot{,_small}
* Fix LED
* Fix Marian
* Fix MBart
* Fix Pegasus
* Fix T5
* Add test for generation with head_mask
* Add a common TF test
* Override a test for the LED model as head masking is not yet properly implemented
* Remove all head_masks from input preparation for LED
* Drop masking for T5 as it needs a bit of refactor
* [Trainer] Report both steps and num samples per second
* Fix batch number
* Update src/transformers/trainer_utils.py
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* Address review comments
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* add separator for windows
* fixes test_is_copy_consistent on Windows
* fixing writing encoding issue on extended test (for Windows)
* resolving comments
Cleaner and more scalable implementation of symbolic tracing with torch.fx, and provides support for new architectures:
- ALBERT
- DistilBERT
- MobileBERT
- MegatronBERT
- GPT2
- GPT Neo
Co-authored-by: Michael Benayoun <michael@huggingface.co>
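A hedged sketch of tracing one of the supported models, assuming the `symbolic_trace` helper in `transformers.utils.fx`; its exact signature has varied between releases and the input names below are illustrative:

```python
from transformers import AutoModelForSequenceClassification
from transformers.utils.fx import symbolic_trace

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

# Produces a torch.fx.GraphModule whose graph can be inspected or transformed.
traced = symbolic_trace(model, input_names=["input_ids", "attention_mask"])
print(traced.graph)
```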
* Add missing head masking for generate() function
* Add head_mask, decoder_head_mask and cross_attn_head_mask
into prepare_inputs_for_generation for generate() function
for multiple encoder-decoder models.
* Add test_generate_with_head_masking
* [WIP] Update the new test and handle special cases
* make style
* Omit ProphetNet test so far
* make fix-copies
* [TokenClassification] Label realignment for subword aggregation
Tentative to replace https://github.com/huggingface/transformers/pull/11622/files
- Added `AggregationStrategy`
- `ignore_subwords` and `grouped_entities` arguments are now fused
into `aggregation_strategy`. It makes more sense because
`ignore_subwords=True` with `grouped_entities=False` did not have a
meaning anyway.
- Added 2 new ways to aggregate, which are MAX and AVERAGE
- AVERAGE requires a bit more information than the others; for now this
case is slightly specific, and we should keep that in mind for future
changes.
- Testing has been modified to reflect new argument, and to check the
correct deprecation and the new aggregation_strategy.
- Put the testing argument and testing results for aggregation_strategy,
close together, so that readers can understand what is supposed to
happen.
- `aggregate` is now only tested on a small model as it does not mean
anything to test it globally for all models.
- Previous tests are unchanged in desired output.
- Added a new test case that showcases better the difference between the
FIRST, MAX and AVERAGE strategies.
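A hedged usage sketch of the new `aggregation_strategy` argument of the token-classification pipeline; the checkpoint and sentence are placeholders:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dbmdz/bert-large-cased-finetuned-conll03-english",
    aggregation_strategy="average",  # replaces ignore_subwords + grouped_entities
)
print(ner("Hugging Face is based in New York City."))
```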
* Wrong framework.
* Addressing three issues.
1- Tags might not follow B-, I- convention, so any tag should work now
(assumed as B-TAG)
2- Fixed an issue with average that leads to a substantial code change.
3- The testing suite was not checking for the "index" key for "none"
strategy. This is now fixed.
The issue is that "O" could not be chosen by AVERAGE strategy because
those tokens were filtered out beforehand, so their relative scores were
not counted in the average. Now filtering on
ignore_labels will happen at the very end of the pipeline fixing
that issue.
It's a bit hard to make sure this stays like that because we do
not have an end-to-end test for that behavior
* Formatting.
* Adding formatting to code + cleaner handling of B-, I- tags.
Co-authored-by: Francesco Rubbo <rubbo.francesco@gmail.com>
Co-authored-by: elk-cloner <rezakakhki.rk@gmail.com>
* Typo.
Co-authored-by: Francesco Rubbo <rubbo.francesco@gmail.com>
Co-authored-by: elk-cloner <rezakakhki.rk@gmail.com>
* Add 3D attention mask to T5 model (#9643)
Added code for 3D attention mask in T5 model. Similar to BERT model.
* Add test for 3D attention mask
Added test for 3D attention mask: test_decoder_model_past_with_3d_attn_mask()
3D attention mask of the shape [Batch_size, Seq_length, Seq_length] both for
attention mask and decoder attention mask. Test is passing.
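A hedged sketch of passing a 3D attention mask of shape [batch_size, seq_length, seq_length] to T5, mirroring the shape described above; the checkpoint and inputs are placeholders and the mask is just a broadcast of the usual 2D padding mask:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

enc = tokenizer("translate English to German: Hello there", return_tensors="pt")
batch_size, seq_length = enc["input_ids"].shape

# Expand the 2D padding mask into a per-query 3D mask; any custom 3D pattern
# with the same shape is handled the same way.
attention_mask_3d = enc["attention_mask"][:, None, :].expand(batch_size, seq_length, seq_length)

outputs = model(
    input_ids=enc["input_ids"],
    attention_mask=attention_mask_3d,
    decoder_input_ids=enc["input_ids"],
)
```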
* improve slow class tok usage at xlm rob
* add subword regularization for barthez
* improve barthez tok. test
* fix tokenizer tests
* add subword regularization for camembert
* add subword regularization for deberta v2 tokenizer
* add more doc to deberta v2 tokenizer
* add subword regularization for speech to text tok.
* fix sp_model_kwargs type in speech 2 text tok.
* add subword regularization for M2M100 tok.
* add more concrete type hints
* fix tests for m2m100 and s2t tok.
* add missing Any import
* fix syntax error in m2m100 tok.
* fix unpickle of m2m100 and s2t tok.
* fix test of m2m100 and s2t tok.
* improve unpickle of deberta v2 tok.
* add test for pickle of barthez & camembert
* fix pickle of barthez & camembert
* add test for deberta v2 tok. pickle
* fix m2m100 tok. pickle
* fix s2t tok. pickle
* add subword regularization to albert tok.
* refactor subword reg. test into TokenizerTesterMixin
improve albert tok. test
remove sample argument from albert tok.
check subword reg. using TokenizerTesterMixin
improve tok. tests
improve xlm roberta tok. tests
improve xlm roberta tok. tests
* add subword regularization for big bird t.
* improve xlm roberta tok. test
* add subword regularization for mbart50 tok.
* add subword regularization for pegasus tok.
* add subword regularization for reformer tok.
* add subword regularization for T5 tok.
* fix t5 tok. test formatting
* add subword regularization for xlm_proph. tok.
* add subword regularization for xlnet tok.
* add subword regularization for bert_gen tok.
* add typing to tokenizers
* add typing to xlm rob. tok
* add subword regularization for marian tok.
* add reverse tok. test
* fix marian tok test
* fix marian tok test
* fix casing in tok. tests
* fix style of tok. common test
* fix deberta v2 tok test
* add type annotations to tok. tests
* add type annotations to tok. __init__
* add typing to tokenizer
* add type annotations to tok. __init__
* don't specify the default when it's None
* fix barthez tok. doc
* move sentencepiece tok. tests to TokenizerTesterMixin
* fix unused imports
* fix albert tok. test
* add comment to sentencepiece test options
* fix Any import at big bird tok.
* fix Any import at xlm prophetnet tok.
* empty commit to trigger CI
* Improve docs of DeiT and ViT, add community notebook
* Add gitignore for test_samples
* Add notebook with Trainer
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Autogenerate model cards from the Trainer
* ModelCard deprecated
* Fix test
* Style
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Address review comments
* Quality
* With all metadata
* Metadata
* Post-merge conflict mess
* Data args and all examples
* Default license and languages when possible
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* added fix to decode function. added test to qa pipeline tests
* completed topk docstring
* fixed formatting with black
* applied style_doc to fix line length
* Added Big Bird Fast Tokenizer initial file
* style fixes
* flake fixes
* Added big bird fast tokenizer to init files
* Added big bird fast to Auto tokenization
* fix styles
* minor quality fixes
* Added initial test code
* Fix SpmConverter when precompiled_charsmap doesn't exist
* fixed post processor
* minor style fix
* minor fix input names
* Actually fix identity normalization
* style
* Added token type ids to fast tokenizer
* style
* flake fix
* fix copies
Co-authored-by: Anthony MOI <m.anthony.moi@gmail.com>
* Add the ImageClassificationPipeline
* Code review
Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
* Have `load_image` at the module level
Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
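A hedged usage sketch of the new ImageClassificationPipeline; the checkpoint and image path are placeholders, and inputs go through the module-level `load_image` helper mentioned above:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")
# Accepts a local path, a URL, or a PIL.Image.
print(classifier("cat.jpg", top_k=3))
```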
* Set generator in dataloader
* Use generator in all random samplers
* Checkpoint all RNG states
* Final version
* Quality
* Test
* Address review comments
* Quality
* Remove debug util
* Add python and numpy RNGs
* Split states in different files in distributed
* Quality
* local_rank for TPUs
* Only use generator when accepted
* Add test
* Set seed to avoid flakiness
* Make test less flaky
* Quality
* add electra model to flax
* Remove Electra Next Sentence Prediction model added by mistake
* fix parameter sharing and loosen equality threshold
* fix styling issues
* add back mistakenly removed imports
* fix electra table
* Add FlaxElectra to automodels and fixe docs
* fix issues pointed out the PR
* fix flax electra to comply with latest changes
* remove stale class
* add copied from
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add flax roberta
* make style
* correct initialization
* modify model to save weights
* fix copied from
* fix copied from
* correct some more code
* add more roberta models
* Apply suggestions from code review
* merge from master
* finish
* finish docs
Co-authored-by: Patrick von Platen <patrick@huggingface.co>
* Rebase with master
* Minor bug fix in docs
* Copy files from adding_luke_v2 and improve docs
* change the default value of use_entity_aware_attention to True
* remove word_hidden_states
* fix head models
* fix tests
* fix the conversion script
* add integration tests for the pretrained large model
* improve docstring
* Improve docs, make style
* fix _init_weights for pytorch 1.8
* improve docs
* fix tokenizer to construct entity sequence with [MASK] entity when entities=None
* Make fix-copies
* Make style & quality
* Bug fixes
* Add LukeTokenizer to init
* Address most comments by @patil-suraj and @LysandreJik
* rename _compute_extended_attention_mask to get_extended_attention_mask
* add comments to LukeSelfAttention
* fix the documentation of the tokenizer
* address comments by @patil-suraj, @LysandreJik, and @sgugger
* improve docs
* Make style, quality and fix-copies
* Improve docs
* fix docs
* add "entity_span_classification" task
* update example code for LukeForEntitySpanClassification
* improve docs
* improve docs
* improve the code example in luke.rst
* rename the classification layer in LukeForEntityClassification from typing to classifier
* add bias to the classifier in LukeForEntitySpanClassification
* update docs to use fine-tuned hub models in code examples of the head models
* update the example sentences
* Make style & quality
* Add require_torch to tokenizer tests
* Add require_torch to tokenizer tests
* Address comments by @sgugger and add community notebooks
* Make fix-copies
Co-authored-by: Ikuya Yamada <ikuya@ikuya.net>
* prep for deepspeed==0.3.16
* new version
* too soon
* support and test fp32 mode
* troubleshooting doc start
* workaround no longer needed
* add fp32 doc
* style
* cleanup, add tf32 note
* clarify
* release was made
* Adding `AutomaticSpeechRecognitionPipeline`.
- Because we added everything to enable this pipeline, we probably
should add it to `transformers`.
- This PR tries to limit the scope and focuses only on the pipeline part
(what should go in, and out).
- The tests are very specific for S2T and Wav2vec2 to make sure both
architectures are supported by the pipeline. We don't use the mixin for
tests right now, because that requires more work in the `pipeline`
function (will be done in a follow up PR).
- Unsure about the "helper" function `ffmpeg_read`. It makes a lot of
sense from a user perspective, it does not add any additional
dependencies (as in hard dependency, because users can always use their
own load mechanism). Meanwhile, it feels slightly clunky to have so much
optional preprocessing.
- The pipeline is not done to support streaming audio right now.
Future work:
- Add `automatic-speech-recognition` as a `task`. And add the
FeatureExtractor.from_pretrained within `pipeline` function.
- Add small models within tests
- Add the Mixin to tests.
- Make the logic between ForCTC vs ForConditionalGeneration better.
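A hedged sketch of using the new pipeline class directly, since registering the `automatic-speech-recognition` task shortcut is listed above as follow-up work; the checkpoint and audio path are placeholders:

```python
from transformers import (
    AutomaticSpeechRecognitionPipeline,
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

asr = AutomaticSpeechRecognitionPipeline(
    model=model,
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
)
# A filename/bytes goes through the optional ffmpeg_read helper; a raw 1D
# np.ndarray waveform can be passed directly instead.
print(asr("sample.flac"))
```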
* Update tests/test_pipelines_automatic_speech_recognition.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Adding docs + main import + type checking + LICENSE.
* Doc style !.
* Fixing TYPE_HINT.
* Specifying waveform shape in the docs.
* Adding asserts + specify in the documentation the shape of the input
np.ndarray.
* Update src/transformers/pipelines/automatic_speech_recognition.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Adding require to tests + move the `feature_extractor` doc.
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* removed max_len
* removed max_length from BeamSearchScorer
* correct max length
* finish
* del vim
* finish & add test
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add cross_attn_head_mask to BART
* Fix cross_attentions in TFBart-like models
* This commit enables returning of `cross_attentions`
for TFBart-like models
* It also fixes attention head masking in the cross-attention module
* Update TF model templates
* Fix missing , in TF model templates
* Fix typo: congig -> config
* removes the creation of separate config objects and uses the existing ones instead + overwrites resize_token_embeddings from the parent class because it is not working for the EncoderDecoderModel
* rollback to current version of the huggingface master branch
* reworked version that ties the encoder and decoder config of the parent encoderdecoder instance
* overwrite of resize_token_embeddings throws an error now
* review comment suggestion
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* implemented a warning in case an EncoderDecoderModel is created with an encoder or decoder config that diverges from the EncoderDecoderConfig
* added test to avoid diverging configs of wrapper class and wrapped classes
* Update src/transformers/models/encoder_decoder/modeling_encoder_decoder.py
* make style
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add head_mask & decoder_head_mask + some corrections
* Fix head masking for N-grams
* Enable test_headmasking for encoder and decoder
* Fix one typo in modeling_prophetnet.py
* Enable test_headmasking for ProphetNetStandaloneDecoderModelTest
and ProphetNetStandaloneEncoderModelTest in test_modeling_prophetnet.py
* make style
* Fix cross_head_mask
* Fix attention head mask naming
* `cross_head_mask` -> `cross_attn_head_mask`
* `cross_layer_head_mask` -> `cross_attn_layer_head_mask`
* Still need to merge #10605 to master to pass the tests
* Fix cross-attention head mask for Torch BART models
* Fix head masking for cross-attention module for the following
models: BART, Blenderbot, Blenderbot_small, M2M_100, Marian, MBart,
Pegasus
* Enable test_headmasking for M2M_100 model
* Fix cross_head_mask for FSMT, LED and T5
* This commit fixes `head_mask` for cross-attention modules
in the following models: FSMT, LED, T5
* It also contains some smaller changes in the docs so that
it is perfectly clear that the shape of `cross_head_mask`
is the same as that of `decoder_head_mask`
* Update template
* Fix template for BartForCausalLM
* Fix cross_head_mask for Speech2Text models
* Fix cross_head_mask in templates
* Fix args order in BartForCausalLM template
* Fix doc in BART templates
* Make more explicit naming
* `cross_head_mask` -> `cross_attn_head_mask`
* `cross_layer_head_mask` -> `cross_attn_layer_head_mask`
* Fix doc
* make style quality
* Fix speech2text docstring
* Initial support for upload to hub
* push -> upload
* Fixes + examples
* Fix torchhub test
* Torchhub test I hate you
* push_model_to_hub -> push_to_hub
* Apply mixin to other pretrained models
* Remove ABC inheritance
* Add tests
* Typo
* Run tests
* Install git-lfs
* Change approach
* Add push_to_hub to all
* Staging test suite
* Typo
* Maybe like this?
* More deps
* Cache
* Adapt name
* Quality
* MOAR tests
* Put it in testing_utils
* Docs + torchhub last hope
* Styling
* Wrong method
* Typos
* Update src/transformers/file_utils.py
Co-authored-by: Julien Chaumond <julien@huggingface.co>
* Address review comments
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Julien Chaumond <julien@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
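A hedged sketch of the `push_to_hub` mixin applied to pretrained objects; the repo name is a placeholder and valid Hub credentials (e.g. via git-lfs login) are assumed:

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("bert-base-cased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Both calls create (or reuse) the repo on the Hub and upload the serialized files.
model.push_to_hub("my-username/my-bert-copy")
tokenizer.push_to_hub("my-username/my-bert-copy")
```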
* Base move
* Examples reorganization
* Update references
* Put back test data
* Move conftest
* More fixes
* Move test data to test fixtures
* Update path
* Apply suggestions from code review
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Address review comments and clean
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Removed `max_length` from being mandatory within `generate`.
- Moving on to fully using `StoppingCriteria` for `greedy` and `sample`
modes.
- `max_length` still used for `beam_search` and `group_beam_search`
(Follow up PR)
- Fixes a bug with MaxLengthStoppingCriteria (we should stop as soon as
we hit the max_length, so the comparison needs to be "or equal"; that affects
the tests).
- Added options to use `logits_processor` and `stopping_criteria`
directly within `generate` function (so some users can define their own
`logits_processor` and `stopping_criteria`).
- Modified the backward compat tests to make sure we issue a warning.
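A hedged sketch of driving decoding with a stopping criterion instead of a mandatory `max_length`, assuming the lower-level `greedy_search` entry point of that era; the checkpoint and prompt are placeholders:

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    MaxLengthCriteria,
    StoppingCriteriaList,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
input_ids = tokenizer("The quick brown fox", return_tensors="pt").input_ids

# Stop as soon as the sequence reaches 20 tokens (>= comparison, per the fix above).
criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])
output_ids = model.greedy_search(input_ids, stopping_criteria=criteria)
print(tokenizer.decode(output_ids[0]))
```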
* Fix `max_length` argument in `generate`.
* Moving validate to being functional.
- Renamed `smax_length` to `stoppping_max_length`.
* Removing `logits_processor` and `stopping_criteria` from `generate`
arguments.
* Deepcopy.
* Fix global variable name.
* Bulk of the work
* Polish and tests
* Update QA Trainer
* Avoid breaking the predict method
* Deprecation warnings
* Store real eval dataloder
* Get eval dataset reference before wrap
* [WIP] Enabling multilingual models for translation pipelines.
* decoder_input_ids -> forced_bos_token_id
* Improve docstring.
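A hedged usage sketch of the multilingual translation pipeline described above; the checkpoint and language codes are placeholders, and the pipeline builds the language tokens and `forced_bos_token_id` through the tokenizer:

```python
from transformers import pipeline

translator = pipeline("translation", model="facebook/m2m100_418M")
# src_lang / tgt_lang are forwarded to the tokenizer's translation-input builder.
print(translator("The house is wonderful.", src_lang="en", tgt_lang="fr"))
```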
* Rebase
* Fixing 2 bugs
- Type of token_ids coming from `_parse_and_tokenize`
- Wrong index from tgt_lang.
* Fixing black version.
* Adding tests for _build_translation_inputs and add them for all
tokenizers.
* Mbart actually puts the lang code at the end.
* Fixing m2m100.
* Adding TF support to `deep_round`.
* Update src/transformers/pipelines/text2text_generation.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Adding one line comment.
* Fixing M2M100 `_build_translation_input_ids`, and fix the call site.
* Fixing tests + deep_round -> nested_simplify
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* First draft of deit
* More improvements
* Remove DeiTTokenizerFast from init
* Conversion script works
* Add DeiT to ViT conversion script
* Add tests, add head model, add support for deit in vit conversion script
* Update model checkpoint names
* Update image_mean and image_std, set resample to bicubic
* Improve docs
* Docs improvements
* Add DeiTForImageClassificationWithTeacher to init
* Address comments by @sgugger
* Improve feature extractors
* Make fix-copies
* Minor fixes
* Address comments by @patil-suraj
* All models uploaded
* Fix tests
* Remove labels argument from DeiTForImageClassificationWithTeacher
* Fix-copies, style and quality
* Fix tests
* Fix typo
* Multiple docs improvements
* More docs fixes
* Add a special tokenizer for CPM model
* make style
* fix
* Add docs
* styles
* cpm doc
* fix ci
* fix the overview
* add test
* make style
* typo
* Custom tokenizer flag
* Add README.md
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
* AutoFeatureExtractor
* Init and first tests
* Tests
* Damn you gitignore
* Quality
* Defensive test for when not all backends are here
* Use pattern for Speech2Text models
* better names
* add attention mixin
* all slow tests in one class
* make helper methods static so we can test
* add local attention tests
* better names
* doc
* apply review suggestions
* Replace pkg_resources with importlib_metadata
Fixes #10964. The other reason for this change is that pkg_resources has been [deprecated](8fe85c22ce) in favor of importlib_metadata.
* Reduce to a single importlib_metadata import switch
* Trigger CI
Co-authored-by: Stas Bekman <stas@stason.org>
* Documentation about loading a fast tokenizer within Transformers
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* style
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* closes #10258
* typo
* reworked deberta test
* implemented the comments from BigBird01 regarding sequence pair encoding of deberta
* Update style
* VOCAB_FILES_NAMES is now a one-liner as suggested by @sgugger
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* added `# fmt: on` as requested by @sgugger
* Style
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Squash all commits into one
* Update ViTFeatureExtractor to use image_utils instead of torchvision
* Remove torchvision and add Pillow
* Small docs improvement
* Address most comments by @sgugger
* Fix tests
* Clean up conversion script
* Pooler first draft
* Fix quality
* Improve conversion script
* Make style and quality
* Make fix-copies
* Minor docs improvements
* Should use fix-copies instead of manual handling
* Revert "Should use fix-copies instead of manual handling"
This reverts commit fd4e591bce.
* Place ViT in alphabetical order
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Replace is_sagemaker_distributed_available
* Merge SageMakerTrainer into Trainer
* Test with shorter condition
* Put back deleted line
* Deprecate SageMakerTrainer and SageMakerTrainingArguments
* Apply suggestions from code review
Co-authored-by: Philipp Schmid <32632186+philschmid@users.noreply.github.com>
Co-authored-by: Philipp Schmid <32632186+philschmid@users.noreply.github.com>
* init
* first working test
* added todo for setup.py
* working test for single node multi node ddp and smd
* added tensorflow single node test
* added directory for pytorch and tensorflow due to different requirements.txt
* added directory for pytorch and tensorflow
* added comment for run_glue until it is available
* added output_dir to it
* smaller dataset to make test running faster
* adjust HP and script
* adjusted parameter for tensorflow
* refactored test scripts
* adjusted make file
* updated dlc container
* commented in all tests
* added both ecr images
* added new master branches
* debug
* added new datasets version
* init
* strange rebase bug
* removed changes
* changed min version for tests to work
* updated DLC
* added model parallel test
* removed test files
* removed test files
* tested with new DLC
* added correct sagemaker sdk version
* adjust DLCs for official one
* reworked tests
* quality
* removed default profile added documentation to it
* added step in release for sagemaker tests
* reverted version for example script, removed duplicated script and added install from master to requirements.txt
* removed mistaken .DS_Stores from mac
* fixed tests
* added Sylvains feedback
* make style
* added lysandre's feedback
* Added embeddings layer
* Added layoutlm layers, main model, maskedlm and token classification classes
* Added model classes to tf auto models
* Added model to PT to TF conversion script
* Added model to doc README
* Added tests
* Removed unused imports
* Added layoutlm model, test, and doc for sequence classification, and fix imports in __init__.py
* Made tests pass!
* Fixed typos in imports and docs
* Fixed a typo in embeddings layer
* Removed imports
* Fixed formatting issues, imports, tests
* Added layoutlm layers, main model, maskedlm and token classification classes
* Added model classes to tf auto models
* Added model to PT to TF conversion script
* Removed unused imports
* Added layoutlm model, test, and doc for sequence classification, and fix imports in __init__.py
* Made tests pass!
* Fixed typos in imports and docs
* Removed imports
* Fixed small formatting issues
* Removed duplicates import from main __init__.py
* Changed default arg to true for adding pooling layer to tf layoutlm
* Fixed formatting issues
* Style
* Added copied from to classes copied from bert
* Fixed doc strings examples to work with layoutlm inputs
* Removed PyTorch reference in doc strings example
* Added integration tests
* Cleaned up initialization file
* Updated model checkpoint identifiers
* Fixed imports
Co-authored-by: Amir Tahmasbi <amir@ehsai.ca>
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
* fix backend tokenizer args override: key mismatch
* no touching the docs
* fix mpnet
* add mpnet to test
* fix test
Co-authored-by: theo <theo@matussie.re>
* Added check to ensure model name passed to from_pretrained and model are the same
* Added test to check from_pretrained throws assert error when passed an incompatible model name
* Modified assert in from_pretrained with f-strings. Modified test to ensure desired assert message is being generated
* Added check to ensure config and model has model_type
* Fix FlauBERT heads
Co-authored-by: vimarsh chaturvedi <vimarsh chaturvedi>
Co-authored-by: Stas Bekman <stas@stason.org>
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
* Create modeling_flax_electra with code copied from modeling_flax_bert
* Add ElectraForMaskedLM and ElectraForPretraining
* Add modeling test for Flax electra and fix naming and arg in Flax Electra model
* Add documentation
* Fix code style
* Fix code quality
* Adjust tol in assert_almost_equal due to very small difference between model output, ranging 0.0010 - 0.0016
* Remove redundant ElectraPooler
* save intermediate
* adapt
* correct bert flax design
* adapt roberta as well
* finish roberta flax
* finish
* apply suggestions
* apply suggestions
Co-authored-by: Chris Nguyen <anhtu2687@gmail.com>
* Added debug prints
* Added config
* Added prints
* Added prints
* Added extra samples to SequentialDistributedSampler
* Added extra samples to SequentialDistributedSampler
Updated SequentialDistributedSampler call
* Added debug prints
* Removed extra prints
* Making predictions and labels a multiple of batch size
* updated number of microbatches
* Removed extra prints
* Made start_remainder similar to DistributedSamplerWithLoop
* Minor spacing update
* Added debug prints
Added config
Added prints
Added prints
* Added extra samples to SequentialDistributedSampler
Updated SequentialDistributedSampler call
Added extra samples to SequentialDistributedSampler
Added debug prints
Removed extra prints
Making predictions and labels a multiple of batch size
updated number of microbatches
Removed extra prints
Squashing redundant commits
* Made start_remainder similar to DistributedSamplerWithLoop
Minor spacing update
Made start_remainder similar to DistributedSamplerWithLoop
* Test and styling
* Rename test
Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
* Apply black before checking copies
* Fix for class methods
* Deal with lonely brackets
* Remove debug and add forward changes
* Separate copies and fix test
* Add black as a test dependency
* Tests run on Docker
Co-authored-by: Morgan <funtowiczmo@gmail.com>
* Comments from code review
* Reply to itself
* Dependencies
Co-authored-by: Morgan <funtowiczmo@gmail.com>
* [WIP] Adding new parameter to `generate`: `max_time`.
Generation by number of tokens is sometimes a bit clunky because we don't
know how many tokens are good enough or even how many tokens are in
the payload (for pipeline users for instance). This leads to hard-to-understand
behavior.
This PR proposes a new argument `max_time` which is a float of seconds
for the allowed time for `generate` to run on.
Ideally combinations of `max_tokens=None`, `max_time=2` could be used to
generate as many tokens as possible within time budget.
NB: Another possible approach consists of passing a callback to `generate`
putting the caller in charge of the actual decision of when to stop
generating tokens. It opens the door to 'which args should we pass'
to this callback. It's hard to imagine other use-cases for this
early stopping behavior than time (that are not already covered by
parameters of generate)
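A hedged usage sketch of the proposed time budget; the checkpoint, prompt and limits are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Once upon a time", return_tensors="pt")

# Generate for at most ~2 seconds of wall-clock time, whatever token count that allows.
output_ids = model.generate(**inputs, do_sample=True, max_length=200, max_time=2.0)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```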
* Revamp with StoppingCriteria
* Removing deprecated mentions.
* Forgot arguments to stopping criteria.
* Re-adding max_length; it's not just used as a stopping criterion.
* Default value for `stopping_criteria`.
* Address @patrickvonplaten comments.
- More docstrings
- Actual doc
- Include in global namespace
- Remove TF work.
* Put back `max_length` (deprecation different PR).
* Doc quality.
* Fixing old behavior without `stopping_criteria` but with `max_length`.
Making sure we don't break that in the future.
* Adding more tests for possible inconsistencies between
`max_length` and `stopping_criteria`.
* Fixing the torch imports.
* Allow to pass kwargs to model's from_pretrained when using pipeline.
* Disable the use of past_keys_values for GPT2 when exporting to ONNX.
* style
* Remove comment.
* Appease the documentation gods
* Fix style
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
* Create modeling_tf_dpr.py
* Add TFDPR
* Add back TFPegasus, TFMarian, TFMBart, TFBlenderBot
last commit accidentally deleted these 4 lines, so I am adding them back
* Add TFDPR
* Add TFDPR
* clean up some comments, add TF input-style doc string
* Add TFDPR
* Make return_dict=False as default
* Fix return_dict bug (in .from_pretrained)
* Add get_input_embeddings()
* Create test_modeling_tf_dpr.py
The current version has already passed all 27 tests!
Please see the test run at :
https://colab.research.google.com/drive/1czS_m9zy5k-iSJbzA_DP1k1xAAC_sdkf?usp=sharing
* fix quality
* delete init weights
* run fix copies
* fix repo consis
* del config_class, load_tf_weights
They should be 'pytorch only'
* add config_class back
after removing it, a test failed ... so in the end only removing "use_tf_weights = None" on Lysandre's suggestion
* newline after .. note::
* import tf, np (Necessary for ModelIntegrationTest)
* slow_test from_pretrained with from_pt=True
At the moment we don't have TF weights (since we don't have an official TF model)
Previously, I did not run slow test, so I missed this bug
* Add simple TFDPRModelIntegrationTest
Note that this is just a test that TF and Pytorch give approx. the same output.
However, I could not test with the official DPR repo's output yet
* upload correct tf model
* remove position_ids as missing keys
* create modeling_tf_rag
* add tests for tf
* add tf tests
* revert wrong pt commit
* further refactor
* further refactor
* refactor
* Update modeling_tf_rag.py
- input_processing
- fix prepare_input_for_generation (mostly fix generate bug)
- bring back from_pretrained hack in order to test generate
* delete colab pieces of code
* Showcase greedy "generate"
Temporarily change from beam_search test to greedy_search test to showcase that TF and PT do get equivalent output.
* cosmetic update
* correct typos
* update
* push some progress
* make easy check
* fix rag save from pretrained
* Update src/transformers/modeling_tf_utils.py
* remove commented out lines
* delete unnecessary lines
* add simple test case for nq_checkpoint
Add nq_checkpoint test to show that current version without hack still fails
* temporarily put ugly hack back again
* Add TFRagSequenceForGeneration!!
* __init__.py , import TFRagSequenceForGeneration
* Add TFRagSequence tests!
* rag init.py - add TFRagSequenceForGeneration
* fix from_pretrained
* fix prepare_inputs_for_generation
* Beam search for RagToken!
* minor clean up
* add tf.cast in TFRagModel
* More tf.cast
* Add all remaining tests (still have issues)
* delete all T5 related
* make style
* fix load weight prefix
* fix bart
* fix return_dict for tf_rag
make all tests pass .. Hooray
* fix some tests
* fix code quality
* fix quality check
* finish tests tf rag
* add tf rag to docs
* remove TFT5 from docstring
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* remove TFT5 from docstring
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Delete outdated comments
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* improve doc strings
* add generative model classes
* fix adjust token logic
* refactor generate for TFRag
* using shape_list, not _get_shape
Co-authored-by: Julien Plu <plu.julien@gmail.com>
* axis=[1]->axis=1
* delete NEED_HELP comment
* improve readability
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* improve readability
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* improve readability
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Indicating model is in a developing state in docstrings
As suggested by Julien
* small last changes
* apply sylvains suggestions
* finish tf rag
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: patrickvonplaten <patrick@huggingface.co>
Co-authored-by: Julien Plu <plu.julien@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* offline mode start
* add specific values
* fix fallback
* add test
* better values check and range
* test that actually works
* document the offline mode
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* more strict check
* cleaner test
* pt-only test
* style
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
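A minimal sketch of the documented offline mode, assuming the `TRANSFORMERS_OFFLINE` environment variable; the checkpoint is a placeholder and must already be in the local cache:

```python
import os

# Must be set before transformers is imported, since the flag is read at import time.
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModel

# Resolves everything from the local cache; no network calls are attempted.
model = AutoModel.from_pretrained("bert-base-cased")
```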
* Introduce save_strategy training argument
* deprecate EvaluationStrategy
* collapse EvaluationStrategy and LoggingStrategy into a single
IntervalStrategy enum
* modify tests to use modified enum
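A hedged sketch of the new `save_strategy` argument and the unified IntervalStrategy values; the output directory and step counts are placeholders:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",  # IntervalStrategy value; EvaluationStrategy is deprecated
    save_strategy="steps",
    eval_steps=500,
    save_steps=500,
)
```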
* convbert conversion test
* fin
* fin
* fin
* clean up tf<->pt conversion
* remove from_pt
Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>