* fix_torch_device_generate_test
* remove @
* correct greedy search
* save intermediate
* add final logits bias
* correct
* up
* add more tests
* fix another bug
* finish tests
* finish marian tests
* up
Co-authored-by: Patrick von Platen <patrick@huggingface.co>
* Wrong model is used, should be character instead of subword
In the original Google repo for CANINE there was a mix-up in the model names in the README.md, which was fixed two weeks ago. Since this Transformers model was created before that fix, the wrong model probably ended up being used in this example.
s = subword, c = character
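For reference, a minimal sketch of what the corrected example comes down to, using the checkpoint names from the model cards:

```python
from transformers import CanineModel

# "google/canine-c" was pre-trained with an autoregressive character loss,
# "google/canine-s" with a subword loss; the example should use the character one.
model = CanineModel.from_pretrained("google/canine-c")
```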
* canine.rst style fix
* Update docs/source/model_doc/canine.rst
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Styling canine.rst
* Added links to model cards.
* Fixed links to model cards.
Co-authored-by: Jeroen Steggink <978411+jsteggink@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Lay the groundwork for more flexible ONNX export capabilities
* Ability to provide a map of config key to override before exporting.
* Makes it possible to export BART with/without past keys.
* Supports simple mathematical syntax for OnnxVariable.repeated
* Effectively apply value override from onnx config for model
* Supports export with additional features such as with-past for seq2seq
* Store the output path directly in the args for uniform usage.
* Make BART_ONNX_CONFIG_* constants and fix imports.
* Support BERT model.
* Use tokenizer for more flexibility in defining the inputs of a model.
* Add TODO as a reminder to provide the batch/sequence_length as CLI args
* Enable optimizations to be done on the model.
* Enable GPT2 + past
* Improve model validation with outputs containing nested structures
* Enable Roberta
* Enable Albert
* Albert requires opset >= 12
* BERT-like models require opset >= 12
* Remove double printing.
* Enable XLM-Roberta
* Enable DistilBERT
* Disable optimization by default
* Fix missing setattr when applying optimizer_features
* Add value field to OnnxVariable to define constant input (not from tokenizers)
* Add T5 support.
* Simplify model type retrieval
* Example exporting token_classification pipeline for DistilBERT.
* Refactoring to package `transformers.onnx`
* Solve circular dependency & __main__
* Remove unnecessary imports in `__init__`
* Licences
* Use @Narsil's suggestion to forward the model's configuration to the ONNXConfig to avoid interpolation.
* Onnx export v2 fixes (#12388)
* Tiny fixes
Remove `convert_pytorch` from onnxruntime-less runtimes
Correct reference to model
* Style
* Fix Copied from
* LongFormer ONNX config.
* Removed optimizations
* Remove bad merge relics.
* Remove unused constants.
* Remove some deleted constants from imports.
* Fix unittest to remove usage of PyTorch model for onnx.utils.
* Fix distilbert export
* Enable ONNX export tests for supported models.
* Style.
* Fix lint.
* Enable all supported default models.
* GPT2 only has one output
* Fix bad property name when overriding config.
* Added unittests and docstrings.
* Disable with_past tests for now.
* Enable outputs validation for default export.
* Remove graph opt lvls.
* Last commit with on-going past commented.
* Style.
* Disabled `with_past` for now
* Remove unused imports.
* Remove framework argument
* Remove TFPreTrainedModel reference
* Add documentation
* Add onnxruntime tests to CircleCI
* Add test
* Rename `convert_pytorch` to `export`
* Use OrderedDict for dummy inputs
* WIP Wav2Vec2
* Revert "WIP Wav2Vec2"
This reverts commit f665efb04c92525c3530e589029f0ae7afdf603e.
* Style
* Use OrderedDict for I/O
* Style.
* Specify OrderedDict documentation.
* Style :)
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
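Taken together, the ONNX commits above put the new export behind `python -m transformers.onnx`. A rough programmatic sketch of the resulting API; the `DistilBertOnnxConfig` import path and the exact `export` signature are assumptions based on the commits above, so treat both as approximate:

```python
from pathlib import Path

from transformers import AutoModel, AutoTokenizer
from transformers.models.distilbert import DistilBertOnnxConfig  # assumed location
from transformers.onnx import export

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

# The OnnxConfig carries the model's input/output spec as OrderedDicts and is
# built from the model's configuration directly (no interpolation needed).
onnx_config = DistilBertOnnxConfig(model.config)

# export() replaces the old convert_pytorch(); it traces the model with
# tokenizer-generated dummy inputs and writes the ONNX graph to disk.
onnx_inputs, onnx_outputs = export(
    tokenizer, model, onnx_config, opset=12, output=Path("onnx/model.onnx")
)
```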
* Copy BART to MBart and rename some stuff
* Add copy statements pointing to FlaxBart
* Update/add some common files
* Update shift_tokens_right + fix imports
* Fix shift_tokens_right method according to MBart implementation
* Update shift_tokens_right in tests accordingly
* Fix the import issue and update docs file
* make style quality
* Do some minor changes according to patil-suraj suggestions
* Change the order of normalization layer and attention
* Add some copy statements
* Update generate method and add integration test for mBart
* Make a few updates after a review
Also add `lang_code_to_id` to MBartTokenizerFast
* fix-copies; make style quality
* Apply suggestions from code review
* Apply suggestions from code review
* Apply suggestions from code review
* fix output type, style
* add copied from
* resolve conflicts
Co-authored-by: Suraj Patil <surajp815@gmail.com>
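The MBart-specific `shift_tokens_right` differs from BART's: instead of prepending a fixed decoder start token, the last non-pad token (the language id) wraps around to position 0. A numpy sketch of that behavior, as a hedged illustration rather than the exact Flax code:

```python
import numpy as np

def shift_tokens_right(input_ids: np.ndarray, pad_token_id: int) -> np.ndarray:
    """MBart-style shift: the last non-pad token (the language id) wraps to index 0."""
    prev_output_tokens = np.array(input_ids)

    # position of the last non-pad token (the language id) in every row
    index_of_eos = (prev_output_tokens != pad_token_id).sum(axis=-1, keepdims=True) - 1
    decoder_start_tokens = np.take_along_axis(prev_output_tokens, index_of_eos, axis=-1)

    # shift everything one position to the right, then wrap the language id to the front
    prev_output_tokens[:, 1:] = prev_output_tokens[:, :-1].copy()
    prev_output_tokens[:, 0] = decoder_start_tokens.squeeze(-1)
    return prev_output_tokens
```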
* First pass
* More progress
* Add support for local attention
* More improvements
* More improvements
* Conversion script working
* Add CanineTokenizer
* Make style & quality
* First draft of integration test
* Remove decoder test
* Improve tests
* Add documentation
* Mostly docs improvements
* Add CanineTokenizer tests
* Fix most tests on GPU, improve upsampling projection
* Address most comments by @dhgarrette
* Remove decoder logic
* Improve Canine tests, improve docs of CanineConfig
* All tokenizer tests passing
* Make fix-copies and fix tokenizer tests
* Fix test_model_outputs_equivalence test
* Apply suggestions from @sgugger's review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Address some more comments
* Add support for hidden_states and attentions of shallow encoders
* Define custom CanineModelOutputWithPooling, tests pass
* Make conversion script work for Canine-c too
* Fix tokenizer tests
* Remove file
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
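Since CANINE is tokenization-free, the end result of this PR can be summarized with a short usage sketch (illustrative only; checkpoint name per the model cards):

```python
import torch
from transformers import CanineModel, CanineTokenizer

tokenizer = CanineTokenizer.from_pretrained("google/canine-s")
model = CanineModel.from_pretrained("google/canine-s")

# CANINE operates directly on characters: input ids are Unicode code points.
inputs = tokenizer("Life is like a box of chocolates.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # a CanineModelOutputWithPooling

sequence_output = outputs.last_hidden_state  # one vector per character
pooled_output = outputs.pooler_output
```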
* copy pytorch-t5
* init
* boom boom
* forward pass same
* make generation work
* add more tests
* make test work
* finish normal tests
* make fix-copies
* finish quality
* correct slow example
* correct slow test
* version table
* upload models
* Update tests/test_modeling_flax_t5.py
* correct incorrectly deleted line
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Patrick von Platen <patrick@huggingface.co>
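A usage sketch for the finished Flax port, assuming the uploaded Flax weights for `t5-small` ("upload models" above):

```python
from transformers import FlaxT5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = FlaxT5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer(
    "translate English to German: The house is wonderful.", return_tensors="np"
).input_ids

# generation (greedy by default) now works for the Flax port
outputs = model.generate(input_ids, max_length=32)
print(tokenizer.decode(outputs.sequences[0], skip_special_tokens=True))
```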
* [run_clm.py] restore caching
* style
* [t5 doc] make the example work out of the box
This PR expands the training example to include the correct model type, which is needed for the example to work; with `T5Model`, for instance, the example would break.
* Update docs/source/model_doc/t5.rst
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* expand the other example
Co-authored-by: Suraj Patil <surajp815@gmail.com>
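For context, the point of the doc fix is that only the LM-head variant computes a loss from `labels`; roughly (mirroring the docs example, details may differ):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
# T5ForConditionalGeneration, not T5Model: the plain model has no LM head
# and cannot compute a loss from labels.
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer(
    "translate English to German: The house is wonderful.", return_tensors="pt"
).input_ids
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

loss = model(input_ids=input_ids, labels=labels).loss
```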
- Convert use of deprecated AutoModelWithLMHead to AutoModelForSeq2SeqLM
- Add newly required `truncation=True` to `tokenizer.encode` with `max_length`
This silences all warnings.
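In code, the two changes amount to something like this (checkpoint and text are placeholders):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# AutoModelWithLMHead is deprecated; the seq2seq-specific auto class replaces it.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

long_text = "..."  # placeholder for the document to summarize

# max_length alone now warns; passing truncation=True silences the warning.
input_ids = tokenizer.encode(
    "summarize: " + long_text, max_length=512, truncation=True, return_tensors="pt"
)
```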
* [WIP] Add TFWav2Vec2Model
Work in progress for adding a TensorFlow version of Wav2Vec2
* feedback changes
* small fix
* Test Feedback Round 1
* Add SpecAugment and CTC Loss
* correct spec augment mask creation
* docstring and correct copyright
* correct bugs
* remove bogus file
* finish tests correction
* del unnecessary layers
* Update src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
* correct final bug
* Feedback Changes
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
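A minimal usage sketch of the new TF port, assuming TF weights exist for the checkpoint (otherwise add `from_pt=True`):

```python
import numpy as np
from transformers import TFWav2Vec2Model, Wav2Vec2FeatureExtractor

checkpoint = "facebook/wav2vec2-base-960h"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(checkpoint)
model = TFWav2Vec2Model.from_pretrained(checkpoint)

# one second of silence as dummy mono audio at 16 kHz
speech = np.zeros(16_000, dtype=np.float32)
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="tf")

# SpecAugment masking and the CTC loss from these commits live in the
# training path / the TFWav2Vec2ForCTC head, not in this base model call.
hidden_states = model(inputs.input_values).last_hidden_state
```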
* Start working on FlaxBart
* Create modeling_flax_bart.py
* Write FlaxBartAttention
* Add FlaxBartEncoderLayer
* Add FlaxBartDecoderLayer and some typing
* Add helper function for FlaxBart
* shift_tokens_right
* _make_causal_mask
* _expand_mask
* Add PositionalEmbedding and fix init_std naming
* Add FlaxBartPreTrainedModel
* Add FlaxBartEncoder
* Add FlaxBartEncoder
* Add FlaxBartEncoder among modules to be imported
* YET WE CANNOT INITIALIZE THAT!! :(
* Make BartEncoder work
Change BartEncoder to an instance of nn.Module for now
* Add FlaxBartDecoder
* Add FlaxBartModel
* TODO to make model run -> Prepare model inputs
* Resolve padding
* Add FlaxBartModel
* Add FlaxBartModel into importable modules
* Remove FlaxBartEncoder and FlaxBartDecoder from importable modules
* make style; not properly working
* make style; make quality still failing due to an import I left
* Remove TODO for padding_idx in nn.Embed for now
* Add FlaxBartForConditionalGeneration
* Incorporate Flax model output classes, i.e. return_dict
* Add more models and incorporate the use_cache arg
* Add FlaxBartForSequenceClassification and FlaxBartForQuestionAnswering
* Incorporate use_cache arg from PyTorch implementation
* Add all necessary Flax output utils
* Add FlaxBartForCausalLM; not working yet
* Add minor improvements; still lacks some functionality
* Update docs, src and tests
* Add support of FlaxBart to docs/source
* Fix some bugs in FlaxBart source code
* Add some necessary tests for FlaxBart models - jit compilation not passing
* Fix tests and add test_head_masking
* Fix tests for @jax.jit computation
* Add test_head_masking
* Migrate FlaxBart tests from jax.numpy to numpy
* Remove FlaxBartForCausalLM
* Clean repo
* fix bart model weight structure
* Fix FlaxBartForSequenceClassification
Slicing cannot be used under jit, so the way the sentence
representation is selected from hidden_states must be changed.
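One jit-friendly way to pick out the sentence representation without boolean-mask slicing; an illustration of the constraint, not necessarily the code as merged:

```python
import jax.numpy as jnp

def pool_last_eos(hidden_states, input_ids, eos_token_id):
    """Select each sequence's last <eos> hidden state with static shapes.

    Boolean-mask indexing (hidden_states[input_ids == eos_token_id]) has a
    data-dependent output shape and therefore fails under jax.jit.
    """
    eos_mask = (input_ids == eos_token_id).astype(jnp.int32)  # (batch, seq)
    # argmax over the reversed mask finds the last eos while keeping shapes static
    last_eos = input_ids.shape[-1] - 1 - jnp.argmax(eos_mask[:, ::-1], axis=-1)
    return jnp.take_along_axis(hidden_states, last_eos[:, None, None], axis=1).squeeze(1)
```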
* Allow FlaxBartForSequenceClassification for testing pt_flax equivalence
* Allow testing for FlaxBartForQA for pt_flax equivalence
* Add a comment to FlaxBartForSequenceClassification + change noise from 1e-3 to 1e-6
* remove past_key_values
* remove inputs_embeds and make input_ids required
* add position ids
* re-write attention layer
* fix dataclass
* fix pos embeds and attention output
* fix pos embeds
* expose encode method
* expose decode method
* move docstring to top
* add cache for causal attn layer
* remove head masking for now
* s2s greedy search first pass
* boom boom
* fix typos
* fix greedy generate for bart
* use encoder, decoder layers instead of num_hidden_layers
* handle encoder_outputs
* cleanup
* simplify decoding
* more clean-up
* typos
* Change header + add {decoder_,}position_ids into 2 models
* add BartConfig
* fix existing tests
* add encode, decode methods
* Fix shift_tokens_right for JIT compilation + clarify one condition
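For reference, a JIT-compatible `shift_tokens_right` along these lines replaces in-place writes with functional updates and turns the `-100` condition into a `jnp.where`; a sketch, close to but not guaranteed identical to the merged code:

```python
import jax.numpy as jnp

def shift_tokens_right(input_ids, pad_token_id, decoder_start_token_id):
    shifted = jnp.zeros_like(input_ids)
    shifted = shifted.at[:, 1:].set(input_ids[:, :-1])  # functional, traceable update
    shifted = shifted.at[:, 0].set(decoder_start_token_id)
    # -100 marks ignored label positions; map it back to the pad token after shifting
    return jnp.where(shifted == -100, pad_token_id, shifted)
```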
* fix decode
* encoder => encode
* simplify generate
* add tests for encode and decode
* style
* add tests for cache
* fix equivalence tests
* sample generate now works with seq2seq
* generation tests
* initialize dense layers
* docstring and cleanup
* quality
* remove get/set input_embeddings
* address Patrick's suggestions
* decode for every model, remove encoder_outputs from call
* update tests accordingly
* decode returns only decoder outputs and logits
* fix arguments
* doc encode, decode methods
* correct base_model_prefix
* fix test for seq classif model
* fix docs
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
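The exposed `encode`/`decode` split lets generation run the encoder once and step the decoder separately; a usage sketch mirroring the docstring-style example, assuming Flax weights exist for the checkpoint:

```python
import jax.numpy as jnp
from transformers import BartTokenizer, FlaxBartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="np")

# encode() runs the encoder once...
encoder_outputs = model.encode(**inputs)

# ...decode() then returns only decoder outputs and logits, and can be
# stepped token by token during generation.
decoder_input_ids = jnp.full(
    (inputs["input_ids"].shape[0], 1), model.config.decoder_start_token_id
)
decoder_outputs = model.decode(decoder_input_ids, encoder_outputs)
logits = decoder_outputs.logits
```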