* Add test for proper input signatures
* No more signature pruning
* Test the dummy inputs are valid too (see the signature sketch after this group)
* fine-tine -> fine-tune
* Fix indent in test_dataset_conversion
* Use tied weight keys
* More
* Fix tied weight missing warning
* Only give info on unexpected keys with different classes
* Deal with empty archs
* Fix tests
* Refine test
* Fix one BLIP arg not being optional, remove misspelled arg
* Remove the lxmert test overrides and just use the base test_saved_model_creation
* saved_model_creation fixes and re-enabling tests across the board
* Remove unnecessary skip
* Stop caching sinusoidal embeddings in speech_to_text
* Fix transfo_xl compilation
* Fix transfo_xl compilation
* Fix the conditionals in xglm
* Set the save spec only when building
* Clarify comment
* Move comment correctly
* Correct embeddings generation for speech2text
* Mark RAG generation tests as @slow
* Remove redundant else:
* Add comment to clarify the save_spec line in build()
* Fix size tests for XGLM at last!
* make fixup
* Remove one band_part operation
* Mark test_keras_fit as @slow
* Revert whisper change and modify the test_compile_tf_model test
* make fixup
* Tweak test slightly
* Add functional model saving to test
* Ensure TF can infer shapes for data2vec
* Add override for efficientformer
* Mark test as slow
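For context on the signature and dummy-input tests above, here is a minimal, hypothetical sketch (the names and specs are illustrative, not the library's actual API) of what "the dummy inputs are valid" means: every dummy input must be compatible with the model's declared input signature.

```python
import tensorflow as tf

# Hypothetical input signature: the dtypes/shapes the serving function accepts.
input_signature = {
    "input_ids": tf.TensorSpec(shape=[None, None], dtype=tf.int64, name="input_ids"),
    "attention_mask": tf.TensorSpec(shape=[None, None], dtype=tf.int64, name="attention_mask"),
}

# Dummy inputs used when tracing/saving the model; they must conform to the signature.
dummy_inputs = {
    "input_ids": tf.constant([[1, 2, 3]], dtype=tf.int64),
    "attention_mask": tf.constant([[1, 1, 1]], dtype=tf.int64),
}

# Check every dummy input against its declared spec.
for name, spec in input_signature.items():
    assert spec.is_compatible_with(dummy_inputs[name]), f"dummy input {name!r} does not match its spec"
```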
* Fix LLaMA beam search when using parallelize
Same issue as T5 (#11717); see the sketch after this group
* fix code format in modeling_llama.py
* fix format of _reorder_cache in modeling_llama.py
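Since the fix mirrors the T5 issue, here is a simplified sketch of the idea (not the exact library code): under model parallelism, each layer's past key/value tensors can live on a different device, so the beam indices must be moved to that device before reordering the cache.

```python
def _reorder_cache(past_key_values, beam_idx):
    # Sketch of a static method on the model class. With parallelize, each
    # layer's cache may sit on a different GPU, so move beam_idx to the cache
    # tensor's device before gathering the beams that survived this step.
    reordered_past = ()
    for layer_past in past_key_values:
        reordered_past += (
            tuple(
                past_state.index_select(0, beam_idx.to(past_state.device))
                for past_state in layer_past
            ),
        )
    return reordered_past
```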
* Make conversion faster, fix None vs 0 bug (see the sketch after this group)
* Add second sort for consistency
* Update src/transformers/convert_slow_tokenizer.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
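A hypothetical illustration of the "None vs 0" pitfall and the second sort (the names are made up for the example, not taken from convert_slow_tokenizer.py):

```python
merge_ranks = {("a", "b"): 0, ("b", "c"): 1}  # rank 0 is a valid, highest-priority merge

rank = merge_ranks.get(("a", "b"))
if rank:              # BUG: rank 0 is falsy, so the best merge would be skipped
    pass
if rank is not None:  # FIX: only a genuinely missing pair should be skipped
    pass

# A secondary sort key keeps the ordering deterministic when ranks tie.
merges = sorted(merge_ranks.items(), key=lambda item: (item[1], item[0]))
```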
* Add MMS CTC fine-tuning
* make style
* More fixes that are needed
* make fix-copies
* make draft for README
* add new file
* move to new file
* make style
* make style
* add quick test
* make style
* make style
* MLM prediction head output size from embed_size
Take the output size of the dense projection layer from embedding_size instead of hidden_size, since the input embeddings may be projected into hidden_size when the two sizes differ.
* project TFDebertaV2 mlm output to embedding size
embedding_size can be different from hidden_size, so the final layer needs to project back to embedding_size, as in ELECTRA- or DeBERTaV3-style pretraining.
This should solve an error that occurs when loading models like "almanach/camemberta-base-generator".
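A minimal sketch of the resulting head (hypothetical class name, assuming a config with a separate embedding_size): the dense projection, the LayerNorm, and the reshape all operate in embedding_size so the output lines up with the tied embedding matrix.

```python
import tensorflow as tf

class TFMLMHeadSketch(tf.keras.layers.Layer):
    """Hypothetical MLM head: project hidden_size back to embedding_size for tied decoding."""

    def __init__(self, config, input_embeddings, **kwargs):
        super().__init__(**kwargs)
        # Fall back to hidden_size when the config defines no separate embedding_size.
        self.embedding_size = getattr(config, "embedding_size", config.hidden_size)
        # The projection outputs embedding_size, not hidden_size.
        self.dense = tf.keras.layers.Dense(self.embedding_size, name="dense")
        self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=1e-7, name="LayerNorm")
        self.input_embeddings = input_embeddings  # (vocab_size, embedding_size), tied

    def call(self, hidden_states):
        hidden_states = self.LayerNorm(self.dense(hidden_states))
        # Reshape in embedding_size, then decode with the transposed embedding matrix.
        shape = tf.shape(hidden_states)
        flat = tf.reshape(hidden_states, [-1, self.embedding_size])
        logits = tf.matmul(flat, self.input_embeddings, transpose_b=True)
        return tf.reshape(logits, [shape[0], shape[1], -1])
```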
* fix the same issue for reshaping after projection
* fix layernorm size
* add self.embedding_size to scope
* fix embed_proj scope name
* apply the same changes to TF Deberta
* add the changes to deberta
* added self.embedding_size instead of config.embedding_size
* added the same change to debertav2
* added "Copied from" deberta to the deberta2 model
* config.embedding_size fix
* black
* fix deberta config name
* import torch before it is used
* style
Signed-off-by: byhsu <byhsu@linkedin.com>
---------
Signed-off-by: byhsu <byhsu@linkedin.com>
Co-authored-by: byhsu <byhsu@linkedin.com>
* Update language_modeling.py
in "class TextDatasetForNextSentencePrediction(Dataset)", double considering "self.tokenizer.num_special_tokens_to_add(pair=True)"
so, i remove self.block_size, and add parameter for "def create_examples_from_document". like "class LineByLineWithSOPTextDataset" do
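A hypothetical sketch of the fix (heavily simplified from the actual class): the special-token budget is subtracted exactly once, inside create_examples_from_document, instead of also being baked into a stored block size.

```python
def create_examples_from_document(self, document, doc_index, block_size):
    # Reserve room for the special tokens of a sentence pair exactly once;
    # previously __init__ had already shrunk block_size by the same amount.
    max_num_tokens = block_size - self.tokenizer.num_special_tokens_to_add(pair=True)
    # ... split `document` into segment pairs of at most max_num_tokens tokens ...
```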
* Update language_modeling.py
* Fix URL in comment for contrastive loss function
* Stop storing references to bound methods in tf.functions (see the sketch after this group)
* Remove the gc.collect calls now that we resolved the underlying problem
* Remove the default signature from model.serving entirely, big cleanup
* Remove _prune_signature as self.input_signature can prune itself
* Restore serving docstring
* Update int support test to check the input signature
* Make sure other tests also use model.input_signature and not serving.input_signature
* Restore _prune_signature
* Remove the doctest GC now that it's no longer needed
* Correct core tests to use the pruned sig
* order lines correctly in core tests
* Add eager_serving back with a deprecation warning
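For background on the bound-method problem, a minimal, self-contained illustration (not the library's actual code): a tf.function stored on the instance that wraps a bound method keeps a reference back to the object, which is why gc.collect calls were previously needed. Building the function lazily at export time avoids the long-lived reference.

```python
import tensorflow as tf

class Exporter(tf.Module):
    # Problematic pattern: `self.serving = tf.function(self._serve, ...)` in
    # __init__ stores an object that references the bound method, and thus self.

    def _serve(self, x):
        return x * 2.0

    def export(self, path):
        # Build the tf.function only when saving, so nothing is stored on self.
        serving_fn = tf.function(
            self._serve,
            input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)],
        )
        tf.saved_model.save(self, path, signatures=serving_fn)
```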
* First test
* Add info for all models
* style
* Repo consistency
* Fix last model and cleanup prints
* Repo consistency
* Use consistent function for detecting tied weights
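A minimal sketch of one consistent way to detect tied weights (a hypothetical helper, assuming PyTorch): parameters that share the same underlying tensor form one tied group.

```python
import torch.nn as nn

def find_tied_parameters(model: nn.Module):
    """Group parameter names that point to the same tensor object, i.e. tied weights."""
    groups = {}
    # remove_duplicate=False keeps every name, including aliases of shared tensors.
    for name, param in model.named_parameters(remove_duplicate=False):
        groups.setdefault(id(param), []).append(name)
    return [names for names in groups.values() if len(names) > 1]

# Example: a tiny module with tied input/output embeddings.
model = nn.Sequential()
embed = nn.Embedding(10, 4)
head = nn.Linear(4, 10, bias=False)
head.weight = embed.weight  # tie the weights
model.add_module("embed", embed)
model.add_module("head", head)
print(find_tied_parameters(model))  # [['embed.weight', 'head.weight']]
```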