* added sm to ua
* update id
* removed id
* removed comments
* added env variable
* changed variable name
* make quality happy
* added sgugger's feedback
* make styling happy and remove brackets
* Create modeling_tf_dpr.py
* Add TFDPR
* Add back TFPegasus, TFMarian, TFMBart, TFBlenderBot
The last commit accidentally deleted these 4 lines, so this restores them.
* Add TFDPR
* Add TFDPR
* clean up some comments, add TF input-style docstring
* Add TFDPR
* Make return_dict=False the default
* Fix return_dict bug (in .from_pretrained)
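For context, a minimal sketch of what the `return_dict` flag controls (checkpoint name and input ids are illustrative; the TF model is loaded from PyTorch weights since no TF checkpoint exists at this point):

```python
import tensorflow as tf
from transformers import TFDPRQuestionEncoder

model = TFDPRQuestionEncoder.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base", from_pt=True
)
input_ids = tf.constant([[101, 2054, 2003, 1037, 2518, 102]])

# return_dict=False (the default set here): outputs come back as a plain tuple
pooler_output = model(input_ids, return_dict=False)[0]

# return_dict=True: outputs come back as a dataclass with named fields
outputs = model(input_ids, return_dict=True)
pooler_output = outputs.pooler_output
```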
* Add get_input_embeddings()
* Create test_modeling_tf_dpr.py
The current version already passes all 27 tests. See the test run at:
https://colab.research.google.com/drive/1czS_m9zy5k-iSJbzA_DP1k1xAAC_sdkf?usp=sharing
* fix quality
* delete init weights
* run fix copies
* fix repo consis
* del config_class, load_tf_weights
These should be PyTorch-only.
* add config_class back
After removing it, a test failed, so in the end only "use_tf_weights = None" is removed, per Lysandre's suggestion.
* newline after .. note::
* import tf, np (Necessary for ModelIntegrationTest)
* slow_test from_pretrained with from_pt=True
At the moment we don't have TF weights (since there is no official TF model yet).
Previously I did not run the slow tests, so I missed this bug.
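A minimal sketch of that slow test, assuming the public DPR checkpoint name (the test body is illustrative):

```python
from transformers import TFDPRQuestionEncoder
from transformers.testing_utils import slow

@slow
def test_from_pretrained_with_from_pt():
    # No native TF weights exist yet, so load and convert the PyTorch ones
    model = TFDPRQuestionEncoder.from_pretrained(
        "facebook/dpr-question_encoder-single-nq-base", from_pt=True
    )
    assert model is not None
```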
* Add simple TFDPRModelIntegrationTest
Note that this only tests that TF and PyTorch give approximately the same output; I could not compare against the official DPR repo's output yet.
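A hedged sketch of such an equivalence check (input ids and tolerance are illustrative):

```python
import numpy as np
import tensorflow as tf
import torch
from transformers import DPRQuestionEncoder, TFDPRQuestionEncoder

name = "facebook/dpr-question_encoder-single-nq-base"
pt_model = DPRQuestionEncoder.from_pretrained(name)
tf_model = TFDPRQuestionEncoder.from_pretrained(name, from_pt=True)

ids = [[101, 7592, 2088, 102]]
with torch.no_grad():
    pt_out = pt_model(torch.tensor(ids), return_dict=True).pooler_output.numpy()
tf_out = tf_model(tf.constant(ids), return_dict=True).pooler_output.numpy()

# TF and PyTorch should agree up to float precision
assert np.allclose(pt_out, tf_out, atol=1e-4)
```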
* upload correct tf model
* remove position_ids as missing keys
* create modeling_tf_rag
* add tests for tf
* add tf tests
* revert wrong pt commit
* further refactor
* further refactor
* refactor
* Update modeling_tf_rag.py
- input_processing
- fix prepare_inputs_for_generation (mostly fixes a generate bug)
- bring back from_pretrained hack in order to test generate
* delete colab pieces of code
* Showcase greedy "generate"
Temporarily switch the beam-search test to a greedy-search test to show that TF and PT do produce equivalent output.
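A sketch of that greedy check on the TF side (checkpoint, retriever settings, and question are illustrative; greedy decoding makes the output directly comparable token-for-token with PyTorch):

```python
from transformers import RagRetriever, RagTokenizer, TFRagTokenForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = TFRagTokenForGeneration.from_pretrained(
    "facebook/rag-token-nq", retriever=retriever, from_pt=True
)

inputs = tokenizer("who sings does he love me with reba", return_tensors="tf")
# num_beams=1 with do_sample=False selects pure greedy search
generated = model.generate(inputs.input_ids, num_beams=1, do_sample=False)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```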
* cosmetic update
* correct typos
* update
* push some progress
* make easy check
* fix rag save from pretrained
* Update src/transformers/modeling_tf_utils.py
* remove commented out lines
* delete unnecessary lines
* add simple test case for nq_checkpoint
Add an nq_checkpoint test to show that the current version still fails without the hack.
* temporarily put ugly hack back again
* Add TFRagSequenceForGeneration!!
* __init__.py: import TFRagSequenceForGeneration
* Add TFRagSequence tests!
* rag init.py - add TFRagSequenceForGeneration
* fix from_pretrained
* fix prepare_inputs_for_generation
* Beam search for RagToken!
* minor clean up
* add tf.cast in TFRagModel
* More tf.cast
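For context, the kind of dtype alignment those casts handle (tensor values are illustrative):

```python
import tensorflow as tf

logits = tf.random.uniform((2, 5))          # float32
token_ids = tf.constant([[1, 2, 3, 4, 5],
                         [1, 2, 0, 0, 0]])  # int32

# TF raises a dtype error when int and float tensors are mixed in math ops,
# so ids/masks get cast explicitly before combining with float tensors
mask = tf.cast(token_ids != 0, logits.dtype)
masked_logits = logits * mask
```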
* Add all remaining tests (still have issues)
* delete all T5 related
* make style
* fix load weight prefix
* fix bart
* fix return_dict for tf_rag
makes all tests pass. Hooray!
* fix some tests
* fix code quality
* fix quality check
* finish tests tf rag
* add tf rag to docs
* remove TFT5 from docstring
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* remove TFT5 from docstring
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Delete outdated comments
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* improve docstrings
* add generative model classes
* fix adjust token logic
* refactor generate for TFRag
* using shape_list, not _get_shape
Co-authored-by: Julien Plu <plu.julien@gmail.com>
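For reference, `shape_list` (from transformers' TF utils) mixes static and dynamic dims so the same code works in eager mode and inside traced graphs; roughly:

```python
import tensorflow as tf

def shape_list(tensor):
    """Shape as a Python list: static dims where known, dynamic tensors
    where not, so the result is also usable inside tf.function graphs."""
    static = tensor.shape.as_list()
    dynamic = tf.shape(tensor)
    return [dynamic[i] if dim is None else dim for i, dim in enumerate(static)]
```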
* axis=[1]->axis=1
* delete NEED_HELP comment
* improve readability
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* improve readability
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* improve readability
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Indicate in the docstrings that the model is in a developing state
As suggested by Julien
* small last changes
* apply Sylvain's suggestions
* finish tf rag
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: patrickvonplaten <patrick@huggingface.co>
Co-authored-by: Julien Plu <plu.julien@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Fix Marian decoding
The tokenizer's decode and batch_decode methods now accept a new argument (use_source_tokenizer) that indicates whether the source spm should be used to decode ids. This is useful for Marian models, specifically when decoding source input ids.
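A short usage sketch of the new argument (checkpoint and sentence are illustrative):

```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
src_ids = tokenizer("Machine learning is great, isn't it?").input_ids

# Without the flag, decode would run the ids through the *target* spm;
# use_source_tokenizer=True selects the source spm instead
text = tokenizer.decode(src_ids, use_source_tokenizer=True)
```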
* Adapt docstrings
Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
* offline mode start
* add specific values
* fix fallback
* add test
* better values check and range
* test that actually works
* document the offline mode
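The documented switch is the `TRANSFORMERS_OFFLINE` environment variable; a minimal sketch (model name is illustrative, and the variable must be set before transformers is imported):

```python
import os

# Force transformers to rely only on locally cached files (no network calls)
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModel

# Succeeds as long as the checkpoint is already in the local cache
model = AutoModel.from_pretrained("bert-base-uncased")
```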
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* more strict check
* cleaner test
* pt-only test
* style
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Refactor checkpoint name in ALBERT and ALBERT_tf
* Refactor checkpoint name in BART and BART_tf
* Refactor checkpoint name in BERT generation
* Refactor checkpoint name in Blenderbot_tf
* Refactor checkpoint name in Blenderbot_small_tf
* Refactor checkpoint name in ConvBERT and ConvBERT_tf
* Refactor checkpoint name in CTRL and CTRL_tf
* Refactor checkpoint name in DistilBERT and DistilBERT_tf
* Refactor checkpoint name in DistilBERT (redo)
* Refactor checkpoint name in Electra and Electra_tf
* Refactor checkpoint name in FlauBERT and FlauBERT_tf
* Refactor checkpoint name in FSMT
* Refactor checkpoint name in GPT2 and GPT2_tf
* Refactor checkpoint name in IBERT
* Refactor checkpoint name in LED and LED_tf
* Refactor checkpoint name in Longformer and Longformer_tf
* Refactor checkpoint name in Lxmert and Lxmert_tf
* Refactor checkpoint name in Marian_tf
* Refactor checkpoint name in MBART and MBART_tf
* Refactor checkpoint name in MobileBERT and MobileBERT_tf
* Refactor checkpoint name in mpnet and mpnet_tf
* Refactor checkpoint name in openai and openai_tf
* Refactor checkpoint name in pegasus_tf
* Refactor checkpoint name in reformer
* Refactor checkpoint name in Roberta and Roberta_tf
* Refactor checkpoint name in SqueezeBert
* Refactor checkpoint name in Transformer_xl and Transformer_xl_tf
* Refactor checkpoint name in XLM and XLM_tf
* Refactor checkpoint name in XLNET and XLNET_tf
* Refactor checkpoint name in BERT_tf
* run make tests, style, quality, fixup
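The pattern these refactors converge on is a single module-level checkpoint constant per modeling file, reused wherever the docstring machinery needs it; a schematic sketch with a stand-in decorator (the real code uses transformers' docstring utilities):

```python
# One constant per modeling file instead of the checkpoint string being
# repeated inline in every docstring decorator
_CHECKPOINT_FOR_DOC = "bert-base-uncased"  # illustrative value

def add_code_sample(checkpoint):
    """Stand-in for the docstring decorator, for illustration only."""
    def decorator(fn):
        fn.__doc__ = (fn.__doc__ or "") + f"\n    Example checkpoint: {checkpoint}"
        return fn
    return decorator

@add_code_sample(checkpoint=_CHECKPOINT_FOR_DOC)
def forward():
    """Model forward pass."""
```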