Commit Graph

642 Commits

Author SHA1 Message Date
Patrick von Platen
5cd9e2cba1
Update README.md 2020-10-21 12:43:42 +02:00
Patrick von Platen
220b5f97ca
Create README.md 2020-10-21 12:34:46 +02:00
Patrick von Platen
8ffd7fb12d
Update README.md 2020-10-21 12:27:09 +02:00
Patrick von Platen
613ab364eb
Update README.md 2020-10-21 12:23:17 +02:00
Patrick von Platen
f7eb17dc47
Update README.md 2020-10-21 12:19:44 +02:00
Patrick von Platen
0264048660
Update README.md 2020-10-20 16:13:49 +02:00
Patrick von Platen
f3312515b7
Add note for WikiSplit 2020-10-20 15:42:29 +02:00
Patrick von Platen
0724c0f3a2
Fix EncoderDecoder WikiSplit Example 2020-10-20 15:13:22 +02:00
Weizhen
2422cda01b
ProphetNet (#7157)
* add new model prophetnet

prophetnet modified

modify codes as suggested v1

add prophetnet test files

* still has bugs because of changed output formats of encoder and decoder

* move prophetnet into the latest version

* clean integration tests

* clean tokenizers

* add xlm config to init

* correct typo in init

* further refactoring

* continue refactor

* save parallel

* add decoder_attention_mask

* fix use_cache vs. past_key_values

* fix common tests

* change decoder output logits

* fix xlm tests

* make common tests pass

* change model architecture

* add tokenizer tests

* finalize model structure

* no weight mapping

* correct n-gram stream attention mask as discussed with qweizhen

* remove unused import

* fix index.rst

* fix tests

* delete unnecessary code

* add fast integration test

* rename weights

* final weight remapping

* save intermediate

* Descriptions for Prophetnet Config File

* finish all models

* finish new model outputs

* delete unnecessary files

* refactor encoder layer

* add dummy docs

* code quality

* fix tests

* add model pages to doctree

* further refactor

* more refactor, more tests

* finish code refactor and tests

* remove unnecessary files

* further clean up

* add docstring template

* finish tokenizer doc

* finish prophetnet

* fix copies

* fix typos

* fix tf tests

* fix fp16

* fix tf test 2nd try

* fix code quality

* add test for each model

* merge new tests to branch

* Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md

Co-authored-by: Sam Shleifer <sshleifer@gmail.com>

* Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md

Co-authored-by: Sam Shleifer <sshleifer@gmail.com>

* Update src/transformers/modeling_prophetnet.py

Co-authored-by: Sam Shleifer <sshleifer@gmail.com>

* Update utils/check_repo.py

Co-authored-by: Sam Shleifer <sshleifer@gmail.com>

* apply Sam's and Sylvain's comments

* make style

* remove unnecessary code

* Update README.md

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update README.md

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/transformers/configuration_prophetnet.py

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

* implement Lysandre's comments

* correct docs

* fix isort

* fix tokenizers

* fix copies

Co-authored-by: weizhen <weizhen@mail.ustc.edu.cn>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
2020-10-19 17:36:09 +02:00
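
A minimal usage sketch for the ProphetNet model introduced in #7157 above, assuming the microsoft/prophetnet-large-uncased-cnndm checkpoint named in the model card path; the input article and generation settings are illustrative, not taken from the PR.

```python
# Sketch: summarization with the ProphetNet classes added in #7157.
# Checkpoint from the model card path in the commit; settings are illustrative.
from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased-cnndm")
model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased-cnndm")

article = "USTC was founded in Beijing by the Chinese Academy of Sciences in 1958."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

# Beam-search generation, then decode back to text.
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=100, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```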
Jordi Mas
ea1507fb45
Julibert model card (#7868)
* Julibert model card

* Fix text
2020-10-19 06:50:52 -04:00
Patrick von Platen
dc552b9b70
Fix typo in sequence model card 2020-10-16 16:05:06 +02:00
rmroczkowski
7b13bd01df
Herbert polish model (#7798)
* HerBERT transformer model for Polish language understanding.

* HerbertTokenizerFast generated with HerbertConverter

* Herbert base and large model cards

* Herbert model cards with tags

* Herbert tensorflow models

* Herbert model tests based on Bert test suite

* src/transformers/tokenization_herbert.py edited online with Bitbucket

* src/transformers/tokenization_herbert.py edited online with Bitbucket

* docs/source/model_doc/herbert.rst edited online with Bitbucket

* Herbert tokenizer tests and bug fixes

* src/transformers/configuration_herbert.py edited online with Bitbucket

* Copyrights and tests for TFHerbertModel

* model_cards/allegro/herbert-base-cased/README.md edited online with Bitbucket

* model_cards/allegro/herbert-large-cased/README.md edited online with Bitbucket

* Bug fixes after testing

* Reformat modified_only_fixup

* Proper order of configuration

* Herbert proper documentation formatting

* Formatting with make modified_only_fixup

* Dummies fixed

* Adding missing models to documentation

* Removing HerBERT model as it is a simple extension of BERT

* Update model_cards/allegro/herbert-base-cased/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>

* Update model_cards/allegro/herbert-large-cased/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>

* HerbertTokenizer deprecated configuration removed

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-10-16 03:06:51 -04:00
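
A minimal sketch of loading the HerBERT checkpoints covered by the model cards in #7798, assuming allegro/herbert-base-cased; since the dedicated HerBERT model class was removed as a simple extension of BERT, the weights load through the generic AutoModel, and the example sentence is illustrative.

```python
# Sketch: HerBERT tokenizer added in #7798 plus the generic AutoModel,
# assuming the allegro/herbert-base-cased checkpoint from the model cards.
from transformers import AutoModel, HerbertTokenizerFast

tokenizer = HerbertTokenizerFast.from_pretrained("allegro/herbert-base-cased")
model = AutoModel.from_pretrained("allegro/herbert-base-cased")

inputs = tokenizer("Ala ma kota.", return_tensors="pt")  # illustrative Polish sentence
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # contextual token embeddings
```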
David S. Lim
9c71cca316
model card for bert-base-NER (#7799)
* model card for bert-base-NER

* add meta data up top

Co-authored-by: Julien Chaumond <chaumond@gmail.com>

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-10-15 21:55:00 +02:00
Julien Chaumond
e7aa64838c [model_cards] facebook/bart-large-mnli: register ZSC for the inference API
cc @Narsil @mfuntowicz @joeddav
2020-10-15 19:02:10 +02:00
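
A minimal sketch of the zero-shot classification (ZSC) task this commit registers for facebook/bart-large-mnli, using the transformers pipeline; the input text and candidate labels are illustrative.

```python
# Sketch: zero-shot classification with the checkpoint named in the commit.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "one day I will see the world",
    candidate_labels=["travel", "cooking", "dancing"],  # illustrative labels
)
print(result["labels"][0], result["scores"][0])  # best label and its score
```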
Julien Chaumond
6f45dd2fac [model_cards] Fix yaml for Facebook/wmt19-*
see d99ed7ad61
2020-10-15 16:14:08 +02:00
Julien Chaumond
d99ed7ad61 [model_cards] Facebook: add thumbnail 2020-10-15 12:53:29 +02:00
Nils Reimers
3032de9369
Model Card (#7752)
* Create README.md

* Update model_cards/sentence-transformers/LaBSE/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-10-14 13:30:58 -04:00
sarahlintang
3fdbeba83c
[model_cards] sarahlintang/IndoBERT (#7748)
* Create README.md

* Update model_cards/sarahlintang/IndoBERT/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-10-14 13:10:31 -04:00
Julien Chaumond
ba654270b3 [model_cards] rename to correct model name 2020-10-14 19:02:48 +02:00
Zhuosheng Zhang
08978487e7
Create README.md (#7722) 2020-10-14 12:56:12 -04:00
Sagor Sarker
3557509127
added evaluation results for classification task (#7790) 2020-10-14 12:50:43 -04:00
XiaoqiJiao
890e790e16
[model_cards] TinyBERT (HUAWEI Noah's Ark Lab) (#7775) 2020-10-14 09:31:01 -04:00
Alex Combessie
aacac8f708
Add license info to nlptown/bert-base-multilingual-uncased-sentiment (#7738) 2020-10-12 11:56:10 -04:00
Andrew Kane
26d5475d4b
Added license information for default and distilbert models (#7688) 2020-10-10 03:55:11 -04:00
Joe Davison
a1ac082879
add license to xlm-roberta-large-xnli card 2020-10-09 09:16:06 -04:00
Blaise Cruz
aee7967fc4
Added model cards for Tagalog BERT models (#7603) 2020-10-07 16:49:20 -04:00
Bobby Donchev
b1c06140f4
Create README.md for IsRoBERTa language model (#7640)
* Create README.md

* Update README.md

* Apply suggestions from code review

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-10-07 16:46:03 -04:00
Keshan
e10d389561
[Model card] SinhalaBERTo model. (#7558)
* [Model card] SinhalaBERTo model.

This is the model card for keshan/SinhalaBERTo model.

* Update model_cards/keshan/SinhalaBERTo/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-10-07 16:40:52 -04:00
Amine Abdaoui
167bce56f2
[model_card] bert-base-5lang-cased (#7573)
Co-authored-by: Amin <amin.geotrend@gmail.com>
2020-10-07 16:38:14 -04:00
Abed khooli
923dd4e5ef
Create README.md (#7581) 2020-10-07 16:37:40 -04:00
dartrevan
85ead0fec4
Update README.md (#7590) 2020-10-07 16:37:10 -04:00
Ilias Chalkidis
c6b9c72eac
Update README.md (#7629)
Minor changes: Add arxiv link + Layout improvement + fix typos
2020-10-07 16:36:08 -04:00
Abhilash Majumder
048b4bd2c6
Create Model Card For "abhilash1910/french-roberta" Model (#7544) 2020-10-07 16:35:28 -04:00
Julien Chaumond
c2e0d8ac52
[model_card] nikokons/gpt2-greek
by @nikkon3
2020-10-07 16:28:47 -04:00
Ahmed Elnaggar
aa6c3c14b4
typo fix (#7611)
It should be T5-3B not T5-3M.
2020-10-06 15:32:52 +02:00
cedspam
8d2c248df7
Update README.md (#7612) 2020-10-06 08:46:55 -04:00
Ilias Chalkidis
1c80b2c604
Create README.md (LEGAL-BERT Model card) (#7607)
* Create README.md

Model description for all LEGAL-BERT models, published as part of "LEGAL-BERT: The Muppets straight out of Law School", Chalkidis et al., 2020, in Findings of EMNLP 2020.

* Update model_cards/nlpaueb/legal-bert-base-uncased/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-10-06 08:46:17 -04:00
Ahmed Elnaggar
66c72082d0
Add ProtT5-XL-BFD model card (#7606)
* Add ProtT5-XL-BFD model card

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2020-10-06 12:19:21 +02:00
Joshua H
1a00f46c74
Update code example according to deprecation of AutoModelWithLMHead (#7555)
'The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.'
I don't know how to change the 'How to use this model directly from the 🤗/transformers library:' part since it is not part of the model paper.
2020-10-05 08:21:21 -04:00
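
A minimal sketch of the replacements named in the deprecation warning quoted above; the checkpoint names are illustrative and not taken from the commit.

```python
# Sketch: task-specific Auto classes that replace the deprecated AutoModelWithLMHead.
from transformers import (
    AutoModelForCausalLM,
    AutoModelForMaskedLM,
    AutoModelForSeq2SeqLM,
)

causal_lm = AutoModelForCausalLM.from_pretrained("gpt2")               # causal language models
masked_lm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")  # masked language models
seq2seq_lm = AutoModelForSeq2SeqLM.from_pretrained("t5-small")         # encoder-decoder models
```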
Nathan Cooper
071970feb8
[Model card] Java Code Summarizer model (#7568)
* Create README.md

* Update model_cards/ncoop57/bart-base-code-summarizer-java-v0/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-10-05 04:49:17 -04:00
Forrest Iandola
02ef825be2
SqueezeBERT architecture (#7083)
* configuration_squeezebert.py

thin wrapper around bert tokenizer

fix typos

wip sb model code

wip modeling_squeezebert.py. Next step is to get the multi-layer-output interface working

set up squeezebert to use BertModelOutput when returning results.

squeezebert documentation

formatting

allow head mask that is an array of [None, ..., None]

docs

docs cont'd

path to vocab

docs and pointers to cloud files (WIP)

line length and indentation

squeezebert model cards

formatting of model cards

untrack modeling_squeezebert_scratchpad.py

update aws paths to vocab and config files

get rid of stub of NSP code, and advise users to pretrain with mlm only

fix rebase issues

redo rebase of modeling_auto.py

fix issues with code formatting

more code format auto-fixes

move squeezebert before bert in tokenization_auto.py and modeling_auto.py because squeezebert inherits from bert

tests for squeezebert modeling and tokenization

fix typo

move squeezebert before bert in modeling_auto.py to fix inheritance problem

disable test_head_masking, since squeezebert doesn't yet implement head masking

fix issues exposed by the test_modeling_squeezebert.py

fix an issue exposed by test_tokenization_squeezebert.py

fix issue exposed by test_modeling_squeezebert.py

auto generated code style improvement

issue that we inherited from modeling_xxx.py: SqueezeBertForMaskedLM.forward() calls self.cls(), but there is no self.cls, and I think the goal was actually to call self.lm_head()

update copyright

resolve failing 'test_hidden_states_output' and remove unused encoder_hidden_states and encoder_attention_mask

docs

add integration test. rename squeezebert-mnli --> squeezebert/squeezebert-mnli

autogenerated formatting tweaks

integrate feedback from patrickvonplaten and sgugger to programming style and documentation strings

* tiny change to order of imports
2020-10-05 04:25:43 -04:00
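
A minimal sketch of the SqueezeBERT architecture added in #7083, assuming the squeezebert/squeezebert-mnli checkpoint mentioned in the commit; the premise/hypothesis pair and the way the prediction is read off are illustrative.

```python
# Sketch: MNLI-style classification with the SqueezeBERT classes added in #7083.
import torch
from transformers import SqueezeBertForSequenceClassification, SqueezeBertTokenizer

tokenizer = SqueezeBertTokenizer.from_pretrained("squeezebert/squeezebert-mnli")
model = SqueezeBertForSequenceClassification.from_pretrained("squeezebert/squeezebert-mnli")

inputs = tokenizer(
    "A soccer game with multiple males playing.",   # illustrative premise
    "Some men are playing a sport.",                # illustrative hypothesis
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted class index
```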
Julien Chaumond
e32390931d
[model_card] distilbert-base-german-cased 2020-10-01 09:08:49 -04:00
Julien Chaumond
9a4e163b58
[model_card] Fix metadata, adalbertojunior/PTT5-SMALL-SUM 2020-10-01 08:54:06 -04:00
Adalberto
8435e10e24
Create README.md (#7299)
* Create README.md

* language metadata

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-10-01 08:52:28 -04:00
Martin Müller
d727432072
Update README.md (#7459) 2020-10-01 08:51:26 -04:00
allenyummy
664da5b077
Create README.md (#7468) 2020-10-01 08:50:26 -04:00
ahotrod
f745f61c99
Update README.md (#7491)
Model now fine-tuned on Transformers 3.1.0; the previous out-of-date model was fine-tuned on Transformers 2.3.0.
2020-10-01 08:50:07 -04:00
Abed khooli
6ef7658c0a
Create README.md (#7349)
Model card for akhooli/personachat-arabic
2020-10-01 08:48:51 -04:00
Bayartsogt Yadamsuren
15ab3f049b
Creating readme for bert-base-mongolian-cased (#7439)
* Creating readme for bert-base-mongolian-cased

* Update model_cards/bayartsogt/bert-base-mongolian-cased/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-10-01 08:46:27 -04:00
Bayartsogt Yadamsuren
0c2b9fa831
creating readme for bert-base-mongolian-uncased (#7440) 2020-10-01 08:45:22 -04:00