Commit Graph

9662 Commits

Author SHA1 Message Date
Sylvain Gugger
e6f00a11d7
Update README to latest release (#16997) 2022-04-28 14:17:44 -04:00
Zachary Mueller
3486a92a57
Fix savedir for by epoch (#16996) 2022-04-28 13:49:45 -04:00
Yih-Dar
5af5735f62
set eos_token_id to None to generate until max length (#16989)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-04-28 19:47:38 +02:00
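A hedged sketch of what the change above relies on (checkpoint choice and prompt are illustrative): when `eos_token_id` is `None`, `generate` has no end-of-sequence token to stop at and therefore decodes until `max_length`.

```python
# Hedged sketch: with eos_token_id unset (None), generation has no EOS token to stop
# at, so it runs until max_length. "gpt2" is only an illustrative checkpoint choice.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.config.eos_token_id = None  # no EOS -> generate until max_length is reached

inputs = tokenizer("The quick brown fox", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(output_ids[0]))
```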
amyeroberts
01562dac7e
Rename a class to reflect framework pattern AutoModelXxx -> TFAutoModelXxx (#16993) 2022-04-28 18:11:54 +01:00
conan1024hao
1be8d56ec6
Add parameter --config_overrides for run_mlm_wwm.py (#16961)
* Add parameter --config_overrides for run_mlm_wwm.py

* linter
2022-04-28 10:44:55 -04:00
Yih-Dar
1f9e862507
Update check_models_are_tested to deal with Windows path (#16973)
* fix

* Apply suggestions from code review

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-04-28 15:31:57 +02:00
Dat Quoc Nguyen
dced262409
Update tokenization_bertweet.py (#16941)
The emoji version must be either 0.5.4 or 0.6.0. Newer emoji versions have been updated to newer versions of the Emoji Charts and are therefore not consistent with the one used for pre-processing the pre-training Tweet corpus (i.e. not consistent with the vocab).
2022-04-27 16:54:31 -04:00
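A hedged sketch of how that constraint could be checked at import time (the actual guard in `tokenization_bertweet.py`, if any, may differ):

```python
# Hedged sketch: verify the installed `emoji` package matches the versions the
# BERTweet vocab was built against (0.5.4 or 0.6.0), per the commit message above.
from importlib.metadata import version

EXPECTED_EMOJI_VERSIONS = {"0.5.4", "0.6.0"}

installed = version("emoji")
if installed not in EXPECTED_EMOJI_VERSIONS:
    raise RuntimeError(
        f"emoji=={installed} is installed, but BERTweet pre-processing expects one of "
        f"{sorted(EXPECTED_EMOJI_VERSIONS)} to stay consistent with the pre-training vocab."
    )
```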
Yih-Dar
992996e9ca
Add -e flag to some GH workflow yml files (#16959)
* Add -e flag

* add check

* create new keys

* run python setup.py build install

* add comments

* change to develop

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-04-27 21:44:21 +02:00
Yih-Dar
596afb4297
Fix check_all_models_are_tested (#16970)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-04-27 21:18:29 +02:00
Sylvain Gugger
691cdbb7d7
Fix doc notebooks links (#16969)
* Fix doc notebooks links

* Remove missing section
2022-04-27 14:59:53 -04:00
Zachary Mueller
60e1d883f1
Fixup no_trainer save logic (#16968)
* Fixup all examples
2022-04-27 14:46:49 -04:00
Sylvain Gugger
c79bbc3ba5
Fix multiple deletions of the same files in save_pretrained (#16947)
* Fix multiple deletions of the same files in save_pretrained

* Add is_main_process argument
2022-04-27 12:28:42 -04:00
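The `is_main_process` argument mentioned above targets distributed runs where `save_pretrained` is called on every process; a hedged sketch of its intended use (the rank check and output directory are illustrative):

```python
# Hedged sketch: in a multi-process run (e.g. DDP), pass is_main_process so that
# only rank 0 performs the file writes/cleanup that caused the duplicate deletions.
import torch.distributed as dist


def save_checkpoint(model, output_dir: str) -> None:
    rank = dist.get_rank() if dist.is_available() and dist.is_initialized() else 0
    # Assumption: is_main_process=False tells save_pretrained this process is not
    # the main one, so it skips the destructive filesystem operations.
    model.save_pretrained(output_dir, is_main_process=(rank == 0))
```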
Sylvain Gugger
bfbec17765
Fix add-new-model-like when model doesn't support all frameworks (#16966) 2022-04-27 11:15:25 -04:00
Mishig Davaadorj
cf8a7c2490
Update custom_models.mdx (#16964)
BertModelForSequenceClassification -> BertForSequenceClassification
2022-04-27 16:46:55 +02:00
Antoni Baum
5896b3ecce
Fix distributed_concat with scalar tensor (#16963)
* Fix `distributed_concat` with scalar tensor

* Update trainer_pt_utils.py
2022-04-27 10:26:22 -04:00
NielsRogge
084c38c59d
[HF Argparser] Fix parsing of optional boolean arguments (#16946)
* Add fix

* Apply suggestion from code review

Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
2022-04-27 15:00:45 +02:00
Leonid Boytsov
c82e017aa9
Misc. fixes for PyTorch QA examples: (#16958)
1. Fixes evaluation errors that pop up when you train/eval on SQuAD v2 (one newly encountered, and one previously reported in "Running SQuAD 1.0 sample command raises IndexError" #15401 but not completely fixed).
2. Removes boolean arguments that don't use store_true. Please don't use these: ANY non-empty string is converted to True in this case, which is clearly not the desired behavior (and creates a LOT of confusion); see the argparse sketch after this entry.
3. All no-trainer test scripts now save metric values in the same way (with the eval_ prefix), consistent with the trainer-based versions.
4. Adds the forgotten model.eval() call in the no-trainer versions. This improved some results, but not all; see the F1 scores and the discussion in the PR description.
2022-04-27 08:51:39 -04:00
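On point 2 of the PyTorch QA fixes above, a minimal sketch of why `type=bool` arguments are a trap and `store_true` is the right flag style (the argument name is illustrative):

```python
# Hedged sketch for point 2 above: `type=bool` converts ANY non-empty string to True.
import argparse

bad = argparse.ArgumentParser()
bad.add_argument("--overwrite_cache", type=bool, default=False)
args = bad.parse_args(["--overwrite_cache", "False"])
print(args.overwrite_cache)  # True, because bool("False") is True

good = argparse.ArgumentParser()
good.add_argument("--overwrite_cache", action="store_true")
print(good.parse_args([]).overwrite_cache)                     # False
print(good.parse_args(["--overwrite_cache"]).overwrite_cache)  # True
```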
Yih-Dar
49d5bcb0f3
Fix HubertRobustTest PT/TF equivalence test on GPU (#16943)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-04-27 10:50:03 +02:00
NielsRogge
479fdc4925
Add semantic script, trainer (#16834)
* Add first draft

* Improve script and README

* Improve README

* Apply suggestions from code review

* Improve script, add link to resulting model

* Add corresponding test

* Adjust learning rate
2022-04-27 10:12:18 +02:00
Anton Lozhkov
a4a88fa09f
[Research] Speed up evaluation for XTREME-S (#16785)
* Avoid repeated per-lang filtering

* Language groups and logits preprocessing

* Style
2022-04-27 08:34:21 +02:00
Yongliang Shen
2d91e3c304
use original loaded keys to find mismatched keys (#16920) 2022-04-26 17:29:52 -04:00
nikkie
d365f5074f
Fix RuntimeError message format (#16906) 2022-04-26 17:08:28 -04:00
Yang Ming
10dfa126b7
documentation: some minor clean up (#16850) 2022-04-26 16:56:08 -04:00
Krishna Sirumalla
aaee4038c3
Add onnx config for RoFormer (#16861)
* add roformer onnx config
2022-04-26 16:51:15 +02:00
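ONNX-config commits like this one generally follow a common pattern: subclass `OnnxConfig` and declare the exported inputs with their dynamic axes. A hedged, schematic sketch (the real RoFormer config in the repo may declare different inputs):

```python
# Hedged sketch of the general OnnxConfig pattern used by these commits;
# the concrete RoFormer config in the repo may differ in detail.
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class MyRoFormerOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Dynamic axes: batch size and sequence length can vary at export time.
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
                ("token_type_ids", {0: "batch", 1: "sequence"}),
            ]
        )
```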
Ahmed Elnaggar
8afaaa26f5
Fix iterations for decoder (#16934)
2022-04-26 12:54:14 +02:00
Manuel
fa32247406
apply torch int div to layoutlmv2 (#15457)
* apply torch int div

* black linting fixup

* update path to torch_int_div

* clarify imports
2022-04-26 10:07:51 +02:00
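The layoutlmv2 change above routes tensor floor division through the library's `torch_int_div` helper; a hedged sketch of the underlying idea (not the repo's exact helper):

```python
# Hedged sketch of the torch_int_div idea: integer (floor) division on tensors
# without the deprecation warning that `tensor // tensor` used to trigger.
import torch


def int_div(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # rounding_mode="floor" reproduces integer floor-division semantics.
    return torch.div(a, b, rounding_mode="floor")


positions = torch.tensor([0, 5, 10, 15])
print(int_div(positions, torch.tensor(4)))  # tensor([0, 1, 2, 3])
```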
Sylvain Gugger
344b9fb0c6
Limit the use of PreTrainedModel.device (#16935)
* Limit the use of PreTrainedModel.device

* Fix
2022-04-25 20:58:50 -04:00
code-review-doctor
6568752039
Fix issue probably-meant-fstring found at https://codereview.doctor (#16913) 2022-04-25 15:15:00 -04:00
Sanchit Gandhi
fea94d6790
Replace deprecated logger.warn with warning (#16876) 2022-04-25 15:12:51 -04:00
Joao Gante
e03966e404
TF: XLA stable softmax (#16892)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-04-25 20:10:51 +01:00
Rushi Chaudhari
8246caf3eb
added deit onnx config (#16887)
* added deit onnx config
2022-04-25 20:50:45 +02:00
Joao Gante
9331b37967
TF: XLA Logits Warpers (#16899)
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
2022-04-25 19:48:08 +01:00
Joao Gante
809dac48f9
TF: XLA logits processors - minimum length, forced eos, and forced bos (#16912)
* XLA min len, forced eos, and forced bos

Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
2022-04-25 19:27:53 +01:00
Yih-Dar
f6210c49e2
Fix RemBertTokenizerFast (#16933)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-04-25 19:51:50 +02:00
Yih-Dar
32adbb26d6
Fix PyTorch RAG tests GPU OOM (#16881)
* add torch.cuda.empty_cache in some PT RAG tests

* torch.cuda.empty_cache in tearDownModule()

* tearDown()

* add gc.collect()

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-04-25 17:33:56 +02:00
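A hedged sketch of the cleanup pattern listed above (the test class name is illustrative; the actual RAG tests also hook into tearDownModule):

```python
# Hedged sketch of the GPU-memory cleanup described above (names are illustrative).
import gc
import unittest

import torch


class RagGPUMemoryFriendlyTest(unittest.TestCase):
    def tearDown(self):
        super().tearDown()
        # Drop Python references first, then release cached CUDA blocks so the next
        # test in the suite starts with as much free GPU memory as possible.
        gc.collect()
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
```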
Yih-Dar
3e47d19cfc
Add missing ckpt in config docs (#16900)
* add missing ckpt in config docs

* add more missing ckpt in config docs

* fix wrong ckpts

* fix realm ckpt

* fix s2t2

* fix xlm_roberta ckpt

* Fix for deberta v2

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* use only one checkpoint for DPR

* Apply suggestions from code review

Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
2022-04-25 17:31:45 +02:00
Patrick von Platen
3a71e94a92
Fix doc test quicktour dataset (#16929)
* fix doc test

* fix doc test

Co-authored-by: Patrick <patrick@pop-os.localdomain>
2022-04-25 16:26:59 +02:00
Thomas Chaigneau
508baf1943
add bigbird typo fixes (#16897)
Co-authored-by: ChainYo <t.chaigneau.tc@gmail.com>
2022-04-25 11:32:06 +02:00
Patrick von Platen
72728be3db
[DocTests] Fix some doc tests (#16889)
* [DocTests] Fix some doc tests

* hacky fix

* correct
2022-04-23 08:40:14 +02:00
cavdard
22fc93c4d9
Changes in create_optimizer to support tensor parallelism with SMP (#16880)
* changes in create optimizer to support tensor parallelism with SMP

* Update src/transformers/trainer.py

Convert if check to one line.

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Cavdar <dcavdar@a07817b12d7e.ant.amazon.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-04-22 15:24:38 -04:00
Joao Gante
99c8226b12
TF: XLA repetition penalty (#16879) 2022-04-22 18:29:32 +01:00
Thomas Chaigneau
ec81c11a18
Add OnnxConfig for ConvBERT (#16859)
* add OnnxConfig for ConvBert

Co-authored-by: ChainYo <t.chaigneau.tc@gmail.com>
2022-04-22 18:19:15 +02:00
Minh Chien Vu
0d1cff1195
Add doc tests for Albert and Bigbird (#16774)
* Add doctest BERT

* make fixup

* fix typo

* change checkpoints

* make fixup

* define doctest output value, update doctest for mobilebert

* solve fix-copies

* update QA target start index and end index

* change checkpoint for docs and reuse defined variable

* Update src/transformers/models/bert/modeling_tf_bert.py

Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>

* make fixup

* Add Doctest for Albert and Bigbird

* make fixup

* overwrite examples for Albert and Bigbird

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* update longer examples for Bigbird

* using examples from squad_v2

* print out example text

* change name token-classification-big-bird checkpoint to random

Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-04-22 18:07:16 +02:00
Mario Šaško
9fa88172c2
Minor fixes/improvements in convert_file_size_to_int (#16891)
* Minor improvements to `convert_file_size_to_int`

* Add <unit>bit version to kilos and megas

* Minor fix
2022-04-22 16:54:20 +02:00
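For context on the utility touched above, a hedged sketch of what converting a human-readable size string (including bit units) into a byte count can look like; it mirrors the idea rather than the repo's implementation:

```python
# Hedged sketch of the idea behind convert_file_size_to_int: parse strings such as
# "5MB", "1GiB" or "8Mbit" into an integer number of bytes. Not the repo's exact code.
def file_size_to_int(size: str) -> int:
    units = {
        "KBIT": 10**3 // 8, "MBIT": 10**6 // 8, "GBIT": 10**9 // 8,  # bit units -> bytes
        "KIB": 2**10, "MIB": 2**20, "GIB": 2**30,                    # binary byte units
        "KB": 10**3, "MB": 10**6, "GB": 10**9,                       # decimal byte units
    }
    upper = size.strip().upper()
    # Check longer suffixes first so "MBIT" is not misread as ending in "MB".
    for suffix in sorted(units, key=len, reverse=True):
        if upper.endswith(suffix):
            return int(float(upper[: -len(suffix)]) * units[suffix])
    return int(upper)  # plain integer string: already a byte count


print(file_size_to_int("5MB"))    # 5000000
print(file_size_to_int("1GiB"))   # 1073741824
print(file_size_to_int("8Mbit"))  # 1000000
```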
Joao Gante
6d90d76f5d
TF: rework XLA generate tests (#16866) 2022-04-22 12:38:08 +01:00
Yih-Dar
3b1bbefc47
Add missing entries in mappings (#16857)
* add missing entries in some mappings

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-04-22 10:53:24 +02:00
Loubna Ben Allal
d91841315a
New features for CodeParrot training script (#16851)
* add tflops logging and fix grad accumulation

* add accelerate tracking and checkpointing

* scale loss of last batch correctly

* fix typo

* compress loss computation

Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>

* add resume from checkpoint argument

* add load_state accelerate from checkpoint, register lr scheduler and add tflops function

* reformat code

* reformat code

* add condition on path for resume checkpoint

* combine if conditions

Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>

* add source for tflops formula

Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
2022-04-21 18:43:46 +02:00
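A hedged sketch of the Accelerate checkpoint/resume flow the CodeParrot changes above build on; `save_state`/`load_state` are Accelerate APIs, while the directory layout and the environment variable used to pass the resume path are illustrative:

```python
# Hedged sketch of Accelerate-based checkpointing and resume, as used conceptually by
# the CodeParrot training script changes above. Paths and step handling are illustrative.
import os

from accelerate import Accelerator

accelerator = Accelerator()
# ... model, optimizer, dataloaders and lr_scheduler would be wrapped via accelerator.prepare(...)

checkpoint_dir = "checkpoints/step_1000"

# Save everything Accelerate tracks (model, optimizer, registered schedulers, RNG states).
accelerator.save_state(checkpoint_dir)

# Later, resume if a checkpoint path was provided (env var is an illustrative way to pass it).
resume_from = os.environ.get("RESUME_FROM_CHECKPOINT")
if resume_from is not None:
    accelerator.load_state(resume_from)
```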
Yih-Dar
eef2422e96
Fix doctest list (#16878)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-04-21 18:12:14 +02:00
Thomas Chaigneau
0b1e0fcf7a
Fix GPT-J onnx conversion (#16780)
* add gptj to TOKENIZER_MAPPING_NAMES

* fix int32 to float to avoid problem in onnx

* Update src/transformers/models/gptj/modeling_gptj.py

Co-authored-by: ChainYo <t.chaigneau.tc@gmail.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
2022-04-21 15:55:30 +02:00
Eldar Kurtic
bae9b6458c
Use ACT2FN to fetch ReLU activation (#16874)
- all activations should be fetched through ACT2FN
- it returns ReLU as an `nn.Module`, which allows attaching hooks on the activation function and makes it appear in the printed module tree when calling `print(model)`
2022-04-21 09:33:29 -04:00
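A minimal, hedged sketch of the pattern above: fetching the activation through ACT2FN makes it a real submodule (hookable, and visible in `print(model)`) rather than a bare functional call. The module layout here is illustrative.

```python
# Hedged sketch: fetch the activation via ACT2FN so it is an nn.Module submodule,
# which makes it hookable and visible when printing the model. Layout is illustrative.
import torch
import torch.nn as nn

from transformers.activations import ACT2FN


class TinyBlock(nn.Module):
    def __init__(self, hidden_size: int = 16, act: str = "relu"):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.activation = ACT2FN[act]  # an nn.ReLU() module, not a bare F.relu call

    def forward(self, x):
        return self.activation(self.dense(x))


block = TinyBlock()
print(block)  # the activation now appears as a named submodule
handle = block.activation.register_forward_hook(lambda mod, inp, out: print(out.shape))
_ = block(torch.randn(2, 16))  # hook fires: torch.Size([2, 16])
handle.remove()
```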