Commit Graph

8872 Commits

Kamal Raj
d2749cf72e
Update README.md (#15462)
fix typo
2022-02-01 10:04:30 -05:00
Suraj Patil
1c9648c457
[M2M100, XGLM] fix positional emb resize (#15444) 2022-02-01 14:32:55 +01:00
Yih-Dar
2ca6268394
fix from_vision_text_pretrained doc example (#15453)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-01 12:20:22 +01:00
Yih-Dar
dc05dd539f
Fix TF Causal LM models' returned logits (#15256)
* Fix TF Causal LM models' returned logits

* Fix expected shape in the tests

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-01 11:04:07 +00:00
Yih-Dar
af5c3329d7
remove "inputs" in tf common test script (no longer required) (#15262)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-01 10:09:49 +00:00
Stas Bekman
d12ae81664
[generate] fix synced_gpus default (#15446) 2022-01-31 13:58:27 -08:00
Suraj Patil
d4f201b860
skip test for XGLM (#15445) 2022-01-31 16:53:16 -05:00
Sylvain Gugger
0c17e766cb
Error when group_by_length is used with an IterableDataset (#15437) 2022-01-31 15:33:16 -05:00
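A minimal sketch of the guard this commit likely adds (function name and error message are illustrative, not the exact Trainer code): length grouping must read every example's length up front, which a streaming IterableDataset cannot provide, so the combination is rejected early.

```python
import torch.utils.data

def validate_group_by_length(train_dataset, group_by_length: bool) -> None:
    # Length grouping sorts examples by length before batching, which
    # needs random access; a streaming IterableDataset offers none.
    if group_by_length and isinstance(train_dataset, torch.utils.data.IterableDataset):
        raise ValueError("`group_by_length` cannot be used with an `IterableDataset`.")
```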
peregilk
125a2882b4
Update modeling_wav2vec2.py (#15423)
* Update modeling_wav2vec2.py

With very tiny sound files (less than 0.1 seconds), the computed num_masked_span can be larger than the sequence allows. The issue is described in #15366 and was discussed with @patrickvonplaten (see the sketch after this entry).

* correct errors with mask time indices

* remove bogus file

* make fix-copies

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-01-31 21:22:11 +01:00
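A hedged sketch of the clamping described in this entry (a simplified stand-in, not the exact modeling_wav2vec2.py code):

```python
def compute_num_masked_span(sequence_length: int, mask_prob: float, mask_length: int) -> int:
    # Expected number of masked spans for this sequence length.
    num_masked_span = int(mask_prob * sequence_length / mask_length + 0.5)
    # With very short inputs (< 0.1 s of audio) this estimate can exceed
    # what fits; clamp so the spans never cover more than the sequence.
    return max(0, min(num_masked_span, sequence_length // mask_length))
```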
Tavin Turner
d984b10335
Add 'with torch.no_grad()' to BEiT integration test forward passes (#14961)
* Add 'with torch.no_grad()' to BEiT integration test forward pass

* Fix inconsistent use of tabs and spaces in indentation
2022-01-31 15:12:10 -05:00
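The pattern this commit applies, in miniature (the model and input here are placeholders for the BEiT model and pixel values used in the tests):

```python
import torch

model = torch.nn.Linear(4, 2)     # stand-in for the model under test
pixel_values = torch.randn(1, 4)  # stand-in for preprocessed inputs

with torch.no_grad():  # no autograd graph is built, cutting memory use
    outputs = model(pixel_values)
```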
Matt
09f9d07271
Misfiring tf warnings (#15442)
* Fix spurious warning in TF TokenClassification models

* Fixing one last spurious warning

* Removing outdated warning altogether
2022-01-31 19:17:59 +00:00
Suraj Patil
6915174e68
[RobertaTokenizer] remove inheritance from GPT2Tokenizer (#15429)
* refactor roberta tokenizer

* refactor fast tokenizer

* remove old comment
2022-01-31 19:50:25 +01:00
Suraj Patil
a5ecbf7348
correct positional emb size (#15441) 2022-01-31 19:47:49 +01:00
Yih-Dar
5a70987301
Fix TFLEDModel (#15356)
* fix tf led

* fix

* fix

* Add test_pt_tf_model_equivalence_extra for TFLED

* add a (temporary) test

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-01-31 19:35:54 +01:00
Suraj Patil
87918d3221
[examples/Flax] add a section about GPUs (#15198)
* add a section about GPUs

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-01-31 19:20:53 +01:00
Patrick von Platen
b8810847d0
[Trainer] suppress warning for length-related columns (#15421)
* [Trainer] suppress warning for length-related columns

* improve message

* Update src/transformers/trainer.py
2022-01-31 18:51:29 +01:00
Sylvain Gugger
3385ca2582
Change REALM checkpoint to new ones (#15439)
* Change REALM checkpoint to new ones

* Last checkpoint missing
2022-01-31 12:50:20 -05:00
Matt
7e56ba2864
Fix spurious warning in TF TokenClassification models (#15435) 2022-01-31 17:09:16 +00:00
Yih-Dar
554d333ece
Fix loss calculation in TFXXXForTokenClassification models (#15294)
* Fix loss calculation in TFFunnelForTokenClassification

* revert the change in TFFunnelForTokenClassification

* fix FunnelForTokenClassification loss

* fix other TokenClassification loss

* fix more

* fix more

* add num_labels to ElectraForTokenClassification

* revert the change to research projects

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-01-31 11:43:08 -05:00
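A generic sketch of how a TF token-classification loss is typically computed with the usual -100 ignore index (illustrative only, not the exact per-model fix):

```python
import tensorflow as tf

def token_classification_loss(labels, logits, num_labels):
    # Flatten to (num_tokens, num_labels) and drop positions labeled
    # -100, which mark padding/special tokens excluded from the loss.
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE
    )
    flat_labels = tf.reshape(labels, (-1,))
    active = tf.not_equal(flat_labels, -100)
    flat_logits = tf.boolean_mask(tf.reshape(logits, (-1, num_labels)), active)
    return loss_fn(tf.boolean_mask(flat_labels, active), flat_logits)
```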
Stas Bekman
44c7857b87
[deepspeed doc] fix import, extra notes (#15400)
* [deepspeed doc] fix import, extra notes

* typo
2022-01-31 08:28:10 -08:00
NielsRogge
47df0f2234
Add header (#15434) 2022-01-31 11:15:54 -05:00
Sylvain Gugger
7fc6f41d91
Add doc for add-new-model-like command (#15433) 2022-01-31 11:10:45 -05:00
Ogundepo Odunayo
282ae123e2
add t5 ner finetuning (#15432) 2022-01-31 17:03:06 +01:00
NielsRogge
d4b3e56d64
[Hotfix] Fix Swin model outputs (#15414)
* Fix Swin model outputs

* Rename pooler
2022-01-31 16:32:14 +01:00
Suraj Patil
38dfb40ae3
import torch.utils.checkpoint (#15427) 2022-01-31 15:51:50 +01:00
Jonatas Grosman
f624249d8b
[Robust Speech Challenge] Add missing LR parameter (#15428) 2022-01-31 15:50:56 +01:00
Kamal Raj
3254080d45
Update README.md (#15430)
fix typo
2022-01-31 09:48:20 -05:00
Julien Plu
aa19f478ac
Add (M)Luke model training for Token Classification in the examples (#14880)
* Add Luke training

* Fix true label tags

* Fix true label tags

* Fix true label tags

* Update the data collator for Luke

* Some training refactor for Luke

* Improve data collator for Luke

* Fix import

* Fix datasets concatenation

* Add the --max_entity_length argument for Luke models

* Remove unused code

* Fix style issues

* Fix style issues

* Move the Luke training into a separate folder

* Fix style

* Fix naming

* Fix filtering

* Fix filtering

* Fix filter

* Update some preprocessing

* Move luke to research_projects

* Checkstyle

* Address comments

* Fix style
2022-01-31 07:58:18 -05:00
François REMY
0094eba363
Fix additional DataTrainingArguments documentation (#15408)
(This is an editorial change only)
2022-01-31 07:45:11 -05:00
NielsRogge
ee5de66349
Add SegformerFeatureExtractor to Auto API (#15410) 2022-01-31 11:38:08 +01:00
Suraj Patil
0f69b924fb
[XGLMTokenizer] fix init and add in AutoTokenizer (#15406) 2022-01-30 15:35:53 +01:00
Yih-Dar
f380bf2b61
Fix the inconsistency of loss calculation between PT/TF XLNetLMHeadModel (#15298)
* Fix the inconsistency of loss calculation between PT/TF XLNetLMHeadModel

* overwrite test_loss_computation

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-01-29 15:08:35 +00:00
Soonhwan-Kwon
e09473a817
Add support for XLM-R XL and XXL models via modeling_xlm_roberta_xl.py (#13727)
* add xlm roberta xl

* add convert xlm xl fairseq checkpoint to pytorch

* fix init and documents for xlm-roberta-xl

* fix indentation

* add test for XLM-R xl,xxl

* fix model hub name

* fix some stuff

* up

* correct init

* fix more

* fix as suggestions

* add torch_device

* fix default values of doc strings

* fix leftovers

* merge to master

* up

* correct hub names

* fix docs

* fix model

* up

* finalize

* last fix

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* add copied from

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-01-29 13:42:37 +01:00
Steven Liu
16d4acbfdb
Get started docs (#15098)
* clean commit of changes

* apply review feedback, make edits

* fix backticks, minor formatting

* 🖍 make fixup and minor edits

* 🖍 fix # in header

* 📝 update code sample without from_pt

* 📝 final review
2022-01-28 19:01:37 -06:00
Steven Liu
cabd6d26a2
Update model share tutorial (#15288)
* add model sharing tutorial

* 🖍 apply feedback from review

* 📝 make edits

* 🖍 fix formatting

* 📝 convert from pt checkpoint to flax

* 📝 final review
2022-01-28 18:49:26 -06:00
Sylvain Gugger
c98a6ac211
Use argument for preprocessing workers in run_summarization (#15394) 2022-01-28 18:34:10 -05:00
Yih-Dar
db07956740
Fix missing eps arg for LayerNorm in ElectraGeneratorPredictions (#15332)
* fix missing eps

* Same fix for ConvBertGeneratorPredictions

* Same fix for AlbertMLMHead

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-01-28 18:32:26 -05:00
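The shape of the fix, sketched in isolation (class and argument names abbreviated from the ones in the commit): the prediction head's LayerNorm should receive the config's epsilon instead of PyTorch's default of 1e-5.

```python
import torch.nn as nn

class GeneratorPredictions(nn.Module):
    def __init__(self, embedding_size: int, layer_norm_eps: float = 1e-12):
        super().__init__()
        # Before: nn.LayerNorm(embedding_size) silently used eps=1e-5,
        # diverging from the eps the pretrained checkpoint expects.
        self.LayerNorm = nn.LayerNorm(embedding_size, eps=layer_norm_eps)
```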
Stas Bekman
297602c7f4
[deepspeed] saving checkpoint fallback when fp16 weights aren't saved (#14948)
* [deepspeed] saving checkpoint fallback when fp16 weights aren't saved

* Bump required deepspeed version to match usage when saving checkpoints

* update version

Co-authored-by: Mihai Balint <balint.mihai@gmail.com>
2022-01-28 11:05:47 -08:00
Suraj Patil
d25e25ee2b
Add XGLM models (#14876)
* add xglm

* update vocab size

* fix model name

* style and tokenizer

* typo

* no mask token

* fix pos embed compute

* fix args

* fix tokenizer

* fix positions

* fix tokenization

* style and dic fixes

* fix imports

* add fast tokenizer

* update names

* add pt tests

* fix tokenizer

* fix typo

* fix tokenizer import

* fix fast tokenizer

* fix tokenizer

* fix converter

* add tokenizer test

* update checkpoint names

* fix tokenizer tests

* fix slow tests

* add copied from comments

* rst -> mdx

* flax model

* update flax tests

* quality

* style

* doc

* update index and readme

* fix copies

* fix doc

* update toctree

* fix indent

* minor fixes

* fix config doc

* don't save embed_pos weights

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* address Sylvain's comments, few doc fixes

* fix check_repo

* align order of arguments

* fix copies

* fix labels

* remove unnecessary mapping

* fix saving tokenizer

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-01-28 18:55:23 +01:00
Matt
b6b79faa7e
Make links explicit (#15395)
* Make links explicit

* Removing reference to compute_metrics() since it's kind of PyTorch-specific
2022-01-28 17:31:22 +00:00
Yih-Dar
6df29ba5e6
fix wrong tokenizer checkpoint name in flax marian (#15391)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-01-28 16:53:25 +01:00
lewtun
507601a5cf
Prepare deprecated ONNX exporter for torch v1.11 (#15388)
* Prepare deprecated ONNX exporter for PyTorch v1.11

* Add deprecation warning
2022-01-28 16:32:47 +01:00
Ngo Quang Huy
4996922b6d
[docs] fix wrong file name in pr_check (#15380) 2022-01-28 07:52:01 -05:00
Ngo Quang Huy
8f5d62fdb1
Fix bad_words_ids not working with sentencepiece-based tokenizers (#15343)
* Fix `bad_words_ids` not working with sentencepiece-based tokenizers

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-01-28 12:39:55 +01:00
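A usage sketch of the feature this commit repairs, with a sentencepiece-based checkpoint (model id and blocked words chosen for illustration):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # sentencepiece-based
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Token id sequences that generate() must never emit.
bad_words_ids = tokenizer(["bad", "terrible"], add_special_tokens=False).input_ids

inputs = tokenizer("translate English to German: The movie was bad.", return_tensors="pt")
outputs = model.generate(**inputs, bad_words_ids=bad_words_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```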
Nicolas Patry
06107541d3
Fixing support for batch_size and num_return_sequences in text-generation pipeline (#15318)
* Fixing support for `batch_size` and `num_return_sequences` in
`text-generation` pipeline

And `text2text-generation` too.

The bug was caused by the batch size containing both the incoming batch
**and** the generated `num_return_sequences`.

The fix simply consists of splitting these back into separate
dimensions (see the sketch after this entry).

* TF support.

* Worked around an odd backward-compatibility script that was in the way.
2022-01-28 12:15:30 +01:00
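The core of the fix as a standalone shape manipulation (dimensions illustrative): generate() returns sequences flattened to (batch_size * num_return_sequences, seq_len), and the pipeline must split those dimensions apart again before mapping outputs back to their inputs.

```python
import torch

batch_size, num_return_sequences, seq_len = 2, 3, 5
generated = torch.zeros(batch_size * num_return_sequences, seq_len)

# Un-flatten so that grouped[i] holds the num_return_sequences outputs
# produced for the i-th input of the batch.
grouped = generated.reshape(batch_size, num_return_sequences, seq_len)
assert grouped.shape == (2, 3, 5)
```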
Yanming Wang
c4d1fd77fa
Set syncfree AdamW as the default optimizer for xla:gpu device in amp mode (#15361)
* Use syncfree AdamW for xla:gpu device by default

* Make syncfree AdamW optional
2022-01-27 20:05:31 -05:00
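A hedged sketch of the optimizer selection this entry describes (the torch_xla import path is an assumption based on torch_xla's AMP support): on an XLA GPU device under AMP, a sync-free AdamW avoids the host-device synchronization of the usual inf/nan check.

```python
import torch

def build_adamw(params, device_type: str, use_amp: bool):
    if device_type == "xla:gpu" and use_amp:
        # torch_xla ships sync-free optimizers for AMP on XLA devices
        # (import path assumed; requires torch_xla to be installed).
        from torch_xla.amp import syncfree
        return syncfree.AdamW(params)
    return torch.optim.AdamW(params)
```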
Lysandre Debut
2e4559fa37
Add init to BORT (#15378)
* Add init to BORT

* BORT should be in init
2022-01-27 15:16:54 -05:00
Steven Liu
f5db6ce76a
Fix code format for Accelerate doc (#15335)
* 🖍 fix code syntax to external libraries and replace image

* 🔄revert code formatting, replace image with code block

* 🖍 apply feedback
2022-01-27 13:49:04 -06:00
Sylvain Gugger
0b07230409
Allow relative imports in dynamic code (#15352)
* Allow dynamic modules to use relative imports

* Add tests

* Add one last test

* Changes
2022-01-27 14:47:59 -05:00
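An illustration of what this change enables from the consumer side (the repo id is a placeholder, and the trust_remote_code behavior is assumed from the dynamic-module machinery this PR touches): modeling code fetched from the Hub may now be split across files that reference each other with relative imports.

```python
from transformers import AutoModel

# "user/custom-model" is a hypothetical Hub repo whose modeling_*.py
# files use relative imports between each other; trust_remote_code
# loads them as a dynamic module.
model = AutoModel.from_pretrained("user/custom-model", trust_remote_code=True)
```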
dependabot[bot]
628b59e51d
Bump numpy from 1.19.2 to 1.21.0 in /examples/research_projects/lxmert (#15369)
Bumps [numpy](https://github.com/numpy/numpy) from 1.19.2 to 1.21.0.
- [Release notes](https://github.com/numpy/numpy/releases)
- [Changelog](https://github.com/numpy/numpy/blob/main/doc/HOWTO_RELEASE.rst.txt)
- [Commits](https://github.com/numpy/numpy/compare/v1.19.2...v1.21.0)

---
updated-dependencies:
- dependency-name: numpy
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-01-27 14:46:15 -05:00