Commit Graph

10930 Commits

Author SHA1 Message Date
NielsRogge
9e29080439
[X-CLIP] Fix doc tests (#19523)
* Fix XCLIP doc tests

* Add model to doc test list

* Fix tests
2022-10-12 17:05:12 +02:00
Sanchit Gandhi
eefcecaa35
[Examples] Fix typos in run speech recognition seq2seq (#19514) 2022-10-12 15:33:22 +01:00
Vishwas
72153ba611
Remove bert fast dependency from electra (#19520)
* Replaced ElectraTokenizerFast with BertTokenizer class

* Fixed Styling issue

Co-authored-by: vishwaspai <vishwas.pai@emplay.net>
2022-10-12 10:14:38 -04:00
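This commit is one of several in this window that decouple a model's tokenizer or modeling code from the module it used to inherit from (see also the roformer, longformer, cpm/bert_japanese, XLM-ProphetNet, XLM-RoBERTa, and LayoutLM commits below). A minimal, self-contained sketch of the pattern, using hypothetical class names rather than the real transformers classes:

```python
# Illustrative only; the class names are hypothetical, not the actual
# transformers classes touched by this commit.

# Before: one model's tokenizer inherits from another model's tokenizer,
# so models/electra has an import-time dependency on models/bert.
class BaseFastTokenizer:
    def tokenize(self, text):
        return text.lower().split()

class DependentFastTokenizer(BaseFastTokenizer):  # cross-model dependency
    pass

# After: the behaviour is copied into the module itself and marked with a
# "Copied from" comment so utils/check_copies.py can keep it in sync.
# Copied from BaseFastTokenizer.tokenize
class StandaloneFastTokenizer:
    def tokenize(self, text):
        return text.lower().split()

print(StandaloneFastTokenizer().tokenize("Hello World"))  # ['hello', 'world']
```

The duplicated code is kept in sync by the repo's copy checker, which is why several of these commits mention fixing copy commands or "copied from" statements.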
Naveen Namani
2720d5fc18
made tokenization_roformer independent of bert (#19426)
* made tokenization_roformer independent of bert

* added missing imports

* added missing function and import

* Fixed copy commands

* Update tokenization_roformer.py
2022-10-12 10:13:09 -04:00
Ethan Joseph
af554e9de2
Remove roberta dependency from longformer fast tokenizer (#19501)
* remove roberta fast tokenizer dependency

* fix flake8

* Update src/transformers/models/longformer/tokenization_longformer_fast.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-12 10:12:00 -04:00
imarekkus
3ccda6d0b0
[Doctest] Bart configuration update (#19524)
* Update configuration_bart.py

* Update documentation_tests.txt

* Update documentation_tests.txt

Putting this line in sorted order
2022-10-12 15:11:46 +02:00
Daniel van Strien
af539d6f0a
fix MarkupLMProcessor option flag (#19526) 2022-10-12 15:08:48 +02:00
Andrea Sottana
5a8a532dcf
Adding links to pipelines parameters documentation (#19227)
* Adding links to pipelines parameters documentation

Adding PR based on suggestion in this issue https://github.com/huggingface/transformers/issues/19038#issuecomment-1259592359

* styling

* Updated config.yml

* Updated config.yml

* update README_es.md
2022-10-12 08:57:08 -04:00
Ritik Nandwal
e94384e4d8
Add depth estimation pipeline (#18618)
* Add initial files for depth estimation pipelines

* Add test file for depth estimation pipeline

* Update model mapping names

* Add updates for depth estimation output

* Add generic test

* Hopefully fixing the tests.

* Check if test passes

* Add make fixup and make fix-copies changes after rebase with main

* Rebase with main

* Fixing up depth pipeline.

* This is not used anymore.

* Fixing the test: `Image` is a module; `Image.Image` is the type.

* Update docs/source/en/main_classes/pipelines.mdx

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-12 08:54:20 -04:00
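A rough usage sketch for the new pipeline. The "depth-estimation" task string and output keys follow the PR; the checkpoint name and image URL are assumptions for illustration:

```python
from transformers import pipeline

# Checkpoint is an assumption; any depth-estimation model on the Hub should work.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

result = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")
# Expected output: a raw depth tensor plus a rendered depth map (PIL image).
print(result["predicted_depth"].shape)
result["depth"].save("depth.png")
```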
FilipposVentirozos
4ed0fa3676
Fix pytorch seq2seq qa (#19258)
* fixed typo for SQuAD

* Fixed preprocess_validation_function so the labels reflect the remaining truncated instances

* Rolled back trainer_seq2seq_qa.py to avoid UnboundLocalError: local variable 'metrics' referenced before assignment

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-12 08:33:44 -04:00
Darío Hereñú
c60381e90d
Syntax issues (lines 497, 526) in documentation @sgugger (#19442) 2022-10-12 08:28:54 -04:00
Arthur
84125d7e73
Fix whisper doc (#19518) 2022-10-12 12:44:30 +02:00
NielsRogge
4d367a3c81
Add LiLT (#19450)
* First draft

* Fix more things

* Improve more things

* Remove some head models

* Fix more things

* Add missing layers

* Remove tokenizer

* Fix more things

* Fix copied from statements

* Make all tests pass

* Remove print statements

* Remove files

* Fix README and docs

* Add integration test and fix organization

* Add tips

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Make tests faster, improve docs

* Fix doc tests

* Add model to toctree

* Add docs

* Add note about creating new checkpoint

* Remove is_decoder

* Make tests smaller, add docs

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-12 10:11:20 +02:00
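LiLT combines a text encoder with layout (bounding-box) embeddings for document understanding. A rough sketch of running the base model; the checkpoint name is an assumption, and the "Remove tokenizer" bullet above suggests the model reuses an existing tokenizer plus per-token boxes normalized to 0-1000:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Checkpoint name is an assumption; see the docs added in this PR for the
# officially published checkpoints.
ckpt = "SCUT-DLVCLab/lilt-roberta-en-base"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModel.from_pretrained(ckpt)

encoding = tokenizer("Invoice total: 42.00", return_tensors="pt")
seq_len = encoding["input_ids"].shape[1]
# LiLT expects one bounding box per token, with coordinates normalized to 0-1000.
# A real application would take these from an OCR engine; here they are dummies.
encoding["bbox"] = torch.tensor([[[48, 84, 156, 108]] * seq_len])

with torch.no_grad():
    outputs = model(**encoding)
print(outputs.last_hidden_state.shape)
```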
Yih-Dar
e2dc558e9c
[Doctest] Add configuration_bert.py to doctest (#19485)
* BertConfig for doctest

* Change import order

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-10-12 09:44:07 +02:00
Yih-Dar
e81cb010f8
Avoid Push CI failing to report due to many commits being merged (#19496)
* Change the depth to 20

* Add comment

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-10-12 09:25:05 +02:00
Wang, Yi
7543e275d4
update doc for perf_train_cpu_many (#19506)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2022-10-11 22:54:19 -04:00
regisss
bb2cfd1824
Add multi-node conditions in trainer_qa.py and trainer_seq2seq.py (#19502)
* Add multi-node conditions in trainer_qa.py and trainer_seq2seq.py

* Code improvement
2022-10-11 22:48:56 -04:00
Sylvain Gugger
69b81c0a5f
Use a dynamic configuration for circleCI tests (#19325)
* Generate config on the fly

* Fake modif for all test launch

* Upload more artifacts

* Typo and quality

* Try converting the yml to txt

* Leave my long lines alone yaml

* Debug prints

* Debug prints v2

* Try without sorting

* Was it really working before?

* Typo

* Use a parameter

* Use a parameter?

* Typo

* Here is some JSON

* Another try

* Learning to read...

* Check default is used

* Does this work?

* With continuation

* WiP

* Use a parameter for test list

* Other fake modif

* With the comma

* Name the test step so it doesn't blow up

* Just one example modification

* Final steps

* Add nightlies

* Move config generator

* Add trigger for nightlies

* Better workflow

* Rebase on recent changes

* Fix config creation

* Fake modif in an example

* Now fake modif in one config file

* Fix install step in custom tokenizers test

* Fix generated config

* Better fix hopefully

* Finally test modif in setup

* final cleanup
2022-10-11 16:31:24 -04:00
Yih-Dar
fa9e18c65f
Fix OPTForQuestionAnswering doctest (#19479)
* Fix doc example for OPTForQuestionAnswering

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-10-11 20:13:04 +02:00
IMvision12
957ce6465a
New (#19481) 2022-10-11 13:46:25 -04:00
amyeroberts
67a3511443
Update PT to TF CLI for audio models (#19465)
* Update PT to TF CLI model inputs

* Get padding strategy if specified

* Make False comparison explicit
2022-10-11 18:25:29 +01:00
Yih-Dar
8d68878cc0
python3 instead of python in push CI setup job (#19492)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-10-11 19:18:39 +02:00
Mathieu Jouffroy
5ca131f3d4
[CvT] Tensorflow implementation (#18597)
* implemented TFCvtModel and TFCvtForImageClassification, modified relevant files, added an exception in convert_tf_weight_name_to_pt_weight_name, and added a quick testing file to compare with the PyTorch model

* added docstring + testing file in transformers testing suite

* added test in testing file, modified docs to pass repo-consistency, passed formatting test

* refactoring + passing all test

* small refactor, removing unwanted comments

* improved testing config

* corrected import error

* modified access to the pretrained model archive list to pass tf_test

* corrected import structure in init files

* modified testing for keras_fit with cpu

* correcting PR issues + Refactoring

* Refactoring: improving readability and reducing the number of permutations

* corrected momentum value + cls_token initialization

* removed from_pt as weights were added to the hub

* Update tests/models/cvt/test_modeling_tf_cvt.py

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
2022-10-11 18:16:52 +01:00
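A hedged usage sketch of the new TensorFlow CvT classes; the checkpoint name is assumed to be one of the published CvT checkpoints (the last bullet notes TF weights were pushed to the Hub, so no from_pt conversion should be needed):

```python
import requests
import tensorflow as tf
from PIL import Image
from transformers import AutoFeatureExtractor, TFCvtForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# "microsoft/cvt-13" is an assumption for one of the published checkpoints.
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/cvt-13")
model = TFCvtForImageClassification.from_pretrained("microsoft/cvt-13")

inputs = feature_extractor(images=image, return_tensors="tf")
logits = model(**inputs).logits
predicted = int(tf.math.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted])
```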
Oussamaosman02
0b7b4c60c6
Adding README_es.md and a reference to it in the other README files (#19427)
* Adding README_es.md and a reference to it in the other README files

* Updating the check_copies.py

* Updating README_es.md

* Updating check_copies
2022-10-11 12:56:25 -04:00
Quancore
70a058bc65
Added tokenize keyword arguments to feature extraction pipeline (#19382)
* Added tokenize keyword arguments to feature extraction pipeline

* Reverted truncation parameter

* Import numpy moved to top
2022-10-11 12:54:41 -04:00
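A small sketch of what the new keyword arguments enable, assuming they are exposed as a tokenize_kwargs dict forwarded to the tokenizer (the exact parameter name is my reading of the PR title, not confirmed here):

```python
from transformers import pipeline

extractor = pipeline("feature-extraction", model="distilbert-base-uncased")

# Forward truncation settings to the tokenizer so very long inputs do not
# overflow the model's maximum sequence length.
features = extractor(
    "A very long document. " * 500,
    tokenize_kwargs={"truncation": True, "max_length": 512},
)
print(len(features[0]))  # number of tokens kept after truncation (<= 512)
```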
David Yang
d0d5aee1dd
Make bert_japanese and cpm independent of their inherited modules (#19431)
* Make cpm tokenization independent of xlnet

* Make bert japanese tokenization independent of bert
2022-10-11 12:09:17 -04:00
Joao Gante
462cd641d9
🚨🚨🚨 TF: Remove TFWrappedEmbeddings (breaking: TF embedding initialization updated for encoder-decoder models) (#19263)
* added test

* correct embedding init

* some changes in blenderbot (incomplete)

* update blenderbot (diff to be used as reference)

* update blenderbot_small

* update LED

* update marian

* update T5 and remove TFWrappedEmbeddings

* nullcontext() -> ContextManagers()

* fix embedding init
2022-10-11 16:48:03 +01:00
amyeroberts
8e4ee28e34
Update TF whisper doc tests (#19484) 2022-10-11 16:05:31 +01:00
Younes Belkada
6c66c6c860
Add warning in generate & device_map=auto & half precision models (#19468)
* fix device mismatch

* make fixup

* added slow tests

- added slow tests on `bnb` models to make sure generate works correctly

* replace with `self.device`

* revert force device assign

* Update src/transformers/generation_utils.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* set the warning in `generate` instead of `sample`

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-11 16:58:49 +02:00
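The warning targets the setup sketched below: a half-precision model dispatched with accelerate's device_map="auto" while the input tensors are still on CPU. This is a usage sketch rather than the PR's code; it assumes accelerate is installed and a GPU is available:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained(
    "gpt2", device_map="auto", torch_dtype=torch.float16
)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
# If these tensors stay on CPU while the model was dispatched to GPU,
# generate() now warns about the device mismatch; moving them explicitly
# keeps the call silent and correct.
inputs = {k: v.to(model.device) for k, v in inputs.items()}
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```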
Ankur Goyal
a3008c5a6d
Implement multiple span support for DocumentQuestionAnswering (#19204)
* Implement multiple span support

* Address comments

* Add tests + fix bugs
2022-10-11 10:47:55 -04:00
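With multiple-span support, the document-question-answering pipeline can return several candidate answer spans instead of a single one. A hedged usage sketch; the checkpoint and image URL are illustrative, and LayoutLM-based checkpoints additionally need an OCR backend such as pytesseract:

```python
from transformers import pipeline

doc_qa = pipeline(
    "document-question-answering",
    model="impira/layoutlm-document-qa",  # illustrative checkpoint
)

answers = doc_qa(
    image="https://templates.invoicehome.com/invoice-template-us-neat-750px.png",
    question="What is the invoice number?",
    top_k=3,  # ask for several candidate answer spans
)
for answer in answers:
    print(answer["score"], answer["answer"])
```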
h
ab856f68df
Decouples XLMProphet model from Prophet (#19406)
* decouples xlm_prophet from prophet and adds copy patterns that pass the copy check

* adds copy patterns to copied docstrings too

* restores autodoc for XLMProphetNetModel

* removes all-casing in a bunch of places to ensure that the model is compatible with all checkpoints on the hub

* adds missing model to main init

* adds autodocs to make document checker happy

* adds missing pretrained model import

* adds missing pretrained model import to main init

* adds XLMProphetNetPreTrainedModel to the dummy pt objects

* removes examples from the source-doc file since docstrings contain them already

* adds a missing new line to make check_repo happy
2022-10-11 10:45:23 -04:00
Yih-Dar
c66466133a
Fix get_embedding dtype at init. time (#19473)
* cast positions dtype in XGLMModel

* Get the correct dtype at init time

* Get the correct dtype at init time

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-10-11 16:05:39 +02:00
Sofia Oliveira
e38cf93e7c
Make XLMRoberta model and config independent from Roberta (#19359)
* remove config dependence

* remove dependencies from xlm_roberta

* Fix style

* Fix comments

* various fixes

* Fix pre-trained model name
2022-10-11 09:56:42 -04:00
Arnaud Stiegler
8cb44aaf17
Make LayoutLM tokenizers independent from BertTokenizer (#19351)
* fixing tokenizer

* adding all missing classes

* fast tokenizer | fixing format

* revert to full class copy flag

* fixing different casing
2022-10-11 09:49:23 -04:00
Joao Gante
9ed80b0000
TF: TFBart embedding initialization (#19460)
* correct embedding init
2022-10-11 14:44:46 +01:00
lewtun
b651efe59e
[Swin] Replace hard-coded batch size to enable dynamic ONNX export (#19475)
* [Swin] Replace hard-coded batch size to enable dynamic ONNX export
2022-10-11 15:21:29 +02:00
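Replacing the hard-coded batch size means the exported graph no longer bakes in batch=1. A generic torch.onnx.export sketch of what a dynamic batch axis looks like (the repo's own transformers.onnx export path handles this internally; the checkpoint name is just an example):

```python
import torch
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "microsoft/swin-tiny-patch4-window7-224"
)
model.eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    (dummy,),
    "swin.onnx",
    input_names=["pixel_values"],
    output_names=["logits"],
    # Marking dim 0 as dynamic lets the exported model accept any batch size.
    dynamic_axes={"pixel_values": {0: "batch"}, "logits": {0: "batch"}},
    opset_version=13,
)
```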
Yih-Dar
440bbd44aa
Update WhisperModelIntegrationTests.test_large_batched_generation (#19472)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-10-11 14:39:24 +02:00
Yih-Dar
e1a5cc338b
Fix doctests for DeiT and TFGroupViT (#19466)
* Fix some doctests

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-10-11 14:30:42 +02:00
Yih-Dar
d7dc774a79
Fix TFGroupViT CI (#19461)
* Fix TFGroupViT CI

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-10-11 14:29:15 +02:00
Joao Gante
a293a0e8a3
CLI: add import protection to datasets (#19470) 2022-10-11 13:19:32 +01:00
Darío Hereñú
ae710425d2
Syntax issues (lines 126, 203) (#19444) 2022-10-11 08:14:21 -04:00
Guillem Orellana Trullols
335f9bcd34
Extend nested_XXX functions to mappings/dicts. (#19455)
* Extend `nested_XXX` functions to mappings/dicts.

* Update src/transformers/trainer_pt_utils.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/transformers/trainer_pt_utils.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/transformers/trainer_pt_utils.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Style updated file

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-11 08:13:21 -04:00
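The nested_* helpers in trainer_pt_utils previously recursed only into lists and tuples; this PR extends them to mappings. A rough sketch of the idea (not the exact repository code):

```python
from collections.abc import Mapping

import torch

def nested_detach(tensors):
    # Recurse into lists/tuples as before, and now into dicts as well,
    # detaching every tensor leaf from the autograd graph.
    if isinstance(tensors, (list, tuple)):
        return type(tensors)(nested_detach(t) for t in tensors)
    if isinstance(tensors, Mapping):
        return {k: nested_detach(v) for k, v in tensors.items()}
    return tensors.detach()

outputs = {
    "logits": torch.randn(2, 3, requires_grad=True),
    "hidden": [torch.randn(2, 4, requires_grad=True)],
}
detached = nested_detach(outputs)
print(detached["logits"].requires_grad)  # False
```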
Arthur
b722a6be72
Fix whisper for pipeline (#19482)
* update feature extractor params

* update attention mask handling

* fix doc and pipeline test

* add warning when skipping test

* add whisper translation and transcription test

* fix build doc test
2022-10-11 07:17:53 -04:00
Dimitre Oliveira
df8faba4db
Enabling custom TF signature draft (#19249)
* Custom TF signature draft

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Adding tf signature tests

* Fixing signature check and adding asserts

* fixing model load path

* Adjusting signature tests

* Formatting file

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Dimitre Oliveira <dimitreoliveira@Dimitres-MacBook-Air.local>
2022-10-11 10:56:08 +01:00
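This PR is about letting users control the serving signature of exported TensorFlow models. A generic TensorFlow illustration of the underlying concept; the exact transformers API added here may differ:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])

# A custom serving signature fixes the input names, dtypes and shapes that
# the exported SavedModel accepts.
@tf.function(
    input_signature=[tf.TensorSpec(shape=(None, 4), dtype=tf.float32, name="features")]
)
def serving_fn(features):
    return {"logits": model(features)}

tf.saved_model.save(model, "exported_model", signatures={"serving_default": serving_fn})
```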
Lysandre
10100979ed Dev version 2022-10-10 17:25:40 -04:00
Partho
df2f28120d
wrap forward passes with torch.no_grad() (#19412) 2022-10-10 15:04:10 -04:00
Partho
5f5e264a12
wrap forward passes with torch.no_grad() (#19413) 2022-10-10 15:03:46 -04:00
Partho
c6a928cadb
wrap forward passes with torch.no_grad() (#19414) 2022-10-10 15:03:24 -04:00
Partho
d739a707d9
wrap forward passes with torch.no_grad() (#19416) 2022-10-10 15:03:09 -04:00
Partho
870a9542be
wrap forward passes with torch.no_grad() (#19438) 2022-10-10 14:54:54 -04:00