Commit Graph

14530 Commits

Author SHA1 Message Date
jiaqiw09
d1a00f9dd0
translate deepspeed.md to Chinese (#27495)
* translate deepspeed.md

* update
2023-11-17 13:49:31 -08:00
V.Prasanna kumar
ffbcfc0166
Broken links fixed related to datasets docs (#27569)
Fixed the broken links belonging to the datasets library of transformers
2023-11-17 13:44:09 -08:00
V.Prasanna kumar
638d49983f
fixed broken link (#27560) 2023-11-17 08:20:42 -08:00
Joao Gante
5330b83bc5
Generate: update compute transition scores doctest (#27558) 2023-11-17 11:23:09 +00:00
Joao Gante
913d03dc5e
Generate: fix flaky tests (#27543) 2023-11-17 10:15:00 +00:00
Yih-Dar
d903abfccc
Fix AMD CI not showing GPU (#27555)
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-11-17 10:44:37 +01:00
Yih-Dar
fe3ce061c4
Skip some fuyu tests (#27553)
* fix

* fix

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-11-17 10:35:04 +01:00
jiaqiw09
b074461ef0
translate Trainer.md to Chinese (#27527)
* translate

* update

* update
2023-11-16 12:07:15 -08:00
Nathaniel Egwu
93f31e0e78
Updated albert.md doc for ALBERT model (#27223)
* Updated albert.md doc for ALBERT model

* Update docs/source/en/model_doc/albert.md

Fixed Resources heading

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update the ALBERT model doc resources

Fixed resource example for fine-tuning the ALBERT sentence-pair classification.

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/model_doc/albert.md

Removed resource duplicate

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Updated albert.md doc with reviewed changes

* Updated albert.md doc for ALBERT

* Update docs/source/en/model_doc/albert.md

Removed duplicates from updated docs/source/en/model_doc/albert.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/model_doc/albert.md

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-16 11:44:36 -08:00
Joao Gante
12b50c6130
Generate: improve assisted generation tests (#27540) 2023-11-16 18:54:20 +00:00
Arthur
651408a077
[Styling] stylify using ruff (#27144)
* try to stylify using ruff

* might need to remove these changes?

* use ruff format and ruff check

* use isinstance instead of type comparison

* use # fmt: skip

* use # fmt: skip

* nits

* some styling changes

* update ci job

* nits isinstance

* more files update

* nits

* more nits

* small nits

* check and format

* revert wrong changes

* actually use formatter instead of checker

* nits

* well docbuilder is overwriting this commit

* revert notebook changes

* try to nuke docbuilder

* style

* fix feature extraction test

* remove `indent-width = 4`

* fixup

* more nits

* update the ruff version that we use

* style

* nuke docbuilder styling

* leave the print for detected changes

* nits

* Remove file I/O

Co-authored-by: charliermarsh <charlie.r.marsh@gmail.com>

* style

* nits

* revert notebook changes

* Add # fmt skip when possible

* Add # fmt skip when possible

* Fix

* More `  # fmt: skip` usage

* More `  # fmt: skip` usage

* More `  # fmt: skip` usage

* NIts

* more fixes

* fix tapas

* Another way to skip

* Recommended way

* Fix two more files

* Remove asynch

---------

Co-authored-by: charliermarsh <charlie.r.marsh@gmail.com>
2023-11-16 17:43:19 +01:00
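Two idioms from the styling migration above are worth spelling out: replacing exact `type` comparisons with `isinstance`, and using `# fmt: skip` to protect hand-aligned lines from the formatter. A minimal sketch (illustrative class names, not from the repo):

```python
class Animal:
    pass


class Dog(Animal):
    pass


d = Dog()

# Exact type comparison ignores inheritance (ruff flags this pattern as E721).
print(type(d) == Animal)      # False

# isinstance respects subclassing, which is almost always what is meant.
print(isinstance(d, Animal))  # True

# When a hand-formatted line must survive ruff/black, it can be skipped:
magic = [1,  2,   4,    8]  # fmt: skip
```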
Yih-Dar
acb5b4aff5
Disable docker image build job latest-pytorch-amd for now (#27541)
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-11-16 17:00:46 +01:00
Marc Sun
6b39470b74
Raise error when quantizing a quantized model (#27500)
add error msg
2023-11-16 10:35:40 -05:00
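The guard described above can be sketched roughly as follows; the helper name and the config shape are hypothetical, but the idea is the same: refuse to run quantization when the model already carries quantization metadata.

```python
def ensure_not_quantized(model_config: dict) -> None:
    """Hypothetical sketch of a guard against double quantization."""
    if model_config.get("quantization_config") is not None:
        raise ValueError(
            "You cannot quantize a model that is already quantized. "
            "Load the original (non-quantized) checkpoint instead."
        )


# Passes silently: no quantization metadata present.
ensure_not_quantized({"hidden_size": 768})
```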
Lucain
fd65aa9818
Set usedforsecurity=False in hashlib methods (FIPS compliance) (#27483)
* Set usedforsecurity=False in hashlib methods (FIPS compliance)

* trigger ci

* tokenizers version

* deps

* bump hfh version

* let's try this
2023-11-16 14:29:53 +00:00
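The `usedforsecurity=False` flag (available on `hashlib` constructors since Python 3.9) marks a digest as non-cryptographic, so FIPS-enabled OpenSSL builds allow it for caching and file identification. A minimal illustration, assuming Python 3.9+:

```python
import hashlib


def file_fingerprint(data: bytes) -> str:
    # usedforsecurity=False: this hash identifies content, it does not
    # protect anything, so FIPS mode need not forbid it.
    return hashlib.sha256(data, usedforsecurity=False).hexdigest()


print(file_fingerprint(b"hello"))
```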
Patrick von Platen
5603fad247
Revert "add attention_mask and position_ids in assisted model" (#27523)
* Revert "add attention_mask and position_ids in assisted model (#26892)"

This reverts commit 184f60dcec.

* more debug
2023-11-16 14:50:39 +01:00
Matt
4989e73e2f
Update the TF pin for 2.15 (#27375)
* Move the TF pin for 2.15

* make fixup
2023-11-16 13:47:43 +00:00
Phuc Van Phan
69c9b89fcb
docs: add docs for map, and add num procs to load_dataset (#27520) 2023-11-16 13:16:19 +00:00
Arthur
85fde09c97
[pytest] Avoid flash attn test marker warning (#27509)
add flash attn markers
2023-11-16 11:13:07 +01:00
Dean Wyatte
1394e08cf0
Support ONNX export for causal LM sequence classifiers (#27450)
support onnx for causal lm sequence classification
2023-11-16 18:56:34 +09:00
Hz, Ji
06343b0633
translate model.md to Chinese (#27518)
* translate model.md to Chinese

* apply review suggestion

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-15 16:59:03 -08:00
Marc Sun
1ac599d90f
Fix offload disk for loading derivated model checkpoint into base model (#27253)
* fix

* style

* add test
2023-11-15 14:58:08 -05:00
JiangZhongqing
b71c38a094
Fix bug for T5x to PyTorch convert script with varying encoder and decoder layers (#27448)
* Fix bug in handling varying encoder and decoder layers

This commit resolves an issue where the script failed to convert T5x models to PyTorch models when the number of decoder layers differed from the number of encoder layers.  I've addressed this issue by passing an additional 'num_decoder_layers' parameter to the relevant function.

* Fix bug in handling varying encoder and decoder layers
2023-11-15 19:00:22 +00:00
Matt
2e72bbab2c
Incorrect setting for num_beams in translation and summarization examples (#27519)
* Remove the torch main_process_first context manager from TF examples

* Correctly set num_beams=1 in our examples, and add a guard in GenerationConfig.validate()

* Update src/transformers/generation/configuration_utils.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-11-15 18:18:54 +00:00
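A validation guard of the kind added to `GenerationConfig.validate()` can be sketched like this; the function and its exact rules are illustrative, not the library's actual implementation:

```python
def validate_beam_settings(num_beams: int, early_stopping: bool = False) -> None:
    """Hypothetical guard in the spirit of GenerationConfig.validate()."""
    if num_beams < 1:
        raise ValueError(f"`num_beams` has to be an integer >= 1, but is {num_beams}")
    if num_beams == 1 and early_stopping:
        # early_stopping only affects beam search, so it is meaningless here.
        raise ValueError("`early_stopping` has no effect when `num_beams=1`.")


validate_beam_settings(num_beams=4)  # valid beam-search configuration
```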
Adam Louly
e6522e49a7
Fix the failure of models without a max_position_embeddings attribute. (#27499)
fix max pos issue

Co-authored-by: Adam Louly <adamlouly@microsoft.com@orttrainingdev9.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2023-11-15 18:16:42 +00:00
Yuki-Imajuku
a0633c4483
Translating en/model_doc docs to Japanese. (#27401)
* update _toctree.yml & add albert-autoformer

* Fixed typo in docs/source/ja/model_doc/audio-spectrogram-transformer.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Delete duplicated sentence docs/source/ja/model_doc/autoformer.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Reflect reviews

* delete untranslated models from toctree

* delete all comments

* add abstract translation

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-15 10:13:52 -08:00
Zach Mueller
a85ea4b19a
Fix wav2vec2 params (#27515)
Fix test
2023-11-15 09:24:03 -05:00
Arthur
48ba1e074f
[PretrainedConfig] Improve messaging (#27438)
* import hf error

* nits

* fixup

* catch the error at the correct place

* style

* improve message a tiny bit

* Update src/transformers/utils/hub.py

Co-authored-by: Lucain <lucainp@gmail.com>

* add a test

---------

Co-authored-by: Lucain <lucainp@gmail.com>
2023-11-15 14:10:39 +01:00
Xin Qiu
453079c7f8
🚨🚨 Fix beam score calculation issue for decoder-only models (#27351)
* Fix beam score calculation issue for decoder-only models

* Update beam search test and fix code quality issue

* Fix beam_sample, group_beam_search and constrained_beam_search

* Split test for pytorch and TF, add documentation

---------

Co-authored-by: Xin Qiu <xin.qiu@sentient.ai>
2023-11-15 12:49:14 +00:00
Arthur
3d1a7bf476
[tokenizers] update tokenizers version pin (#27494)
* update `tokenizers` version pin

* force tokenizers>=0.15

* use  0.14

Co-authored-by: Lysandre <lysandre@huggingface.co>

---------

Co-authored-by: Lysandre <lysandre@huggingface.co>
2023-11-15 10:46:02 +01:00
Yih-Dar
64e21ca2a4
Make some jobs run on the GitHub Actions runners (#27512)
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-11-15 10:43:16 +01:00
Arthur
1e0e2dd376
[CircleCI] skip test_assisted_decoding_sample for everyone (#27511)
* skip 4 tests

* nits

* style

* wow it's not my day

* skip new failing tests

* style

* skip for NLLB MoE as well

* skip `test_assisted_decoding_sample` for everyone
2023-11-15 10:17:51 +01:00
Phyzer
7ddb21b4db
Update spelling mistake (#27506)
"thoroughly" was misspelled as "thouroughly"
2023-11-15 09:50:45 +01:00
NielsRogge
72f531ab6b
[Table Transformer] Add Transformers-native checkpoints (#26928)
* Improve conversion scripts

* Fix paths

* Fix style
2023-11-15 09:35:53 +01:00
NielsRogge
cc0dc24bc9
[Fuyu] Add tests (#27001)
* Add tests

* Add integration test

* More improvements

* Fix tests

* Fix style

* Skip gradient checkpointing tests

* Update script

* Remove scripts

* Remove Fuyu from auto mapping

* Fix integration test

* More improvements

* Remove file

* Add Fuyu to slow documentation tests

* Address comments

* Clarify comment
2023-11-15 09:33:04 +01:00
Arthur
186c077513
[CI-test_torch] skip test_tf_from_pt_safetensors and test_assisted_decoding_sample (#27508)
* skip 4 tests

* nits

* style

* wow it's not my day

* skip new failing tests

* style

* skip for NLLB MoE as well
2023-11-15 08:39:29 +01:00
Zach Mueller
2fc33ebead
Track the number of tokens seen to metrics (#27274)
* Add tokens seen

* Address comments, add to TrainingArgs

* Update log

* Apply suggestions from code review

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Use self.args

* Fix docstring

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-11-14 15:31:04 -05:00
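Counting the tokens a trainer has actually seen usually reduces to summing the attention mask, since each 1 marks a real (non-padding) token. A minimal sketch with a plain-list batch (the helper name is made up for illustration):

```python
def tokens_seen_in_batch(attention_mask: list[list[int]]) -> int:
    # Each 1 in the attention mask marks a real token; 0s are padding.
    return sum(sum(row) for row in attention_mask)


batch_mask = [
    [1, 1, 1, 0],  # 3 real tokens, 1 pad
    [1, 1, 0, 0],  # 2 real tokens, 2 pads
]
print(tokens_seen_in_batch(batch_mask))  # 5
```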
amyeroberts
303c1d69f3
Update processor mapping for hub snippets (#27477) 2023-11-14 20:05:54 +00:00
Zach Mueller
067c4a310d
Have seq2seq just use gather (#27025)
* Have seq2seq just use gather

* Change

* Reset after

* Make slow

* Apply suggestions from code review

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Clean

* Simplify and just use gather

* Update tests/trainer/test_trainer_seq2seq.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* gather always for seq2seq

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-11-14 14:54:44 -05:00
Costa Huang
250032e974
Minor type annotation fix (#27276)
* Minor type annotation fix

* Trigger Build
2023-11-14 19:09:21 +00:00
Joao Gante
a53a0c5159
Generate: GenerationConfig.from_pretrained can return unused kwargs (#27488) 2023-11-14 18:40:57 +00:00
Matt
5468ab3555
Update and reorder docs for chat templates (#27443)
* Update and reorder docs for chat templates

* Fix Mistral docstring

* Add section link and small fixes

* Remove unneeded line in Mistral example

* Add comment on saving memory

* Fix generation prompts link

* Fix code block languages
2023-11-14 18:26:13 +00:00
Joao Gante
fe472b1db4
Generate: fix ExponentialDecayLengthPenalty doctest (#27485)
fix exponential doctest
2023-11-14 18:21:50 +00:00
jiaqiw09
73bc0c9e88
translate hpo_train.md and perf_hardware.md to Chinese (#27431)
* translate

* translate

* update
2023-11-14 09:57:17 -08:00
amyeroberts
78f6ed6c70
Revert "[time series] Add PatchTST (#25927)" (#27486)
The model was merged before final review and approval.

This reverts commit 2ac5b9325e.
2023-11-14 12:24:00 +00:00
Sanchit Gandhi
a4616c6767
[Whisper] Fix pipeline test (#27442) 2023-11-14 11:18:26 +00:00
Max Bain
b86c54d9ff
Clap processor: remove wasteful np.stack operations (#27454)
remove wasteful np.stack

np.stack on a large 1-D tensor caused ~0.5 s of processing time on short audio (<10 s), compared to 0.02 s for medium-length audio.
2023-11-14 10:41:12 +00:00
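The waste here is that `np.stack` always allocates and copies, even when there is only one array to "stack". Adding a batch axis with `np.newaxis` gives a view over the same memory instead; a small sketch of the difference:

```python
import numpy as np

audio = np.zeros(480_000, dtype=np.float32)  # ~10 s of audio at 48 kHz

# Wasteful: np.stack copies the whole buffer just to add a leading axis.
copied = np.stack([audio])

# Cheap alternative: np.newaxis creates a view, no data is copied.
viewed = audio[np.newaxis, :]

assert copied.shape == viewed.shape == (1, 480_000)
assert np.shares_memory(audio, viewed)       # view over the same buffer
assert not np.shares_memory(audio, copied)   # stack made a full copy
```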
Sihan Chen
4309abedbc
Add speecht5 batch generation and fix wrong attention mask when padding (#25943)
* fix speecht5 wrong attention mask when padding

* enable batch generation and add parameter attention_mask

* fix doc

* fix format

* batch postnet inputs, return batched lengths, and stay consistent with the old API

* fix format

* fix format

* fix the format

* fix doc-builder error

* add test, cross attention and docstring

* optimize code based on reviews

* docbuild

* refine

* not skip slow test

* add consistent dropout for batching

* loose atol

* add another test regarding to the consistency of vocoder

* fix format

* refactor

* add return_concrete_lengths as a parameter for consistency with/without batching

* fix review issues

* fix cross_attention issue
2023-11-14 09:54:09 +00:00
Yoach Lacombe
ee4fb326c7
Fix M4T weights tying (#27395)
fix seamless m4t weights tying
2023-11-14 09:52:11 +00:00
Arthur
e107ae364e
[CI-test_torch] skip test_tf_from_pt_safetensors for 4 models (#27481)
* skip 4 tests

* nits

* style

* wow it's not my day
2023-11-14 10:34:03 +01:00
Younes Belkada
d71fa9f618
[Peft] modules_to_save support for peft integration (#27466)
* `modules_to_save` support for peft integration

* Update docs/source/en/peft.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* slightly elaborate test

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-11-14 10:32:57 +01:00