Commit Graph

179 Commits

Author SHA1 Message Date
Anton Lozhkov
e226a24f84
[xtreme-s] Update Minds14 results (#16241)
* update results

* per-language metrics

* Format the per-language metrics
2022-03-21 19:33:59 +01:00
Suraj Patil
93d3fd8645
remove jax.ops.index (#16220) 2022-03-17 17:51:43 +01:00
Anton Lozhkov
d35e0c6247
Minor fixes to XTREME-S (#16193)
* Minor fixes

* Fix vocab union

* Update examples/research_projects/xtreme-s/README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update README

* unused import

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-03-16 17:23:00 +04:00
Sanchit Gandhi
ee27b3d7df
Replace all deprecated jax.ops operations with jnp's at (#16078)
* Replace all deprecated `jax.ops` operations with jnp's `at`

* np to jnp scores

* suggested changes
2022-03-16 09:08:55 +00:00
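
A minimal sketch of the migration described in this commit, for readers unfamiliar with it: the deprecated jax.ops indexed-update helpers versus the jnp `.at[]` property that replaced them (array shape and values are illustrative):

```python
import jax.numpy as jnp

scores = jnp.zeros((4, 10))

# Deprecated style, removed from recent JAX releases:
#   scores = jax.ops.index_update(scores, jax.ops.index[0, 3], 1.0)

# Current style: functional updates via the .at[] property
scores = scores.at[0, 3].set(1.0)   # set a single element, returns a new array
scores = scores.at[1].add(2.0)      # add to an entire row, also returns a new array
```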
Patrick von Platen
c2dc89be62
[Xtreme-S] fix some namings (#16183) 2022-03-16 01:21:31 +01:00
Anton Lozhkov
99fd3eb4a5
Add the XTREME-S fine-tuning example (#15985)
* CTC+classification draft

* CTC+classification draft

* style

* multilingual runs

* Fix race condition during processor.from_pretrained

* Merge covost experiments

* Add README

* Quality

* Switch to .all configs

* Fix typos
2022-03-16 00:21:06 +01:00
Stas Bekman
580dd87c55
[Deepspeed] add support for bf16 mode (#14569)
* [WIP] add support for bf16 mode

* prep for bf16

* prep for bf16

* fix; zero2/bf16 is ok

* check bf16 is available

* test fixes

* enable zero3_bf16

* config files

* docs

* split stage_dtype; merge back to non-dtype-specific config file

* fix doc

* cleanup

* cleanup

* bfloat16 => bf16 to match the PR changes

* s/zero_gather_fp16_weights_on_model_save/zero_gather_16bit_weights_on_model_save/; s/save_fp16_model/save_16bit_model/

* test fixes/skipping

* move

* fix

* Update docs/source/main_classes/deepspeed.mdx

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* backticks

* cleanup

* cleanup

* cleanup

* new version

* add note about grad accum in bf16

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-03-11 17:53:53 -08:00
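
A hedged sketch of what enabling the new bf16 mode looks like from the Trainer side, assuming DeepSpeed is installed and the hardware supports bfloat16; the config keys follow the public DeepSpeed schema, and the exact values shipped with the PR may differ:

```python
from transformers import TrainingArguments

# "auto" values are filled in by the HF Trainer integration at runtime
ds_config = {
    "bf16": {"enabled": True},               # replaces the fp16 block for bfloat16 training
    "zero_optimization": {"stage": 2},       # the zero2/bf16 combination mentioned above
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

args = TrainingArguments(
    output_dir="out",
    bf16=True,            # Trainer-side bf16 switch
    deepspeed=ds_config,  # a dict or a path to a JSON config file
)
```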
Sanchit Gandhi
6c9010ef63
Update README.md 2022-03-10 10:20:37 +01:00
Sanchit Gandhi
b71474895d
Update README.md 2022-03-04 09:58:45 +01:00
Ross Johnstone
e535c389aa
Fix tiny typo (#15884) 2022-03-02 15:37:05 +01:00
Ivan Agarský
5444687f0f
Fix minor comment typos (#15740) 2022-02-21 12:41:27 +01:00
Shamane Siri
80f1a59168
updated with latest PL and Ray (#15653) 2022-02-15 16:53:05 +01:00
Stas Bekman
fcb0f74397
[research_projects] deal with security alerts (#15594)
* [research_projects] deal with security alerts

* add a note of the original PL ver and warning
2022-02-11 14:31:09 -05:00
Lysandre Debut
7732d0fe7a
Upgrade black to version ~=22.0 (#15565)
* Upgrade black to version ~=22.0

* Check copies

* Fix code
2022-02-09 09:28:57 -05:00
Anton Lozhkov
a459f7f97d
Add ASR CTC streaming example (#15309)
* Single-epoch run

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Infinite dataset

* Trainer fix + distributed benchmark

* Benchmark fix

* unused import

* interleaved splits

* interleaved splits

* has_length util

* Move to research projects

* Leftover Sized checks

* Bump min version

* Unused import

* Revert trainer changes

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-02-07 18:35:37 +03:00
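
A hedged sketch of the streaming setup this example is built around, using the datasets library; the dataset name, config and splits below are illustrative rather than the exact ones wired into the script:

```python
from datasets import interleave_datasets, load_dataset

# Stream two splits instead of downloading them, then interleave them into a
# single iterable dataset for the streaming Trainer to consume.
train_a = load_dataset("common_voice", "tr", split="train", streaming=True)
train_b = load_dataset("common_voice", "tr", split="validation", streaming=True)
train = interleave_datasets([train_a, train_b])

for sample in train.take(2):  # iterable datasets support .take()
    print(sample["sentence"])
```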
Jonatas Grosman
f624249d8b
[Robust Speech Challenge] Add missing LR parameter (#15428) 2022-01-31 15:50:56 +01:00
Julien Plu
aa19f478ac
Add (M)Luke model training for Token Classification in the examples (#14880)
* Add Luke training

* Fix true label tags

* Fix true label tags

* Fix true label tags

* Update the data collator for Luke

* Some training refactor for Luke

* Improve data collator for Luke

* Fix import

* Fix datasets concatenation

* Add the --max_entity_length argument for Luke models

* Remove unused code

* Fix style issues

* Fix style issues

* Move the Luke training into a separate folder

* Fix style

* Fix naming

* Fix filtering

* Fix filtering

* Fix filter

* Update some preprocessing

* Move luke to research_projects

* Checkstyle

* Address comments

* Fix style
2022-01-31 07:58:18 -05:00
dependabot[bot]
628b59e51d
Bump numpy from 1.19.2 to 1.21.0 in /examples/research_projects/lxmert (#15369)
Bumps [numpy](https://github.com/numpy/numpy) from 1.19.2 to 1.21.0.
- [Release notes](https://github.com/numpy/numpy/releases)
- [Changelog](https://github.com/numpy/numpy/blob/main/doc/HOWTO_RELEASE.rst.txt)
- [Commits](https://github.com/numpy/numpy/compare/v1.19.2...v1.21.0)

---
updated-dependencies:
- dependency-name: numpy
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-01-27 14:46:15 -05:00
dependabot[bot]
ca0848b2ff
Bump notebook in /examples/research_projects/visual_bert (#15368)
Bumps [notebook](http://jupyter.org) from 6.1.5 to 6.4.1.

---
updated-dependencies:
- dependency-name: notebook
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
2022-01-27 14:45:58 -05:00
dependabot[bot]
7d45a2e81c
Bump numpy in /examples/research_projects/visual_bert (#15367)
Bumps [numpy](https://github.com/numpy/numpy) from 1.19.2 to 1.21.0.
- [Release notes](https://github.com/numpy/numpy/releases)
- [Changelog](https://github.com/numpy/numpy/blob/main/doc/HOWTO_RELEASE.rst.txt)
- [Commits](https://github.com/numpy/numpy/compare/v1.19.2...v1.21.0)

---
updated-dependencies:
- dependency-name: numpy
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-01-27 14:45:18 -05:00
Anton Lozhkov
196cce6e9b
Add a device argument to the eval script (#15371)
* Device argument for the eval script

* Default to none

* isort
2022-01-27 15:58:55 +01:00
Patrick von Platen
4bf97415a4
Update eval.py (#15310) 2022-01-24 11:46:38 +01:00
Patrick von Platen
11afb709ec
[Robust Speech Challenge] Add timeline (#15274) 2022-01-21 17:12:09 +01:00
lewtun
833635e259
Move BART + ONNX example to research_projects (#15271)
* Move BART + ONNX example to research_projects

* Add author information
2022-01-21 14:47:34 +01:00
Anton Lozhkov
85ea462c08
Update README.md (#15246)
Clarify OVH instruction
2022-01-20 13:40:26 +03:00
Anton Lozhkov
e57468b8a8
Update README.md (#15239)
Add an OVHcloud tutorial URL for the Robust Speech Challenge
2022-01-20 11:46:50 +03:00
Patrick von Platen
691878ee2f
Update README.md (#15233) 2022-01-19 18:03:17 +01:00
Suraj Patil
2a5a384970
fix speech event readme (#15227) 2022-01-19 15:30:03 +01:00
Patrick von Platen
6d92c429c7
Update README.md (#15226) 2022-01-19 15:23:00 +01:00
Patrick von Platen
19c217b4b7
Update README.md 2022-01-19 15:21:03 +01:00
Patrick von Platen
5439cda7f0
Update README.md 2022-01-19 15:19:57 +01:00
Patrick von Platen
e118e085ea
[Robust Speech Event] Add guides (#15155)
* up

* improve readme

* up

* up

* more info

* up

* up

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* add more stuff for eval

* update

* up

* Update README.md

* Update examples/research_projects/xls_r/README.md

Co-authored-by: Omar Sanseviero <osanseviero@users.noreply.github.com>

* apply omar's suggestions

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Omar Sanseviero <osanseviero@users.noreply.github.com>
2022-01-18 18:44:48 +01:00
Leandro von Werra
aa0135f2e0
fix: switch from slow to generic tokenizer class (#15122) 2022-01-12 09:12:43 -05:00
Patrick von Platen
d72343d2b8
[Wav2Vec2 Speech Event] Add speech event v2 (#15083)
* up

* up

* up

* up

* up

* up

* improve

* up

* up

* Update src/transformers/trainer.py

* up

* up

* up
2022-01-10 10:46:21 +01:00
Leandro von Werra
1d651868d6
add custom stopping criteria to human eval script (#14897) 2021-12-23 14:59:11 +01:00
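A hedged sketch of the kind of custom stopping criterion this adds to the human-eval script: generation stops once every sequence in the batch contains one of a set of end-of-function markers (class name, markers and model choice are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList


class EndOfFunctionCriteria(StoppingCriteria):
    """Stop generation once all sequences contain an end-of-function marker."""

    def __init__(self, start_length, eof_strings, tokenizer):
        self.start_length = start_length
        self.eof_strings = eof_strings
        self.tokenizer = tokenizer

    def __call__(self, input_ids, scores, **kwargs):
        decoded = self.tokenizer.batch_decode(input_ids[:, self.start_length:])
        return all(any(s in d for s in self.eof_strings) for d in decoded)


tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("def add(a, b):", return_tensors="pt")
criteria = StoppingCriteriaList(
    [EndOfFunctionCriteria(inputs["input_ids"].shape[1], ["\nclass", "\ndef", "\nprint"], tokenizer)]
)
out = model.generate(**inputs, max_new_tokens=64, stopping_criteria=criteria)
print(tokenizer.decode(out[0]))
```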
Nathan Cooper
48bf7e47a0
Code parrot minor fixes/niceties (#14666)
* Add some nicety flags for finer control over evaluation.

* Fix dependency issue with outdated requirement

* Add additional flag to example to ensure eval is done

* Wrap code into main function for accelerate launcher to find

* Fix valid batch size flag in readme

* Add note to install git-lfs when initializing/training the model

* Update examples/research_projects/codeparrot/scripts/arguments.py

Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>

* Update examples/research_projects/codeparrot/README.md

Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>

* Revert "Wrap code into main function for accelerate launcher to find"

This reverts commit ff11df1c81.

* Fix formatting issue

* Move git-lfs instructions to installation section

* Add a quick check before code generation for code evaluation

* Fix styling issue

* Update examples/research_projects/codeparrot/scripts/human_eval.py

Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>

* Make iterable dataset use passed in tokenizer rather than globally defined one

Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
Co-authored-by: ncoop57 <nac33@students.uwf.edu>
2021-12-13 09:30:50 +01:00
Julien Chaumond
6cdc3a7844
[urls to hub] Replace outdated model tags with their now-canonical pipeline types (#14617)
* Replace outdated model tags with their now-canonical pipeline types

* spam the CI till it's green
2021-12-06 04:35:01 -05:00
Leandro von Werra
43f953cc2e
Add CodeParrot 🦜 codebase (#14536)
* add readme skeleton

* update readme

* add initialization script

* add deduplication script

* add codeparrot training script

* add code generation evaluation

* add validation loss script

* add requirements

* update readme

* tweak readme

* make style

* add highlights to readme

* add CLIs to scripts

* add tokenizer training script

* add docstring to constant length dataset

* fix defaults in arguments

* update readme with cli

* move image to hub

* tweaks of readme

* fix cli commands

* add author

* explain env variables

* fix formatting

* Update examples/research_projects/codeparrot/README.md

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>

* Apply suggestions from code review

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>

* replace generic with gpt2 tokenizer

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
2021-12-02 10:41:35 +01:00
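
A hedged sketch of the tokenizer-training step mentioned above: retraining the GPT-2 tokenizer on a streamed code corpus with train_new_from_iterator. The dataset name, column name and vocabulary size are assumptions, not the exact values used by the scripts:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

base_tokenizer = AutoTokenizer.from_pretrained("gpt2")
dataset = load_dataset("codeparrot/codeparrot-clean", split="train", streaming=True)


def batch_iterator(batch_size=1000, n_batches=100):
    iterator = iter(dataset)
    for _ in range(n_batches):
        yield [next(iterator)["content"] for _ in range(batch_size)]


# Retrain the byte-level BPE on code while keeping the GPT-2 special tokens
tokenizer = base_tokenizer.train_new_from_iterator(batch_iterator(), vocab_size=32768)
tokenizer.save_pretrained("codeparrot-tokenizer")
```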
Thomas Viehmann
6ed9882ddb
use functional interface for softmax in attention (#14198)
* use the functional interface instead of instantiating a module and immediately calling it

* fix torch.nn.functional to nn.functional. Thank you Stas!
2021-11-30 11:47:33 -05:00
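
A minimal sketch of that change: the attention weights are computed with the functional softmax instead of building an nn.Softmax module inline and calling it once (tensor shapes are illustrative):

```python
import torch
from torch import nn

scores = torch.randn(2, 4, 8, 8)  # (batch, heads, queries, keys) attention scores

attn_before = nn.Softmax(dim=-1)(scores)            # old: instantiate a module, then call it
attn_after = nn.functional.softmax(scores, dim=-1)  # new: functional interface

assert torch.allclose(attn_before, attn_after)
```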
Nicholas Broad
69e16abf98
Switch from using sum for flattening lists of lists in group_texts (#14472)
* remove sum for list flattening

* change to chain(*)

* make chain object a list

* delete empty lines

per sgugger's suggestions

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Nicholas Broad <nicholas@nmbroad.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2021-11-22 16:17:26 -05:00
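
A minimal sketch of the group_texts change: flattening the batched lists with itertools.chain instead of sum(..., []), which re-allocates a growing list on every step (the example batch is illustrative):

```python
from itertools import chain

examples = {"input_ids": [[1, 2, 3], [4, 5], [6, 7, 8, 9]]}

# before: concatenated = sum(examples["input_ids"], [])
concatenated = list(chain(*examples["input_ids"]))  # chain object materialized as a list
print(concatenated)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```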
Shang Zhang
a59e7c1ed4
Add QDQBert model and quantization examples of SQUAD task (#14066)
* clean up branch for add-qdqbert-model

* README update for QAT example; update docstrings in modeling_qdqbert.py

* Update qdqbert.rst

* Update README.md

* Update README.md

* calibration data using training set; QAT example runs in fp32

* re-use BertTokenizer for qdqbert

* Update docs/source/model_doc/qdqbert.rst

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update docs/source/model_doc/qdqbert.rst

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update docs/source/model_doc/qdqbert.rst

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* remove qdqbert tokenizer

* Update qdqbert.rst

* update evaluate-hf-trt-qa.py

* update configuration_qdqbert.py

* update modeling_qdqbert.py: add copied statement; replace assert with ValueError

* update copied from statement

* add is_quantization_available; run make fix-copies

* unittest add require_quantization

* add backend dependency to qdqbert model

* update README; update evaluate script; make style

* lint

* docs qdqbert update

* circleci build_doc add pytorch-quantization for qdqbert

* update README

* update example readme with instructions to upgrade TensorRT to 8.2

* Update src/transformers/models/qdqbert/configuration_qdqbert.py

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

* Update src/transformers/models/qdqbert/configuration_qdqbert.py

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

* Update src/transformers/models/qdqbert/configuration_qdqbert.py

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

* Update src/transformers/models/qdqbert/configuration_qdqbert.py

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

* change quantization to pytorch_quantization for backend requirement

* feed_forward_chunking not supported in QDQBert

* make style

* update model docstrings and comments in testing scripts

* rename example to quantization-qdqbert; rename example scripts from qat to quant

* Update src/transformers/models/qdqbert/modeling_qdqbert.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* rm experimental functions in quant_trainer

* qa cleanup

* make fix-copies for docs index.rst

* fix doctree; use post_init() for qdqbert

* fix early device assignment for qdqbert

* fix CI:Model templates runner

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2021-11-19 13:33:39 -05:00
Antonio Carlos Falcão Petri
7544efc92e
[Gradient checkpointing] Update Wav2Vec scripts (#14036)
Co-authored-by: Stas Bekman <stas@stason.org>
2021-11-17 18:37:21 +01:00
Eldar Kurtic
9fd937ead1
Replace BertLayerNorm with LayerNorm (#14385)
Running Movement pruning experiments with the newest HuggingFace Transformers release would crash due to the non-existent BertLayerNorm.
2021-11-15 13:25:10 -05:00
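
A minimal sketch of the replacement: BertLayerNorm no longer exists in recent transformers releases, and torch.nn.LayerNorm is the drop-in equivalent (hidden size and eps mirror the usual BERT defaults):

```python
import torch
from torch import nn

hidden_size = 768

# before (crashes on newer releases):
#   from transformers.models.bert.modeling_bert import BertLayerNorm
#   norm = BertLayerNorm(hidden_size, eps=1e-12)
norm = nn.LayerNorm(hidden_size, eps=1e-12)

hidden_states = torch.randn(2, 16, hidden_size)
print(norm(hidden_states).shape)  # torch.Size([2, 16, 768])
```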
Stas Bekman
77262ef750
fix --gradient_checkpointing (#13964) 2021-11-11 17:50:21 +01:00
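A hedged sketch of the two supported ways to turn on gradient checkpointing after this fix, at the model level and through the Trainer arguments (model choice and output_dir are illustrative):

```python
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("gpt2")
model.gradient_checkpointing_enable()  # model-level switch

args = TrainingArguments(output_dir="out", gradient_checkpointing=True)  # Trainer-level flag
```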
Suraj Patil
e92190c0f8
Fix Flax params dtype (#13098)
* fix inits

* fix embed dtype

* fix embed dtype

* add test to check default dtype

* quality

* add type conversion methods for flax models

* more robust casting

* cast sinusoidal positions

* update pegasus

* update albert

* update test

* make sure dtype is passed to every module

* style

* fix electra dense

* fix t5

* quality

* add more tests

* better name

* use the dtype for lm head computation

* fix albert

* style

* fix albert embed dtype

* more tests

* fix vision enc-dec

* cleanup

* fix embed dtype pegasus

* fix default param test

* doc

* update template

* fix final_logits_bias dtype

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix doc

* fix doc

* add detailed docstring for dtype parameter

* remove un-necessary import

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2021-11-11 14:45:20 +05:30
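
A hedged sketch of the dtype parameter this PR cleans up: dtype controls the dtype of the computation (e.g. bfloat16 on TPU), while the stored parameters stay in float32 unless cast with the new conversion helpers (the checkpoint name is illustrative):

```python
import jax.numpy as jnp
from transformers import FlaxBertModel

# Run the forward pass in bfloat16 while keeping float32 parameters
model = FlaxBertModel.from_pretrained("bert-base-uncased", dtype=jnp.bfloat16)

# Explicitly cast the parameters themselves with the conversion helpers added here
params_bf16 = model.to_bf16(model.params)
params_fp32 = model.to_fp32(params_bf16)
```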
Suraj Patil
85a4bda4f4
bump flax version (#14343) 2021-11-09 22:15:22 +05:30
Junbum Lee
c016dbdbda
Fix execution PATH for PPLM Example (#14287) 2021-11-06 10:33:47 -04:00
Sylvain Gugger
558f8543ba
Update Transformers to huggingface_hub >= 0.1.0 (#14251)
* Update Transformers to huggingface_hub >= 0.1.0

* Forgot to save...

* Style

* Fix test
2021-11-02 18:58:42 -04:00
Thomas Wang
5b45422b58
Remove n_ctx from configs (#14165)
* Remove n_ctx from configs

* Fix GPTJ and OpenAIGPT; both are acceptable breaking changes, since no existing configs are broken by the removal

* Remove unnecessary n_positions from TFOpenAIGPT
2021-10-29 11:50:25 +02:00
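
A minimal sketch of the config change: n_ctx is dropped and n_positions is the single attribute for the maximum sequence length on the affected configs (GPT-2 shown as an example):

```python
from transformers import GPT2Config

config = GPT2Config(n_positions=1024)
print(config.n_positions)  # 1024; n_ctx has been dropped in favor of n_positions
```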
Antonio Carlos Falcão Petri
05a2afc252
Add missing --validation_split_percentage data args (#14119) 2021-10-22 19:04:54 +02:00