* started bf16 integration
* minor changes
* code now runs
* style
* lay foundation for bf16 testing
* lay foundation for bf16 testing
* start the tests
* better bf16 check
* style
* 2 separate checkers - one for bf16 support, another for bf16+autocast
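A hedged sketch of what the two checkers might look like (function names and the version bound are illustrative, not the actual API):
```python
import torch
from packaging import version

def is_bf16_available():
    # hardware check: needs CUDA and an Ampere-or-newer GPU
    return torch.cuda.is_available() and torch.cuda.is_bf16_supported()

def is_bf16_autocast_available():
    # autocast with bf16 additionally needs torch >= 1.10
    return is_bf16_available() and version.parse(torch.__version__) >= version.parse("1.10")
```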
* Update src/transformers/training_args.py
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* a couple of comment resolutions
* more comment resolutions
* resolved a small bug
* just some print statements
* added todo marking
* added a todo
* adjust for API change s/fast_dtype/dtype/
* fix style
* merge 2 bf16 util functions
* bf16 now does scaling too
* Add support for bfloat16
* Revert T5 layernorm to float32
This is based on the comment at https://github.com/huggingface/transformers/pull/14448/files#r752660929 and the PyTorch PR https://github.com/pytorch/pytorch/pull/66920 .
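For reference, the float32-layernorm pattern being reverted to looks roughly like this (a sketch of the idea, not the exact diff):
```python
import torch
from torch import nn

class T5LayerNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        # always accumulate the variance in float32 for numerical stability
        variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
        # cast back so half-precision (fp16/bf16) runs stay in their dtype
        if self.weight.dtype in (torch.float16, torch.bfloat16):
            hidden_states = hidden_states.to(self.weight.dtype)
        return self.weight * hidden_states
```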
* Add comment about conversion to float32 before returning the numpy data
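The conversion matters because NumPy has no bfloat16 dtype; a minimal illustration:
```python
import torch

t = torch.randn(2, 2, dtype=torch.bfloat16)
arr = t.to(torch.float32).numpy()  # t.numpy() alone would fail: NumPy has no bf16 dtype
```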
* Add comment about AMP-bfloat16 incompatibility
* Fix formatting
* typo
* reformer / bf16
* cleanup
* require at least pt-1.10
* fix
* will deal with deepspeed separately
* cleanup
* revert
* cleanup
* fp16_full_eval and bf16_full_eval are separate modes
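A hedged usage sketch of the two flags (only the flag names come from this PR; the rest is illustrative):
```python
from transformers import TrainingArguments

# evaluation runs entirely in the chosen half precision,
# independently of whether AMP is used for training
args = TrainingArguments(output_dir="out", bf16_full_eval=True)
# or, as a separate mode: TrainingArguments(output_dir="out", fp16_full_eval=True)
```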
* proper deprecation
* cleanup
* test and fixes
* spelling
* cleanup
* add a note that this API is experimental
Co-authored-by: jamie <jamie@cortx.com>
Co-authored-by: Stas Bekman <stas@stason.org>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: suriya <suriya@cortx.com>
Co-authored-by: Manuel R. Ciosici <manuelrciosici@gmail.com>
* Init Flax implementation for Blenderbot
* Add a majority of stuff except for tests
* make style quality
* Add tests and fix some bugs
* Add tests
* Clean source code and fix some bugs
* Fix copies and docs
* Fix jax device condition for tests
* Fix layer norm in the encoder
* Fix a few typos in the test file
* make fix-copies
* make fix-copies
* fix layer norm
* Fix Flax params dtype (#13090)
* Fix PR reference (#13098)
* make fix-copies
* Update tests/test_modeling_flax_blenderbot.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* TF Tapas first commit
* updated docs
* updated logger message
* updated pytorch weight conversion
script to support scalar arrays
* added use_cache to tapas model config to
work properly with tf input_processing
* 1. rm embeddings_sum
2. added # Copied
3. + TFTapasMLMHead
4. and a lot of other small fixes
* updated docs
* + test for tapas
* updated testing_utils to check
is_tensorflow_probability_available
* converted model logits post processing using
numpy to work with both PT and TF models
* + TFAutoModelForTableQuestionAnswering
* added TF support
* added test for
TFAutoModelForTableQuestionAnswering
* added test for
TFAutoModelForTableQuestionAnswering pipeline
* updated auto model docs
* fixed typo in import
* added tensorflow_probability to run tests
* updated MLM head
* updated tapas.rst with TF model docs
* fixed optimizer import in docs
* updated convert to np:
data from the PT model is no longer
`transformers.tokenization_utils_base.BatchEncoding`
after the pipeline upgrade
* updated pipeline:
1. torch.no_grad removed; the pipeline forward handles it
2. token_type_ids converted to numpy
* updated docs.
* removed `use_cache` from config
* removed floats_tensor
* updated code comment
* updated Copyright Year and
made logits_aggregation Optional
* updated docs and comments
* updated docstring
* fixed model weight loading
* make fixup
* fix indentation
* added tf slow pipeline test
* pip upgrade
* upgrade python to 3.7
* removed from_pt from tests
* revert commit f18cfa9
* [deepspeed] zero inference
* only ZeRO stage 3 makes sense for inference (stages 1/2 shard optimizer states and gradients, which don't exist at inference time)
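A hypothetical minimal config for that mode (standard DeepSpeed keys; values illustrative):
```python
# no optimizer/scheduler sections are needed since nothing is trained
ds_config = {
    "train_micro_batch_size_per_gpu": 1,  # still required by the engine
    "zero_optimization": {
        "stage": 3,  # parameter partitioning is the only sharding that helps here
    },
}
# could then be passed through, e.g., TrainingArguments(..., deepspeed=ds_config)
```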
* fix and style
* docs
* rework
* fix test
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* responding to suggestions
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* test: make sure model configs are jsonifiable
* fix: return python dict instead of config object
* fix: accept pretrained config and use correct class
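A small sketch of the property the test enforces (model class chosen only for illustration):
```python
import json
from transformers import BertConfig

config_dict = BertConfig().to_dict()  # a plain python dict, not a config object
json.dumps(config_dict)               # must succeed without a custom encoder
```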
* Re-enabling slow tests and applying them to core models only
* Re-enabling slow tests and applying them to core models only
* Add new test file to fetcher
* Remove tooslow tests from test_modeling_tf_common.py
* make style
* Style fixes
* Style fixes
* Style fixes
* Style fixes
* Adding core tests to GPT2 and BART
* Removing unused imports
Co-authored-by: niklas.fruehauf <niklas.fruehauf@sovanta.com>
Co-authored-by: matt <rocketknight1@gmail.com>
* add new wav2vec2 translation
* correct
* up
* add tests
* correct end copy
* correct more
* up
* correct unispeech sat
* finish
* finalize
* finish
* up
* stop training when a finite IterableDataset is exhausted
when using an iterable dataset, num_epochs is set to
sys.maxsize to make sure all data is consumed;
likewise we want to set max_steps high enough,
but still stop when all data is consumed
(cherry picked from commit 6f0e1d6363153da9051e93acffe1cbab3a3f3b12)
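A minimal sketch of the stopping logic (names are hypothetical, not the actual Trainer internals):
```python
import sys

def train_loop(dataloader, max_steps):
    step = 0
    for _ in range(sys.maxsize):  # "epochs" are unbounded for iterable datasets
        saw_data = False
        for batch in dataloader:
            saw_data = True
            step += 1
            # ... forward / backward / optimizer step ...
            if step >= max_steps:
                return
        if not saw_data:
            # the finite iterable is exhausted: stop instead of spinning forever
            return
```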
* fix typo flase -> false
* add test for stopping training on exhausted finite iterable dataset
* remove redundant gradient_accumulation_steps
* run make style
reformat training_args docstring
* Fix gradient_checkpointing backward compatibility
* Remove needless line
* make sure mask prob is big enough and length small enough
* Fix tests
Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
* Adding support for raw python `generator` in addition to `Dataset`
The main goal is to ease the creation of streaming data for the pipeline.
`Dataset` is more involved and PyTorch-specific.
This PR provides a way to use a plain python iterator too.
This enables #14250 but can be proposed as a standalone PR.
```python
from transformers import pipeline

def read_data(filename):
    with open(filename, "r") as f:
        for line in f:
            yield line

pipe = pipeline("text-classification")
for classified in pipe(read_data("large_file.txt")):
    print("Success!", classified)
```
The main caveat is the interaction with `DataLoader` when
`num_workers > 1`. With multiple workers, each worker receives a copy
of the generator (as with `IterableDataset`), so a naive iterator
fails: every worker iterates over all items of the generator.
There are ways to do clever "skipping" (see the sketch below), but
every worker still has to pass through all items of the generator,
merely ignoring the ones it does not handle, which can still be
costly depending on the case.
Using `num_workers=1` is the simplest fix and, if the cost of loading
your data is small enough, good enough. In the above example,
smart tricks to skip some lines are unlikely to be a net positive.
If there is a cheap way to "jump" directly to some data, then
`Dataset` is the better choice, since each worker can then jump
by itself.
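A hypothetical sketch of that skipping, to make the trade-off concrete (class name illustrative):
```python
from torch.utils.data import IterableDataset, get_worker_info

class ShardedGenerator(IterableDataset):
    def __init__(self, generator_fn):
        # a zero-argument callable returning a fresh generator
        self.generator_fn = generator_fn

    def __iter__(self):
        info = get_worker_info()
        num_workers = info.num_workers if info is not None else 1
        worker_id = info.id if info is not None else 0
        # every worker still walks the full stream; it just keeps its share
        for i, item in enumerate(self.generator_fn()):
            if i % num_workers == worker_id:
                yield item
```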
* Adding iterator support for `tf` too.
* fix loading flax bf16 weights in pt
* fix clip test
* fix t5 test
* add logging statement
* Update src/transformers/modeling_flax_pytorch_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* switch back to native any
* fix check for bf16 weights
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
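A hedged sketch of the underlying issue and fix: `torch.from_numpy` cannot ingest bfloat16 arrays, so flax bf16 params get upcast to float32 first (helper name illustrative):
```python
import jax
import jax.numpy as jnp
import numpy as np

def upcast_bf16_params(flax_state):
    # uses the built-in any() over the flattened param tree
    if any(p.dtype == jnp.bfloat16 for p in jax.tree_util.tree_leaves(flax_state)):
        flax_state = jax.tree_util.tree_map(
            lambda p: p.astype(np.float32) if p.dtype == jnp.bfloat16 else p,
            flax_state,
        )
    return flax_state
```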
* Start the work for TFViTModel
* Convert to TF code - needs checking in the follow-up commits
* Clean up model code
* Expose TFViTModel
* make style
* make quality
* Add test
* make style & quality
* Fix some imports
* fix wrong usage - *kwargs => **kwargs
* Fix Conv2D weight loading (PT->TF) issue
* Add tests for images with different sizes + fix model
* Fix some common tests for TFViTModel
* Use inputs instead of input_ids in test_compile_tf_model
* Add a comment about transpose and Conv2D in convert_tf_weight_name_to_pt_weight_name
* Avoid transpose in TFViT call
* Fix Conv2D issue in load_tf2_weights_in_pytorch_model
* Use tf.keras.layers.Conv2D instead of tf.nn.conv2d
* Using simpler heuristic to detect Conv2D layer
* Change convert_tf_weight_name_to_pt_weight_name to return TransposeType
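A hedged sketch of what an enum-valued return might look like; member names and the helper are illustrative:
```python
from enum import Enum
import numpy as np

class TransposeType(Enum):
    NO = "no"          # copy the weight as-is
    SIMPLE = "simple"  # plain 2D transpose
    CONV2D = "conv2d"  # TF (H, W, in, out) <-> PT (out, in, H, W)

def transpose_for_pt(weight: np.ndarray, transpose: TransposeType) -> np.ndarray:
    if transpose is TransposeType.CONV2D:
        return weight.transpose(3, 2, 0, 1)
    if transpose is TransposeType.SIMPLE:
        return weight.T
    return weight
```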
* Check tf_weight_shape is not None before using it
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* fix missing comma
* fix input dtype
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* correct order of overflowing tokens for the LayoutLMv2 tokenizer
* test to check order of overflowing_tokens for a seq of input_ids
* fix up quality
* added suggested changes
* check that tests the bbox sequence
* pair_input test added
* pass quality test
* check bbox sequence added
* unittest method
* comments added
* add overflowing bbox test
* improved "seq_1"
Co-authored-by: SaulLu <55560583+SaulLu@users.noreply.github.com>
* improve code quality
Co-authored-by: SaulLu <lucilesaul.com@gmail.com>
Co-authored-by: SaulLu <55560583+SaulLu@users.noreply.github.com>
* Adding support for `truncation` parameter on `feature-extraction`
pipeline.
Fixes #14183
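A hedged usage example of the new parameter (default model; input text illustrative):
```python
from transformers import pipeline

extractor = pipeline("feature-extraction")
# long inputs are truncated to the model's max length instead of erroring out
features = extractor("some very long text ...", truncation=True)
```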
* Fixing tests on ibert, longformer, and roberta.
* Rebase fix.