* support SentencePiece for BertJapaneseTokenizer
* add test vocab file for SentencePiece in BertJapaneseTokenizer
* make BasicTokenizer identical to transformers.models.bert.tokenization_bert.BasicTokenizer
* fix missing \n in comment
* fix missing init argument in tests
* make spm_file optional, exclude spiece.model from tests/fixtures, and add descriptive comments
* keep comment lines under 119 characters
* apply doc style check
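A minimal usage sketch of what these commits enable; the file paths are placeholders, and the exact keyword names follow the options mentioned above (`spm_file`, `subword_tokenizer_type`), so treat this as an assumption rather than the PR's own code:
```
# Sketch only: a BertJapaneseTokenizer backed by a SentencePiece model.
from transformers import BertJapaneseTokenizer

tokenizer = BertJapaneseTokenizer(
    vocab_file="vocab.txt",                  # placeholder vocab path
    spm_file="spiece.model",                 # optional SentencePiece model (see above)
    subword_tokenizer_type="sentencepiece",
)
print(tokenizer.tokenize("こんにちは、世界。"))
```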
* Added support for multivariate independent emission heads
* fix typo
* rename distr_cls
* scale is a vector in the multivariate case
* set affine transform event_dim
* fix typo
* added variable
* added beta in the config
* set beta
* remove beta-nll option in nll
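A rough illustration of the distribution construction these commits describe, written with plain `torch.distributions`; the names, shapes, and the `Normal` base distribution are assumptions for the sketch, not the PR's actual code:
```
import torch
from torch.distributions import AffineTransform, Independent, Normal, TransformedDistribution

target_dim = 3                         # hypothetical number of target variates
loc = torch.zeros(8, target_dim)       # a batch of 8 predictions
scale = torch.ones(8, target_dim)      # "scale is a vector" in the multivariate case

# Wrap a univariate distribution with Independent so the last axis becomes the
# event dim, then rescale with an AffineTransform whose event_dim matches.
head = TransformedDistribution(
    Independent(Normal(loc, torch.ones_like(loc)), 1),
    [AffineTransform(loc=0.0, scale=scale, event_dim=1)],
)
nll = -head.log_prob(torch.zeros(8, target_dim))  # per-sample NLL, shape (8,)
```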
* First step of PT->TF for composite models
* Update the tests
* For VisionEncoderDecoderModel
* Fix
* Fix
* Add comment
* Fix
* clean up import
* Save memory
* For (TF)EncoderDecoderModel
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
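For context, PT-to-TF cross-loading of a composite model is typically exercised as below; the checkpoint path is a placeholder, not a real repo:
```
# Sketch: load PyTorch weights of a composite model into its TF counterpart.
from transformers import TFVisionEncoderDecoderModel

model = TFVisionEncoderDecoderModel.from_pretrained(
    "path/to/pytorch-checkpoint",  # placeholder: a PT VisionEncoderDecoderModel checkpoint
    from_pt=True,
)
```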
* Re-enable `small_model_pt`.
Enabling the current test with the current values.
Debugging the values on the CI.
More logs? Printing doesn't work?
Using the CI values instead. Seems to be a Pillow sensitivity.
* Update src/transformers/pipelines/image_segmentation.py
Co-authored-by: Alara Dirik <8944735+alaradirik@users.noreply.github.com>
* Change the import order of the model and configuration classes
* Add (with random weights) in the comment before model initialization
* Add configuration_wavlm to doctest
* add: contrastive search for generation_utils
* add: testing scripts for contrastive search under examples/text-generation
* improve code quality
* revise the docstring; add the generation_contrastive_search.py script
* port examples/pytorch/text-generation/run_generation_contrastive_search.py to the auto-API format
* revise the relevant documents
* fix: revise the docstring of generation_contrastive_search.py
* Fix the code indentation
* fix: revise the nits and examples in contrastive_search docstring.
* fix the copyright
* delete generation_contrastive_search.py
* revise the logic in contrastive_search
* update the integration test and the docstring
* re-run the tests
* add the @slow decorator to the contrastive_search integration test
* add more tests
* do the style, quality, consistency checks
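For reference, contrastive search is invoked through `generate()` by setting `penalty_alpha` together with `top_k`; the model choice here is only an example:
```
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("DeepMind Company is", return_tensors="pt")
# penalty_alpha > 0 together with top_k > 1 selects the contrastive search path.
outputs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```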
* Clean up deprecation warnings
Notes:
Changed some strings in tests to raw strings, which changes their literal content (escape sequences are no longer interpreted) as they are fed to whatever machinery consumes them.
Test cases covering `past` in the `past`/`past_key_values` switch were changed or removed, since the deprecation warning announces its impending removal.
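A two-line reminder of why the raw-string change matters:
```
# Raw strings leave escape sequences uninterpreted, so the literal content differs.
assert "\n" != r"\n"               # "\n" is one newline char; r"\n" is backslash + "n"
assert len("\t") == 1 and len(r"\t") == 2
```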
* Add PILImageResampling abstraction for PIL.Image.Resampling
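The abstraction exists because Pillow 9.1 moved the resampling filters under `PIL.Image.Resampling`; a sketch of the compatibility idea (not the library's exact code):
```
import PIL.Image

# Pillow >= 9.1 exposes filters under PIL.Image.Resampling; older versions put
# them directly on PIL.Image. An enum like PILImageResampling can cover both.
if hasattr(PIL.Image, "Resampling"):
    BILINEAR = PIL.Image.Resampling.BILINEAR
else:
    BILINEAR = PIL.Image.BILINEAR
```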
This PR (https://github.com/huggingface/transformers/pull/19367) introduced a few breaking changes:
- Removed an argument `mask_threshold`.
- Broke the default behavior (instance vs panoptic in the function call)
https://github.com/huggingface/transformers/pull/19367/files#diff-60f846b86fb6a21d4caf60f5b3d593a04accb8f248de3029cccae2ff898c5bc3R119-R120
- Broke the actual masks: https://github.com/huggingface/transformers/pull/1961
This PR is the start of a handful that will aim at bringing back the old
behavior(s).
- tests should not have to specify `task` by default, unless we want to
modify the behavior and have a lower form of segmentation running
- `test_small_model_pt` should be working.
This specific PR starts by adding more information to the masks hash,
because a broken mask was actually easy to miss (the hashes do change,
but it was easy to overlook that one code path wasn't properly
updated).
So we go from a simple `hash` to
```
{"hash": #smaller hash, "shape": (h, w), "white_pixels": n}
```
The `shape` should help make sure the interpolation of the mask works
correctly, and `white_pixels` should help detect big regressions in the
amount of mask coverage whenever the hash changes.
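An illustrative implementation of such a signature; the function name and the choice of hashing are assumptions, not the PR's actual helper:
```
import hashlib
import numpy as np

def mask_signature(mask: np.ndarray) -> dict:
    """Richer test fingerprint for a binary (H, W) mask, as described above."""
    return {
        "hash": hashlib.sha1(mask.tobytes()).hexdigest()[:10],  # smaller hash
        "shape": mask.shape,                                    # (h, w)
        "white_pixels": int((mask > 0).sum()),
    }
```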
* add return_tensors parameter for feature_extraction w/ test
add return_tensors parameter for feature extraction
Revert "Merge branch 'feature-extraction-return-tensor' of https://github.com/ajsanjoaquin/transformers into feature-extraction-return-tensor"
This reverts commit d559da743b87914e111a84a98ba6dbb70d08ad88, reversing
changes made to bbef89278650c04c090beb65637a8e9572dba222.
call parameter directly
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
Fixup.
Update src/transformers/pipelines/feature_extraction.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Fix the imports.
* Fixing the test by not overflowing the model capacity.
Co-authored-by: AJ San Joaquin <ajsanjoaquin@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
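Example usage of the new parameter; the model name is only an example:
```
from transformers import pipeline

extractor = pipeline("feature-extraction", model="distilbert-base-uncased", framework="pt")
features = extractor("This is a test.", return_tensors=True)  # a torch.Tensor
print(features.shape)  # (1, sequence_length, hidden_size)
```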