* Rework pipeline tests
* Try to fix Flax tests
* Try to put it before
* Use a new decorator instead
* Remove ignore marker since it doesn't work
* Filter pipeline tests
* Woopsie
* Use the filtered list
* Clean up and fake modif
* Remove init
* Revert fake modif
- Fixes the image segmentation pipeline test failures caused by changes to the postprocessing methods of supported models
- Updates the ImageSegmentationPipeline tests
- Improves docs, adds 'task' argument to optionally perform semantic, instance or panoptic segmentation
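A minimal usage sketch of the reworked pipeline, assuming the argument is exposed as `task` on the call as described above (checkpoint and file names are illustrative):

```python
from transformers import pipeline

# Hypothetical checkpoint/image names; the `task` kwarg selects which
# post_process_* method the pipeline runs on the model outputs.
segmenter = pipeline("image-segmentation", model="facebook/maskformer-swin-base-coco")
results = segmenter("scene.jpg", task="panoptic")  # or "semantic" / "instance"
for segment in results:
    print(segment["label"], segment.get("score"))
```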
* Copied all the code required from transformers.models.bert.modeling_bert to here
* Fixed styling issues
* Reformatted copied names with model-specific names.
* Reverted BertEncoder part as there is already a class called BertGenerationEncoder
* Added prefixes in missing places.
Co-authored-by: vishwaspai <vishwas.pai@emplay.net>
* Make camembert TF version independent
* fixup
* fixup, all working
* remove comments
* Adding copied from roberta
Co-authored-by: Mustapha AJEGHRIR <mustapha.ajeghrir@kleegroup.com>
* removed dependency on bart (slow)
* removed dependency on bart (slow)
* adding copying comments (copied from bart to led)
* updated led docstring
* updated led docstring
* removed dependency on Bart (fast)
* replaced bart with LED in docstrings
* complying with flake8
* added more copy comments
* fixing copying comments
* added comments back
* fix copy comments
* fixing copied from comments
* fixing copied from comments
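For context, the "copied from" comments these commits add are the repository's sync markers: `utils/check_copies.py` (run via `make fix-copies`) keeps the annotated code identical to its source, applying the stated name substitution. A schematic example (the class body is elided, not the real attention code):

```python
import torch.nn as nn


# Copied from transformers.models.bart.modeling_bart.BartAttention with Bart->LED
class LEDAttention(nn.Module):
    # The marker above tells the copy checker to diff this class against the
    # Bart original, rewriting "Bart" to "LED" before comparing.
    ...
```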
* Remove dependency of Bert from Squeezebert tokenizer
* run style corrections
* update copies from BertTokenizers
* Update changes and style to Squeezebert files
* update copies for bert-fast
* validate onnx models with a different input geometry than they were saved with (sketch below)
* only test working features for now
* simpler test skipping
* rm TODO
* expose batch_size/seq_length on vit
* skip certain (name, feature, framework) parameterizations known to fail validation
* Trigger CI
* Trigger CI
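A sketch of what the new validation exercises: running an exported model under a geometry other than the export dummies. This assumes dynamic axes were declared at export time; the file and checkpoint names are illustrative.

```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
session = ort.InferenceSession("model.onnx")  # exported with e.g. batch=2, seq=8

# Validate with a different geometry, e.g. batch=4, seq=21.
encoded = tokenizer(
    ["a longer example sentence"] * 4,
    padding="max_length", max_length=21, return_tensors="np",
)
onnx_inputs = {
    node.name: np.asarray(encoded[node.name], dtype=np.int64)
    for node in session.get_inputs()
    if node.name in encoded
}
print(session.run(None, onnx_inputs)[0].shape)
```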
* Add ZeroShotObjectDetectionPipeline (#18445)
* Add AutoModelForZeroShotObjectDetection task
This commit also adds the following
- Add explicit _processor method for ZeroShotObjectDetectionPipeline.
This is necessary as pipelines don't auto infer processors yet and
`OwlVitProcessor` wraps tokenizer and feature_extractor together, to
process multiple images at once
- Add auto tests and other tests for ZeroShotObjectDetectionPipeline
* Add batching for ZeroShotObjectDetectionPipeline
* Fix doc-string ZeroShotObjectDetectionPipeline
* Fix output format: ZeroShotObjectDetectionPipeline
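A minimal usage sketch for the new pipeline (the OWL-ViT checkpoint and image file names are illustrative):

```python
from transformers import pipeline

detector = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch32")
predictions = detector(
    "street.jpg",
    candidate_labels=["a cat", "a bicycle", "a traffic light"],
)
for pred in predictions:
    # Each prediction carries a label, a confidence score and a bounding box.
    print(pred["label"], round(pred["score"], 3), pred["box"])
```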
The link to https://github.com/vasudevgupta7/bigbird is vulnerable to repojacking (it redirects to the original project, which changed its name); you should change the link to the project's current name. If you don't change the link, an attacker can claim the old repository name and attack users who trust your links.
This PR aims to rectify the discrepancy between the training performance of the HF and Timm ViT implementations.
- Initializes torch and flax ViT dense layer weights with trunc_normal instead of normal (consistent with the TF implementation), as sketched below
- Initializes cls_token and positional_embeddings with trunc_normal
- Updates DeiT copy to reflect the changes
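A schematic of the initialization change, with an illustrative `std` value (not the exact modeling code):

```python
import torch
import torch.nn as nn

def init_vit_weights(module: nn.Module, std: float = 0.02) -> None:
    # trunc_normal_ redraws samples that fall outside [a, b] (default [-2, 2]),
    # matching the Timm/TF behaviour this PR aligns with.
    if isinstance(module, nn.Linear):
        nn.init.trunc_normal_(module.weight, mean=0.0, std=std)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# cls_token and position embeddings get the same treatment.
cls_token = nn.Parameter(torch.zeros(1, 1, 768))
nn.init.trunc_normal_(cls_token, std=0.02)
```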
Ensures post_process_instance_segmentation and post_process_panoptic_segmentation methods return a tensor of shape (target_height, target_width) filled with -1 values if no segment with score > threshold is found.
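A sketch of that fallback (function and variable names are illustrative):

```python
import torch

def empty_segmentation_map(target_height: int, target_width: int) -> torch.Tensor:
    # Every pixel is -1, i.e. "no segment found above the score threshold".
    return torch.full((target_height, target_width), -1, dtype=torch.long)

segmentation = empty_segmentation_map(480, 640)
```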
* removes roberta and bert config dependencies from longformer
* adds copied from statements
* fixes style
* removes excessive comments and replaces bert with longformer in a couple of places
* fixes style
* Add a build_from_serving_sig_and_dummies method and replace all calls like model(model.dummy_inputs) with it.
* make fixup
* Remove the overridden save() as this is no longer necessary
* Also call _set_save_spec(), the last missing piece
* Ensure we set the save spec when loading from config too
* Turn this whole thing into a one-line PR
* Turn this whole thing into a one-line PR
* Turn this whole thing into a one-line PR
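A sketch of the pattern these commits consolidate: TF models are built by tracing a forward pass on their dummy inputs, which also gives Keras the input signature used for saving (`_set_save_spec` is the private Keras hook involved). Checkpoint and path names are illustrative.

```python
from transformers import TFBertModel

model = TFBertModel.from_pretrained("bert-base-uncased")
_ = model(model.dummy_inputs)  # traces a call: builds weights and sets the save spec
model.save_pretrained("./saved_model")
```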
* add sudachipy and jumanpp tokenizers for bert_japanese
* use ImportError instead of ModuleNotFoundError in SudachiTokenizer and JumanppTokenizer
* put test cases of test_tokenization_bert_japanese in one line
* add require_sudachi and require_jumanpp decorator for testing
* add sudachi and pyknp(jumanpp) to dependencies
* remove sudachi_dict_small and sudachi_dict_full from dependencies
* empty commit for ci
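A minimal usage sketch of the new word segmenters (the checkpoint name is illustrative; `sudachi` needs `sudachipy` plus a dictionary package, and `jumanpp` needs `pyknp` with a local Juman++ install):

```python
from transformers import BertJapaneseTokenizer

tokenizer = BertJapaneseTokenizer.from_pretrained(
    "cl-tohoku/bert-base-japanese",
    word_tokenizer_type="sudachi",  # or "jumanpp"
)
print(tokenizer.tokenize("私は機械学習が好きです。"))
```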
- Improves MaskFormer docs, corrects minor typos
- Restructures MaskFormerFeatureExtractor.post_process_panoptic_segmentation for better readability, adds target_sizes argument for optional resizing
- Adds post_process_semantic_segmentation and post_process_instance_segmentation methods.
- Adds a deprecation warning to post_process_segmentation method in favour of post_process_instance_segmentation
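A hedged sketch of the reworked entry points (checkpoint, file name and output handling are illustrative; per the description, `target_sizes` optionally resizes the panoptic maps):

```python
import torch
from PIL import Image
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation

feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-ade")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")

image = Image.open("scene.jpg")
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# New methods; the deprecated post_process_segmentation maps to the instance variant.
semantic = feature_extractor.post_process_semantic_segmentation(outputs)
panoptic = feature_extractor.post_process_panoptic_segmentation(
    outputs, target_sizes=[image.size[::-1]]  # (height, width) of the original image
)
```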
* add bloom for question answering
- Attempt to add Bloom for question answering
- Adapted from `GPTJForQuestionAnswering`
- Set `num_labels` to `2` for common tests
- Added a bit of docstring
- All common tests pass
* Update src/transformers/models/bloom/modeling_bloom.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* revert changes related to `num_labels`
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
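A schematic of the extractive-QA head pattern borrowed from `GPTJForQuestionAnswering` (simplified; not the actual `BloomForQuestionAnswering` code):

```python
import torch
import torch.nn as nn

class QuestionAnsweringHead(nn.Module):
    """Maps hidden states to start/end logits; num_labels is fixed at 2."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.qa_outputs = nn.Linear(hidden_size, 2)

    def forward(self, hidden_states: torch.Tensor):
        logits = self.qa_outputs(hidden_states)           # (batch, seq, 2)
        start_logits, end_logits = logits.split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)
```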