* First draft
* Add VideoMAEForVideoClassification
* Improve conversion script
* Add VideoMAEForPreTraining
* Add VideoMAEFeatureExtractor
* Improve VideoMAEFeatureExtractor
* Improve docs
* Add first draft of model tests
* Improve VideoMAEForPreTraining
* Fix base_model_prefix
* Make model take pixel_values of shape (B, T, C, H, W)
* Add loss computation of VideoMAEForPreTraining
* Improve tests
* Improve model tests
* Make all tests pass
* Add VideoMAE to main README
* Add tests for VideoMAEFeatureExtractor
* Add integration test
* Improve conversion script
* Rename patch embedding class
* Remove VideoMAELayer from init
* Update design of patch embeddings
* Improve comments
* Improve conversion script
* Improve conversion script
* Add conversion of pretrained model
* Add loss verification of pretrained model
* Add loss verification of unnormalized targets
* Add integration test for pretraining model
* Apply suggestions from code review
* Fix bug to make feature extractor resize only shorter edge
* Address more comments
* Improve normalization of videos
* Add doc examples
* Move constants to dedicated script
* Remove scripts
* Transfer checkpoints, fix docs
* Update script
* Update image mean and std
* Fix doc tests
* Set return_tensors to NumPy by default
* Revert the previous change
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
* Enable HFTracer to trace with custom dummy inputs instead of pre-computed ones
* Add HFTracer.trace docstring, and make it possible to handle callable and torch.nn.Module in general
* Remove pdb comment
* Apply suggestions
* Cleanup some code
* Improve signatures
* Try to reduce the number of reshape/copies
* I don't think we actually need the layer_num scaling trick
* No need for duplication
* Try to fix beam_search
* Fix beam search
* Removing layer num normalization seems to be breaking
* Not sure self.layer_number normalization actually matters
* Try and be backward compatible
* Try to fix beam_search
* Revert attempt to be backward compatible
* Improve documentation on past_key_values format
* Optimize the device allocation in case of hidden_states in multiple devices
* No need to manually cast the values to a specific device
* Rename with long version of variables
* Improve type hinting
* Add comment that explains that some methods return views
* Actually, I think the attention casting only makes sense when we use torch.float16
* We don't actually need layer_number to be passed anymore
* Fix FX test
* Bypass torch.baddbmm
* Apply suggestions from code review
* Add comment about support for torchScript v1.11
* fix ONNX support for bloom (#18456)
Co-authored-by: Niklas Muennighoff <n.muennighoff@gmail.com>
Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
Comparisons like
`version.parse(torch.__version__) > version.parse("1.6")`
are True for torch==1.6.0+cu101 or torch==1.6.0+cpu.
Comparisons against `version.parse(version.parse(torch.__version__).base_version)` are preferred instead (and available in `pytorch_utils.py`).
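A minimal sketch of the difference, using the `packaging` library that these checks rely on:

```python
from packaging import version

# Local version identifiers ("+cu101", "+cpu") compare greater than the
# bare release, so the naive check is True even on exactly 1.6.0:
assert version.parse("1.6.0+cu101") > version.parse("1.6")

# Comparing the base release strips the local segment, as recommended:
base = version.parse(version.parse("1.6.0+cu101").base_version)
assert base == version.parse("1.6")
assert not base > version.parse("1.6")
```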
* fix: keras fit tests for segformer tf and minor refactors.
* refactor: test_keras_fit to make it simpler using the existing one.
* fix: styling issues.
* Add file in spanish docs to be translated
* Translate first two sections to Spanish
* Translate four additional sections to Spanish
* Finish translation to Spanish
* Improve writing style in Spanish
* Add suggested changes from reviewer
This PR moves GroupViT and LXMert to their correct sections. As pointed out by @NielsRogge and @LysandreJik, GroupViT and LXMert are both multimodal models.
* Update pipeline word heuristic to work with whitespace in token offsets
This change checks for whitespace in the input string at either the
character preceding the token or the first character of the token itself.
This works both with tokenizers whose offsets exclude whitespace
between words and with tokenizers whose offsets include it.
fixes #18111
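The heuristic described above can be sketched as follows (the function and variable names here are illustrative, not the pipeline's actual internals):

```python
# A token is treated as starting a new word if there is whitespace either
# just before its character offset or at the first character of the token
# itself, covering both offset conventions mentioned above.
def starts_new_word(text: str, start: int) -> bool:
    if start == 0:
        return True
    return text[start - 1].isspace() or text[start].isspace()

text = "hello world"
assert starts_new_word(text, 0)      # "hello" begins the string
assert starts_new_word(text, 6)      # "world" is preceded by a space
assert not starts_new_word(text, 3)  # "lo" is a mid-word subtoken
```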
starting
* Use smaller model, ensure expected tokenization
* Re-run CI (please squash)
`torch.Tensor` creates an uninitialized tensor (as via `torch.empty`); this leads to nondeterministic behavior, poor initialization, and NaNs if you get an unlucky init. The paper does not specify the initialization for bias terms, so zero seems like a good choice: no bias initially. The memory backing `torch.Tensor` is usually populated with zeros anyway, so this fix will be close to the intended behavior:
```
>>> torch.Tensor(100, 100).sum()
tensor(0.)
>>> torch.Tensor(100, 100).sum()
tensor(nan)
>>> torch.Tensor(100, 100).sum()
tensor(0.)
```
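A deterministic alternative in the spirit of the fix described above (a sketch; the actual attribute names in the model may differ):

```python
import torch

# torch.Tensor(100, 100) returns uninitialized memory; torch.zeros gives
# the intended "no bias initially" behavior deterministically.
bias = torch.nn.Parameter(torch.zeros(100))
assert bias.sum().item() == 0.0
assert not torch.isnan(bias).any()
```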
* Added an option for users to modify the config parameter used by pytesseract during feature extraction
- Added an optional 'tess_config' kwarg when setting up the LayoutLMv2 processor, which is passed to pytesseract during feature extraction
- E.g. it can be used to modify PSM values by setting tess_config to '--psm 7'
- Different PSM values significantly influence the output of LayoutLMv2
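A rough sketch of the plumbing this adds (the class and method names here are hypothetical, not the actual transformers API):

```python
# The processor stores a Tesseract config string at init time and forwards
# it on every OCR call, e.g. via pytesseract.image_to_data(image, config=...).
class OcrFeatureExtractor:
    def __init__(self, tess_config: str = ""):
        self.tess_config = tess_config

    def ocr_kwargs(self) -> dict:
        # These kwargs would be passed to pytesseract during feature extraction.
        return {"config": self.tess_config}

extractor = OcrFeatureExtractor(tess_config="--psm 7")
assert extractor.ocr_kwargs() == {"config": "--psm 7"}
```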
* Update src/transformers/models/layoutlmv2/feature_extraction_layoutlmv2.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/layoutlmv2/feature_extraction_layoutlmv2.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Updated variable names to be more explicit
* Fixed styles
* Added an option for users to modify the config parameter passed to pytesseract during feature extraction
- Added an option to set the "tesseract_config" parameter during LayoutLMv3 processor initialization
- Can be used to modify PSM values, e.g. by setting tesseract_config="--psm 6"
* Removed from function signature
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* add LUKE models for downstream tasks
* add new LUKE models to docs
* fix typos
* remove commented lines
* exclude None items from tuple return values
Fix #18385
I don't know whether `use_auth_token`, `cache_dir` and `local_files_only` should be passed to `(cls.slow_tokenizer_class)._from_pretrained`, but I guess they should.
* Bloom model can now be traced
* Bloom traced model can be torch scripted and serialized
* Bloom can be traced with variable keyword arguments
* Enable XLNet support
* Disable XLNet for now