* Fixed base model class name extraction from PeftModels
* Change to first unwrap the model, then extract the base model name
* Changed base_model to base_model.model to stay consistent with peft model abstractions
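A minimal sketch of the behaviour these commits describe, assuming a PEFT-wrapped transformers model; the helper name is illustrative and not the library's actual code:

```python
from peft import PeftModel

def get_base_model_class_name(model) -> str:
    # Unwrap the PeftModel first, then read the base model's class name.
    if isinstance(model, PeftModel):
        # `base_model` is the PEFT wrapper (e.g. a LoraModel); `.model` is the
        # underlying transformers model, consistent with PEFT's abstractions.
        return model.base_model.model.__class__.__name__
    return model.__class__.__name__
```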
* Removed the redundant SiLUActivation class and now use nn.functional.silu directly.
* Replace the mistakenly added torch.functional.silu with nn.SiLU.
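For reference, a tiny check in plain PyTorch (not repository code) that the built-in module and functional forms mentioned above are interchangeable, which is why the custom SiLUActivation class was redundant:

```python
import torch
from torch import nn

x = torch.randn(2, 4)

# nn.SiLU() and nn.functional.silu compute the same activation, so either can
# replace a hand-rolled x * sigmoid(x) wrapper class.
assert torch.allclose(nn.SiLU()(x), nn.functional.silu(x))
```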
* Fixing m4t.
* Try removing the comparison to investigate an odd test failure.
* Add `shared`, though it is unclear why the test hangs.
* Put back the model weight checks; the test was silently failing on CUDA.
* Fix style and remove a leftover comment.
* Fix Fuyu image scaling bug
It could produce negative padding and hence inference errors for certain
image sizes.
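A hedged illustration of the failure mode (the helper below is hypothetical, not the Fuyu image processor's actual code): rounding the scaled dimensions up can overshoot the target size, so the padding has to be clamped to zero.

```python
import math

def scale_and_pad_dims(height, width, target_height, target_width):
    scale = min(target_height / height, target_width / width)
    new_h, new_w = math.ceil(height * scale), math.ceil(width * scale)
    # Without the max(0, ...) guard, floating-point rounding plus ceil() can
    # push new_h/new_w past the target and yield negative padding for certain
    # image sizes, which then breaks inference.
    pad_h = max(0, target_height - new_h)
    pad_w = max(0, target_width - new_w)
    return new_h, new_w, pad_h, pad_w
```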
* initial rework commit
* add batching capabilities, refactor image processing
* add functional batching for a list of images and texts
* make args explicit
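A sketch of the batched interface these commits describe, using the Fuyu processor; the image file names are placeholders and the exact call signature may differ between versions:

```python
from PIL import Image
from transformers import FuyuProcessor

processor = FuyuProcessor.from_pretrained("adept/fuyu-8b")

# A list of texts paired with a list of images is handled in a single call.
texts = ["Describe the image.\n", "What colour is the bus?\n"]
images = [Image.open("first.png"), Image.open("second.png")]
inputs = processor(text=texts, images=images, return_tensors="pt")
```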
* Fuyu processing update (#27133)
* Add file headers
* Add file headers
* First pass - preprocess method with standard args
* First pass image processor rework
* Small tweaks
* More args and docstrings
* Tidying iterating over batch
* Tidying up
* Modify to have quick tests (for now)
* Fix up
* BatchFeature
* Passing tests
* Add tests for processor
* Sense check when patchifying
* Add some tests
* FuyuBatchFeature
* Post-process box coordinates
* Update to `size` in processor
* Remove unused and duplicate constants
* Store unpadded dims after resize
* Fix up
* Return FuyuBatchFeature
* Get unpadded sizes after resize
* Update exception
* Fix return
* Convert input `<box>` coordinates to model format (see the coordinate-scaling sketch after this update).
* Post-process point coords, support multiple boxes/points in a single
sequence
* Replace constants
* Update src/transformers/models/fuyu/image_processing_fuyu.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Preprocess List[List[image]]
* Update src/transformers/models/fuyu/image_processing_fuyu.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update to Amy's latest state.
* post-processing returns a list of tensors
* Fix error when target_sizes is None
Co-authored-by: Pablo Montalvo <pablo.montalvo.leroux@gmail.com>
* Update src/transformers/models/fuyu/image_processing_fuyu.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/transformers/models/fuyu/image_processing_fuyu.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/transformers/models/fuyu/image_processing_fuyu.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/transformers/models/fuyu/image_processing_fuyu.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Review comments
* Update src/transformers/models/fuyu/image_processing_fuyu.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix up
* Fix up
---------
Co-authored-by: Ubuntu <ubuntu@ip-172-31-72-126.ec2.internal>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Pablo Montalvo <pablo.montalvo.leroux@gmail.com>
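The `<box>` and point-coordinate entries in the update above describe rescaling prompt coordinates between the original image and the resized image the model actually sees. A hypothetical sketch of that idea (names are illustrative, not the Fuyu processor's real API):

```python
def scale_box_to_model_space(box, original_size, resized_size):
    """Map (top, left, bottom, right) from original-image space to resized space."""
    top, left, bottom, right = box
    orig_h, orig_w = original_size
    new_h, new_w = resized_size
    scale_h, scale_w = new_h / orig_h, new_w / orig_w
    return (round(top * scale_h), round(left * scale_w),
            round(bottom * scale_h), round(right * scale_w))

# Post-processing goes the other way: divide model-space box/point coordinates
# by the same scale factors to recover them in the original image space.
```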
* Fix conflicts in fuyu_follow_up_image_processing (#27228)
fixing conflicts and updating on main
* Revert "Fix conflicts in fuyu_follow_up_image_processing" (#27232)
Revert "Fix conflicts in fuyu_follow_up_image_processing (#27228)"
This reverts commit acce10b6c6.
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-72-126.ec2.internal>
* Add FlashAttention-2 (FA2) support for Whisper (see the sketch after this block)
* correct
* change all
* correct
* correct
* fix more
* fix more
* fix more
* fix more
* fix more
* fix more
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fix more
* fix more
* fix more
* fix more
* fix more
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
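A hedged sketch of what the block above enables, loading Whisper with FlashAttention-2; the flag spelling has changed across transformers versions, so treat this as illustrative:

```python
import torch
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",  # older versions: use_flash_attention_2=True
).to("cuda")
```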
* Enable split_batches through TrainingArguments
* Extra dispatch_batches
* Keep as default false
* Add to docstring
* Add to docstring
* Remove the capturewarnings change
* Comma
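A sketch assuming `split_batches` (and `dispatch_batches`) are exposed as plain TrainingArguments fields, as these commits describe; later transformers versions may move these options elsewhere:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    split_batches=True,       # opt in; the commits keep False as the default
    dispatch_batches=False,   # each process fetches its own batches
)
```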
* Add type annotations to TFConvNextDropPath
* Use tf.debugging.assert_equal for TFConvNextEmbeddings shape check
* Add TensorFlow implementation of ConvNeXTV2
* check_docstrings: add TFConvNextV2Model to exclusions
TFConvNextV2Model and TFConvNextV2ForImageClassification have docstrings
which are equivalent to their PyTorch cousins, but a parsing issue prevents them
from passing the test.
Adding exclusions for these two classes as discussed in #25558.
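A sketch of the shape-check style adopted above: `tf.debugging.assert_equal` works inside graph mode, unlike a plain Python comparison. The function below is illustrative, not the model's actual code:

```python
import tensorflow as tf

def check_num_channels(pixel_values: tf.Tensor, expected_channels: int) -> tf.Tensor:
    # Raises (eagerly or as a graph op) if the channel dimension does not match.
    tf.debugging.assert_equal(
        tf.shape(pixel_values)[1],
        expected_channels,
        message="Pixel value channel dimension does not match the config.",
    )
    return pixel_values
```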
* docs(zh): translate tflite.md
* docs(zh): add space around links
* Update docs/source/zh/tflite.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Safetensors serialization by default (see the usage sketch after this block)
* First pass on the tests
* Second pass on the tests
* Third pass on the tests
* Fix TF weight loading from TF-format safetensors
* Specific encoder-decoder fixes for weight crossloading
* Add VisionEncoderDecoder fixes for TF too
* Change filename test for pt-to-tf
* One missing fix for TFVisionEncoderDecoder
* Fix the other crossload test
* Support for flax + updated tests
* Apply suggestions from code review
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Sanchit's comments
* Sanchit's comments 2
* Nico's comments
* Fix tests
* cleanup
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
---------
Co-authored-by: Matt <rocketknight1@gmail.com>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
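A minimal usage sketch of the default described in the block above; passing `safe_serialization=False` keeps the old pickle-based `pytorch_model.bin` format:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

model.save_pretrained("saved-model")                                 # writes model.safetensors
model.save_pretrained("saved-model-bin", safe_serialization=False)   # writes pytorch_model.bin
```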
* Add support for loading GPTQ models on CPU
Right now, GPTQ-quantized models can only be loaded on a CUDA device.
The attribute `gptq_supports_cpu` checks whether the installed auto_gptq
version supports CPU loading for the model.
The larger model variants are hard to load/run/trace on a GPU, which is
the rationale for adding this attribute.
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
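A sketch of what this change enables, assuming a recent enough auto-gptq; the checkpoint name is illustrative, not from the commit:

```python
from transformers import AutoModelForCausalLM

# With CPU support in auto-gptq, a GPTQ-quantized checkpoint no longer has to
# be placed on a CUDA device just to load or trace it.
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-GPTQ",  # example GPTQ checkpoint
    device_map="cpu",
)
```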
* Update quantization.md
* Update quantization.md
* Update quantization.md
A recent PR https://github.com/huggingface/transformers/pull/26579 fixed an edge-case out-of-bounds tensor indexing error in TypicalLogitsWarper, and a related behaviour change was made that we thought fixed a long-standing bug w.r.t. the token inclusion cutoff.
However, after looking more closely, I am pretty certain that the original logic was correct and that the OOB fix should have been made differently.
Specifically, the docs state that it should include the "smallest set of tokens that add up to P or higher", so `last_ind` should actually be one more than the index of the last token satisfying (cumulative_probs < self.mass).
We still need a max clamp in case that last token is the very last one in the tensor.
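An illustrative sketch of the cutoff just described (not the exact TypicalLogitsWarper code): `last_ind` is the count of tokens whose cumulative probability is still below the mass, clamped so it never indexes past the end of the tensor.

```python
import torch

def typical_cutoff_index(sorted_scores: torch.Tensor, mass: float = 0.9) -> torch.Tensor:
    # `sorted_scores` is assumed already sorted in the order the warper ranks candidates.
    probs = sorted_scores.softmax(dim=-1)
    cumulative_probs = probs.cumsum(dim=-1)
    # One more than the index of the last token with cumulative_probs < mass,
    # i.e. the first index whose cumulative probability reaches the mass.
    last_ind = (cumulative_probs < mass).sum(dim=-1)
    # Clamp in case that token is the very last one in the tensor.
    return last_ind.clamp(max=sorted_scores.shape[-1] - 1)
```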