* Normalize image - cast input images to float32.
This is done if the input image isn't of floating type. Issues can occur when do_rescale=False is set in an image processor: in that case, the image passed to the call is of type uint8 because of the type casting that happens in resize, due to the PIL image library. As the mean and std values are cast to match the image dtype, this can cause NaNs and infs to appear in the normalized image, because the floating-point values used to divide the image are truncated to 0.
The mean and std values are cast because they previously defaulted to float32; without the cast, a float16 input image would be upcast to float32 by the normalization.
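A minimal sketch of the failure mode and the fix (illustrative only; `normalize` here is a simplified stand-in, not the exact image-processor code):

```python
import numpy as np

def normalize(image: np.ndarray, mean, std) -> np.ndarray:
    # The fix: cast non-float inputs to float32 first, so that mean/std
    # (cast below to match the image dtype) keep their fractional values
    # instead of truncating to 0 and producing NaNs/infs on division.
    if not np.issubdtype(image.dtype, np.floating):
        image = image.astype(np.float32)
    mean = np.array(mean, dtype=image.dtype)
    std = np.array(std, dtype=image.dtype)
    return (image - mean) / std

# With do_rescale=False, the image reaching normalization is uint8.
pixels = np.full((2, 2, 3), 128, dtype=np.uint8)
out = normalize(pixels, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
assert np.isfinite(out).all()  # no NaNs/infs after the float32 cast
```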
* Add tests
* Remove float32 cast
* only dir not even init
* init
* tokenizer removed and reference of codegen added
* modeling file updated a lot; app_rotary_emb remaining
* conversion script done
* conversion script fixed, a lot of refactoring done and most tests pass
* added token_clf and extractive_QA_head
* integration tests pass
* flash attn tests pass!
* config done
* more docs in modeling file
* some style fix
* style and others
* doc test error fix
* more doc fix
* some attention fixes
* most fixes
* style and other fixes
* docs fix and config
* doc fix
* some comments
* conversion script updated
* conversion script updated
* Revert "conversion script updated"
This reverts commit e92378c54084ec0747041b113083d1746ecb6c7f.
* final comments
* add Phi to language_modeling.md
* edit phi.md file
* rebase and fix
* removed phi-1.5 example
* changed model_type from 'phi'->'mixformer-sequential'
* small change
* small change
* revert small change
* changed mixformer-sequential->phi
* small change
* added phi-1.5 example instead of phi-1
* doc test might pass now
* rebase and small change
* added the dropout layer
* more fixes
* modified .md file
* very very small doc change
* fix?
* actual fix
* fixups
* add dataclass to the attention mask converter
* refine testing suite
* make sure there are no overflows
* update the test
* init commit
* attention arch done except rotary emb
* rotary emb done
* text encoder working
* outputs matching
* arch first pass done
* make commands done, tests and docs remaining
* all tests passed, only docs remaining
* docs done
* doc-builder fix
* conversion script removed (not relevant)
* minor comments done
* added ckpt conversion script
* tokenizer done
* very minor fix of index.md 2
* mostly make fixup related
* all done except fe and rotary emb
* very small change
* removed unidecode dependency
* style changes
* tokenizer removed require_backends
* added require_inflect to tokenizer tests
* removed VOCAB_FILES in tokenizer test
* inflect dependency removed
* added rotary pos emb cache and simplified the apply method
* style
* little doc change
* more comments
* feature extractor added
* added processor
* auto-regressive config added
* added CLVPConditioningEncoder
* comments done except the test one
* weights added successfully (NOT tested)
* tokenizer fix with numbers
* generate outputs matching
* almost all tests passing, integration tests not written yet
* Integ tests added
* major CUDA error fixed
* docs done
* rebase and multiple fixes
* fixed rebase overwrites
* generate code simplified and tests for AutoRegressive model added
* minor changes
* refactored gpt2 code in clvp file
* weights done and all code refactored
* mostly done except the fast_tokenizer
* doc test fix
* config file's doc fixes
* more config fix
* more comments
* tokenizer comments mostly done
* modeling file mostly refactored and can load modules
* ClvpEncoder tested
* ClvpDecoder, ClvpModel and ClvpForCausalLM tested
* integration and all tests passed
* more fixes
* docs almost done
* ckpt conversion refactored
* style and some failing tests fix
* comments
* temporary output fix but test_assisted_decoding_matches_greedy_search test fails
* majority changes done
* use_cache outputs match now! Along with the assisted_greedy_decoding test fix
* more comments
* more comments
* prepare_inputs_for_generation fixed and _prepare_model_inputs added
* style fix
* clvp.md change
* moved clvpconditionalencoder norms
* add model to new index
* added tokenizer input_ids_with_special_tokens
* small fix
* config mostly done
* added config-tester and changed conversion script
* more comments
* comments
* style fix
* some comments
* tokenizer changed back to prev state
* small comments
* added output hidden states for the main model
* style fix
* comments
* small change
* revert small change
* .
* Update clvp.md
* Update test_modeling_clvp.py
* :)
* some minor change
* new fixes
* remove to_dict from FE
* Add index.md for Turkish language
* Fix index.md (huggingface/transformers#27088)
* Add 'tr' to additional files
* Update docs/source/tr/_toctree.yml
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update index.md
---------
Co-authored-by: Mert Yanık <mert.yanik@lcwaikiki.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* add audio_utils usage in the FE of SpeechToText
* clean unnecessary parameters of AudioSpectrogramTransformer FE
* add audio_utils usage in AST
* add serialization tests and function to FEs
* make style
* remove use_torchaudio and move to_dict to FE
* test audio_utils usage
* make style and fix import (remove torchaudio dependency import)
* fix torch dependency for jax and tensor tests
* fix typo
* clean tests with suggestions
* add lines to test if is_speech_available is False
This commit fixes the 'NoneType' object AttributeError in the IdeficsModel forward function. Previously, the 'device' attribute was read directly from input_ids, which raises that error when input_ids is None. Now the device is determined once at the beginning of the forward function and used consistently throughout, ensuring that 'image_hidden_states' are derived on the correct device and that 'image_encoder_embeddings' receive the correct device attribution in the IdeficsModel forward pass.
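A hedged sketch of the pattern behind this fix (names follow the description above; this is a simplified stand-in, not the verbatim IdeficsModel code):

```python
import torch

def resolve_device(input_ids=None, inputs_embeds=None) -> torch.device:
    """Derive the device once, up front, from whichever input is present,
    instead of unconditionally reading input_ids.device (which raises an
    AttributeError when input_ids is None)."""
    if input_ids is not None:
        return input_ids.device
    if inputs_embeds is not None:
        return inputs_embeds.device
    raise ValueError("Either input_ids or inputs_embeds must be provided")

# The resolved device is then used consistently, e.g. for the image path:
device = resolve_device(inputs_embeds=torch.zeros(1, 4, 8))
image_encoder_embeddings = torch.zeros(1, 4, 8).to(device)
```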
* Removed the redundant SiLUActivation class and now use nn.functional.silu directly.
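A trivial illustration of the replacement (assumes nothing beyond PyTorch; the removed wrapper class computed the same `x * sigmoid(x)`):

```python
import torch
import torch.nn as nn

x = torch.randn(3)
out = nn.functional.silu(x)  # built-in, replaces the custom SiLUActivation class
assert torch.allclose(out, x * torch.sigmoid(x))  # SiLU definition
```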
* I apologize for adding torch.functional.silu. I have replaced it with nn.SiLU.
* Remove redundant variable in feature_extraction file
* Fix error in convert_openai_to_hf.py: "_download() missing 1 required positional argument: root"
* Fix error in convert_openai_to_hf.py: "TypeError: byte indices must be integers or slices, not str"
* Fix decoder_attention_heads value in convert_openai_to_hf.py.
Correct the assignment for `decoder_attention_heads` in the conversion script for the Whisper model.
* Black reformat convert_openai_to_hf.py file.
* Fix Whisper model configuration defaults (for Tiny).
- Correct encoder/decoder layers and attention heads count.
- Update model width (`d_model`) to 384 (see the config sketch below).
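A hedged sketch of the resulting Tiny-sized defaults, spelled out explicitly. The commit only states d_model=384; the layer and head counts below are assumptions based on the published Whisper tiny architecture (4 layers, 6 attention heads), not values taken from this diff:

```python
from transformers import WhisperConfig

# Tiny-sized configuration; only d_model=384 comes from this fix,
# the remaining counts are assumed from the Whisper tiny architecture.
config = WhisperConfig(
    d_model=384,                # model width, per this fix
    encoder_layers=4,           # assumed tiny layer count
    decoder_layers=4,           # assumed tiny layer count
    encoder_attention_heads=6,  # assumed tiny head count
    decoder_attention_heads=6,  # assumed tiny head count
)
```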
* Add docstring to the convert_openai_to_hf.py script with a doctest
* Add shebang and +x permission to the convert_openai_to_hf.py
* convert_openai_to_hf.py: reuse the read model_bytes in the _download() function
* Move convert_openai_to_hf.py doctest example to whisper.md
* whisper.md: Add an inference example to the Conversion section.
* whisper.md: remove `model.config.forced_decoder_ids` from examples (deprecated)
* whisper.md: Remove "## Format Conversion" section; not used by users
* whisper.md: Use librispeech_asr_dummy dataset and load_dataset()