Sanchit Gandhi
3263b34354
Revert "Incorrect Whisper long-form decoding timestamps " ( #32148 )
...
Revert "Incorrect Whisper long-form decoding timestamps (#32003 )"
This reverts commit cd48553fc8
.
2024-07-23 18:34:30 +08:00
Amit Garg
034b477847
Rename Phi-3 rope scaling type (#31436)
...
* renamed phi3 rope_scaling type
* fixed trailing whitespaces
* fixed test
* added warning
* fixed format
2024-07-23 12:33:22 +02:00
Alexandre TL
bab32d6fe9
Added mamba.py backend (#30139)
...
* Update README.md
* tests: forward ok
* backward test done
* done testing
* removed check. scripts
* Update README.md
* added use_mambapy arg
* fixed typo in warning
* protected imports w/ mambapy package
* delete pscan.py + raise rather than assert
* Update import_utils.py
* fix whitespaces and unused import
* trailing whitespace + import block unformatted
* Update modeling_mamba.py
* transpose before pscan
* shape comment
* ran make style
* use_mambapy=False by default
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* ran make fix-copies
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2024-07-23 12:32:19 +02:00
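The `use_mambapy` entry above guards an optional backend behind protected imports and raises rather than asserts when it is missing. A minimal sketch of that pattern, with hypothetical names (`is_mambapy_available`, `slow_scan`) rather than the actual transformers API:

```python
# Sketch of guarding an optional accelerator backend behind a flag,
# in the spirit of `use_mambapy` above. Names are illustrative.
import importlib.util


def is_mambapy_available() -> bool:
    """Return True if the optional `mambapy` package can be imported."""
    return importlib.util.find_spec("mambapy") is not None


def slow_scan(xs):
    """Pure-Python fallback path: a cumulative sum standing in for the scan."""
    out, acc = [], 0
    for x in xs:
        acc += x
        out.append(acc)
    return out


def scan(xs, use_mambapy: bool = False):
    if use_mambapy:
        if not is_mambapy_available():
            # Raise rather than assert, so the check survives `python -O`.
            raise ImportError(
                "use_mambapy=True requires the `mambapy` package: pip install mambapy"
            )
        from mambapy.pscan import pscan  # illustrative import

        return pscan(xs)
    return slow_scan(xs)
```

Defaulting the flag to `False` (as the last bullet notes) keeps the dependency strictly opt-in.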
Merve Noyan
9ced33ca7f
Fix video batching to videollava (#32139)
...
---------
Co-authored-by: Merve Noyan <mervenoyan@Merve-MacBook-Pro.local>
2024-07-23 13:23:23 +03:00
Cyril Vallez
a5b226ce98
Fix flash attention speed issue (#32028)
...
Add the lru_cache for speed
2024-07-23 12:21:23 +02:00
Ita Zaporozhets
a1844a3209
gguf conversion add_prefix_space=None for llama3 (#31937)
...
* gguf conversion forces add_prefix_space=False for llama3; this is not required and forces from_slow, which fails. Changed to None + added test
* typo
* clean test
2024-07-23 11:45:54 +02:00
Joao Gante
2e113422b3
Llama: RoPE refactor (#32135)
...
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2024-07-23 10:42:55 +01:00
bayllama
5a4a76edb7
Modify resize_token_embeddings to ensure output type is same as input (#31979)
...
* Change resize_token_embeddings to make it return same Class that is passed to it
* Add explanatory comment as requested in review
* Add explanatory comments for add resizing function in lxmert
* Add comment for padding_idx and moving _resize_bias in lxmert to LxmertForPreTraining
---------
Co-authored-by: Prashanth Sateesh <prasatee@Prashanths-MBP.attlocal.net>
Co-authored-by: Prashanth Sateesh <prasatee@Prashanths-MacBook-Pro.local>
2024-07-23 10:28:44 +01:00
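The change above makes resizing return the same class it was given, rather than downgrading subclasses to the base type. A toy sketch of the idea (the classes below are stand-ins, not the transformers embedding types):

```python
# Construct the resized object with `type(old)` so subclasses round-trip.
class Embedding:
    def __init__(self, num_embeddings: int, dim: int):
        self.num_embeddings = num_embeddings
        self.dim = dim


class ScaledEmbedding(Embedding):
    """A subclass that must survive resizing unchanged in type."""


def resize_embeddings(old: Embedding, new_num: int) -> Embedding:
    # `type(old)` preserves the caller's subclass, mirroring the PR's intent;
    # hardcoding `Embedding(...)` here would silently drop the subclass.
    return type(old)(new_num, old.dim)


resized = resize_embeddings(ScaledEmbedding(100, 16), 128)
```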
Daniel Lok
1535a2c93d
Disable quick init for TapasPreTrainedModel (#32149)
...
add attribute to model
Signed-off-by: Daniel Lok <daniel.lok@databricks.com>
2024-07-23 10:26:00 +01:00
mig-mfreitas
34b43211d7
Add YaRN and Dynamic-YaRN RoPE Scaling Methods (#30910)
...
* Add YaRN and Dynamic-YaRN RoPE Scaling Methods
YaRN (Yet another RoPE extension method) combines the NTK-By-Parts
Interpolation and Attention Scaling methods, improving upon existing
RoPE interpolation methods for longer context window sizes.
Fine-tuned models maintain their original performance across benchmarks
while enabling efficient extrapolation and transfer learning for
quicker convergence, especially in compute-limited environments.
We implement YaRN and Dynamic-YaRN for the following list of models:
- LLaMA
- Falcon
- GPT-NeoX
- Olmo
- Persimmon
- Phi
- StableLM
- OpenLLaMA
New unit tests are added to assert YaRN's correct behavior on both
short and long sequence inputs.
For more details, please refer to https://arxiv.org/abs/2309.00071.
Co-authored-by: Miguel Almeida <miguel.pessanha.almeida@tecnico.ulisboa.pt>
* Refactor YaRN implementation for LLaMA
Iterate on YaRN implementation for LLaMA and remove diff from remaining
models for increased PR modularity.
This commit includes the following changes:
- Merge 'yarn_rope_scaling' and 'rope_scaling' dictionaries
- Remove unnecessary attributes ('extrapolation_factor' and 'finetuned')
from YaRN classes
- Inherit 'forward' method in YaRN classes from superclass
- Rename 'yarn' method to 'compute_yarn_scaling'
- Extend YaRN tests with further assertions
- Fix style inconsistencies
Co-authored-by: Miguel Monte e Freitas <miguelmontefreitas@tecnico.ulisboa.pt>
* Refactor Tensor Building Logic for YaRN
- Comply with the tensor building logic introduced in #30743
- Add referencing to the optimized Attention Factor equation
- Remove Dynamic YaRN for a more agile deployment
Co-authored-by: mig-mfreitas <mig-mfreitas@users.noreply.github.com>
* remove unwanted file
---------
Co-authored-by: Miguel Almeida <miguel.pessanha.almeida@tecnico.ulisboa.pt>
Co-authored-by: mig-mfreitas <mig-mfreitas@users.noreply.github.com>
Co-authored-by: Joao Gante <joao@huggingface.co>
2024-07-23 10:07:58 +01:00
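The NTK-by-parts idea described above can be sketched numerically: low-frequency RoPE dimensions are interpolated (inverse frequency divided by the scale factor), high-frequency ones are left extrapolated, with a linear ramp between the two regimes. This is a hedged illustration of the method in arXiv:2309.00071; the constants and function name are illustrative, not the transformers implementation.

```python
import math


def yarn_inv_freq(dim=64, base=10000.0, scale=4.0, orig_ctx=4096,
                  beta_fast=32, beta_slow=1):
    """Blend interpolated and extrapolated RoPE inverse frequencies (sketch)."""
    inv_freq = [base ** (-2 * i / dim) for i in range(dim // 2)]

    def ramp(i):
        # Full rotations this dimension completes over the original context.
        rotations = orig_ctx * inv_freq[i] / (2 * math.pi)
        # 0 -> pure interpolation (slow dims), 1 -> pure extrapolation (fast dims).
        t = (rotations - beta_slow) / (beta_fast - beta_slow)
        return min(1.0, max(0.0, t))

    # Per-dimension blend: r * extrapolated + (1 - r) * interpolated.
    return [f * (ramp(i) + (1 - ramp(i)) / scale) for i, f in enumerate(inv_freq)]
```

The fastest dimension keeps its original frequency (pure extrapolation) while the slowest is divided by the full scale factor (pure interpolation), which is what lets fine-tuned models keep short-context behavior while extending the window.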
KonradSzafer
7405c1c77e
Add method to retrieve used chat template (#32032)
...
encapsulate chat template logic
2024-07-23 10:56:21 +02:00
Anton Vlasjuk
605f3245dc
Fix mask creations of GPTNeoX and GPT2 (#31944)
...
* fix mask creation of gpt2 and gpt_neox caused by me
* forgot the reshape of masks when shape > 2
* add tests for gpt neox and gpt2
* nit on a comment
2024-07-23 10:11:12 +02:00
Sanchit Gandhi
2782aadae2
[modelling] remove un-necessary transpose for fa2 attention (#31749)
...
* [whisper] remove un-necessary transpose for fa2 attention
* propagate
2024-07-23 14:55:16 +08:00
Sanchit Gandhi
f83c6f1d02
Remove trust_remote_code when loading Libri Dummy (#31748)
...
* [whisper integration] use parquet dataset for testing
* propagate to others
* more propagation
* last one
2024-07-23 14:54:38 +08:00
Raushan Turganbay
3aefb4ec7f
LLaVaNeXT: pad on right if training (#32134)
...
* pad on right if training
* docs
* add tests
2024-07-23 10:23:55 +05:00
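The rule in the commit above: pad on the right while training, on the left when generating (so generation continues from real tokens). A minimal sketch under those assumptions; `pad_sequences` is a hypothetical helper, not the transformers processor API:

```python
def pad_sequences(seqs, pad_id=0, training=False):
    """Pad token-id lists to equal length; side depends on training mode."""
    width = max(len(s) for s in seqs)
    padded = []
    for s in seqs:
        pad = [pad_id] * (width - len(s))
        # Right-pad in training, left-pad for generation.
        padded.append(s + pad if training else pad + s)
    return padded
```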
James Thewlis
251a2409c6
Add llama3-llava-next-8b to llava_next conversion script (#31395)
...
* Add llama3-llava-next-8b to llava_next conversion script
Adds support for the lmms-lab/llama3-llava-next-8b model to the
convert_llava_next_weights_to_hf.py script, along with an example
prompt generated from the llava_llama_3 conv_template in the LLaVA-NeXT
repo.
* Exclude <|begin_of_text|> from prompt example
This token gets added automatically, so it should not be included in the
prompt example.
* Add llava-next-72b and llava-next-110b
Adds the Qwen-based LLaVA-Next models to the conversion script, along
with changes to load the models on multiple GPUs for inference.
* Add llama3 and qwen prompt formats to docs
* Chat prompt and padding side left for llama3 batched
* update
* Update src/transformers/models/llava_next/convert_llava_next_weights_to_hf.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/llava_next/convert_llava_next_weights_to_hf.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* remove code
* better naming
---------
Co-authored-by: raushan <raushan@huggingface.co>
Co-authored-by: Raushan Turganbay <raushan.turganbay@alumni.nu.edu.kz>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-07-23 10:12:16 +05:00
Marc Sun
96a074fa7e
Add new quant method (#32047)
...
* Add new quant method
* update
* fix multi-device
* add test
* add offload
* style
* style
* add simple example
* initial doc
* docstring
* style again
* works ?
* better docs
* switch to non-persistent
* remove print
* fix init
* code review
2024-07-22 20:21:59 +02:00
Arthur
bd9dca3b85
set warning level to info for special tokens have been added (#32138)
...
fixes #7002
2024-07-22 19:42:47 +02:00
amyeroberts
817a676bd7
Don't default to other weights file when use_safetensors=True (#31874)
...
* Don't default to other weights file when use_safetensors=True
* Add tests
* Update tests/utils/test_modeling_utils.py
* Add clarifying comments to tests
* Update tests/utils/test_modeling_utils.py
* Update tests/utils/test_modeling_utils.py
2024-07-22 18:29:50 +01:00
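The rule this fix enforces: with `use_safetensors=True`, only a `.safetensors` file is acceptable, never a silent fallback to a pickle checkpoint. A sketch of that selection logic, assuming the conventional file names; the function itself is illustrative, not the transformers internals:

```python
def pick_weights_file(available, use_safetensors=None):
    """Choose a weights file; never fall back when safetensors is required."""
    if "model.safetensors" in available and use_safetensors is not False:
        return "model.safetensors"
    if use_safetensors is True:
        # Explicit request: fail loudly instead of loading another format.
        raise FileNotFoundError("use_safetensors=True but no .safetensors file found")
    if "pytorch_model.bin" in available:
        return "pytorch_model.bin"
    raise FileNotFoundError("no known weights file found")
```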
Yoni Gottesman
74d0eb3fed
Return assistant generated tokens mask in apply_chat_template (#30650)
...
return assistant generated tokens mask in apply_chat_template
2024-07-22 18:24:43 +01:00
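The idea behind returning an assistant-tokens mask from `apply_chat_template`: mark which positions of the rendered conversation were produced by the assistant, so a trainer can compute loss only on those tokens. A simplified sketch (whitespace tokenization and the message format are assumptions for illustration):

```python
def render_with_mask(messages):
    """Flatten chat messages to tokens plus a 0/1 assistant mask."""
    tokens, mask = [], []
    for msg in messages:
        words = msg["content"].split()
        tokens.extend(words)
        # 1 where the assistant spoke, 0 elsewhere (user/system turns).
        mask.extend([1 if msg["role"] == "assistant" else 0] * len(words))
    return tokens, mask
```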
Bertrand Thia
7987710696
[RoBERTa] Minor clarifications to model doc (#31949)
...
* minor edits and clarifications
* address comment
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-07-22 10:08:27 -07:00
Sai-Suraj-27
12b6880c81
fix: Fixed raising TypeError instead of ValueError for invalid type (#32111)
...
* Raised TypeError instead of ValueError for invalid types.
* Updated formatting using ruff.
* Retrieved few changes.
* Retrieved few changes.
* Updated tests accordingly.
2024-07-22 17:46:17 +01:00
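The distinction the fix above applies is the standard Python convention: wrong *type* raises `TypeError`, right type but unacceptable *value* raises `ValueError`. A minimal illustrative validator (the parameter is hypothetical):

```python
def set_temperature(value):
    """Validate a sampling temperature: number type, strictly positive value."""
    # Wrong type (including bool, which is an int subclass) -> TypeError.
    if not isinstance(value, (int, float)) or isinstance(value, bool):
        raise TypeError(f"temperature must be a number, got {type(value).__name__}")
    # Right type, invalid value -> ValueError.
    if value <= 0:
        raise ValueError(f"temperature must be positive, got {value}")
    return float(value)
```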
Woojun Jung
d1ec36b94f
Update ko/_toctree.yml and remove custom_tools.md to reflect latest changes (#31969)
...
update `ko/_toctree.yml` and remove `custom_tools.md`
2024-07-22 08:27:13 -07:00
Matt
7ba028fccb
Fix failing test with race condition (#32140)
...
* Fix failing test with race condition
* make fixup
* monotonic_ns instead of randint
* uuid4 instead of monotonic_ns
* Add a finally cleanup step
2024-07-22 16:07:29 +01:00
Sanchit Gandhi
5a649ff3ec
[generate] fix eos/pad id check on mps devices (#31695)
...
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
2024-07-22 15:18:48 +02:00
Lucain
f2a1e3ca68
Mention model_info.id instead of model_info.modelId (#32106)
2024-07-22 14:14:47 +01:00
Sai-Suraj-27
0fcfc5ccc9
fix: Replaced deprecated mktemp() function (#32123)
...
Replaced deprecated mktemp function.
2024-07-22 14:13:39 +01:00
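Why `tempfile.mktemp` is deprecated: it only returns a name, leaving a window in which another process can create that file first (a race and a security hazard). `tempfile.mkstemp` creates and opens the file atomically instead. A short sketch of the replacement pattern:

```python
import os
import tempfile


def write_temp(data: bytes) -> str:
    """Write bytes to a securely created temporary file and return its path."""
    fd, path = tempfile.mkstemp(suffix=".bin")  # file created atomically
    try:
        os.write(fd, data)
    finally:
        os.close(fd)  # always release the descriptor
    return path
```

The caller is responsible for removing the file when done, since `mkstemp` does not delete it automatically.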
Joao Gante
c38c55f4fb
Generate: store special token tensors under a unique variable name (#31980)
...
* rename stuff
* english; this one shouldn't be changed
* add a _ to the new var names
* musicgen
* derp
2024-07-22 14:06:49 +01:00
Brian
aa8f86a421
Fix shard order (#32023)
2024-07-22 14:06:22 +02:00
Aymeric Roucher
b381880597
Agents planning (#31702)
...
* Allow planning for agents
2024-07-22 10:49:57 +02:00
Lucain
0fdea8607d
Fix tests after huggingface_hub 0.24 (#32054)
...
* adapt tests
* style
* comment
2024-07-19 19:32:39 +01:00
Raushan Turganbay
fe008d6ebe
Chameleon: not supported with fast load (#32091)
...
fixes
2024-07-19 19:21:45 +05:00
Zach Mueller
62aa270f2a
Disable quick init for deepspeed (#32066)
...
Disable via deepspeed
2024-07-19 08:58:53 -04:00
Kamil Akesbi
89575b567e
Support generating with fallback for short form audio in Whisper (#30984)
...
* remove is_shortform
* adapt _retrieve_max_frames_and_seek for short_form
* return bos token in short and long form
* add decoder_input_ids to short form audios
* add eos token for short form
* handle short form token_timestamps
* no need to return scores
* add is_shortform conditions
* handle when max_new_tokens is None - short form
* handle assistant decoding
* fix
* handle return_dict_in_generate
* handle split_by_batch for encoder_attentions attribute
* handle num_beams>1
* handle num_return_sequences>1 in generate_with_fallback
* handle num_return_sequences>1 with return_dict_in_generate=True
* raise error if max_new_tokens + decoder_inputs_ids > max_target_pos
* fix
* apply review suggestions
* fix
* Update src/transformers/models/whisper/generation_whisper.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Update src/transformers/models/whisper/generation_whisper.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Update src/transformers/models/whisper/generation_whisper.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* fix
* logits for both short form and long form
* handle if logits_processor is None
* test
* apply review changes to num_return_sequences
* add _expand_variables_for_generation
* remove short form commented section
* update comments
* uncomment num_beams line in generate_with_fallback
* update assistant decoding
* handle return_segment with short form generation
* up
* fix output format is_shortform
* overwrite beam_sample test
* update _set_return_timestamps
* apply review suggestions
* apply review suggestions
* remove seek_outputs_short_form
* fix _stack_split_outputs
* fix stack dim in _stack_split_outputs
* update tests
* fix past_key_values + beam tests
* fix
* clean _expand_variables_for_generation
* make style
* fix slow tests
* make style
* max_length condition
* make style
* add slow tests for shortform fallback
* Update src/transformers/models/whisper/generation_whisper.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Update src/transformers/models/whisper/generation_whisper.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* apply review changes
* Update src/transformers/models/whisper/generation_whisper.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* up
* fix slow tests
* apply review suggestions
* update test
* make style
* small fix
* fix
* fix test_new_cache_format
* fix past_key_values
* fix
* make style
* fix slow tests
* fix
---------
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
2024-07-19 13:42:22 +01:00
Merve Noyan
46835ec6ae
Add image-text-to-text task guide (#31777)
...
* Add image-text-to-text task page
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Address comments
* Fix heading
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/tasks/image_text_to_text.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Address comments
* Update image_text_to_text.md
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-07-19 13:40:40 +01:00
Merve Noyan
4bd8f12972
Fixes to chameleon docs (#32078)
...
* Fixes
* Let's not use auto
2024-07-19 12:50:34 +01:00
Keith Stevens
566b0f1fbf
Fix progress callback deepcopy (#32070)
...
* Replacing ProgressCallback's deepcopy with a shallow copy
* Using items instead of entries
* code cleanup for copy in trainer callback
* Style fix for ProgressCallback
2024-07-19 11:56:45 +01:00
Raushan Turganbay
e316c5214f
VideoLLaVa: fix chat format in docs (#32083)
...
fix chat format
2024-07-19 15:38:01 +05:00
Joshua Lochner
22f888b3fa
[mistral] Fix FA2 attention reshape for Mistral Nemo (#32065)
...
* [mistral] Fix FA2 attention reshape
* [run-slow] mistral
2024-07-19 11:19:35 +02:00
Kamil Akesbi
cd48553fc8
Incorrect Whisper long-form decoding timestamps (#32003)
...
* fix long-form timestamps in decode_batch
* Update src/transformers/models/whisper/tokenization_whisper.py
Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>
* Update src/transformers/models/whisper/tokenization_whisper.py
Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>
* add test
* make style
* fix copies
* Update src/transformers/models/whisper/tokenization_whisper_fast.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/whisper/tokenization_whisper.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/whisper/processing_whisper.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/whisper/tokenization_whisper.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* apply review suggestions
* fix
* fix copies
* fix
* Update src/transformers/models/whisper/tokenization_whisper_fast.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fix-copies
---------
Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-07-19 09:26:38 +01:00
NielsRogge
56a7745704
[Chameleon, Hiera] Improve docs (#32038)
...
* Improve docs
* Fix docs
* Fix code snippet
2024-07-19 11:20:03 +03:00
Raushan Turganbay
b873234cb6
Llava: add default chat templates (#31691)
...
* add default chat templates
* Update src/transformers/models/llava/processing_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/llava_next/processing_llava_next.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* more clear docstring and docs
* Update docs/source/en/model_doc/llava.md
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/model_doc/llava_next.md
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/model_doc/vipllava.md
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* add tests
* remove default templates (see #31733)
* load chat template from another file
* Update docs/source/en/model_doc/llava_next.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* revert some changes in docs
* forgot vipllava
* chat template file is not temporary hack
* warn if loading from processor
* not that file
* similarly modify `save_pretrained`
* Update tests/models/llava_next/test_processor_llava_next.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/vipllava/test_processor_vipllava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/model_doc/vipllava.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/processing_utils.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/processing_utils.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/model_doc/vipllava.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/model_doc/llava.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/model_doc/llava.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/model_doc/llava_next.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/model_doc/llava_next.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/processing_utils.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/model_doc/llava_next.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fix
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
2024-07-19 10:08:56 +05:00
Sai-Suraj-27
271fd8e60d
docs: Fixed 2 links in the docs along with some minor fixes (#32058)
...
* Fixed 2 links in the docs along with some minor fixes.
* Updated Contributing.md
2024-07-18 21:28:36 +01:00
Sai-Suraj-27
8f0d26c55e
fix: Removed duplicate entries in a dictionary (#32041)
...
Removed duplicate key in a dictionary.
2024-07-18 17:26:08 +01:00
Longjie Zheng
c75969ee28
Add torch.compile Support For Mamba (#31247)
...
* modify mamba cache
* set up cache
* add test
* [run-slow] mamba
* [run-slow] mamba
* address comments
* [run-slow] mamba
* use_cache_position
* [run-slow] mamba
* [run-slow] mamba
* [run-slow] mamba
* [run-slow] mamba
* fix
* cache in generate
* [run-slow] mamba
* address comments
* [run-slow] mamba
* [run-slow] mamba
* address comments
* [run-slow] mamba
* fix
* [run-slow] mamba
* fix
* [run-slow] mamba
* fix cache name
* [run-slow] mamba
2024-07-18 11:54:54 -04:00
Joshua Lochner
4c040aba02
[mistral] Support passing head_dim through config (and do not require head_dim * num_heads == hidden_size) (#32050)
...
* Allow `head_dim` to be set in Mistral config
* Add docstring
* Do not require `head_dim * num_heads == hidden_size`
* [run-slow] mistral
2024-07-18 16:41:12 +02:00
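The config change above makes `head_dim` an explicit, optional field that defaults to `hidden_size // num_heads` but may be set independently (as Mistral Nemo requires). A toy sketch of that decoupling, not the actual `MistralConfig`:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ToyAttentionConfig:
    """Illustrative config where head_dim is decoupled from hidden_size."""
    hidden_size: int = 4096
    num_attention_heads: int = 32
    head_dim: Optional[int] = None

    def __post_init__(self):
        # Default preserves the old behavior; an explicit value overrides it,
        # and no head_dim * num_heads == hidden_size check is enforced.
        if self.head_dim is None:
            self.head_dim = self.hidden_size // self.num_attention_heads
```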
dependabot[bot]
c50e0551fd
Bump scikit-learn from 1.1.2 to 1.5.0 in /examples/research_projects/codeparrot/examples (#32052)
...
Bump scikit-learn in /examples/research_projects/codeparrot/examples
Bumps [scikit-learn](https://github.com/scikit-learn/scikit-learn) from 1.1.2 to 1.5.0.
- [Release notes](https://github.com/scikit-learn/scikit-learn/releases)
- [Commits](https://github.com/scikit-learn/scikit-learn/compare/1.1.2...1.5.0)
---
updated-dependencies:
- dependency-name: scikit-learn
dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-18 13:29:56 +01:00
dependabot[bot]
c25dde1fc9
Bump scikit-learn from 1.0.2 to 1.5.0 in /examples/research_projects/decision_transformer (#31458)
...
Bump scikit-learn in /examples/research_projects/decision_transformer
Bumps [scikit-learn](https://github.com/scikit-learn/scikit-learn) from 1.0.2 to 1.5.0.
- [Release notes](https://github.com/scikit-learn/scikit-learn/releases)
- [Commits](https://github.com/scikit-learn/scikit-learn/compare/1.0.2...1.5.0)
---
updated-dependencies:
- dependency-name: scikit-learn
dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-18 13:13:38 +01:00
Raushan Turganbay
673d30b826
Chameleon: minor fixes after shipping (#32037)
...
* fix merging
* make chameleon conditional
2024-07-18 16:54:07 +05:00
Yih-Dar
765732e92c
unpin numpy<2.0 (#32018)
...
* unpin np
* [test_all] trigger full CI
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-07-18 11:26:01 +02:00