Yih-Dar
f2d5dfbab2
Remove @slow for test_eager_matches_sdpa_inference (#34558)
...
* update (×11)
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-11-05 16:10:42 +01:00
Joao Gante
8a734ea2c3
Tests: move generate tests to the right mixin and delete redundant tests (#34464)
...
* tmp commit
* tmp commit
* cull overwrites of deleted tests
* typo
* more specific docstring
* make fixup
* parameterize at the top?
* correction
* more deletions :D
* tmp commit
* for VLMs too
* fix _check_outputs
* test nit
* make fixup
* fix another flaky
* test_generate_from_inputs_embeds -- handle missing attention mask
2024-10-30 10:59:08 +00:00
Joao Gante
186b8dc190
Tests: upgrade test_eager_matches_sdpa_generate (#34386)
2024-10-25 11:55:07 +01:00
Cyril Vallez
6604764007
add Glm (#33823)
...
* Create modular_glm.py
* Update modular_glm.py
* Finalize architecture without all attentions
* Add all attentions modules
* Finalize modular
* Update given last version
* Last update
* Finalize model
* Finalize converter
* Update convert_glm_weights_to_hf.py
* style
* style
* Create __init__.py
* Add all inits
* Update convert_glm_weights_to_hf.py (×9)
* Correct the rotary embeddings
* Remove apply_residual_connection_post_layernorm (always false)
* remove use_rms_norm (always true)
* remove past_layer_norm (always true)
* Update __init__.py
* Update config and license
* start adding tests and doc
* Add doc + style
* Update test_modeling_glm.py
* Add dummies
* Apply correct modeling
* Refactor attention to follow llama
* Update __init__.py
* Update convert_glm_weights_to_hf.py
* Correct bias
* remove linear_bias and pdrop (never used)
* apply modular
* Simplify converter
* remove dummies + style
* add model_input_names
* Add pretraining_tp to config for when eager attention is used
* Update modular to remove all pretraining_tp
* Update test_modeling_glm.py
* Update the __all__
* Update __all__
* Update __init__.py
* Update test_modeling_glm.py
* add revisions
* Add the correct repos and revisions
* style
* Update __init__.py
* update exports
* remove import of modular files
* style
* Apply Llama changes + refine converter
* Update convert_glm_weights_to_hf.py (×8)
* style
* Use new modular converter
* add pretrainedmodel to init
* style
* Update test_modeling_glm.py
* Move config outside modular to please CI about docstrings
* Add dummies to please CI
* Update glm.md
* Update glm.md
2024-10-18 17:41:12 +02:00