Quentin Gallouédec | de24fb63ed
Use HF papers (#38184)
* Use hf papers
* Hugging Face papers
* doi to hf papers
* style
2025-06-13 11:07:09 +00:00

Steven Liu | c0f8d055ce
[docs] Redesign (#31757)
* toctree
* not-doctested.txt
* collapse sections
* feedback
* update
* rewrite get started sections
* fixes
* fix
* loading models
* fix
* customize models
* share
* fix link
* contribute part 1
* contribute pt 2
* fix toctree
* tokenization pt 1
* Add new model (#32615)
* v1 - working version
* fix
* fix
* fix
* fix
* rename to correct name
* fix title
* fixup
* rename files
* fix
* add copied from on tests
* rename to `FalconMamba` everywhere and fix bugs
* fix quantization + accelerate
* fix copies
* add `torch.compile` support
* fix tests
* fix tests and add slow tests
* copies on config
* merge the latest changes
* fix tests
* add few lines about instruct
* Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* fix
* fix tests
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* "to be not" -> "not to be" (#32636)
* "to be not" -> "not to be"
* Update sam.md
* Update trainer.py
* Update modeling_utils.py
* Update test_modeling_utils.py
* Update test_modeling_utils.py
* fix hfoption tag
* tokenization pt. 2
* image processor
* fix toctree
* backbones
* feature extractor
* fix file name
* processor
* update not-doctested
* update
* make style
* fix toctree
* revision
* make fixup
* fix toctree
* fix
* make style
* fix hfoption tag
* pipeline
* pipeline gradio
* pipeline web server
* add pipeline
* fix toctree
* not-doctested
* prompting
* llm optims
* fix toctree
* fixes
* cache
* text generation
* fix
* chat pipeline
* chat stuff
* xla
* torch.compile
* cpu inference
* toctree
* gpu inference
* agents and tools
* gguf/tiktoken
* finetune
* toctree
* trainer
* trainer pt 2
* optims
* optimizers
* accelerate
* parallelism
* fsdp
* update
* distributed cpu
* hardware training
* gpu training
* gpu training 2
* peft
* distrib debug
* deepspeed 1
* deepspeed 2
* chat toctree
* quant pt 1
* quant pt 2
* fix toctree
* fix
* fix
* quant pt 3
* quant pt 4
* serialization
* torchscript
* scripts
* tpu
* review
* model addition timeline
* modular
* more reviews
* reviews
* fix toctree
* reviews reviews
* continue reviews
* more reviews
* modular transformers
* more review
* zamba2
* fix
* all frameworks
* pytorch
* supported model frameworks
* flashattention
* rm check_table
* not-doctested.txt
* rm check_support_list.py
* feedback
* updates/feedback
* review
* feedback
* fix
* update
* feedback
* updates
* update
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
2025-03-03 10:33:46 -08:00

Steven Liu | f11f57c925
[doctest] Fixes (#35863)
doctest fixes
2025-01-26 15:26:38 -08:00

Cyril Vallez | 6604764007
add Glm (#33823)
* Create modular_glm.py
* Update modular_glm.py
* Finalize architecture without all attentions
* Add all attentions modules
* Finalize modular
* Update given last version
* Last update
* Finalize model
* Finalize converter
* Update convert_glm_weights_to_hf.py
* style
* style
* Create __init__.py
* Add all inits
* Update convert_glm_weights_to_hf.py
* Update convert_glm_weights_to_hf.py
* Update convert_glm_weights_to_hf.py
* Update convert_glm_weights_to_hf.py
* Update convert_glm_weights_to_hf.py
* Update convert_glm_weights_to_hf.py
* Update convert_glm_weights_to_hf.py
* Update convert_glm_weights_to_hf.py
* Update convert_glm_weights_to_hf.py
* Correct the rotary embeddings
* Remove apply_residual_connection_post_layernorm (always false)
* remove use_rms_norm (always true)
* remove past_layer_norm (always true)
* Update __init__.py
* Update config and license
* start adding tests and doc
* Add doc + style
* Update test_modeling_glm.py
* Add dummies
* Apply correct modeling
* Refactor attention to follow llama
* Update __init__.py
* Update convert_glm_weights_to_hf.py
* Correct bias
* remove linear_bias and pdrop (never used)
* apply modular
* Simplify converter
* remove dummies + style
* add model_input_names
* Add pretraining_tp to config for when eager attention is used
* Update modular to remove all pretraining_tp
* Update test_modeling_glm.py
* Update the __all__
* Update __all__
* Update __init__.py
* Update test_modeling_glm.py
* add revisions
* Add the correct repos and revisions
* style
* Update __init__.py
* update exports
* remove import of modular files
* style
* Apply Llama changes + refine converter
* Update convert_glm_weights_to_hf.py
* Update convert_glm_weights_to_hf.py
* Update convert_glm_weights_to_hf.py
* Update convert_glm_weights_to_hf.py
* Update convert_glm_weights_to_hf.py
* Update convert_glm_weights_to_hf.py
* Update convert_glm_weights_to_hf.py
* Update convert_glm_weights_to_hf.py
* style
* Use new modular converter
* add pretrainedmodel to init
* style
* Update test_modeling_glm.py
* Move config outside modular to please CI about docstrings
* Add dummies to please CI
* Update glm.md
* Update glm.md
2024-10-18 17:41:12 +02:00