transformers/docs/source/en
Latest commit 33cb1f7b61 by pglorio: Add Zamba2 (#34517)
* First commit

* Finish model implementation

* First commit

* Finish model implementation

* Register zamba2

* generated modeling and configuration

* generated modeling and configuration

* added hybrid cache

* fix attention_mask in mamba

* dropped unused loras

* fix flash2

* config docstrings

* fix config and fwd pass

* make fixup fixes

* test_modeling_zamba2

* small fixes

* make fixup fixes

* Fix modular model converter

* added inheritances in modular, renamed zamba cache

* modular rebase

* new modular conversion

* fix generated modeling file

* fixed import for Zamba2RMSNormGated

* modular file cleanup

* make fixup and model tests

* dropped inheritance for Zamba2PreTrainedModel

* make fixup and unit tests

* Add inheritance of rope from GemmaRotaryEmbedding

* moved rope to model init

* drop del self.self_attn and del self.feed_forward

* fix tests

* renamed lora -> adapter

* rewrote adapter implementation

* fixed tests

* Fix torch_forward in mamba2 layer

* Fix torch_forward in mamba2 layer

* Fix torch_forward in mamba2 layer

* Dropped adapter in-place sum

* removed rope from attention init

* updated rope

* created get_layers method

* make fixup fix

* make fixup fixes

* make fixup fixes

* update to new attention standard

* update to new attention standard

* make fixup fixes

* minor fixes

* cache_position

* removed cache_position, position_ids, use_cache

* remove config from modular

* removed config from modular (2)

* import apply_rotary_pos_emb from llama

* fixed rope_kwargs

* Instantiate cache in Zamba2Model

* fix cache

* fix @slow decorator

* small fix in modular file

* Update docs/source/en/model_doc/zamba2.md

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* several minor fixes

* inherit mamba2decoder fwd and drop position_ids in mamba

* removed docstrings from modular

* reinstate zamba2 attention decoder fwd

* use regex for tied keys

* Revert "use regex for tied keys"

This reverts commit 9007a522b1.

* use regex for tied keys

* add cpu to slow forward tests

* dropped config.use_shared_mlp_adapter

* Update docs/source/en/model_doc/zamba2.md

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* re-convert from modular

---------

Co-authored-by: root <root@node-2.us-southcentral1-a.compute.internal>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2025-01-27 10:51:23 +01:00
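
Since this commit adds the Zamba2 model (#34517), a minimal usage sketch may help orient readers. This is an assumption-laden sketch, not taken from the commit itself: the checkpoint id `Zyphra/Zamba2-2.7B` and the prompt are illustrative placeholders.

```python
# Minimal sketch: loading the newly added Zamba2 model through the Auto classes.
# The checkpoint id "Zyphra/Zamba2-2.7B" is an assumed example, not confirmed by this log.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-2.7B")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba2-2.7B")

# Generate a short continuation to verify the model loads and runs.
inputs = tokenizer("Zamba2 is a hybrid Mamba2-attention model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```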
| Name | Last commit | Date |
|------|-------------|------|
| internal | Implement AsyncTextIteratorStreamer for asynchronous streaming (#34931) | 2024-12-20 12:08:12 +01:00 |
| main_classes | HIGGS Quantization Support (#34997) | 2024-12-23 16:54:49 +01:00 |
| model_doc | Add Zamba2 (#34517) | 2025-01-27 10:51:23 +01:00 |
| quantization | Enable gptqmodel (#35012) | 2025-01-15 14:22:49 +01:00 |
| tasks | [doctest] Fixes (#35863) | 2025-01-26 15:26:38 -08:00 |
| _config.py | Add optimized PixtralImageProcessorFast (#34836) | 2024-11-28 16:04:05 +01:00 |
| _redirects.yml | Docs / Quantization: Redirect deleted page (#31063) | 2024-05-28 18:29:22 +02:00 |
| _toctree.yml | Granite Vision Support (#35579) | 2025-01-23 17:15:52 +01:00 |
| accelerate.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| add_new_model.md | Model addition timeline (#33762) | 2024-09-27 17:15:13 +02:00 |
| add_new_pipeline.md | [docs] Follow up register_pipeline (#35310) | 2024-12-20 09:22:44 -08:00 |
| agents_advanced.md | [doctest] Fixes (#35863) | 2025-01-26 15:26:38 -08:00 |
| agents.md | Multiple typo fixes in Tutorials docs (#35035) | 2024-12-02 15:26:34 +00:00 |
| attention.md | [Docs] Fix broken links and syntax issues (#28918) | 2024-02-08 14:13:35 -08:00 |
| autoclass_tutorial.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| bertology.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| big_models.md | [docs] Big model loading (#29920) | 2024-04-01 18:47:32 -07:00 |
| chat_templating.md | [doctest] Fixes (#35863) | 2025-01-26 15:26:38 -08:00 |
| community.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| contributing.md | Enable doc in Spanish (#16518) | 2022-04-04 10:25:46 -04:00 |
| conversations.md | [docs] change temperature to a positive value (#32077) | 2024-07-23 17:47:51 +01:00 |
| create_a_model.md | Enable HF pretrained backbones (#31145) | 2024-06-06 22:02:38 +01:00 |
| custom_models.md | Updated the custom_models.md changed cross_entropy code (#33118) | 2024-08-26 13:15:43 +02:00 |
| debugging.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| deepspeed.md | [doc] deepspeed universal checkpoint (#35015) | 2025-01-09 09:50:51 -08:00 |
| fast_tokenizers.md | Migrate doc files to Markdown. (#24376) | 2023-06-20 18:07:47 -04:00 |
| fsdp.md | Fix docs typos. (#35465) | 2025-01-02 11:29:46 +01:00 |
| generation_strategies.md | [doctest] Fixes (#35863) | 2025-01-26 15:26:38 -08:00 |
| gguf.md | Add Gemma2 GGUF support (#34002) | 2025-01-03 14:50:07 +01:00 |
| glossary.md | Fix typos (#31819) | 2024-07-08 11:52:47 +01:00 |
| how_to_hack_models.md | [Docs] Add Developer Guide: How to Hack Any Transformers Model (#33979) | 2024-10-07 10:08:20 +02:00 |
| hpo_train.md | Trainer - deprecate tokenizer for processing_class (#32385) | 2024-10-02 14:08:46 +01:00 |
| index.md | Add Zamba2 (#34517) | 2025-01-27 10:51:23 +01:00 |
| installation.md | Enhanced Installation Section in README.md (#35094) | 2025-01-14 08:05:08 -08:00 |
| kv_cache.md | [doctest] Fixes (#35863) | 2025-01-26 15:26:38 -08:00 |
| llm_optims.md | Update llm_optims docs for sdpa_kernel (#35481) | 2025-01-06 08:54:31 -08:00 |
| llm_tutorial_optimization.md | [docs] add explanation to release_memory() (#34911) | 2024-11-27 07:47:28 -08:00 |
| llm_tutorial.md | [chat] docs fix (#35840) | 2025-01-22 14:32:27 +00:00 |
| model_memory_anatomy.md | Enable BNB multi-backend support (#31098) | 2024-09-24 03:40:56 -06:00 |
| model_sharing.md | [docs] update not-working model revision (#34682) | 2024-11-11 07:09:31 -08:00 |
| model_summary.md | model_summary.md - Restore link to Harvard's Annotated Transformer. (#29702) | 2024-03-23 18:29:39 -07:00 |
| modular_transformers.md | Improve modular documentation (#35737) | 2025-01-21 17:53:30 +01:00 |
| multilingual.md | Update all references to canonical models (#29001) | 2024-02-16 08:16:58 +01:00 |
| notebooks.md | Enable doc in Spanish (#16518) | 2022-04-04 10:25:46 -04:00 |
| pad_truncation.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| peft.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| perf_hardware.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| perf_infer_cpu.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| perf_infer_gpu_multi.md | Fix image preview in multi-GPU inference docs (#35303) | 2024-12-17 09:33:50 -08:00 |
| perf_infer_gpu_one.md | Add Zamba2 (#34517) | 2025-01-27 10:51:23 +01:00 |
| perf_torch_compile.md | [docs] use device-agnostic instead of cuda (#35047) | 2024-12-03 10:53:45 -08:00 |
| perf_train_cpu_many.md | [doc] use full path for run_qa.py (#34914) | 2024-11-26 09:23:44 -08:00 |
| perf_train_cpu.md | [doc] use full path for run_qa.py (#34914) | 2024-11-26 09:23:44 -08:00 |
| perf_train_gpu_many.md | Multiple typo fixes in Tutorials docs (#35035) | 2024-12-02 15:26:34 +00:00 |
| perf_train_gpu_one.md | Corrected max number for bf16 in transformer/docs (#33658) | 2024-09-25 19:20:51 +02:00 |
| perf_train_special.md | Update all references to canonical models (#29001) | 2024-02-16 08:16:58 +01:00 |
| perf_train_tpu_tf.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| performance.md | Simplify Tensor Parallel implementation with PyTorch TP (#34184) | 2024-11-18 19:51:49 +01:00 |
| perplexity.md | [docs] use device-agnostic API instead of cuda (#34913) | 2024-11-26 09:23:34 -08:00 |
| philosophy.md | [docs] fixed links with 404 (#27327) | 2023-11-06 19:45:03 +00:00 |
| pipeline_tutorial.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| pipeline_webserver.md | Update all references to canonical models (#29001) | 2024-02-16 08:16:58 +01:00 |
| pr_checks.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| preprocessing.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| quicktour.md | [chat] docs fix (#35840) | 2025-01-22 14:32:27 +00:00 |
| run_scripts.md | [docs] refine the doc for train with a script (#33423) | 2024-09-12 10:16:12 -07:00 |
| sagemaker.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| serialization.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| task_summary.md | [doctest] Fixes (#35863) | 2025-01-26 15:26:38 -08:00 |
| tasks_explained.md | fix: Wrong task mentioned in docs (#34757) | 2024-11-18 18:42:28 +00:00 |
| testing.md | [tests] add XPU part to testing (#34778) | 2024-11-18 09:59:11 -08:00 |
| tf_xla.md | fix(docs): Fixed a link in docs (#32274) | 2024-07-29 10:50:43 +01:00 |
| tflite.md | Update all references to canonical models (#29001) | 2024-02-16 08:16:58 +01:00 |
| tiktoken.md | Updated documentation and added conversion utility (#34319) | 2024-11-25 18:44:09 +01:00 |
| tokenizer_summary.md | [docs] Spanish translation of tokenizer_summary.md (#31154) | 2024-06-03 16:52:23 -07:00 |
| torchscript.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| trainer.md | Fix callback key name (#34762) | 2024-11-18 18:41:12 +00:00 |
| training.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| troubleshooting.md | Update all references to canonical models (#29001) | 2024-02-16 08:16:58 +01:00 |