# Mamba

## Overview

The Mamba model was proposed in [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https://arxiv.org/abs/2312.00752) by Albert Gu and Tri Dao.

This model is a new paradigm architecture based on `state-space-models`. You can read more about the intuition behind these [here](https://srush.github.io/annotated-s4/).

The abstract from the paper is the following:

*Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5× higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.*

Tips:

- Mamba is a new `state space model` architecture that rivals the classic Transformers. It is based on the line of progress on structured state space models, with an efficient hardware-aware design and implementation in the spirit of [FlashAttention](https://github.com/Dao-AILab/flash-attention).
- Mamba stacks `mixer` layers, which are the equivalent of `Attention` layers. The core logic of `mamba` is held in the `MambaMixer` class.
- Two implementations coexist: one is optimized and uses fast CUDA kernels, while the other one is naive but can run on any device!
- The current implementation leverages the original CUDA kernels: the equivalent of flash attention for Mamba is hosted in the [`mamba-ssm`](https://github.com/state-spaces/mamba) and [`causal_conv1d`](https://github.com/Dao-AILab/causal-conv1d) repositories. Make sure to install them if your hardware supports them (see the availability check sketched after this section)!
- Contributions to make the naive path faster are welcome 🤗

This model was contributed by [ArthurZ](https://huggingface.co/ArthurZ).
The original code can be found [here](https://github.com/state-spaces/mamba).
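The optimized path is only used when the optional kernel packages are importable. A minimal sketch, assuming the packages keep their usual import names `mamba_ssm` and `causal_conv1d`, to check whether the fast path can be used on your machine:

```python
import importlib.util

# Check whether the optional CUDA kernel packages are importable.
# If either one is missing, the naive, device-agnostic implementation is used instead.
fast_path_available = all(
    importlib.util.find_spec(pkg) is not None
    for pkg in ("mamba_ssm", "causal_conv1d")
)
print(f"Fast Mamba kernels available: {fast_path_available}")
```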
## Usage

### A simple generation example

```python
from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf")
input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]

out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))
```

### PEFT finetuning

The slow version is not very stable for training, and the fast one needs the weights in `float32`!

```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments

model_id = "state-spaces/mamba-130m-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
dataset = load_dataset("Abirate/english_quotes", split="train")
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    logging_dir="./logs",
    logging_steps=10,
    learning_rate=2e-3,
)
lora_config = LoraConfig(
    r=8,
    target_modules=["x_proj", "embeddings", "in_proj", "out_proj"],
    task_type="CAUSAL_LM",
    bias="none",
)
trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    args=training_args,
    peft_config=lora_config,
    train_dataset=dataset,
    dataset_text_field="quote",
)
trainer.train()
```

## MambaConfig

[[autodoc]] MambaConfig

## MambaModel

[[autodoc]] MambaModel
    - forward

## MambaForCausalLM

[[autodoc]] MambaForCausalLM
    - forward
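As a quick reference for the classes documented above, here is a minimal sketch that builds a small, randomly initialized model from a `MambaConfig`. The sizes are illustrative only and do not correspond to any released checkpoint:

```python
from transformers import MambaConfig, MambaForCausalLM

# Illustrative, small configuration; not the values of the published checkpoints.
config = MambaConfig(
    vocab_size=50280,
    hidden_size=256,
    num_hidden_layers=4,
)
model = MambaForCausalLM(config)  # randomly initialized weights
print(sum(p.numel() for p in model.parameters()))
```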