mirror of
https://github.com/huggingface/transformers.git
synced 2025-07-04 05:10:06 +06:00
Updated model card for OLMo2 (#38394)
* Updated OLMo2 model card
* added command line
* Add suggestions (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* Added suggestions (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* Indented code block as per suggestions

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
This commit is contained in:
parent f5307272f5 · commit 3b3ebcec40
@@ -14,27 +14,119 @@ rendered properly in your Markdown viewer.
-->

# OLMo2

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
        <img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
        <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
    </div>
</div>

[OLMo2](https://huggingface.co/papers/2501.00656) improves on [OLMo](./olmo) by changing the architecture and training recipes of the original models. The changes include removing all bias terms to improve training stability, non-parametric layer norm, the SwiGLU activation function, rotary positional embeddings, and a modified BPE-based tokenizer that masks personally identifiable information. It is pretrained on [Dolma](https://huggingface.co/datasets/allenai/dolma), a dataset of 3T tokens.
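
These architectural choices are reflected in the model's configuration. The snippet below is a minimal sketch that loads the config and prints a few of the relevant fields; the attribute names (`hidden_act`, `rms_norm_eps`, `rope_theta`) are assumed to follow the standard [`Olmo2Config`] layout.

```py
# Minimal sketch: inspect the OLMo2 configuration to see the architecture choices.
# The attribute names below are assumptions based on the standard Olmo2Config layout.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("allenai/OLMo-2-0425-1B")
print(config.hidden_act)    # gated "silu" activation, i.e. SwiGLU
print(config.rms_norm_eps)  # epsilon used by the RMSNorm layers
print(config.rope_theta)    # base frequency of the rotary positional embeddings
```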

You can find all the original OLMo2 checkpoints under the [OLMo2](https://huggingface.co/collections/allenai/olmo-2-674117b93ab84e98afc72edc) collection.

This model was contributed by [shanearora](https://huggingface.co/shanearora). The original code can be found [here](https://github.com/allenai/OLMo/tree/main/olmo).

> [!TIP]
> Click on the OLMo2 models in the right sidebar for more examples of how to apply OLMo2 to different language tasks.

The example below demonstrates how to generate text with [`Pipeline`], [`AutoModel`], and from the command line.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="allenai/OLMo-2-0425-1B",
    torch_dtype=torch.float16,
    device=0,
)

result = pipe("Plants create energy through a process known as")
print(result)
```
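
The pipeline returns a list with one dictionary per prompt, each containing a `generated_text` field. Continuing from the example above, a minimal follow-up that prints just the completion (the generated text itself will vary between runs):

```py
# Continuing from the Pipeline example above: `result` is a list of dicts,
# one per prompt, each with a "generated_text" entry.
print(result[0]["generated_text"])
```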

</hfoption>
<hfoption id="AutoModel">

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0425-1B")

model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0425-1B",
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)

input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device)

output = model.generate(**input_ids, max_length=50, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
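
The badges above also list FlashAttention. As an alternative to the SDPA load call in the example above, the sketch below requests `flash_attention_2` instead; it assumes the `flash-attn` package is installed and that the GPU supports it.

```py
# Alternative to the load call above: request FlashAttention-2 instead of SDPA.
# Assumes the flash-attn package is installed and the GPU supports it.
model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0425-1B",
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="flash_attention_2",
)
```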

</hfoption>
<hfoption id="transformers CLI">

```bash
echo -e "Plants create energy through a process known as" | transformers-cli run --task text-generation --model allenai/OLMo-2-0425-1B --device 0
```

</hfoption>
</hfoptions>

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.

The example below uses [torchao](../quantization/torchao) to quantize only the weights to 4-bits.

```py
# pip install torchao
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig

torchao_config = TorchAoConfig(
    "int4_weight_only",
    group_size=128
)

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0425-1B")

model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0425-1B",
    quantization_config=torchao_config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa"
)

input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device)

output = model.generate(**input_ids, max_length=50, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
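
As a rough check of the savings from 4-bit weights, you can print the quantized model's in-memory size. Continuing from the example above, a minimal sketch using [`~PreTrainedModel.get_memory_footprint`], which reports the size in bytes:

```py
# Continuing from the torchao example above: report the quantized model's size in memory.
print(f"memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```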

## Notes

- OLMo2 uses RMSNorm instead of standard layer norm. The RMSNorm is applied to attention queries and keys, and it is applied after the attention and feedforward layers rather than before (see the sketch after this list).
- OLMo2 requires Transformers v4.48 or higher.
- Load specific intermediate checkpoints by adding the `revision` parameter to [`~PreTrainedModel.from_pretrained`].

    ```py
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B", revision="stage1-step140000-tokens294B")
    ```
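
The norm placement described in the first note is easier to see in code. The sketch below is a simplified, illustrative rendering of the OLMo2 decoder layer ordering (norms applied to the attention and feedforward outputs before the residual additions, plus RMSNorm on the projected queries and keys); it is not the actual implementation, and the callable names are only placeholders.

```py
# Simplified, illustrative sketch of the OLMo2 decoder layer ordering; the
# callables are placeholders, not the actual transformers implementation.
def olmo2_decoder_layer(x, attn, mlp, post_attn_norm, post_ff_norm):
    # attention block: RMSNorm is applied to the projected queries and keys
    # inside `attn`, and to the attention output before the residual addition
    residual = x
    x = residual + post_attn_norm(attn(x))

    # feedforward block: norm applied to the MLP output, then the residual addition
    residual = x
    return residual + post_ff_norm(mlp(x))
```
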
## Olmo2Config