
OLMo2
OLMo2 improves on OLMo by changing the architecture and training recipes of the original models. These changes include removing all biases to improve training stability, non-parametric layer norm, the SwiGLU activation function, rotary positional embeddings, and a modified BPE-based tokenizer that masks personally identifiable information. It is pretrained on Dolma, a dataset of 3T tokens.
You can find all the original OLMo2 checkpoints under the OLMo2 collection.
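Several of these choices are visible directly on the model configuration. As a quick sanity check, you can load the config and inspect it; the attribute names below follow the Llama-style config layout and are an assumption, so verify them against your Transformers version.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("allenai/OLMo-2-0425-1B")
print(config.model_type)    # "olmo2"
print(config.hidden_act)    # "silu", the gate activation used by SwiGLU
print(config.rope_theta)    # rotary positional embedding base frequency
print(config.rms_norm_eps)  # epsilon used by the RMSNorm layers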
Tip
Click on the OLMo2 models in the right sidebar for more examples of how to apply OLMo2 to different language tasks.
The example below demonstrates how to generate text with [Pipeline], [AutoModel], and from the command line.
import torch
from transformers import pipeline

# float16 halves the memory footprint of the weights; device=0 runs on the first GPU
pipe = pipeline(
    task="text-generation",
    model="allenai/OLMo-2-0425-1B",
    torch_dtype=torch.float16,
    device=0,
)
result = pipe("Plants create energy through a process known as")
print(result)
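[Pipeline] returns a list of dictionaries, each with a generated_text key holding the prompt followed by the completion. Generation keyword arguments can be passed through the call, for example to cap the completion length:
result = pipe("Plants create energy through a process known as", max_new_tokens=50)
print(result[0]["generated_text"])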
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "allenai/OLMo-2-0425-1B"
)
model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0425-1B",
    torch_dtype=torch.float16,
    device_map="auto",           # place weights on available devices automatically
    attn_implementation="sdpa"   # PyTorch scaled dot-product attention
)

input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device)
output = model.generate(**input_ids, max_length=50, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
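cache_implementation="static" pre-allocates the key-value cache to a fixed size instead of growing it at each decoding step. Because the tensor shapes stay fixed, this pairs well with torch.compile; a common optional pattern, assuming a recent PyTorch, is:
# compile the forward pass; the static cache keeps shapes fixed across steps
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)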
echo -e "Plants create energy through a process known as" | transformers-cli run --task text-generation --model allenai/OLMo-2-0425-1B --device 0
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.
The example below uses torchao to quantize only the weights to 4-bits.
# pip install torchao
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig

# int4 weight-only quantization with one scale per group of 128 weights
torchao_config = TorchAoConfig(
    "int4_weight_only",
    group_size=128
)
tokenizer = AutoTokenizer.from_pretrained(
    "allenai/OLMo-2-0425-1B"
)
model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0425-1B",
    quantization_config=torchao_config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa"
)

input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device)
output = model.generate(**input_ids, max_length=50, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
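To verify the savings, compare the quantized model's memory footprint with the float16 run; [~PreTrainedModel.get_memory_footprint] reports the memory used by parameters and buffers.
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")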
Notes
- OLMo2 uses RMSNorm instead of standard layer norm. The RMSNorm is applied to attention queries and keys, and it is applied after the attention and feedforward layers rather than before (see the sketch after these notes).
- OLMo2 requires Transformers v4.48 or higher.
- Load specific intermediate checkpoints by adding the revision parameter to [~PreTrainedModel.from_pretrained].

  from transformers import AutoModelForCausalLM

  model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B", revision="stage1-step140000-tokens294B")
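The reordered norm placement from the first note can be summarized with a minimal sketch. This is illustrative pseudocode under the assumption of a Llama-style residual layout, not the actual Olmo2DecoderLayer implementation; the submodule names are hypothetical.
import torch.nn as nn

class SimplifiedOlmo2Layer(nn.Module):
    # Illustrative only: shows where the norms sit, not the real implementation.
    # nn.RMSNorm requires PyTorch >= 2.4.
    def __init__(self, self_attn, mlp, hidden_size):
        super().__init__()
        self.self_attn = self_attn  # the real layer also applies RMSNorm to queries and keys
        self.mlp = mlp
        self.post_attention_layernorm = nn.RMSNorm(hidden_size)
        self.post_feedforward_layernorm = nn.RMSNorm(hidden_size)

    def forward(self, hidden_states):
        # post-norm: normalize each sublayer's output before the residual add,
        # instead of normalizing its input as in pre-norm transformers
        hidden_states = hidden_states + self.post_attention_layernorm(self.self_attn(hidden_states))
        hidden_states = hidden_states + self.post_feedforward_layernorm(self.mlp(hidden_states))
        return hidden_states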
Olmo2Config
autodoc Olmo2Config
Olmo2Model
autodoc Olmo2Model - forward
Olmo2ForCausalLM
autodoc Olmo2ForCausalLM - forward