PyTorch TensorFlow Flax

# RoFormer

[RoFormer](https://huggingface.co/papers/2104.09864) introduces Rotary Position Embedding (RoPE) to encode token positions by rotating the inputs in 2D space. This allows a model to track absolute positions and model relative relationships. RoPE can scale to longer sequences, account for the natural decay of token dependencies, and work with the more efficient linear self-attention.
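
To make the rotation concrete, the sketch below (an illustration written for this page, not the exact implementation inside `transformers`) treats each consecutive feature pair of a query or key vector as 2D coordinates and rotates it by an angle proportional to the token position. After the rotation, the dot product between a query at position m and a key at position n depends only on the relative offset m - n.

```py
# Minimal sketch of Rotary Position Embedding (RoPE); function name and
# structure are illustrative assumptions, not the transformers internals.
import torch

def rotary_embed(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate each consecutive feature pair of `x` by a position-dependent angle.

    x: (seq_len, head_dim) query or key vectors; head_dim must be even.
    """
    seq_len, dim = x.shape
    # theta_i = base^(-2i/dim), one frequency per feature pair i
    theta = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    # angle for position m and pair i is m * theta_i
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * theta[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]  # split features into 2D sub-planes
    # standard 2D rotation applied independently to each pair
    rotated = torch.empty_like(x)
    rotated[:, 0::2] = x1 * cos - x2 * sin
    rotated[:, 1::2] = x1 * sin + x2 * cos
    return rotated

# Rotated queries/keys can be fed to any attention (including linear attention),
# since the relative position signal now lives in the vectors themselves.
q = rotary_embed(torch.randn(8, 64))
k = rotary_embed(torch.randn(8, 64))
scores = q @ k.T
```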

You can find all the RoFormer checkpoints on the Hub.

> [!TIP]
> Click on the RoFormer models in the right sidebar for more examples of how to apply RoFormer to different language tasks.

The example below demonstrates how to predict the `[MASK]` token with [`Pipeline`], [`AutoModel`], and from the command line.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
# uncomment to install rjieba which is needed for the tokenizer
# !pip install rjieba
import torch
from transformers import pipeline

pipe = pipeline(
    task="fill-mask",
    model="junnyu/roformer_chinese_base",
    torch_dtype=torch.float16,
    device=0
)
output = pipe("水在零度时会[MASK]")
print(output)
```

</hfoption>
<hfoption id="AutoModel">

```py
# uncomment to install rjieba which is needed for the tokenizer
# !pip install rjieba
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model = AutoModelForMaskedLM.from_pretrained(
    "junnyu/roformer_chinese_base", torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")

input_ids = tokenizer("水在零度时会[MASK]", return_tensors="pt").to(model.device)
outputs = model(**input_ids)
decoded = tokenizer.batch_decode(outputs.logits.argmax(-1), skip_special_tokens=True)
print(decoded)
```

</hfoption>
<hfoption id="transformers-cli">

```bash
echo -e "水在零度时会[MASK]" | transformers-cli run --task fill-mask --model junnyu/roformer_chinese_base --device 0
```

</hfoption>
</hfoptions>

## Notes

- The current RoFormer implementation is an encoder-only model; using the causal LM head requires marking the config as a decoder, as shown in the sketch below. The original code can be found in the [ZhuiyiTechnology/roformer](https://github.com/ZhuiyiTechnology/roformer) repository.
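
A minimal sketch of that usage, assuming the standard `transformers` pattern for BERT-style causal LM heads (set `is_decoder=True` on the config before loading):

```py
# Sketch only: checkpoint and prompt reuse the example above; the is_decoder
# flag follows the convention shared by BERT-style causal LM classes.
from transformers import AutoTokenizer, RoFormerConfig, RoFormerForCausalLM

tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
config = RoFormerConfig.from_pretrained("junnyu/roformer_chinese_base", is_decoder=True)
model = RoFormerForCausalLM.from_pretrained("junnyu/roformer_chinese_base", config=config)

inputs = tokenizer("今天天气", return_tensors="pt")
outputs = model(**inputs)  # outputs.logits holds next-token predictions
```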

## RoFormerConfig

[[autodoc]] RoFormerConfig

## RoFormerTokenizer

[[autodoc]] RoFormerTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## RoFormerTokenizerFast

[[autodoc]] RoFormerTokenizerFast
    - build_inputs_with_special_tokens

## RoFormerModel

[[autodoc]] RoFormerModel
    - forward

## RoFormerForCausalLM

[[autodoc]] RoFormerForCausalLM
    - forward

## RoFormerForMaskedLM

[[autodoc]] RoFormerForMaskedLM
    - forward

## RoFormerForSequenceClassification

[[autodoc]] RoFormerForSequenceClassification
    - forward

## RoFormerForMultipleChoice

[[autodoc]] RoFormerForMultipleChoice
    - forward

## RoFormerForTokenClassification

[[autodoc]] RoFormerForTokenClassification
    - forward

## RoFormerForQuestionAnswering

[[autodoc]] RoFormerForQuestionAnswering
    - forward

## TFRoFormerModel

[[autodoc]] TFRoFormerModel
    - call

## TFRoFormerForMaskedLM

[[autodoc]] TFRoFormerForMaskedLM
    - call

## TFRoFormerForCausalLM

[[autodoc]] TFRoFormerForCausalLM
    - call

## TFRoFormerForSequenceClassification

[[autodoc]] TFRoFormerForSequenceClassification
    - call

## TFRoFormerForMultipleChoice

[[autodoc]] TFRoFormerForMultipleChoice
    - call

## TFRoFormerForTokenClassification

[[autodoc]] TFRoFormerForTokenClassification
    - call

## TFRoFormerForQuestionAnswering

[[autodoc]] TFRoFormerForQuestionAnswering
    - call

## FlaxRoFormerModel

[[autodoc]] FlaxRoFormerModel
    - __call__

## FlaxRoFormerForMaskedLM

[[autodoc]] FlaxRoFormerForMaskedLM
    - __call__

## FlaxRoFormerForSequenceClassification

[[autodoc]] FlaxRoFormerForSequenceClassification
    - __call__

## FlaxRoFormerForMultipleChoice

[[autodoc]] FlaxRoFormerForMultipleChoice
    - __call__

## FlaxRoFormerForTokenClassification

[[autodoc]] FlaxRoFormerForTokenClassification
    - __call__

## FlaxRoFormerForQuestionAnswering

[[autodoc]] FlaxRoFormerForQuestionAnswering
    - __call__