# RoFormer

PyTorch TensorFlow Flax

[RoFormer](https://huggingface.co/papers/2104.09864) introduces Rotary Position Embedding (RoPE), which encodes token positions by rotating the inputs in 2D space. This lets the model track absolute positions while also capturing relative relationships between tokens. RoPE scales to longer sequences, accounts for the natural decay of token dependencies, and works with the more efficient linear self-attention.

You can find all the RoFormer checkpoints on the [Hub](https://huggingface.co/models?search=roformer).

> [!TIP]
> Click on the RoFormer models in the right sidebar for more examples of how to apply RoFormer to different language tasks.

The examples below demonstrate how to predict the `[MASK]` token with [`Pipeline`], [`AutoModel`], and from the command line.

```py
# uncomment to install rjieba, which the tokenizer needs
# !pip install rjieba
import torch
from transformers import pipeline

pipe = pipeline(
    task="fill-mask",
    model="junnyu/roformer_chinese_base",
    torch_dtype=torch.float16,
    device=0,
)
# the prompt means "Water will [MASK] at zero degrees"
output = pipe("水在零度时会[MASK]")
print(output)
```

```py
# uncomment to install rjieba, which the tokenizer needs
# !pip install rjieba
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model = AutoModelForMaskedLM.from_pretrained(
    "junnyu/roformer_chinese_base", torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
# the prompt means "Water will [MASK] at zero degrees"
inputs = tokenizer("水在零度时会[MASK]", return_tensors="pt").to(model.device)
outputs = model(**inputs)
decoded = tokenizer.batch_decode(outputs.logits.argmax(-1), skip_special_tokens=True)
print(decoded)
```

```bash
echo -e "水在零度时会[MASK]" | transformers-cli run --task fill-mask --model junnyu/roformer_chinese_base --device 0
```

## Notes

- The current RoFormer implementation is an encoder-only model. The original code can be found in the [ZhuiyiTechnology/roformer](https://github.com/ZhuiyiTechnology/roformer) repository.
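The rotation at the heart of RoPE is compact enough to sketch directly. The snippet below is a minimal, self-contained illustration of the idea, not the `transformers` implementation; the `rotary_embed` helper and its arguments are hypothetical names for this sketch. Each pair of feature dimensions is rotated by an angle proportional to the token's position, so the dot product between a rotated query and key depends only on their relative distance.

```py
# Illustrative sketch of RoPE (not the transformers API).
import torch

def rotary_embed(x, base=10000.0):
    # x: (seq_len, dim) with an even dim, e.g. the queries or keys of one head
    seq_len, dim = x.shape
    # one rotation frequency per feature pair: theta_i = base^(-2i/dim)
    inv_freq = base ** (-torch.arange(0, dim, 2).float() / dim)
    # the angle for position m and pair i is m * theta_i
    angles = torch.arange(seq_len).float()[:, None] * inv_freq[None, :]
    cos, sin = angles.cos(), angles.sin()
    # rotate each (even, odd) feature pair through its angle
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# scores between rotated queries and keys depend only on relative position,
# which is what lets RoPE extrapolate to longer sequences
q, k = torch.randn(8, 64), torch.randn(8, 64)
scores = rotary_embed(q) @ rotary_embed(k).T
```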
## RoFormerConfig

[[autodoc]] RoFormerConfig

## RoFormerTokenizer

[[autodoc]] RoFormerTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## RoFormerTokenizerFast

[[autodoc]] RoFormerTokenizerFast
    - build_inputs_with_special_tokens

## RoFormerModel

[[autodoc]] RoFormerModel
    - forward

## RoFormerForCausalLM

[[autodoc]] RoFormerForCausalLM
    - forward

## RoFormerForMaskedLM

[[autodoc]] RoFormerForMaskedLM
    - forward

## RoFormerForSequenceClassification

[[autodoc]] RoFormerForSequenceClassification
    - forward

## RoFormerForMultipleChoice

[[autodoc]] RoFormerForMultipleChoice
    - forward

## RoFormerForTokenClassification

[[autodoc]] RoFormerForTokenClassification
    - forward

## RoFormerForQuestionAnswering

[[autodoc]] RoFormerForQuestionAnswering
    - forward

## TFRoFormerModel

[[autodoc]] TFRoFormerModel
    - call

## TFRoFormerForMaskedLM

[[autodoc]] TFRoFormerForMaskedLM
    - call

## TFRoFormerForCausalLM

[[autodoc]] TFRoFormerForCausalLM
    - call

## TFRoFormerForSequenceClassification

[[autodoc]] TFRoFormerForSequenceClassification
    - call

## TFRoFormerForMultipleChoice

[[autodoc]] TFRoFormerForMultipleChoice
    - call

## TFRoFormerForTokenClassification

[[autodoc]] TFRoFormerForTokenClassification
    - call

## TFRoFormerForQuestionAnswering

[[autodoc]] TFRoFormerForQuestionAnswering
    - call

## FlaxRoFormerModel

[[autodoc]] FlaxRoFormerModel
    - __call__

## FlaxRoFormerForMaskedLM

[[autodoc]] FlaxRoFormerForMaskedLM
    - __call__

## FlaxRoFormerForSequenceClassification

[[autodoc]] FlaxRoFormerForSequenceClassification
    - __call__

## FlaxRoFormerForMultipleChoice

[[autodoc]] FlaxRoFormerForMultipleChoice
    - __call__

## FlaxRoFormerForTokenClassification

[[autodoc]] FlaxRoFormerForTokenClassification
    - __call__

## FlaxRoFormerForQuestionAnswering

[[autodoc]] FlaxRoFormerForQuestionAnswering
    - __call__