
XLM-RoBERTa
XLM-RoBERTa is a large multilingual masked language model trained on 2.5TB of filtered CommonCrawl data across 100 languages. It shows that scaling the model provides strong performance gains on both high-resource and low-resource languages. The model applies the RoBERTa pretraining objectives to the XLM architecture.
You can find all the original XLM-RoBERTa checkpoints under the Facebook AI community organization.
Tip
Click on the XLM-RoBERTa models in the right sidebar for more examples of how to apply XLM-RoBERTa to different cross-lingual tasks like classification, translation, and question answering.
The example below demonstrates how to predict the <mask> token with [Pipeline], [AutoModel], and from the command line.
<hfoptions id="usage">
<hfoption id="Pipeline">

```python
import torch
from transformers import pipeline

pipeline = pipeline(
    task="fill-mask",
    model="FacebookAI/xlm-roberta-base",
    torch_dtype=torch.float16,
    device=0
)
# Example in French
pipeline("Bonjour, je suis un modèle <mask>.")
```
</hfoption>
<hfoption id="AutoModel">
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained(
    "FacebookAI/xlm-roberta-base"
)
model = AutoModelForMaskedLM.from_pretrained(
    "FacebookAI/xlm-roberta-base",
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)

# Prepare input
inputs = tokenizer("Bonjour, je suis un modèle <mask>.", return_tensors="pt").to("cuda")

with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits

masked_index = torch.where(inputs['input_ids'] == tokenizer.mask_token_id)[1]
predicted_token_id = predictions[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)

print(f"The predicted token is: {predicted_token}")
```
echo -e "Plants create <mask> through a process known as photosynthesis." | transformers-cli run --task fill-mask --model FacebookAI/xlm-roberta-base --device 0
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the quantization overview for the available quantization backends.
The example below uses bitsandbytes to quantize the weights to 4-bits.
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",  # or "fp4" for float 4-bit quantization
    bnb_4bit_use_double_quant=True,  # double quantization for extra memory savings
)

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")
model = AutoModelForMaskedLM.from_pretrained(
    "FacebookAI/xlm-roberta-large",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="flash_attention_2",
    quantization_config=quantization_config
)

inputs = tokenizer("Bonjour, je suis un modèle <mask>.", return_tensors="pt").to("cuda")

with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits

masked_index = torch.where(inputs['input_ids'] == tokenizer.mask_token_id)[1]
predicted_token_id = predictions[0, masked_index].argmax(dim=-1)
print(f"The predicted token is: {tokenizer.decode(predicted_token_id)}")
```
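To confirm the memory savings from 4-bit loading, you can print the model's footprint. This is a small sketch; get_memory_footprint is a standard PreTrainedModel method that reports the parameter memory in bytes.

```python
# Report the in-memory size of the quantized weights in GB.
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```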
Notes
- Unlike some XLM models, XLM-RoBERTa doesn't require lang tensors to understand which language is being used. It automatically determines the language from the input IDs, as shown in the sketch below.
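A minimal sketch of this behavior, reusing the base checkpoint from the examples above (no language identifier is ever passed):

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("FacebookAI/xlm-roberta-base")

# The same tokenizer and model handle English and French inputs alike;
# the language is inferred from the token ids themselves.
for text in ["Hello, I am a <mask> model.", "Bonjour, je suis un modèle <mask>."]:
    inputs = tokenizer(text, return_tensors="pt")  # no `lang` argument is needed
    logits = model(**inputs).logits
    mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    print(text, "->", tokenizer.decode(logits[0, mask_index].argmax()))
```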
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with XLM-RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
- A blog post on how to finetune XLM RoBERTa for multiclass classification with Habana Gaudi on AWS
Text classification
- [XLMRobertaForSequenceClassification] is supported by this example script and notebook.
- [TFXLMRobertaForSequenceClassification] is supported by this example script and notebook.
- [FlaxXLMRobertaForSequenceClassification] is supported by this example script and notebook.
- Text classification chapter of the 🤗 Hugging Face Task Guides.
- Text classification task guide
Token classification
- [XLMRobertaForTokenClassification] is supported by this example script and notebook.
- [TFXLMRobertaForTokenClassification] is supported by this example script and notebook.
- [FlaxXLMRobertaForTokenClassification] is supported by this example script.
- Token classification chapter of the 🤗 Hugging Face Course.
- Token classification task guide
Text generation
- [XLMRobertaForCausalLM] is supported by this example script and notebook.
- Causal language modeling chapter of the 🤗 Hugging Face Task Guides.
- Causal language modeling task guide
Fill-mask
- [XLMRobertaForMaskedLM] is supported by this example script and notebook.
- [TFXLMRobertaForMaskedLM] is supported by this example script and notebook.
- [FlaxXLMRobertaForMaskedLM] is supported by this example script and notebook.
- Masked language modeling chapter of the 🤗 Hugging Face Course.
- Masked language modeling task guide
Question answering
- [XLMRobertaForQuestionAnswering] is supported by this example script and notebook.
- [TFXLMRobertaForQuestionAnswering] is supported by this example script and notebook.
- [FlaxXLMRobertaForQuestionAnswering] is supported by this example script.
- Question answering chapter of the 🤗 Hugging Face Course.
- Question answering task guide
Multiple choice
- [XLMRobertaForMultipleChoice] is supported by this example script and notebook.
- [TFXLMRobertaForMultipleChoice] is supported by this example script and notebook.
- Multiple choice task guide
🚀 Deploy
- A blog post on how to Deploy Serverless XLM RoBERTa on AWS Lambda.
This implementation is the same as RoBERTa. Refer to the RoBERTa documentation for usage examples and information about the inputs and outputs.
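Because the classes mirror RoBERTa, the usual RoBERTa usage pattern carries over directly. A minimal sketch of extracting hidden states with the base checkpoint (nothing here is XLM-RoBERTa-specific):

```python
import torch
from transformers import AutoTokenizer, XLMRobertaModel

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")
model = XLMRobertaModel.from_pretrained("FacebookAI/xlm-roberta-base")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, hidden_size),
# exactly as in RobertaModel.
print(outputs.last_hidden_state.shape)
```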
XLMRobertaConfig
autodoc XLMRobertaConfig
XLMRobertaTokenizer
autodoc XLMRobertaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
XLMRobertaTokenizerFast
autodoc XLMRobertaTokenizerFast
XLMRobertaModel
autodoc XLMRobertaModel - forward
XLMRobertaForCausalLM
autodoc XLMRobertaForCausalLM - forward
XLMRobertaForMaskedLM
autodoc XLMRobertaForMaskedLM - forward
XLMRobertaForSequenceClassification
autodoc XLMRobertaForSequenceClassification - forward
XLMRobertaForMultipleChoice
autodoc XLMRobertaForMultipleChoice - forward
XLMRobertaForTokenClassification
autodoc XLMRobertaForTokenClassification - forward
XLMRobertaForQuestionAnswering
autodoc XLMRobertaForQuestionAnswering - forward
TFXLMRobertaModel
autodoc TFXLMRobertaModel - call
TFXLMRobertaForCausalLM
autodoc TFXLMRobertaForCausalLM - call
TFXLMRobertaForMaskedLM
autodoc TFXLMRobertaForMaskedLM - call
TFXLMRobertaForSequenceClassification
autodoc TFXLMRobertaForSequenceClassification - call
TFXLMRobertaForMultipleChoice
autodoc TFXLMRobertaForMultipleChoice - call
TFXLMRobertaForTokenClassification
autodoc TFXLMRobertaForTokenClassification - call
TFXLMRobertaForQuestionAnswering
autodoc TFXLMRobertaForQuestionAnswering - call
FlaxXLMRobertaModel
autodoc FlaxXLMRobertaModel - call
FlaxXLMRobertaForCausalLM
autodoc FlaxXLMRobertaForCausalLM - call
FlaxXLMRobertaForMaskedLM
autodoc FlaxXLMRobertaForMaskedLM - call
FlaxXLMRobertaForSequenceClassification
autodoc FlaxXLMRobertaForSequenceClassification - call
FlaxXLMRobertaForMultipleChoice
autodoc FlaxXLMRobertaForMultipleChoice - call
FlaxXLMRobertaForTokenClassification
autodoc FlaxXLMRobertaForTokenClassification - call
FlaxXLMRobertaForQuestionAnswering
autodoc FlaxXLMRobertaForQuestionAnswering - call