# DistilBERT

*PyTorch · TensorFlow · Flax · SDPA · FlashAttention*

[DistilBERT](https://arxiv.org/abs/1910.01108) is pretrained by knowledge distillation to create a smaller model with faster inference that requires less compute to train. It uses a triple loss objective during pretraining (a language modeling loss, a distillation loss, and a cosine-distance loss) and achieves performance comparable to its larger BERT teacher.
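
Below is a minimal sketch of how such a triple loss objective can be combined. The function name, loss weights, and temperature are illustrative assumptions, not the original training code.

```py
import torch
import torch.nn.functional as F

def distilbert_pretraining_loss(
    student_logits, teacher_logits, student_hidden, teacher_hidden, labels,
    temperature=2.0, alpha_distill=5.0, alpha_mlm=2.0, alpha_cos=1.0,
):
    # Distillation loss: KL divergence between softened teacher and student distributions.
    distill_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Masked language modeling loss on the student's own predictions.
    mlm_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1), ignore_index=-100
    )

    # Cosine-distance loss aligning student and teacher hidden states.
    s = student_hidden.view(-1, student_hidden.size(-1))
    t = teacher_hidden.view(-1, teacher_hidden.size(-1))
    cos_loss = F.cosine_embedding_loss(s, t, torch.ones(s.size(0), device=s.device))

    # Weighted sum of the three losses (weights here are only an assumption).
    return alpha_distill * distill_loss + alpha_mlm * mlm_loss + alpha_cos * cos_loss
```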

You can find all the original DistilBERT checkpoints under the DistilBERT organization.

> [!TIP]
> Click on the DistilBERT models in the right sidebar for more examples of how to apply DistilBERT to different language tasks.

The examples below demonstrate how to classify text with [`Pipeline`], [`AutoModel`], and from the command line.

```py
import torch
from transformers import pipeline

classifier = pipeline(
    task="text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    torch_dtype=torch.float16,
    device=0
)

result = classifier("I love using Hugging Face Transformers!")
print(result)
# Output: [{'label': 'POSITIVE', 'score': 0.9998}]
```

```py
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "distilbert/distilbert-base-uncased-finetuned-sst-2-english",
)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert/distilbert-base-uncased-finetuned-sst-2-english",
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)
inputs = tokenizer("I love using Hugging Face Transformers!", return_tensors="pt").to("cuda")

with torch.no_grad():
    outputs = model(**inputs)

predicted_class_id = torch.argmax(outputs.logits, dim=-1).item()
predicted_label = model.config.id2label[predicted_class_id]
print(f"Predicted label: {predicted_label}")
```

```bash
echo -e "I love using Hugging Face Transformers!" | transformers-cli run --task text-classification --model distilbert-base-uncased-finetuned-sst-2-english
```

## Notes

- DistilBERT doesn't have `token_type_ids`, so you don't need to indicate which token belongs to which segment. Just separate your segments with the separation token `tokenizer.sep_token` (or `[SEP]`); see the example after this list.
- DistilBERT doesn't have options to select the input positions (`position_ids` input). This could be added if necessary though; just let us know if you need this option.
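
For example, encoding a sentence pair with the DistilBERT tokenizer returns only `input_ids` and `attention_mask`, with the two segments joined by `[SEP]`:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
encoded = tokenizer("How are you?", "I'm fine, thanks.")

print(encoded.keys())
# dict_keys(['input_ids', 'attention_mask'])  <- no token_type_ids
print(tokenizer.decode(encoded["input_ids"]))
# [CLS] how are you? [SEP] i'm fine, thanks. [SEP]
```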

## DistilBertConfig

[[autodoc]] DistilBertConfig

## DistilBertTokenizer

[[autodoc]] DistilBertTokenizer

## DistilBertTokenizerFast

[[autodoc]] DistilBertTokenizerFast

## DistilBertModel

[[autodoc]] DistilBertModel
    - forward

## DistilBertForMaskedLM

[[autodoc]] DistilBertForMaskedLM
    - forward

## DistilBertForSequenceClassification

[[autodoc]] DistilBertForSequenceClassification
    - forward

## DistilBertForMultipleChoice

[[autodoc]] DistilBertForMultipleChoice
    - forward

## DistilBertForTokenClassification

[[autodoc]] DistilBertForTokenClassification
    - forward

## DistilBertForQuestionAnswering

[[autodoc]] DistilBertForQuestionAnswering
    - forward

## TFDistilBertModel

[[autodoc]] TFDistilBertModel
    - call

## TFDistilBertForMaskedLM

[[autodoc]] TFDistilBertForMaskedLM
    - call

## TFDistilBertForSequenceClassification

[[autodoc]] TFDistilBertForSequenceClassification
    - call

## TFDistilBertForMultipleChoice

[[autodoc]] TFDistilBertForMultipleChoice
    - call

## TFDistilBertForTokenClassification

[[autodoc]] TFDistilBertForTokenClassification
    - call

## TFDistilBertForQuestionAnswering

[[autodoc]] TFDistilBertForQuestionAnswering
    - call

## FlaxDistilBertModel

[[autodoc]] FlaxDistilBertModel
    - __call__

## FlaxDistilBertForMaskedLM

[[autodoc]] FlaxDistilBertForMaskedLM
    - __call__

## FlaxDistilBertForSequenceClassification

[[autodoc]] FlaxDistilBertForSequenceClassification
    - __call__

## FlaxDistilBertForMultipleChoice

[[autodoc]] FlaxDistilBertForMultipleChoice
    - __call__

## FlaxDistilBertForTokenClassification

[[autodoc]] FlaxDistilBertForTokenClassification
    - __call__

## FlaxDistilBertForQuestionAnswering

[[autodoc]] FlaxDistilBertForQuestionAnswering
    - __call__