PyTorch TensorFlow Flax SDPA

ALBERT

ALBERT is designed to address the memory limitations of scaling and training BERT. It adds two parameter reduction techniques. The first, factorized embedding parameterization, splits the larger vocabulary embedding matrix into two smaller matrices so you can grow the hidden size without adding a lot more parameters. The second, cross-layer parameter sharing, allows layers to share parameters, which keeps the number of learnable parameters lower.
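
Both techniques show up directly in a checkpoint's configuration. The snippet below is a minimal sketch that inspects albert/albert-base-v2; the values in the comments (an embedding size of 128 vs. a hidden size of 768, and a single group of shared layer weights) are the ALBERT-base defaults.

from transformers import AutoConfig

config = AutoConfig.from_pretrained("albert/albert-base-v2")

# Factorized embedding parameterization: the embedding size is much smaller than the hidden size
print(config.embedding_size, config.hidden_size)           # 128 768

# Cross-layer parameter sharing: 12 transformer layers, but only 1 group of shared weights
print(config.num_hidden_layers, config.num_hidden_groups)  # 12 1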

You can find all the original ALBERT checkpoints under the ALBERT community organization.

Tip

Click on the ALBERT models in the right sidebar for more examples of how to apply ALBERT to different language tasks.

The example below demonstrates how to predict the [MASK] token with [Pipeline], [AutoModel], and from the command line.

# Pipeline
import torch
from transformers import pipeline

pipeline = pipeline(
    task="fill-mask",
    model="albert/albert-base-v2",
    torch_dtype=torch.float16,
    device=0
)
pipeline("Plants create [MASK] through a process known as photosynthesis.", top_k=5)

# AutoModel
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("albert/albert-base-v2")
model = AutoModelForMaskedLM.from_pretrained(
    "albert/albert-base-v2",
    torch_dtype=torch.float16,
    attn_implementation="sdpa",
    device_map="auto"
)

prompt = "Plants create energy through a process known as [MASK]."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device) 

with torch.no_grad():
    outputs = model(**inputs)
    mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
    predictions = outputs.logits[0, mask_token_index]

top_k = torch.topk(predictions, k=5).indices.tolist()
for token_id in top_k[0]:
    print(f"Prediction: {tokenizer.decode([token_id])}")

# transformers CLI
echo -e "Plants create [MASK] through a process known as photosynthesis." | transformers run --task fill-mask --model albert/albert-base-v2 --device 0

Notes

  • Inputs should be padded on the right because ALBERT, like BERT, uses absolute position embeddings. ALBERT can process at most 512 tokens at a time.
  • The embedding size E is different from the hidden size H because the embeddings are context independent (one embedding vector represents one token) while the hidden states are context dependent (one hidden state represents a sequence of tokens). The embedding matrix is V x E, where V is the vocabulary size, so it makes sense to have H >> E; choosing E < H reduces the number of parameters (see the sketch below).
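
As a back-of-the-envelope sketch of the parameter savings from the factorized embedding, assuming the albert/albert-base-v2 sizes (V = 30000, E = 128, H = 768):

V, E, H = 30_000, 128, 768      # vocabulary size, embedding size, hidden size

untied = V * H                  # a single V x H embedding matrix: 23,040,000 parameters
factorized = V * E + E * H      # V x E embeddings plus an E x H projection: 3,938,304 parameters

print(untied, factorized)       # the factorization removes roughly 83% of the embedding parameters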

Resources

The resources provided in the following sections consist of a list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ALBERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

Multiple choice

AlbertConfig

autodoc AlbertConfig

AlbertTokenizer

autodoc AlbertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary

AlbertTokenizerFast

autodoc AlbertTokenizerFast

Albert specific outputs

autodoc models.albert.modeling_albert.AlbertForPreTrainingOutput

autodoc models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput

AlbertModel

autodoc AlbertModel - forward

AlbertForPreTraining

autodoc AlbertForPreTraining - forward

AlbertForMaskedLM

autodoc AlbertForMaskedLM - forward

AlbertForSequenceClassification

autodoc AlbertForSequenceClassification - forward

AlbertForMultipleChoice

autodoc AlbertForMultipleChoice - forward

AlbertForTokenClassification

autodoc AlbertForTokenClassification - forward

AlbertForQuestionAnswering

autodoc AlbertForQuestionAnswering - forward

TFAlbertModel

autodoc TFAlbertModel - call

TFAlbertForPreTraining

autodoc TFAlbertForPreTraining - call

TFAlbertForMaskedLM

autodoc TFAlbertForMaskedLM - call

TFAlbertForSequenceClassification

autodoc TFAlbertForSequenceClassification - call

TFAlbertForMultipleChoice

autodoc TFAlbertForMultipleChoice - call

TFAlbertForTokenClassification

autodoc TFAlbertForTokenClassification - call

TFAlbertForQuestionAnswering

autodoc TFAlbertForQuestionAnswering - call

FlaxAlbertModel

autodoc FlaxAlbertModel - __call__

FlaxAlbertForPreTraining

autodoc FlaxAlbertForPreTraining - __call__

FlaxAlbertForMaskedLM

autodoc FlaxAlbertForMaskedLM - __call__

FlaxAlbertForSequenceClassification

autodoc FlaxAlbertForSequenceClassification - __call__

FlaxAlbertForMultipleChoice

autodoc FlaxAlbertForMultipleChoice - __call__

FlaxAlbertForTokenClassification

autodoc FlaxAlbertForTokenClassification - __call__

FlaxAlbertForQuestionAnswering

autodoc FlaxAlbertForQuestionAnswering - __call__