
ALBERT
ALBERT was created to address problems like GPU/TPU memory limitations, longer training times, and unexpected model degradation in BERT. It uses two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT:
- Factorized embedding parameterization: The large vocabulary embedding matrix is decomposed into two smaller matrices, so the hidden size can grow without significantly increasing the number of parameters.
- Cross-layer parameter sharing: Instead of learning separate parameters for each transformer layer, ALBERT shares parameters across layers, further reducing the number of learnable weights.
ALBERT uses absolute position embeddings (like BERT), so inputs are padded on the right. The embedding size is 128, compared to 768 in BERT, and ALBERT can process a maximum of 512 tokens at a time.
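To see how the factorized embedding parameterization pays off, compare the parameter count of a single `V x H` embedding matrix against the `V x E + E x H` factorization. The sketch below is a minimal back-of-the-envelope calculation using the default `AlbertConfig` values; it assumes only the standard `vocab_size`, `embedding_size`, and `hidden_size` config attributes.

```py
from transformers import AlbertConfig

config = AlbertConfig()  # albert-base-v2 defaults: V=30000, E=128, H=768
V, E, H = config.vocab_size, config.embedding_size, config.hidden_size

# BERT-style embedding: one V x H matrix
untied = V * H

# ALBERT factorization: a V x E embedding followed by an E x H projection
factorized = V * E + E * H

print(f"V x H         = {untied:,} parameters")      # ~23.0M
print(f"V x E + E x H = {factorized:,} parameters")  # ~3.9M
```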
You can find all the original ALBERT checkpoints under the ALBERT community organization.
Tip
Click on the ALBERT models in the right sidebar for more examples of how to apply ALBERT to different language tasks.
The example below demonstrates how to predict the `[MASK]` token with [Pipeline], [AutoModel], and from the command line.
```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="fill-mask",
    model="albert-base-v2",
    torch_dtype=torch.float16,
    device=0
)
pipeline("Plants create [MASK] through a process known as photosynthesis.", top_k=5)
```
```py
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("albert/albert-base-v2")
model = AutoModelForMaskedLM.from_pretrained(
    "albert/albert-base-v2",
    torch_dtype=torch.float16,
    attn_implementation="sdpa",
    device_map="auto"
)

prompt = "Plants create energy through a process known as [MASK]."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)

mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
predictions = outputs.logits[0, mask_token_index]

top_k = torch.topk(predictions, k=5).indices.tolist()
for token_id in top_k[0]:
    print(f"Prediction: {tokenizer.decode([token_id])}")
```
```bash
echo -e "Plants create [MASK] through a process known as photosynthesis." | transformers run --task fill-mask --model albert-base-v2 --device 0
```
Notes
- Inputs should be padded on the right because BERT uses absolute position embeddings (see the padding sketch after this list).
- The embedding size `E` is different from the hidden size `H` because the embeddings are context independent (one embedding vector represents one token) while the hidden states are context dependent (one hidden state represents a sequence of tokens). The embedding matrix is also large because it is `V x E`, where `V` is the vocabulary size. As a result, it's more logical to have `H >> E`; if `E < H`, the model has fewer parameters.
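Because the position embeddings are absolute, right padding keeps every real token at its original position index. A minimal sketch of checking this with the tokenizer, assuming the same albert/albert-base-v2 checkpoint as above:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("albert/albert-base-v2")
print(tokenizer.padding_side)  # "right" for ALBERT

# The shorter sequence is padded on the right, so its real tokens
# keep the same absolute positions as in the unpadded case.
batch = tokenizer(
    ["A short sentence.", "A somewhat longer sentence for comparison."],
    padding=True,
    return_tensors="pt",
)
print(batch["input_ids"])
print(batch["attention_mask"])
```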
Resources
The resources provided in the following sections consist of a list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ALBERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Text classification
- [AlbertForSequenceClassification] is supported by this example script.
- [TFAlbertForSequenceClassification] is supported by this example script.
- [FlaxAlbertForSequenceClassification] is supported by this example script and notebook.
- Check the Text classification task guide on how to use the model.
Token classification
- [AlbertForTokenClassification] is supported by this example script.
- [TFAlbertForTokenClassification] is supported by this example script and notebook.
- [FlaxAlbertForTokenClassification] is supported by this example script.
- Token classification chapter of the 🤗 Hugging Face Course.
- Check the Token classification task guide on how to use the model.
Masked language modeling
- [AlbertForMaskedLM] is supported by this example script and notebook.
- [TFAlbertForMaskedLM] is supported by this example script and notebook.
- [FlaxAlbertForMaskedLM] is supported by this example script and notebook.
- Masked language modeling chapter of the 🤗 Hugging Face Course.
- Check the Masked language modeling task guide on how to use the model.
Question answering
- [AlbertForQuestionAnswering] is supported by this example script and notebook.
- [TFAlbertForQuestionAnswering] is supported by this example script and notebook.
- [FlaxAlbertForQuestionAnswering] is supported by this example script.
- Question answering chapter of the 🤗 Hugging Face Course.
- Check the Question answering task guide on how to use the model.
Multiple choice
- [AlbertForMultipleChoice] is supported by this example script and notebook.
- [TFAlbertForMultipleChoice] is supported by this example script and notebook.
- Check the Multiple choice task guide on how to use the model.
AlbertConfig
autodoc AlbertConfig
AlbertTokenizer
autodoc AlbertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
AlbertTokenizerFast
autodoc AlbertTokenizerFast
Albert specific outputs
autodoc models.albert.modeling_albert.AlbertForPreTrainingOutput
autodoc models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput
AlbertModel
autodoc AlbertModel - forward
AlbertForPreTraining
autodoc AlbertForPreTraining - forward
AlbertForMaskedLM
autodoc AlbertForMaskedLM - forward
AlbertForSequenceClassification
autodoc AlbertForSequenceClassification - forward
AlbertForMultipleChoice
autodoc AlbertForMultipleChoice - forward
AlbertForTokenClassification
autodoc AlbertForTokenClassification - forward
AlbertForQuestionAnswering
autodoc AlbertForQuestionAnswering - forward
TFAlbertModel
autodoc TFAlbertModel - call
TFAlbertForPreTraining
autodoc TFAlbertForPreTraining - call
TFAlbertForMaskedLM
autodoc TFAlbertForMaskedLM - call
TFAlbertForSequenceClassification
autodoc TFAlbertForSequenceClassification - call
TFAlbertForMultipleChoice
autodoc TFAlbertForMultipleChoice - call
TFAlbertForTokenClassification
autodoc TFAlbertForTokenClassification - call
TFAlbertForQuestionAnswering
autodoc TFAlbertForQuestionAnswering - call
FlaxAlbertModel
autodoc FlaxAlbertModel - call
FlaxAlbertForPreTraining
autodoc FlaxAlbertForPreTraining - call
FlaxAlbertForMaskedLM
autodoc FlaxAlbertForMaskedLM - call
FlaxAlbertForSequenceClassification
autodoc FlaxAlbertForSequenceClassification - call
FlaxAlbertForMultipleChoice
autodoc FlaxAlbertForMultipleChoice - call
FlaxAlbertForTokenClassification
autodoc FlaxAlbertForTokenClassification - call
FlaxAlbertForQuestionAnswering
autodoc FlaxAlbertForQuestionAnswering - call