<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# SqueezeBERT

<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>

## Overview

The SqueezeBERT model was proposed in [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. It's a
bidirectional transformer similar to the BERT model. The key difference between the BERT architecture and the
SqueezeBERT architecture is that SqueezeBERT uses [grouped convolutions](https://blog.yani.io/filter-group-tutorial)
instead of fully-connected layers for the Q, K, V and FFN layers.
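To see why this substitution saves compute, here is a minimal, illustrative PyTorch sketch (not the actual SqueezeBERT implementation; the sizes are arbitrary) comparing a fully-connected projection with a grouped pointwise convolution:

```python
import torch
import torch.nn as nn

hidden_size, groups, seq_len = 768, 4, 128

# Standard fully-connected projection, as used for the Q/K/V layers in BERT.
dense = nn.Linear(hidden_size, hidden_size)

# Grouped 1x1 convolution: each output channel mixes only hidden_size/groups
# input channels, shrinking the projection's parameters and FLOPs by ~groups.
grouped = nn.Conv1d(hidden_size, hidden_size, kernel_size=1, groups=groups)

x = torch.randn(2, seq_len, hidden_size)                # (batch, seq, hidden)
y_dense = dense(x)                                      # (batch, seq, hidden)
y_grouped = grouped(x.transpose(1, 2)).transpose(1, 2)  # Conv1d expects (batch, channels, seq)

print(sum(p.numel() for p in dense.parameters()))    # 590,592
print(sum(p.numel() for p in grouped.parameters()))  # 148,224
```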
The abstract from the paper is the following:

*Humans read and write hundreds of billions of messages every day. Further, due to the availability of large datasets,
large computing systems, and better neural network models, natural language processing (NLP) technology has made
significant strides in understanding, proofreading, and organizing these messages. Thus, there is a significant
opportunity to deploy NLP in myriad applications to help web users, social networks, and businesses. In particular, we
consider smartphones and other mobile devices as crucial platforms for deploying NLP models at scale. However, today's
highly-accurate NLP neural network models such as BERT and RoBERTa are extremely computationally expensive, with
BERT-base taking 1.7 seconds to classify a text snippet on a Pixel 3 smartphone. In this work, we observe that methods
such as grouped convolutions have yielded significant speedups for computer vision networks, but many of these
techniques have not been adopted by NLP neural network designers. We demonstrate how to replace several operations in
self-attention layers with grouped convolutions, and we use this technique in a novel network architecture called
SqueezeBERT, which runs 4.3x faster than BERT-base on the Pixel 3 while achieving competitive accuracy on the GLUE test
set. The SqueezeBERT code will be released.*

This model was contributed by [forresti](https://huggingface.co/forresti).
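A minimal sketch for trying the model, assuming the `squeezebert/squeezebert-uncased` checkpoint from the Hub:

```python
from transformers import pipeline

# Fill-mask is the pretraining task, so it makes a quick smoke test.
mask_filler = pipeline("fill-mask", model="squeezebert/squeezebert-uncased")
predictions = mask_filler("The capital of France is [MASK].")
print(predictions[0]["token_str"])
```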
## Usage tips

- SqueezeBERT is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right
rather than on the left.
- SqueezeBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore
efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained
with a causal language modeling (CLM) objective are better in that regard.
- For best results when finetuning on sequence classification tasks, it is recommended to start with the
*squeezebert/squeezebert-mnli-headless* checkpoint, as shown in the sketch after this list.

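A minimal sketch of these tips in code, assuming the checkpoints named above (`num_labels=2` is an arbitrary placeholder for a binary task):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Pad on the right, as recommended for models with absolute position embeddings.
tokenizer = AutoTokenizer.from_pretrained(
    "squeezebert/squeezebert-uncased", padding_side="right"
)

# Start a sequence classification finetune from the headless MNLI checkpoint;
# the classification head is freshly initialized and still needs training.
model = AutoModelForSequenceClassification.from_pretrained(
    "squeezebert/squeezebert-mnli-headless", num_labels=2
)

batch = tokenizer(
    ["I love this movie!", "Meh, it was fine."],
    padding=True,
    truncation=True,
    return_tensors="pt",
)
logits = model(**batch).logits
print(logits.shape)  # torch.Size([2, 2])
```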
## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)

## SqueezeBertConfig

[[autodoc]] SqueezeBertConfig

## SqueezeBertTokenizer

[[autodoc]] SqueezeBertTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## SqueezeBertTokenizerFast

[[autodoc]] SqueezeBertTokenizerFast

## SqueezeBertModel

[[autodoc]] SqueezeBertModel

## SqueezeBertForMaskedLM

[[autodoc]] SqueezeBertForMaskedLM

## SqueezeBertForSequenceClassification

[[autodoc]] SqueezeBertForSequenceClassification

## SqueezeBertForMultipleChoice

[[autodoc]] SqueezeBertForMultipleChoice

## SqueezeBertForTokenClassification

[[autodoc]] SqueezeBertForTokenClassification

## SqueezeBertForQuestionAnswering

[[autodoc]] SqueezeBertForQuestionAnswering