
🤗 Transformers

State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

🤗 Transformers provides APIs and tools to easily download and train state-of-the-art pretrained models. Using pretrained models can reduce your compute costs and carbon footprint, and save you the time and resources required to train a model from scratch. These models support common tasks in different modalities, such as:

📝 Natural Language Processing: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.
🖼️ Computer Vision: image classification, object detection, and segmentation.
🗣️ Audio: automatic speech recognition and audio classification.
🐙 Multimodal: table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
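
As a quick illustration of the download-and-use workflow described above, here is a minimal sketch using the pipeline API; the task and example sentence are just placeholders, and the default checkpoint is downloaded automatically from the Hub.

```python
from transformers import pipeline

# Download a pretrained sentiment-analysis model and tokenizer from the Hub
# (a default checkpoint is selected when none is specified).
classifier = pipeline("sentiment-analysis")

# Run inference on an example sentence.
print(classifier("We are very happy to show you the 🤗 Transformers library."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```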

🤗 Transformers supports framework interoperability between PyTorch, TensorFlow, and JAX. This provides the flexibility to use a different framework at each stage of a model's life: train a model in three lines of code in one framework, and load it for inference in another. Models can also be exported to formats such as ONNX and TorchScript for deployment in production environments.
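
As a sketch of that interoperability, the snippet below saves a checkpoint loaded in PyTorch and reloads the same weights in TensorFlow; the checkpoint name and local path are placeholders, and both torch and tensorflow need to be installed.

```python
from transformers import AutoModel, TFAutoModel

# Load a pretrained checkpoint in PyTorch and save it to a local directory.
pt_model = AutoModel.from_pretrained("bert-base-uncased")
pt_model.save_pretrained("./bert-checkpoint")

# Reload the exact same weights in TensorFlow, converting from the PyTorch
# checkpoint on the fly.
tf_model = TFAutoModel.from_pretrained("./bert-checkpoint", from_pt=True)
```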

Join the growing community on the Hub, forum, or Discord today!

If you are looking for custom support from the Hugging Face team, check out the HuggingFace Expert Acceleration Program.

Contents

The documentation is organized into five sections:

  • GET STARTED provides a quick tour of the library and installation instructions to get up and running.

  • TUTORIALS are a great place to start if you're a beginner. This section will help you gain the basic skills you need to start using the library.

  • HOW-TO GUIDES show you how to achieve a specific goal, like finetuning a pretrained model for language modeling or writing and sharing a custom model.

  • CONCEPTUAL GUIDES offer more discussion and explanation of the underlying concepts and ideas behind models, tasks, and the design philosophy of 🤗 Transformers.

  • API describes all classes and functions:

    • MAIN CLASSES details the most important classes like configuration, model, tokenizer, and pipeline (see the sketch after this list).
    • MODELS details the classes and functions related to each model implemented in the library.
    • INTERNAL HELPERS details utility classes and functions used internally.
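
As a rough sketch of how those main classes fit together, a configuration, tokenizer, and model can be loaded explicitly rather than through a pipeline; the checkpoint name and input text below are placeholders chosen for illustration.

```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint

# Configuration: architecture metadata and hyperparameters.
config = AutoConfig.from_pretrained(checkpoint)

# Tokenizer: turns raw text into model inputs.
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
inputs = tokenizer("Transformers makes this easy.", return_tensors="pt")

# Model: the pretrained weights themselves.
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
outputs = model(**inputs)
print(outputs.logits.shape)
```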

Supported models and frameworks

The table below represents the current support in the library for each of those models, indicating whether they have support in PyTorch, TensorFlow, and/or JAX (via Flax).

Model PyTorch support TensorFlow support Flax support
ALBERT
ALIGN
AltCLIP
Audio Spectrogram Transformer
Autoformer
Bark
BART
BARThez
BARTpho
BEiT
BERT
Bert Generation
BertJapanese
BERTweet
BigBird
BigBird-Pegasus
BioGpt
BiT
Blenderbot
BlenderbotSmall
BLIP
BLIP-2
BLOOM
BORT
BridgeTower
BROS
ByT5
CamemBERT
CANINE
Chinese-CLIP
CLAP
CLIP
CLIPSeg
CLVP
CodeGen
CodeLlama
Conditional DETR
ConvBERT
ConvNeXT
ConvNeXTV2
CPM
CPM-Ant
CTRL
CvT
Data2VecAudio
Data2VecText
Data2VecVision
DeBERTa
DeBERTa-v2
Decision Transformer
Deformable DETR
DeiT
DePlot
DETA
DETR
DialoGPT
DiNAT
DINOv2
DistilBERT
DiT
DonutSwin
DPR
DPT
EfficientFormer
EfficientNet
ELECTRA
EnCodec
Encoder decoder
ERNIE
ErnieM
ESM
FairSeq Machine-Translation
Falcon
FLAN-T5
FLAN-UL2
FlauBERT
FLAVA
FNet
FocalNet
Funnel Transformer
Fuyu
GIT
GLPN
GPT Neo
GPT NeoX
GPT NeoX Japanese
GPT-J
GPT-Sw3
GPTBigCode
GPTSAN-japanese
Graphormer
GroupViT
HerBERT
Hubert
I-BERT
IDEFICS
ImageGPT
Informer
InstructBLIP
Jukebox
KOSMOS-2
LayoutLM
LayoutLMv2
LayoutLMv3
LayoutXLM
LED
LeViT
LiLT
LLaMA
Llama2
Longformer
LongT5
LUKE
LXMERT
M-CTC-T
M2M100
Marian
MarkupLM
Mask2Former
MaskFormer
MatCha
mBART
mBART-50
MEGA
Megatron-BERT
Megatron-GPT2
MGP-STR
Mistral
mLUKE
MMS
MobileBERT
MobileNetV1
MobileNetV2
MobileViT
MobileViTV2
MPNet
MPT
MRA
MT5
MusicGen
MVP
NAT
Nezha
NLLB
NLLB-MOE
Nougat
Nyströmformer
OneFormer
OpenAI GPT
OpenAI GPT-2
OpenLlama
OPT
OWL-ViT
OWLv2
Pegasus
PEGASUS-X
Perceiver
Persimmon
PhoBERT
Pix2Struct
PLBart
PoolFormer
Pop2Piano
ProphetNet
PVT
QDQBert
RAG
REALM
Reformer
RegNet
RemBERT
ResNet
RetriBERT
RoBERTa
RoBERTa-PreLayerNorm
RoCBert
RoFormer
RWKV
SAM
SeamlessM4T
SegFormer
SEW
SEW-D
Speech Encoder decoder
Speech2Text
SpeechT5
Splinter
SqueezeBERT
SwiftFormer
Swin Transformer
Swin Transformer V2
Swin2SR
SwitchTransformers
T5
T5v1.1
Table Transformer
TAPAS
TAPEX
Time Series Transformer
TimeSformer
Trajectory Transformer
Transformer-XL
TrOCR
TVLT
UL2
UMT5
UniSpeech
UniSpeechSat
UPerNet
VAN
VideoMAE
ViLT
Vision Encoder decoder
VisionTextDualEncoder
VisualBERT
ViT
ViT Hybrid
VitDet
ViTMAE
ViTMatte
ViTMSN
VITS
ViViT
Wav2Vec2
Wav2Vec2-Conformer
Wav2Vec2Phoneme
WavLM
Whisper
X-CLIP
X-MOD
XGLM
XLM
XLM-ProphetNet
XLM-RoBERTa
XLM-RoBERTa-XL
XLM-V
XLNet
XLS-R
XLSR-Wav2Vec2
YOLOS
YOSO