
* Initial commit
* Just a copy of modeling_idefics.py that will be ported to TF
* Prepend TF to the names of all classes; convert PyTorch ops to TF (not all operations are converted yet)
* Add TF imports
* Add autotranslated files
* Add TF classes to model_tf_auto.py
* Add the TF classes in model_doc
* Include auto-translated code
* Adopted from auto-translated version
* Add a forgotten super().build
* Add test code for TF version
* Fix indentation and load PyTorch weights for now
* Some fixes. Many tests are still failing but some are passing now. I have added TODOs for some of the hacks I made to unblock me and will address them soon. I have processing_idefics.py hacked in my view to support TF temporarily.
* Add ALL_LAYERNORM_LAYERS to match PyTorch
* Revert "Add ALL_LAYERNORM_LAYERS to match PyTorch". This reverts commit 7e0a35119b4d7a6284d04d8c543fba1b29e573c9, as it is not needed in the TF implementation.
* Fix freeze_relevant_params()
* Some more fixes
* Fix test_attention_outputs
* Add TF support to processing_idefics.py: it now supports both PyTorch and TF. test_processor_idefics.py for PyTorch is passing, so I didn't break anything, but there are still some issues with TF. I also need to add TF tests in test_processor_idefics.py.
* Pass return_tensors to image processing code and fix test
* Pass return_tensors to the image processor __init__
* Fix several test cases: make the input to some forward passes of type `TFModelInputType`, decorate the main layer forward pass with `@unpack_inputs`, decorate the main layer with `@keras_serializable`, and pass `inputs` to TFIdeficsModel
* Some more fixes forgotten in the last commit
* Fix processing code and vision_tf.py
* Fix perceiver bug
* Import from
* Auto-add build() methods + style pass
* Fix build() errors due to `None` being passed as shape to some layers
* Change name in TFIdeficsForVisionText2Text to attribute in IdeficsForVisionText2Text
* Fix PyTorch weight loading for TF2: there were a lot of missing `name=` arguments in the weight initialization code
* Attempt to fix CI
* Add back accidentally removed line
* Remove torch-specific stuff from the TF test file
* make fix-copies, make style, remove autotranslated files
* Fixes to imports/docstrings
* Let's try the from future import in desperation
* Fix the core random_attention_mask fn to match the torch/flax behaviour
* Clean random_attention_mask up correctly
* Remove torch-only test
* Fix loss shape, couple of nits
* make style
* Don't test for OOB embeddings because IDEFICS uses those deliberately
* Fix loss computation to handle masking
* Fix test failures when flattening
* Fix some test failures: add the cross-attention gate, which was missing and wasn't being passed around, and fix overwriting of image_attention_mask due to a hack I had for dummy inputs
* Add a proper stateless scaled_dot_product_attention
* make style
* Add missing attribute from the PyTorch version
* Small cleanups to decoupledlinearlayer in case that helps
* Pass epsilon to LayerNormalization
* Attempt to fix PyTorch weight cross-loading for TFIdeficsEmbedding
* Fix a bug in TFIdeficsGatedCrossAttentionLayer
* Patch up build() methods
* Constant self.inv_freq
* Constant self.inv_freq
* First working version: the TF implementation works now. There was a bug in TFIdeficsDecoupledLinear where the weights were mis-initialized as (in_features, out_features) when they should be (out_features, in_features). I have tested this so far with tiny-random and idefics-9b-instruct and it gives correct output. I also dumped the final outputs for both PyTorch and TF and they are identical.
* Fix some test failures
* Remove print statement
* Fix return_tensors
* Fix CI test failure check_code_quality
* Attempt to fix CI failures by running `make fixup`. The hardcoded IDs in test_modeling_tf_idefics.py are for the integration test, make that file unreadable, and should probably be moved to a separate file.
* Attempt to fix tests_pr_documentation_tests
* Fix a test failure in test_image_processing_idefics.py
* Fix test test_pt_tf_model_equivalence
* Fix a few failures
* Tiny fix
* Some minor fixes
* Remove a duplicate test
* Override a few test failures for IDEFICS: `test_keras_save_load` is passing now; `test_compile_tf_model` is still failing
* Fix processing_idefics.py after rebase
* Guard import keras with is_tf_available
* Fix check code quality
* Fix check code quality
* Minor fixes
* Skip test_save_load temporarily. This test passed on my local box but fails on the CI; skipping for now to see if there are other remaining failures on the CI.
* Run `ruff format tests src utils`
* Fix last failing test, `test_compile_tf_model`
* Add fixes for vision_tf.py, which I forgot to include in the last commit
* Minor fixes
* Replace "<<<" with "<<" for doc tests: IDEFICS-9B is too big for the doctest runner, so don't run it there
* Make code more readable
* Fix bug after code review: I added a layer_norm_eps to IdeficsConfig but I don't even need it, since the vision config has a layer_norm_eps
* Fix after code review: use original code tokenizer.convert_tokens_to_ids
* Keep PyTorch as the default return_tensors
* Fixes to modeling_tf after code review
* Fixes from code review: remove all references to `TF_IDEFICS_PRETRAINED_MODEL_ARCHIVE_LIST`, and pass 1e-5 to LayerNormalization in perceiver
* Run ruff
* Undo a change
* Refactor processing code after Matt's suggestion
* Remove TODOs that aren't needed anymore
* For PyTorch, use the original PyTorch processing code from main. Since this PR is a TF port, it shouldn't make any modifications to the PyTorch IDEFICS code. This change undoes the PyTorch processing modifications I made and uses the original code from main.
* Update tests/models/idefics/test_modeling_idefics.py
* Update tests/models/idefics/test_modeling_tf_idefics.py
* Add missing imports for is_pt_tf_cross_test
* [DO NOT MERGE]: This is a commit for debugging and will be reverted. The cross test `test_pt_tf_model_equivalence` passes locally but fails when running on the CI. This commit is to help debug that and will be reverted.
* Revert "[DO NOT MERGE]: This is a commit for debugging and will be reverted". This reverts commit 8f0d709ec5bd46685fb0b4259d914ffee794875b.
* [DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted
* [DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted
* Revert "[DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted". This reverts commit 998cc38b8c3d313bf5e5eb55a7f5b7b881897b89.
* Revert "[DO NOT MERGE]: This commit is for debugging a CI failure and will be reverted". This reverts commit 1c695ac4219c4ae4d39b330b01744dc27deb7dd4.
* Don't skip test_save_load. IIRC test_save_load was also failing on the CI but not on my local box; it might be easier to debug that on the CI first than the cross tests.
* Debugging commit, will be reverted
* Revert "Debugging commit, will be reverted". This reverts commit 8eafc8e41e20c4e95a3a90834f06a6e9f445e2d5.
* Override `test_save_load` and push model to save. Maybe this will help me repro this weird bug.
* Pass my repo_id
* Add endpoint
* Pass a temp (write) token just for this CI
* Undo last few commits, still pushing to hub for model debugging. The issue seems to be with save_pretrained(): when I looked at the model saved from the CI test failure, it is basically empty and has no weights. `self.save_weights(..)` seems to be failing in save_pretrained but needs more debugging.
* Add logging to modeling tf utils, will be reverted, just for debugging
* Debugging, will revert
* Revert "Debugging, will revert". This reverts commit 9d0d3075fb7c82d8cde3a5c76bc8f3876c5c55d3.
* Revert "Add logging to modeling tf utils, will be reverted just for debugging". This reverts commit 774b6b7b1c17b3ce5d7634ade768f2f686cee617.
* Remove `test_save_load`. The CI failures are gone after my latest rebase, no idea why, but I was still saving the model to my hub on HF and the tf_model.h5 file now has everything.
* Run make fix-copies
* Run ruff format tests src utils
* Debugging commit, will be reverted
* Run ruff, also trigger CI run
* Run ruff again
* Undo debugging commit

---------

Co-authored-by: Matt <rocketknight1@gmail.com>
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
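Of the changes above, the "proper stateless scaled_dot_product_attention" is the easiest to picture. Below is a minimal sketch of what a stateless (no trainable weights) attention helper looks like in TensorFlow; the shapes and the additive-mask convention are illustrative assumptions, not the actual TFIdefics code:

```python
import tensorflow as tf

def scaled_dot_product_attention(query, key, value, attention_mask=None):
    """Stateless attention helper: a plain function with no trainable state."""
    # query/key/value: [batch, num_heads, seq_len, head_dim] (assumed layout)
    head_dim = tf.cast(tf.shape(query)[-1], query.dtype)
    scores = tf.matmul(query, key, transpose_b=True) / tf.math.sqrt(head_dim)
    if attention_mask is not None:
        # Additive mask: large negative values at positions to be ignored.
        scores = scores + attention_mask
    weights = tf.nn.softmax(scores, axis=-1)
    return tf.matmul(weights, value)
```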
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# 🤗 Transformers

State-of-the-art Machine Learning for [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), and [JAX](https://jax.readthedocs.io/en/latest/).

🤗 Transformers provides APIs and tools to easily download and train state-of-the-art pretrained models. Using pretrained models can reduce your compute costs and carbon footprint, and save you the time and resources required to train a model from scratch. These models support common tasks in different modalities, such as:

📝 **Natural Language Processing**: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.<br>
🖼️ **Computer Vision**: image classification, object detection, and segmentation.<br>
🗣️ **Audio**: automatic speech recognition and audio classification.<br>
🐙 **Multimodal**: table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
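
Many of these tasks are a few lines of code away with the `pipeline` API. A minimal sketch, letting `pipeline` pick its default checkpoint for the task:

```python
from transformers import pipeline

# Text classification with the pipeline's default checkpoint.
classifier = pipeline("sentiment-analysis")
print(classifier("Transformers makes state-of-the-art models easy to use."))
```
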
🤗 Transformers supports framework interoperability between PyTorch, TensorFlow, and JAX. This provides the flexibility to use a different framework at each stage of a model's life: train a model in three lines of code in one framework, and load it for inference in another. Models can also be exported to formats like ONNX and TorchScript for deployment in production environments.
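
As a small sketch of that workflow, a checkpoint saved with a PyTorch class can be reloaded directly in TensorFlow (`./my-checkpoint` is a hypothetical local path):

```python
from transformers import AutoModel, TFAutoModel

# Save a checkpoint with the PyTorch class...
pt_model = AutoModel.from_pretrained("bert-base-uncased")
pt_model.save_pretrained("./my-checkpoint")

# ...and reload it in TensorFlow; from_pt=True converts the PyTorch weights.
tf_model = TFAutoModel.from_pretrained("./my-checkpoint", from_pt=True)
```
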
Join the growing community on the [Hub](https://huggingface.co/models), [forum](https://discuss.huggingface.co/), or [Discord](https://discord.com/invite/JfAtkvEtRb) today!
## If you are looking for custom support from the Hugging Face team
<a target="_blank" href="https://huggingface.co/support">
    <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a>
## Contents
The documentation is organized into five sections:
- **GET STARTED** provides a quick tour of the library and installation instructions to get up and running.
- **TUTORIALS** are a great place to start if you're a beginner. This section will help you gain the basic skills you need to start using the library.
- **HOW-TO GUIDES** show you how to achieve a specific goal, like finetuning a pretrained model for language modeling or writing and sharing a custom model.
- **CONCEPTUAL GUIDES** offer more discussion and explanation of the underlying concepts and ideas behind models, tasks, and the design philosophy of 🤗 Transformers.
- **API** describes all classes and functions:
  - **MAIN CLASSES** details the most important classes like configuration, model, tokenizer, and pipeline.
  - **MODELS** details the classes and functions related to each model implemented in the library.
  - **INTERNAL HELPERS** details utility classes and functions used internally.
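
As a quick illustration of those main classes, a minimal round trip through configuration, tokenizer, and model (the checkpoint name is only an example):

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Configuration, tokenizer, and model are the core classes documented in MAIN CLASSES.
config = AutoConfig.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello world!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```
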
## Supported models and frameworks
The table below shows the current level of support in the library for each of these models: whether they are supported in PyTorch, TensorFlow, and/or JAX (via Flax).
<!--This table is updated automatically from the auto modules with _make fix-copies_. Do not update manually!-->
| Model | PyTorch support | TensorFlow support | Flax Support |
|:---:|:---:|:---:|:---:|
| [ALBERT](model_doc/albert) | ✅ | ✅ | ✅ |
| [ALIGN](model_doc/align) | ✅ | ❌ | ❌ |
| [AltCLIP](model_doc/altclip) | ✅ | ❌ | ❌ |
| [Audio Spectrogram Transformer](model_doc/audio-spectrogram-transformer) | ✅ | ❌ | ❌ |
| [Autoformer](model_doc/autoformer) | ✅ | ❌ | ❌ |
| [Bark](model_doc/bark) | ✅ | ❌ | ❌ |
| [BART](model_doc/bart) | ✅ | ✅ | ✅ |
| [BARThez](model_doc/barthez) | ✅ | ✅ | ✅ |
| [BARTpho](model_doc/bartpho) | ✅ | ✅ | ✅ |
| [BEiT](model_doc/beit) | ✅ | ❌ | ✅ |
| [BERT](model_doc/bert) | ✅ | ✅ | ✅ |
| [Bert Generation](model_doc/bert-generation) | ✅ | ❌ | ❌ |
| [BertJapanese](model_doc/bert-japanese) | ✅ | ✅ | ✅ |
| [BERTweet](model_doc/bertweet) | ✅ | ✅ | ✅ |
| [BigBird](model_doc/big_bird) | ✅ | ❌ | ✅ |
| [BigBird-Pegasus](model_doc/bigbird_pegasus) | ✅ | ❌ | ❌ |
| [BioGpt](model_doc/biogpt) | ✅ | ❌ | ❌ |
| [BiT](model_doc/bit) | ✅ | ❌ | ❌ |
| [Blenderbot](model_doc/blenderbot) | ✅ | ✅ | ✅ |
| [BlenderbotSmall](model_doc/blenderbot-small) | ✅ | ✅ | ✅ |
| [BLIP](model_doc/blip) | ✅ | ✅ | ❌ |
| [BLIP-2](model_doc/blip-2) | ✅ | ❌ | ❌ |
| [BLOOM](model_doc/bloom) | ✅ | ❌ | ✅ |
| [BORT](model_doc/bort) | ✅ | ✅ | ✅ |
| [BridgeTower](model_doc/bridgetower) | ✅ | ❌ | ❌ |
| [BROS](model_doc/bros) | ✅ | ❌ | ❌ |
| [ByT5](model_doc/byt5) | ✅ | ✅ | ✅ |
| [CamemBERT](model_doc/camembert) | ✅ | ✅ | ❌ |
| [CANINE](model_doc/canine) | ✅ | ❌ | ❌ |
| [Chinese-CLIP](model_doc/chinese_clip) | ✅ | ❌ | ❌ |
| [CLAP](model_doc/clap) | ✅ | ❌ | ❌ |
| [CLIP](model_doc/clip) | ✅ | ✅ | ✅ |
| [CLIPSeg](model_doc/clipseg) | ✅ | ❌ | ❌ |
| [CLVP](model_doc/clvp) | ✅ | ❌ | ❌ |
| [CodeGen](model_doc/codegen) | ✅ | ❌ | ❌ |
| [CodeLlama](model_doc/code_llama) | ✅ | ❌ | ✅ |
| [Cohere](model_doc/cohere) | ✅ | ❌ | ❌ |
| [Conditional DETR](model_doc/conditional_detr) | ✅ | ❌ | ❌ |
| [ConvBERT](model_doc/convbert) | ✅ | ✅ | ❌ |
| [ConvNeXT](model_doc/convnext) | ✅ | ✅ | ❌ |
| [ConvNeXTV2](model_doc/convnextv2) | ✅ | ✅ | ❌ |
| [CPM](model_doc/cpm) | ✅ | ✅ | ✅ |
| [CPM-Ant](model_doc/cpmant) | ✅ | ❌ | ❌ |
| [CTRL](model_doc/ctrl) | ✅ | ✅ | ❌ |
| [CvT](model_doc/cvt) | ✅ | ✅ | ❌ |
| [Data2VecAudio](model_doc/data2vec) | ✅ | ❌ | ❌ |
| [Data2VecText](model_doc/data2vec) | ✅ | ❌ | ❌ |
| [Data2VecVision](model_doc/data2vec) | ✅ | ✅ | ❌ |
| [DBRX](model_doc/dbrx) | ✅ | ❌ | ❌ |
| [DeBERTa](model_doc/deberta) | ✅ | ✅ | ❌ |
| [DeBERTa-v2](model_doc/deberta-v2) | ✅ | ✅ | ❌ |
| [Decision Transformer](model_doc/decision_transformer) | ✅ | ❌ | ❌ |
| [Deformable DETR](model_doc/deformable_detr) | ✅ | ❌ | ❌ |
| [DeiT](model_doc/deit) | ✅ | ✅ | ❌ |
| [DePlot](model_doc/deplot) | ✅ | ❌ | ❌ |
| [Depth Anything](model_doc/depth_anything) | ✅ | ❌ | ❌ |
| [DETA](model_doc/deta) | ✅ | ❌ | ❌ |
| [DETR](model_doc/detr) | ✅ | ❌ | ❌ |
| [DialoGPT](model_doc/dialogpt) | ✅ | ✅ | ✅ |
| [DiNAT](model_doc/dinat) | ✅ | ❌ | ❌ |
| [DINOv2](model_doc/dinov2) | ✅ | ❌ | ❌ |
| [DistilBERT](model_doc/distilbert) | ✅ | ✅ | ✅ |
| [DiT](model_doc/dit) | ✅ | ❌ | ✅ |
| [DonutSwin](model_doc/donut) | ✅ | ❌ | ❌ |
| [DPR](model_doc/dpr) | ✅ | ✅ | ❌ |
| [DPT](model_doc/dpt) | ✅ | ❌ | ❌ |
| [EfficientFormer](model_doc/efficientformer) | ✅ | ✅ | ❌ |
| [EfficientNet](model_doc/efficientnet) | ✅ | ❌ | ❌ |
| [ELECTRA](model_doc/electra) | ✅ | ✅ | ✅ |
| [EnCodec](model_doc/encodec) | ✅ | ❌ | ❌ |
| [Encoder decoder](model_doc/encoder-decoder) | ✅ | ✅ | ✅ |
| [ERNIE](model_doc/ernie) | ✅ | ❌ | ❌ |
| [ErnieM](model_doc/ernie_m) | ✅ | ❌ | ❌ |
| [ESM](model_doc/esm) | ✅ | ✅ | ❌ |
| [FairSeq Machine-Translation](model_doc/fsmt) | ✅ | ❌ | ❌ |
| [Falcon](model_doc/falcon) | ✅ | ❌ | ❌ |
| [FastSpeech2Conformer](model_doc/fastspeech2_conformer) | ✅ | ❌ | ❌ |
| [FLAN-T5](model_doc/flan-t5) | ✅ | ✅ | ✅ |
| [FLAN-UL2](model_doc/flan-ul2) | ✅ | ✅ | ✅ |
| [FlauBERT](model_doc/flaubert) | ✅ | ✅ | ❌ |
| [FLAVA](model_doc/flava) | ✅ | ❌ | ❌ |
| [FNet](model_doc/fnet) | ✅ | ❌ | ❌ |
| [FocalNet](model_doc/focalnet) | ✅ | ❌ | ❌ |
| [Funnel Transformer](model_doc/funnel) | ✅ | ✅ | ❌ |
| [Fuyu](model_doc/fuyu) | ✅ | ❌ | ❌ |
| [Gemma](model_doc/gemma) | ✅ | ❌ | ✅ |
| [GIT](model_doc/git) | ✅ | ❌ | ❌ |
| [GLPN](model_doc/glpn) | ✅ | ❌ | ❌ |
| [GPT Neo](model_doc/gpt_neo) | ✅ | ❌ | ✅ |
| [GPT NeoX](model_doc/gpt_neox) | ✅ | ❌ | ❌ |
| [GPT NeoX Japanese](model_doc/gpt_neox_japanese) | ✅ | ❌ | ❌ |
| [GPT-J](model_doc/gptj) | ✅ | ✅ | ✅ |
| [GPT-Sw3](model_doc/gpt-sw3) | ✅ | ✅ | ✅ |
| [GPTBigCode](model_doc/gpt_bigcode) | ✅ | ❌ | ❌ |
| [GPTSAN-japanese](model_doc/gptsan-japanese) | ✅ | ❌ | ❌ |
| [Graphormer](model_doc/graphormer) | ✅ | ❌ | ❌ |
| [Grounding DINO](model_doc/grounding-dino) | ✅ | ❌ | ❌ |
| [GroupViT](model_doc/groupvit) | ✅ | ✅ | ❌ |
| [HerBERT](model_doc/herbert) | ✅ | ✅ | ✅ |
| [Hubert](model_doc/hubert) | ✅ | ✅ | ❌ |
| [I-BERT](model_doc/ibert) | ✅ | ❌ | ❌ |
| [IDEFICS](model_doc/idefics) | ✅ | ✅ | ❌ |
| [Idefics2](model_doc/idefics2) | ✅ | ❌ | ❌ |
| [ImageGPT](model_doc/imagegpt) | ✅ | ❌ | ❌ |
| [Informer](model_doc/informer) | ✅ | ❌ | ❌ |
| [InstructBLIP](model_doc/instructblip) | ✅ | ❌ | ❌ |
| [Jamba](model_doc/jamba) | ✅ | ❌ | ❌ |
| [Jukebox](model_doc/jukebox) | ✅ | ❌ | ❌ |
| [KOSMOS-2](model_doc/kosmos-2) | ✅ | ❌ | ❌ |
| [LayoutLM](model_doc/layoutlm) | ✅ | ✅ | ❌ |
| [LayoutLMv2](model_doc/layoutlmv2) | ✅ | ❌ | ❌ |
| [LayoutLMv3](model_doc/layoutlmv3) | ✅ | ✅ | ❌ |
| [LayoutXLM](model_doc/layoutxlm) | ✅ | ❌ | ❌ |
| [LED](model_doc/led) | ✅ | ✅ | ❌ |
| [LeViT](model_doc/levit) | ✅ | ❌ | ❌ |
| [LiLT](model_doc/lilt) | ✅ | ❌ | ❌ |
| [LLaMA](model_doc/llama) | ✅ | ❌ | ✅ |
| [Llama2](model_doc/llama2) | ✅ | ❌ | ✅ |
| [Llama3](model_doc/llama3) | ✅ | ❌ | ✅ |
| [LLaVa](model_doc/llava) | ✅ | ❌ | ❌ |
| [LLaVA-NeXT](model_doc/llava_next) | ✅ | ❌ | ❌ |
| [Longformer](model_doc/longformer) | ✅ | ✅ | ❌ |
| [LongT5](model_doc/longt5) | ✅ | ❌ | ✅ |
| [LUKE](model_doc/luke) | ✅ | ❌ | ❌ |
| [LXMERT](model_doc/lxmert) | ✅ | ✅ | ❌ |
| [M-CTC-T](model_doc/mctct) | ✅ | ❌ | ❌ |
| [M2M100](model_doc/m2m_100) | ✅ | ❌ | ❌ |
| [MADLAD-400](model_doc/madlad-400) | ✅ | ✅ | ✅ |
| [Mamba](model_doc/mamba) | ✅ | ❌ | ❌ |
| [Marian](model_doc/marian) | ✅ | ✅ | ✅ |
| [MarkupLM](model_doc/markuplm) | ✅ | ❌ | ❌ |
| [Mask2Former](model_doc/mask2former) | ✅ | ❌ | ❌ |
| [MaskFormer](model_doc/maskformer) | ✅ | ❌ | ❌ |
| [MatCha](model_doc/matcha) | ✅ | ❌ | ❌ |
| [mBART](model_doc/mbart) | ✅ | ✅ | ✅ |
| [mBART-50](model_doc/mbart50) | ✅ | ✅ | ✅ |
| [MEGA](model_doc/mega) | ✅ | ❌ | ❌ |
| [Megatron-BERT](model_doc/megatron-bert) | ✅ | ❌ | ❌ |
| [Megatron-GPT2](model_doc/megatron_gpt2) | ✅ | ✅ | ✅ |
| [MGP-STR](model_doc/mgp-str) | ✅ | ❌ | ❌ |
| [Mistral](model_doc/mistral) | ✅ | ❌ | ✅ |
| [Mixtral](model_doc/mixtral) | ✅ | ❌ | ❌ |
| [mLUKE](model_doc/mluke) | ✅ | ❌ | ❌ |
| [MMS](model_doc/mms) | ✅ | ✅ | ✅ |
| [MobileBERT](model_doc/mobilebert) | ✅ | ✅ | ❌ |
| [MobileNetV1](model_doc/mobilenet_v1) | ✅ | ❌ | ❌ |
| [MobileNetV2](model_doc/mobilenet_v2) | ✅ | ❌ | ❌ |
| [MobileViT](model_doc/mobilevit) | ✅ | ✅ | ❌ |
| [MobileViTV2](model_doc/mobilevitv2) | ✅ | ❌ | ❌ |
| [MPNet](model_doc/mpnet) | ✅ | ✅ | ❌ |
| [MPT](model_doc/mpt) | ✅ | ❌ | ❌ |
| [MRA](model_doc/mra) | ✅ | ❌ | ❌ |
| [MT5](model_doc/mt5) | ✅ | ✅ | ✅ |
| [MusicGen](model_doc/musicgen) | ✅ | ❌ | ❌ |
| [MusicGen Melody](model_doc/musicgen_melody) | ✅ | ❌ | ❌ |
| [MVP](model_doc/mvp) | ✅ | ❌ | ❌ |
| [NAT](model_doc/nat) | ✅ | ❌ | ❌ |
| [Nezha](model_doc/nezha) | ✅ | ❌ | ❌ |
| [NLLB](model_doc/nllb) | ✅ | ❌ | ❌ |
| [NLLB-MOE](model_doc/nllb-moe) | ✅ | ❌ | ❌ |
| [Nougat](model_doc/nougat) | ✅ | ✅ | ✅ |
| [Nyströmformer](model_doc/nystromformer) | ✅ | ❌ | ❌ |
| [OLMo](model_doc/olmo) | ✅ | ❌ | ❌ |
| [OneFormer](model_doc/oneformer) | ✅ | ❌ | ❌ |
| [OpenAI GPT](model_doc/openai-gpt) | ✅ | ✅ | ❌ |
| [OpenAI GPT-2](model_doc/gpt2) | ✅ | ✅ | ✅ |
| [OpenLlama](model_doc/open-llama) | ✅ | ❌ | ❌ |
| [OPT](model_doc/opt) | ✅ | ✅ | ✅ |
| [OWL-ViT](model_doc/owlvit) | ✅ | ❌ | ❌ |
| [OWLv2](model_doc/owlv2) | ✅ | ❌ | ❌ |
| [PatchTSMixer](model_doc/patchtsmixer) | ✅ | ❌ | ❌ |
| [PatchTST](model_doc/patchtst) | ✅ | ❌ | ❌ |
| [Pegasus](model_doc/pegasus) | ✅ | ✅ | ✅ |
| [PEGASUS-X](model_doc/pegasus_x) | ✅ | ❌ | ❌ |
| [Perceiver](model_doc/perceiver) | ✅ | ❌ | ❌ |
| [Persimmon](model_doc/persimmon) | ✅ | ❌ | ❌ |
| [Phi](model_doc/phi) | ✅ | ❌ | ❌ |
| [Phi3](model_doc/phi3) | ✅ | ❌ | ❌ |
| [PhoBERT](model_doc/phobert) | ✅ | ✅ | ✅ |
| [Pix2Struct](model_doc/pix2struct) | ✅ | ❌ | ❌ |
| [PLBart](model_doc/plbart) | ✅ | ❌ | ❌ |
| [PoolFormer](model_doc/poolformer) | ✅ | ❌ | ❌ |
| [Pop2Piano](model_doc/pop2piano) | ✅ | ❌ | ❌ |
| [ProphetNet](model_doc/prophetnet) | ✅ | ❌ | ❌ |
| [PVT](model_doc/pvt) | ✅ | ❌ | ❌ |
| [PVTv2](model_doc/pvt_v2) | ✅ | ❌ | ❌ |
| [QDQBert](model_doc/qdqbert) | ✅ | ❌ | ❌ |
| [Qwen2](model_doc/qwen2) | ✅ | ❌ | ❌ |
| [Qwen2MoE](model_doc/qwen2_moe) | ✅ | ❌ | ❌ |
| [RAG](model_doc/rag) | ✅ | ✅ | ❌ |
| [REALM](model_doc/realm) | ✅ | ❌ | ❌ |
| [RecurrentGemma](model_doc/recurrent_gemma) | ✅ | ❌ | ❌ |
| [Reformer](model_doc/reformer) | ✅ | ❌ | ❌ |
| [RegNet](model_doc/regnet) | ✅ | ✅ | ✅ |
| [RemBERT](model_doc/rembert) | ✅ | ✅ | ❌ |
| [ResNet](model_doc/resnet) | ✅ | ✅ | ✅ |
| [RetriBERT](model_doc/retribert) | ✅ | ❌ | ❌ |
| [RoBERTa](model_doc/roberta) | ✅ | ✅ | ✅ |
| [RoBERTa-PreLayerNorm](model_doc/roberta-prelayernorm) | ✅ | ✅ | ✅ |
| [RoCBert](model_doc/roc_bert) | ✅ | ❌ | ❌ |
| [RoFormer](model_doc/roformer) | ✅ | ✅ | ✅ |
| [RWKV](model_doc/rwkv) | ✅ | ❌ | ❌ |
| [SAM](model_doc/sam) | ✅ | ✅ | ❌ |
| [SeamlessM4T](model_doc/seamless_m4t) | ✅ | ❌ | ❌ |
| [SeamlessM4Tv2](model_doc/seamless_m4t_v2) | ✅ | ❌ | ❌ |
| [SegFormer](model_doc/segformer) | ✅ | ✅ | ❌ |
| [SegGPT](model_doc/seggpt) | ✅ | ❌ | ❌ |
| [SEW](model_doc/sew) | ✅ | ❌ | ❌ |
| [SEW-D](model_doc/sew-d) | ✅ | ❌ | ❌ |
| [SigLIP](model_doc/siglip) | ✅ | ❌ | ❌ |
| [Speech Encoder decoder](model_doc/speech-encoder-decoder) | ✅ | ❌ | ✅ |
| [Speech2Text](model_doc/speech_to_text) | ✅ | ✅ | ❌ |
| [SpeechT5](model_doc/speecht5) | ✅ | ❌ | ❌ |
| [Splinter](model_doc/splinter) | ✅ | ❌ | ❌ |
| [SqueezeBERT](model_doc/squeezebert) | ✅ | ❌ | ❌ |
| [StableLm](model_doc/stablelm) | ✅ | ❌ | ❌ |
| [Starcoder2](model_doc/starcoder2) | ✅ | ❌ | ❌ |
| [SuperPoint](model_doc/superpoint) | ✅ | ❌ | ❌ |
| [SwiftFormer](model_doc/swiftformer) | ✅ | ✅ | ❌ |
| [Swin Transformer](model_doc/swin) | ✅ | ✅ | ❌ |
| [Swin Transformer V2](model_doc/swinv2) | ✅ | ❌ | ❌ |
| [Swin2SR](model_doc/swin2sr) | ✅ | ❌ | ❌ |
| [SwitchTransformers](model_doc/switch_transformers) | ✅ | ❌ | ❌ |
| [T5](model_doc/t5) | ✅ | ✅ | ✅ |
| [T5v1.1](model_doc/t5v1.1) | ✅ | ✅ | ✅ |
| [Table Transformer](model_doc/table-transformer) | ✅ | ❌ | ❌ |
| [TAPAS](model_doc/tapas) | ✅ | ✅ | ❌ |
| [TAPEX](model_doc/tapex) | ✅ | ✅ | ✅ |
| [Time Series Transformer](model_doc/time_series_transformer) | ✅ | ❌ | ❌ |
| [TimeSformer](model_doc/timesformer) | ✅ | ❌ | ❌ |
| [Trajectory Transformer](model_doc/trajectory_transformer) | ✅ | ❌ | ❌ |
| [Transformer-XL](model_doc/transfo-xl) | ✅ | ✅ | ❌ |
| [TrOCR](model_doc/trocr) | ✅ | ❌ | ❌ |
| [TVLT](model_doc/tvlt) | ✅ | ❌ | ❌ |
| [TVP](model_doc/tvp) | ✅ | ❌ | ❌ |
| [UDOP](model_doc/udop) | ✅ | ❌ | ❌ |
| [UL2](model_doc/ul2) | ✅ | ✅ | ✅ |
| [UMT5](model_doc/umt5) | ✅ | ❌ | ❌ |
| [UniSpeech](model_doc/unispeech) | ✅ | ❌ | ❌ |
| [UniSpeechSat](model_doc/unispeech-sat) | ✅ | ❌ | ❌ |
| [UnivNet](model_doc/univnet) | ✅ | ❌ | ❌ |
| [UPerNet](model_doc/upernet) | ✅ | ❌ | ❌ |
| [VAN](model_doc/van) | ✅ | ❌ | ❌ |
| [VideoMAE](model_doc/videomae) | ✅ | ❌ | ❌ |
| [ViLT](model_doc/vilt) | ✅ | ❌ | ❌ |
| [VipLlava](model_doc/vipllava) | ✅ | ❌ | ❌ |
| [Vision Encoder decoder](model_doc/vision-encoder-decoder) | ✅ | ✅ | ✅ |
| [VisionTextDualEncoder](model_doc/vision-text-dual-encoder) | ✅ | ✅ | ✅ |
| [VisualBERT](model_doc/visual_bert) | ✅ | ❌ | ❌ |
| [ViT](model_doc/vit) | ✅ | ✅ | ✅ |
| [ViT Hybrid](model_doc/vit_hybrid) | ✅ | ❌ | ❌ |
| [VitDet](model_doc/vitdet) | ✅ | ❌ | ❌ |
| [ViTMAE](model_doc/vit_mae) | ✅ | ✅ | ❌ |
| [ViTMatte](model_doc/vitmatte) | ✅ | ❌ | ❌ |
| [ViTMSN](model_doc/vit_msn) | ✅ | ❌ | ❌ |
| [VITS](model_doc/vits) | ✅ | ❌ | ❌ |
| [ViViT](model_doc/vivit) | ✅ | ❌ | ❌ |
| [Wav2Vec2](model_doc/wav2vec2) | ✅ | ✅ | ✅ |
| [Wav2Vec2-BERT](model_doc/wav2vec2-bert) | ✅ | ❌ | ❌ |
| [Wav2Vec2-Conformer](model_doc/wav2vec2-conformer) | ✅ | ❌ | ❌ |
| [Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme) | ✅ | ✅ | ✅ |
| [WavLM](model_doc/wavlm) | ✅ | ❌ | ❌ |
| [Whisper](model_doc/whisper) | ✅ | ✅ | ✅ |
| [X-CLIP](model_doc/xclip) | ✅ | ❌ | ❌ |
| [X-MOD](model_doc/xmod) | ✅ | ❌ | ❌ |
| [XGLM](model_doc/xglm) | ✅ | ✅ | ✅ |
| [XLM](model_doc/xlm) | ✅ | ✅ | ❌ |
| [XLM-ProphetNet](model_doc/xlm-prophetnet) | ✅ | ❌ | ❌ |
| [XLM-RoBERTa](model_doc/xlm-roberta) | ✅ | ✅ | ✅ |
| [XLM-RoBERTa-XL](model_doc/xlm-roberta-xl) | ✅ | ❌ | ❌ |
| [XLM-V](model_doc/xlm-v) | ✅ | ✅ | ✅ |
| [XLNet](model_doc/xlnet) | ✅ | ✅ | ❌ |
| [XLS-R](model_doc/xls_r) | ✅ | ✅ | ✅ |
| [XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2) | ✅ | ✅ | ✅ |
| [YOLOS](model_doc/yolos) | ✅ | ❌ | ❌ |
| [YOSO](model_doc/yoso) | ✅ | ❌ | ❌ |

<!-- End table-->