Mirror of https://github.com/huggingface/transformers.git, synced 2025-07-31 02:02:21 +06:00

* Commit with BTModel and latest HF code
* Placeholder classes for BTForMLM and BTForITR
* Importing Bert classes from transformers
* Removed objectives.py and dist_utils.py
* Removed swin_transformer.py
* Add image normalization, BridgeTowerForImageAndTextRetrieval
* Add center_crop
* Removing bert tokenizer and LCI references
* Tested config loading from HF transformers hub
* Removed state_dict updates and added path to hub
* Enable center crop
* Getting image_size from config, renaming num_heads and num_layers
* Handling max_length in BridgeTowerProcessor
* Add BridgeTowerForMaskedLM
* Add doc string for BridgeTowerConfig
* Add doc strings for BT config, processor, image processor
* Adding docs, removed swin
* Removed convert_bridgetower_original_to_pytorch.py
* Added doc files for bridgetower, removed is_vision
* Add support attention_mask=None and BridgeTowerModelOutput
* Fix formatting
* Fixes with 'make style', 'make quality', 'make fixup'
* Remove downstream tasks from BridgeTowerModel
* Formatting fixes, add return_dict to BT models
* Clean up after doc_test
* Update BTModelOutput return type, fix todo in doc
* Remove loss_names from init
* implement tests and update tuples returned by models
* Add image reference to bridgetower.mdx
* after make fix-copies, make fixup, make style, make quality, make repo-consistency
* Rename class names with BridgeTower prefix
* Fix for image_size in BTImageProcessor
* implement feature extraction bridgetower tests
* Update image_mean and image_std to be list
* remove unused import
* Removed old comments
* Rework CLIP
* update config in tests followed config update
* Formatting fixes
* Add copied from for BridgeTowerPredictionHeadTransform
* Update bridgetower.mdx
* Update test_feature_extraction_bridgetower.py
* Update bridgetower.mdx
* BridgeTowerForMaskedLM is conditioned on image too
* Add BridgeTowerForMaskedLM
* Fixes
* Call post_init to init weights
* Move freeze layers into method
* Remove BTFeatureExtractor, add BT under multimodal models
* Remove BTFeatureExtractor, add BT under multimodal models
* Code review feedback - cleanup
* Rename variables
* Formatting and style to PR review feedback
* Move center crop after resize
* Use named parameters
* Style fix for modeling_bridgetower.py
* Update docs/source/en/model_doc/bridgetower.mdx Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/model_doc/bridgetower.mdx Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/model_doc/bridgetower.mdx Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/bridgetower/modeling_bridgetower.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/bridgetower/modeling_bridgetower.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/model_doc/bridgetower.mdx Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* Update src/transformers/models/bridgetower/modeling_bridgetower.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Rename config params, copy BERT classes, clean comments
* Cleanup irtr
* Replace Roberta imports, add BTTextConfig and Model
* Update docs, add visionconfig, consistent arg names
* make fixup
* Comments for forward in BTModel and make fixup
* correct tests
* Remove inconsistent roberta copied from
* Add BridgeTowerTextModel to dummy_pt_objects.py
* Add BridgeTowerTextModel to IGNORE_NON_TESTED
* Update docs for BT Text and Vision Configs
* Treat BridgeTowerTextModel as a private model
* BridgeTowerTextModel as private
* Run make fix-copies
* Adding BTTextModel to PRIVATE_MODELS
* Fix for issue with BT Text and Image configs
* make style changes
* Update README_ja.md Add から to BridgeTower's description
* Clean up config, .mdx and arg names
* Fix init_weights. Remove nn.Sequential
* Formatting and style fixes
* Re-add tie_word_embeddings in config
* update test implementation
* update style
* remove commented out
* fix style
* Update README with abs for BridgeTower
* fix style
* fix mdx file
* Update bridgetower.mdx
* Update img src in bridgetower.mdx
* Update README.md
* Update README.md
* resolve style failed
* Update _toctree.yml
* Update README_ja.md
* Removed mlp_ratio, rename feats, rename BTCLIPModel
* Replace BTCLIP with BTVisionModel, pass in vision_config to BTVisionModel
* Add test_initialization support
* Add support for output_hidden_states
* Update support for output_hidden_states
* Add support for output_attentions
* Add docstring for output_hidden_states
* update tests
* add bridgetowervisionmodel as private model
* rerun the PR test
* Remove model_type, pass configs to classes, renames
* Change self.device to use weight device
* Remove image_size
* Style check fixes
* Add hidden_size and num_hidden_layers to BridgeTowerTransformer
* Update device setting
* cosmetic update
* trigger test again
* trigger tests again
* Update test_modeling_bridgetower.py trigger tests again
* Update test_modeling_bridgetower.py
* minor update
* re-trigger tests
* Update docs/source/en/model_doc/bridgetower.mdx Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Remove pad, update max_text_len, doc cleanup, pass eps to LayerNorm
* Added copied to, some more review feedback
* make fixup
* Use BridgeTowerVisionEmbeddings
* Code cleanup
* Fixes for BridgeTowerVisionEmbeddings
* style checks
* re-tests
* fix embedding
* address comment on init file
* retrigger tests
* update import prepare_image_inputs
* update test_image_processing_bridgetower.py to reflect test_image_processing_common.py
* retrigger tests

Co-authored-by: Shaoyen Tseng <shao-yen.tseng@intel.com>
Co-authored-by: Tiep Le <tiep.le@intel.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Tiep Le <97980157+tileintel@users.noreply.github.com>
601 lines · 17 KiB · YAML · Executable File
- sections:
  - local: index
    title: 🤗 Transformers
  - local: quicktour
    title: Quick tour
  - local: installation
    title: Installation
  title: Get started
- sections:
  - local: pipeline_tutorial
    title: Pipelines for inference
  - local: autoclass_tutorial
    title: Load pretrained instances with an AutoClass
  - local: preprocessing
    title: Preprocess
  - local: training
    title: Fine-tune a pretrained model
  - local: accelerate
    title: Distributed training with 🤗 Accelerate
  - local: model_sharing
    title: Share a model
  title: Tutorials
- sections:
  - sections:
    - local: create_a_model
      title: Create a custom architecture
    - local: custom_models
      title: Sharing custom models
    - local: run_scripts
      title: Train with a script
    - local: sagemaker
      title: Run training on Amazon SageMaker
    - local: converting_tensorflow_models
      title: Converting from TensorFlow checkpoints
    - local: serialization
      title: Export to ONNX
    - local: torchscript
      title: Export to TorchScript
    - local: troubleshooting
      title: Troubleshoot
    title: General usage
  - sections:
    - local: fast_tokenizers
      title: Use tokenizers from 🤗 Tokenizers
    - local: multilingual
      title: Inference for multilingual models
    - local: generation_strategies
      title: Text generation strategies
    - sections:
      - local: tasks/sequence_classification
        title: Text classification
      - local: tasks/token_classification
        title: Token classification
      - local: tasks/question_answering
        title: Question answering
      - local: tasks/language_modeling
        title: Language modeling
      - local: tasks/translation
        title: Translation
      - local: tasks/summarization
        title: Summarization
      - local: tasks/multiple_choice
        title: Multiple choice
      title: Task guides
      isExpanded: false
    title: Natural Language Processing
  - sections:
    - local: tasks/audio_classification
      title: Audio classification
    - local: tasks/asr
      title: Automatic speech recognition
    title: Audio
  - sections:
    - local: tasks/image_classification
      title: Image classification
    - local: tasks/semantic_segmentation
      title: Semantic segmentation
    - local: tasks/video_classification
      title: Video classification
    - local: tasks/object_detection
      title: Object detection
    title: Computer Vision
  - sections:
    - local: performance
      title: Overview
    - local: perf_train_gpu_one
      title: Training on one GPU
    - local: perf_train_gpu_many
      title: Training on many GPUs
    - local: perf_train_cpu
      title: Training on CPU
    - local: perf_train_cpu_many
      title: Training on many CPUs
    - local: perf_train_tpu
      title: Training on TPUs
    - local: perf_train_special
      title: Training on Specialized Hardware
    - local: perf_infer_cpu
      title: Inference on CPU
    - local: perf_infer_gpu_one
      title: Inference on one GPU
    - local: perf_infer_gpu_many
      title: Inference on many GPUs
    - local: perf_infer_special
      title: Inference on Specialized Hardware
    - local: perf_hardware
      title: Custom hardware for training
    - local: big_models
      title: Instantiating a big model
    - local: debugging
      title: Debugging
    - local: hpo_train
      title: Hyperparameter Search using Trainer API
    - local: tf_xla
      title: XLA Integration for TensorFlow Models
    title: Performance and scalability
  - sections:
    - local: contributing
      title: How to contribute to transformers?
    - local: add_new_model
      title: How to add a model to 🤗 Transformers?
    - local: add_tensorflow_model
      title: How to convert a 🤗 Transformers model to TensorFlow?
    - local: add_new_pipeline
      title: How to add a pipeline to 🤗 Transformers?
    - local: testing
      title: Testing
    - local: pr_checks
      title: Checks on a Pull Request
    title: Contribute
  - local: notebooks
    title: 🤗 Transformers Notebooks
  - local: community
    title: Community resources
  - local: benchmarks
    title: Benchmarks
  - local: migration
    title: Migrating from previous packages
  title: How-to guides
- sections:
  - local: philosophy
    title: Philosophy
  - local: glossary
    title: Glossary
  - local: task_summary
    title: What 🤗 Transformers can do
  - local: model_summary
    title: Summary of the models
  - local: tokenizer_summary
    title: Summary of the tokenizers
  - local: pad_truncation
    title: Padding and truncation
  - local: bertology
    title: BERTology
  - local: perplexity
    title: Perplexity of fixed-length models
  - local: pipeline_webserver
    title: Pipelines for webserver inference
  title: Conceptual guides
- sections:
  - sections:
    - local: model_doc/auto
      title: Auto Classes
    - local: main_classes/callback
      title: Callbacks
    - local: main_classes/configuration
      title: Configuration
    - local: main_classes/data_collator
      title: Data Collator
    - local: main_classes/keras_callbacks
      title: Keras callbacks
    - local: main_classes/logging
      title: Logging
    - local: main_classes/model
      title: Models
    - local: main_classes/text_generation
      title: Text Generation
    - local: main_classes/onnx
      title: ONNX
    - local: main_classes/optimizer_schedules
      title: Optimization
    - local: main_classes/output
      title: Model outputs
    - local: main_classes/pipelines
      title: Pipelines
    - local: main_classes/processors
      title: Processors
    - local: main_classes/tokenizer
      title: Tokenizer
    - local: main_classes/trainer
      title: Trainer
    - local: main_classes/deepspeed
      title: DeepSpeed Integration
    - local: main_classes/feature_extractor
      title: Feature Extractor
    - local: main_classes/image_processor
      title: Image Processor
    title: Main Classes
  - sections:
    - isExpanded: false
      sections:
      - local: model_doc/albert
        title: ALBERT
      - local: model_doc/bart
        title: BART
      - local: model_doc/barthez
        title: BARThez
      - local: model_doc/bartpho
        title: BARTpho
      - local: model_doc/bert
        title: BERT
      - local: model_doc/bert-generation
        title: BertGeneration
      - local: model_doc/bert-japanese
        title: BertJapanese
      - local: model_doc/bertweet
        title: Bertweet
      - local: model_doc/big_bird
        title: BigBird
      - local: model_doc/bigbird_pegasus
        title: BigBirdPegasus
      - local: model_doc/biogpt
        title: BioGpt
      - local: model_doc/blenderbot
        title: Blenderbot
      - local: model_doc/blenderbot-small
        title: Blenderbot Small
      - local: model_doc/bloom
        title: BLOOM
      - local: model_doc/bort
        title: BORT
      - local: model_doc/byt5
        title: ByT5
      - local: model_doc/camembert
        title: CamemBERT
      - local: model_doc/canine
        title: CANINE
      - local: model_doc/codegen
        title: CodeGen
      - local: model_doc/convbert
        title: ConvBERT
      - local: model_doc/cpm
        title: CPM
      - local: model_doc/ctrl
        title: CTRL
      - local: model_doc/deberta
        title: DeBERTa
      - local: model_doc/deberta-v2
        title: DeBERTa-v2
      - local: model_doc/dialogpt
        title: DialoGPT
      - local: model_doc/distilbert
        title: DistilBERT
      - local: model_doc/dpr
        title: DPR
      - local: model_doc/electra
        title: ELECTRA
      - local: model_doc/encoder-decoder
        title: Encoder Decoder Models
      - local: model_doc/ernie
        title: ERNIE
      - local: model_doc/esm
        title: ESM
      - local: model_doc/flan-t5
        title: FLAN-T5
      - local: model_doc/flaubert
        title: FlauBERT
      - local: model_doc/fnet
        title: FNet
      - local: model_doc/fsmt
        title: FSMT
      - local: model_doc/funnel
        title: Funnel Transformer
      - local: model_doc/openai-gpt
        title: GPT
      - local: model_doc/gpt_neo
        title: GPT Neo
      - local: model_doc/gpt_neox
        title: GPT NeoX
      - local: model_doc/gpt_neox_japanese
        title: GPT NeoX Japanese
      - local: model_doc/gptj
        title: GPT-J
      - local: model_doc/gpt2
        title: GPT2
      - local: model_doc/gpt-sw3
        title: GPTSw3
      - local: model_doc/herbert
        title: HerBERT
      - local: model_doc/ibert
        title: I-BERT
      - local: model_doc/jukebox
        title: Jukebox
      - local: model_doc/layoutlm
        title: LayoutLM
      - local: model_doc/led
        title: LED
      - local: model_doc/lilt
        title: LiLT
      - local: model_doc/longformer
        title: Longformer
      - local: model_doc/longt5
        title: LongT5
      - local: model_doc/luke
        title: LUKE
      - local: model_doc/m2m_100
        title: M2M100
      - local: model_doc/marian
        title: MarianMT
      - local: model_doc/markuplm
        title: MarkupLM
      - local: model_doc/mbart
        title: MBart and MBart-50
      - local: model_doc/megatron-bert
        title: MegatronBERT
      - local: model_doc/megatron_gpt2
        title: MegatronGPT2
      - local: model_doc/mluke
        title: mLUKE
      - local: model_doc/mobilebert
        title: MobileBERT
      - local: model_doc/mpnet
        title: MPNet
      - local: model_doc/mt5
        title: MT5
      - local: model_doc/mvp
        title: MVP
      - local: model_doc/nezha
        title: NEZHA
      - local: model_doc/nllb
        title: NLLB
      - local: model_doc/nystromformer
        title: Nyströmformer
      - local: model_doc/opt
        title: OPT
      - local: model_doc/pegasus
        title: Pegasus
      - local: model_doc/pegasus_x
        title: PEGASUS-X
      - local: model_doc/phobert
        title: PhoBERT
      - local: model_doc/plbart
        title: PLBart
      - local: model_doc/prophetnet
        title: ProphetNet
      - local: model_doc/qdqbert
        title: QDQBert
      - local: model_doc/rag
        title: RAG
      - local: model_doc/realm
        title: REALM
      - local: model_doc/reformer
        title: Reformer
      - local: model_doc/rembert
        title: RemBERT
      - local: model_doc/retribert
        title: RetriBERT
      - local: model_doc/roberta
        title: RoBERTa
      - local: model_doc/roberta-prelayernorm
        title: RoBERTa-PreLayerNorm
      - local: model_doc/roc_bert
        title: RoCBert
      - local: model_doc/roformer
        title: RoFormer
      - local: model_doc/splinter
        title: Splinter
      - local: model_doc/squeezebert
        title: SqueezeBERT
      - local: model_doc/switch_transformers
        title: SwitchTransformers
      - local: model_doc/t5
        title: T5
      - local: model_doc/t5v1.1
        title: T5v1.1
      - local: model_doc/tapas
        title: TAPAS
      - local: model_doc/tapex
        title: TAPEX
      - local: model_doc/transfo-xl
        title: Transformer XL
      - local: model_doc/ul2
        title: UL2
      - local: model_doc/xglm
        title: XGLM
      - local: model_doc/xlm
        title: XLM
      - local: model_doc/xlm-prophetnet
        title: XLM-ProphetNet
      - local: model_doc/xlm-roberta
        title: XLM-RoBERTa
      - local: model_doc/xlm-roberta-xl
        title: XLM-RoBERTa-XL
      - local: model_doc/xlnet
        title: XLNet
      - local: model_doc/yoso
        title: YOSO
      title: Text models
    - isExpanded: false
      sections:
      - local: model_doc/beit
        title: BEiT
      - local: model_doc/bit
        title: BiT
      - local: model_doc/conditional_detr
        title: Conditional DETR
      - local: model_doc/convnext
        title: ConvNeXT
      - local: model_doc/cvt
        title: CvT
      - local: model_doc/deformable_detr
        title: Deformable DETR
      - local: model_doc/deit
        title: DeiT
      - local: model_doc/detr
        title: DETR
      - local: model_doc/dinat
        title: DiNAT
      - local: model_doc/dit
        title: DiT
      - local: model_doc/dpt
        title: DPT
      - local: model_doc/efficientformer
        title: EfficientFormer
      - local: model_doc/glpn
        title: GLPN
      - local: model_doc/imagegpt
        title: ImageGPT
      - local: model_doc/levit
        title: LeViT
      - local: model_doc/mask2former
        title: Mask2Former
      - local: model_doc/maskformer
        title: MaskFormer
      - local: model_doc/mobilenet_v1
        title: MobileNetV1
      - local: model_doc/mobilenet_v2
        title: MobileNetV2
      - local: model_doc/mobilevit
        title: MobileViT
      - local: model_doc/nat
        title: NAT
      - local: model_doc/poolformer
        title: PoolFormer
      - local: model_doc/regnet
        title: RegNet
      - local: model_doc/resnet
        title: ResNet
      - local: model_doc/segformer
        title: SegFormer
      - local: model_doc/swin
        title: Swin Transformer
      - local: model_doc/swinv2
        title: Swin Transformer V2
      - local: model_doc/swin2sr
        title: Swin2SR
      - local: model_doc/table-transformer
        title: Table Transformer
      - local: model_doc/timesformer
        title: TimeSformer
      - local: model_doc/upernet
        title: UperNet
      - local: model_doc/van
        title: VAN
      - local: model_doc/videomae
        title: VideoMAE
      - local: model_doc/vit
        title: Vision Transformer (ViT)
      - local: model_doc/vit_hybrid
        title: ViT Hybrid
      - local: model_doc/vit_mae
        title: ViTMAE
      - local: model_doc/vit_msn
        title: ViTMSN
      - local: model_doc/yolos
        title: YOLOS
      title: Vision models
    - isExpanded: false
      sections:
      - local: model_doc/audio-spectrogram-transformer
        title: Audio Spectrogram Transformer
      - local: model_doc/hubert
        title: Hubert
      - local: model_doc/mctct
        title: MCTCT
      - local: model_doc/sew
        title: SEW
      - local: model_doc/sew-d
        title: SEW-D
      - local: model_doc/speech_to_text
        title: Speech2Text
      - local: model_doc/speech_to_text_2
        title: Speech2Text2
      - local: model_doc/unispeech
        title: UniSpeech
      - local: model_doc/unispeech-sat
        title: UniSpeech-SAT
      - local: model_doc/wav2vec2
        title: Wav2Vec2
      - local: model_doc/wav2vec2-conformer
        title: Wav2Vec2-Conformer
      - local: model_doc/wav2vec2_phoneme
        title: Wav2Vec2Phoneme
      - local: model_doc/wavlm
        title: WavLM
      - local: model_doc/whisper
        title: Whisper
      - local: model_doc/xls_r
        title: XLS-R
      - local: model_doc/xlsr_wav2vec2
        title: XLSR-Wav2Vec2
      title: Audio models
    - isExpanded: false
      sections:
      - local: model_doc/altclip
        title: AltCLIP
      - local: model_doc/blip
        title: BLIP
      - local: model_doc/bridgetower
        title: BridgeTower
      - local: model_doc/chinese_clip
        title: Chinese-CLIP
      - local: model_doc/clip
        title: CLIP
      - local: model_doc/clipseg
        title: CLIPSeg
      - local: model_doc/data2vec
        title: Data2Vec
      - local: model_doc/donut
        title: Donut
      - local: model_doc/flava
        title: FLAVA
      - local: model_doc/git
        title: GIT
      - local: model_doc/groupvit
        title: GroupViT
      - local: model_doc/layoutlmv2
        title: LayoutLMV2
      - local: model_doc/layoutlmv3
        title: LayoutLMV3
      - local: model_doc/layoutxlm
        title: LayoutXLM
      - local: model_doc/lxmert
        title: LXMERT
      - local: model_doc/oneformer
        title: OneFormer
      - local: model_doc/owlvit
        title: OWL-ViT
      - local: model_doc/perceiver
        title: Perceiver
      - local: model_doc/speech-encoder-decoder
        title: Speech Encoder Decoder Models
      - local: model_doc/trocr
        title: TrOCR
      - local: model_doc/vilt
        title: ViLT
      - local: model_doc/vision-encoder-decoder
        title: Vision Encoder Decoder Models
      - local: model_doc/vision-text-dual-encoder
        title: Vision Text Dual Encoder
      - local: model_doc/visual_bert
        title: VisualBERT
      - local: model_doc/xclip
        title: X-CLIP
      title: Multimodal models
    - isExpanded: false
      sections:
      - local: model_doc/decision_transformer
        title: Decision Transformer
      - local: model_doc/trajectory_transformer
        title: Trajectory Transformer
      title: Reinforcement learning models
    - isExpanded: false
      sections:
      - local: model_doc/time_series_transformer
        title: Time Series Transformer
      title: Time series models
    - isExpanded: false
      sections:
      - local: model_doc/graphormer
        title: Graphormer
      title: Graph models
    title: Models
  - sections:
    - local: internal/modeling_utils
      title: Custom Layers and Utilities
    - local: internal/pipelines_utils
      title: Utilities for pipelines
    - local: internal/tokenization_utils
      title: Utilities for Tokenizers
    - local: internal/trainer_utils
      title: Utilities for Trainer
    - local: internal/generation_utils
      title: Utilities for Generation
    - local: internal/image_processing_utils
      title: Utilities for Image Processors
    - local: internal/file_utils
      title: General Utilities
    title: Internal Helpers
  title: API
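Each node in the table of contents above is either a page entry (a `local` path paired with a `title`) or a group (a `sections` list with its own `title` and an optional `isExpanded` flag). As an illustrative aside, not part of the file or of the doc-builder itself, the tree can be walked depth-first with a few lines of Python; the `flatten` helper below is hypothetical and uses plain dicts in place of a YAML parser:

```python
# Hypothetical sketch: each toctree node is either a page ({"local", "title"})
# or a group ({"sections": [...], "title", optional "isExpanded"}).
def flatten(nodes):
    """Yield (local, title) pairs for every page entry, depth-first."""
    for node in nodes:
        if "local" in node:
            yield node["local"], node["title"]
        if "sections" in node:
            # Recurse into nested groups in document order.
            yield from flatten(node["sections"])

# A small sample mirroring the "Get started" group above.
toctree = [
    {
        "sections": [
            {"local": "index", "title": "🤗 Transformers"},
            {"local": "quicktour", "title": "Quick tour"},
            {"local": "installation", "title": "Installation"},
        ],
        "title": "Get started",
    }
]

pages = list(flatten(toctree))
print(pages)
# [('index', '🤗 Transformers'), ('quicktour', 'Quick tour'), ('installation', 'Installation')]
```

Loading the real file would only add a `yaml.safe_load` step in front of the same walk; the nesting rules are identical at every depth.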