..
    Copyright 2021 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.

Vision Encoder Decoder Models
-----------------------------------------------------------------------------------------------------------------------

The :class:`~transformers.VisionEncoderDecoderModel` can be used to initialize an image-to-text-sequence model with any
pretrained vision autoencoding model as the encoder (*e.g.* :doc:`ViT <vit>`, :doc:`BEiT <beit>`, :doc:`DeiT <deit>`)
and any pretrained language model as the decoder (*e.g.* :doc:`RoBERTa <roberta>`, :doc:`GPT2 <gpt2>`, :doc:`BERT
<bert>`).
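
For instance, a minimal sketch of combining two pretrained checkpoints via
:meth:`~transformers.VisionEncoderDecoderModel.from_encoder_decoder_pretrained` (the checkpoint names below are
illustrative choices; any compatible vision encoder and language decoder should work):

.. code-block:: python

    from transformers import VisionEncoderDecoderModel

    # Initialize the encoder from a pretrained ViT checkpoint and the decoder from a
    # pretrained BERT checkpoint. The cross-attention layers connecting the two are
    # randomly initialized, so the resulting model should be fine-tuned on a
    # downstream dataset before use.
    model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
        "google/vit-base-patch16-224-in21k", "bert-base-uncased"
    )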

The effectiveness of initializing image-to-text-sequence models with pretrained checkpoints has been shown in (for
example) `TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models
<https://arxiv.org/abs/2109.10282>`__ by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang,
Zhoujun Li, Furu Wei.

An example of how to use a :class:`~transformers.VisionEncoderDecoderModel` for inference can be seen in :doc:`TrOCR
<trocr>`.
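
To make that concrete, a minimal inference sketch along those lines (``microsoft/trocr-base-handwritten`` is one of
the released TrOCR checkpoints; the image path is a placeholder for your own input):

.. code-block:: python

    from PIL import Image

    from transformers import TrOCRProcessor, VisionEncoderDecoderModel

    # The processor wraps the encoder's feature extractor and the decoder's tokenizer.
    processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
    model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

    # Load an image containing a single line of handwritten text (placeholder path).
    image = Image.open("path/to/handwritten_text_line.png").convert("RGB")

    # Preprocess the image to pixel values, generate token ids autoregressively,
    # and decode them back to a string.
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    generated_ids = model.generate(pixel_values)
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]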

VisionEncoderDecoderConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.VisionEncoderDecoderConfig
    :members:
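
As a sketch of one way to use this class, a combined config can be built from two sub-model configs with its
``from_encoder_decoder_configs`` classmethod (``ViTConfig`` and ``BertConfig`` are just example choices here):

.. code-block:: python

    from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel

    # Combine a ViT encoder config with a BERT decoder config; the helper marks the
    # decoder config with is_decoder=True and add_cross_attention=True.
    config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(ViTConfig(), BertConfig())

    # A randomly initialized model can then be instantiated from the combined config.
    model = VisionEncoderDecoderModel(config=config)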

VisionEncoderDecoderModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.VisionEncoderDecoderModel
    :members: forward, from_encoder_decoder_pretrained