..
    Copyright 2021 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.

T5v1.1
-----------------------------------------------------------------------------------------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

T5v1.1 was released in the `google-research/text-to-text-transfer-transformer
<https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511>`__
repository by Colin Raffel et al. It's an improved version of the original T5 model.

One can directly plug in the weights of T5v1.1 into a T5 model, like so:

.. code-block::

    from transformers import T5ForConditionalGeneration

    model = T5ForConditionalGeneration.from_pretrained('google/t5-v1_1-base')
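
Since the raw checkpoint has only seen the unsupervised span-corruption objective, a quick sanity check of the loaded
weights is to ask the model to fill in sentinel tokens such as :obj:`<extra_id_0>`. The following is a minimal sketch
(the input sentence and the default generation settings are purely illustrative):

.. code-block::

    from transformers import T5Tokenizer, T5ForConditionalGeneration

    tokenizer = T5Tokenizer.from_pretrained('google/t5-v1_1-base')
    model = T5ForConditionalGeneration.from_pretrained('google/t5-v1_1-base')

    # The pre-training task is span corruption: the model predicts the text hidden behind the sentinel tokens.
    input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
    outputs = model.generate(input_ids)
    print(tokenizer.decode(outputs[0], skip_special_tokens=False))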

T5 Version 1.1 includes the following improvements compared to the original T5 model:

- GEGLU activation in the feed-forward hidden layer, rather than ReLU. See `this paper
  <https://arxiv.org/abs/2002.05202>`__ (a small sketch of this block follows the list).

- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.

- Pre-trained on C4 only, without mixing in the downstream tasks.

- No parameter sharing between the embedding and classifier layer.

- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger :obj:`d_model` and smaller
  :obj:`num_heads` and :obj:`d_ff`.
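
For illustration, the gated-GELU (GEGLU) feed-forward block replaces the single ReLU projection of the original T5
with two parallel input projections; one is passed through GELU and gates the other before the output projection. A
minimal PyTorch sketch follows (the layer names and the absence of dropout are simplifications, not the exact
internals of the library):

.. code-block::

    import torch.nn as nn
    import torch.nn.functional as F

    class GatedGeluFeedForward(nn.Module):
        """Sketch of a GEGLU feed-forward block: GELU(x W_0) * (x W_1), projected back to d_model."""

        def __init__(self, d_model, d_ff):
            super().__init__()
            self.wi_0 = nn.Linear(d_model, d_ff, bias=False)  # gate projection
            self.wi_1 = nn.Linear(d_model, d_ff, bias=False)  # linear projection
            self.wo = nn.Linear(d_ff, d_model, bias=False)    # projection back to the model dimension

        def forward(self, hidden_states):
            gate = F.gelu(self.wi_0(hidden_states))
            return self.wo(gate * self.wi_1(hidden_states))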

Note: T5 Version 1.1 was only pre-trained on `C4 <https://huggingface.co/datasets/c4>`__, excluding any supervised
training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5
model. Since T5v1.1 was pre-trained without supervision, there's no real advantage to using a task prefix during
single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
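
To illustrate the point about task prefixes, the sketch below tokenizes a fine-tuning example both without and with a
prefix; the document, summary and the "summarize: " prefix are made-up placeholders rather than an official recipe:

.. code-block::

    from transformers import T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained('google/t5-v1_1-base')

    document = "A long news article ..."   # placeholder source text
    summary = "A short summary ..."        # placeholder target text

    # Single-task fine-tuning: a task prefix brings no real benefit, the raw input is enough.
    single_task_inputs = tokenizer(document, return_tensors="pt")

    # Multi-task fine-tuning: prepend a task prefix so the model can tell the tasks apart.
    multi_task_inputs = tokenizer("summarize: " + document, return_tensors="pt")

    labels = tokenizer(summary, return_tensors="pt").input_ids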

Google has released the following variants:

- `google/t5-v1_1-small <https://huggingface.co/google/t5-v1_1-small>`__

- `google/t5-v1_1-base <https://huggingface.co/google/t5-v1_1-base>`__

- `google/t5-v1_1-large <https://huggingface.co/google/t5-v1_1-large>`__

- `google/t5-v1_1-xl <https://huggingface.co/google/t5-v1_1-xl>`__

- `google/t5-v1_1-xxl <https://huggingface.co/google/t5-v1_1-xxl>`__

One can refer to :doc:`T5's documentation page <t5>` for all tips, code examples and notebooks.

This model was contributed by `patrickvonplaten <https://huggingface.co/patrickvonplaten>`__. The original code can be
found `here
<https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511>`__.