Revert open-in-colab and add perceiver (#14683)

Sylvain Gugger authored on 2021-12-08 13:52:31 -05:00 (committed by GitHub)
parent f6b87c5f30
commit 01b8cd5932
10 changed files with 2 additions and 18 deletions


@@ -218,6 +218,8 @@
   title: GPT Neo
 - local: model_doc/hubert
   title: Hubert
+- local: model_doc/perceiver
+  title: Perceiver
 - local: model_doc/pegasus
   title: Pegasus
 - local: model_doc/phobert


@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 # Benchmarks

-[[open-in-colab]]
-
 Let's take a look at how 🤗 Transformer models can be benchmarked, best practices, and already available benchmarks.
 A notebook explaining in more detail how to benchmark 🤗 Transformer models can be found [here](https://github.com/huggingface/transformers/tree/master/notebooks/05-benchmark.ipynb).
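
For context on the page edited here, a minimal sketch of the library's benchmarking utilities as documented at the time; the model name, batch sizes and sequence lengths below are arbitrary examples, not values taken from this diff.

```python
# Minimal sketch, assuming PyTorch and the benchmark utilities shipped with transformers at this time.
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=["bert-base-uncased"],   # any model identifier from the Hub
    batch_sizes=[8],
    sequence_lengths=[32, 128],
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()  # prints inference time and memory tables and returns a summary object
```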


@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 # How to fine-tune a model for common downstream tasks

-[[open-in-colab]]
-
 This guide will show you how to fine-tune 🤗 Transformers models for common downstream tasks. You will use the 🤗
 Datasets library to quickly load and preprocess the datasets, getting them ready for training with PyTorch and
 TensorFlow.
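
Since the intro above points to the 🤗 Datasets library for loading and preprocessing, a minimal sketch of that step; the `imdb` dataset and `bert-base-uncased` checkpoint are placeholder choices.

```python
# Minimal sketch: load a dataset and tokenize it so it is ready for training.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Truncate/pad every example to the model's maximum length.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)  # works the same for the PyTorch and TensorFlow paths
```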


@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 # Multi-lingual models

-[[open-in-colab]]
-
 Most of the models available in this library are mono-lingual models (English, Chinese and German). A few multi-lingual
 models are available and have different mechanisms than mono-lingual models. This page details the usage of these
 models.
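
To make the distinction concrete, a minimal sketch that loads one multi-lingual checkpoint through the Auto classes; the checkpoint name is an example, and models such as XLM need extra language embeddings as the page goes on to explain.

```python
# Minimal sketch: one multi-lingual checkpoint covers text in many languages with a shared vocabulary.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

for text in ["Hello, world!", "Bonjour le monde !", "Hallo Welt!"]:
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model(**inputs)
    print(text, "->", outputs.last_hidden_state.shape)
```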


@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 # Perplexity of fixed-length models

-[[open-in-colab]]
-
 Perplexity (PPL) is one of the most common metrics for evaluating language models. Before diving in, we should note
 that the metric applies specifically to classical language models (sometimes called autoregressive or causal language
 models) and is not well defined for masked language models like BERT (see [summary of the models](model_summary)).
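
Because the page is about computing PPL for causal models, a minimal sketch of the calculation on a short string: exponentiate the average token-level cross-entropy the model returns. The GPT-2 checkpoint is just an example; long texts need the sliding-window strategy the page describes.

```python
# Minimal sketch: perplexity = exp(mean negative log-likelihood) of a causal LM over a text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

text = "Perplexity measures how well a language model predicts a sample of text."
input_ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    # Passing labels makes the model return the average cross-entropy over the predicted tokens.
    loss = model(input_ids, labels=input_ids).loss

print(f"Perplexity: {torch.exp(loss).item():.2f}")
```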


@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 # Preprocessing data

-[[open-in-colab]]
-
 In this tutorial, we'll explore how to preprocess your data using 🤗 Transformers. The main tool for this is what we
 call a [tokenizer](main_classes/tokenizer). You can build one using the tokenizer class associated with the model
 you would like to use, or directly with the [`AutoTokenizer`] class.
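
A minimal sketch of the [`AutoTokenizer`] entry point mentioned above; the `bert-base-cased` checkpoint is an arbitrary example.

```python
# Minimal sketch: AutoTokenizer instantiates the right tokenizer class for a given checkpoint.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoded = tokenizer("Hello, I'm a single sentence!")
print(encoded["input_ids"])                    # token ids, with special tokens such as [CLS]/[SEP] added
print(tokenizer.decode(encoded["input_ids"]))  # back to a string, showing what the model actually sees
```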


@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 # Quick tour

-[[open-in-colab]]
-
 Let's have a quick look at the 🤗 Transformers library features. The library downloads pretrained models for Natural
 Language Understanding (NLU) tasks, such as analyzing the sentiment of a text, and Natural Language Generation (NLG),
 such as completing a prompt with new text or translating into another language.
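
The quickest way to try the NLU features described above is the `pipeline` API; a minimal sketch with the default sentiment-analysis model, which is downloaded automatically.

```python
# Minimal sketch: a pipeline bundles a pretrained model and its tokenizer for one task.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("We are very happy to show you the 🤗 Transformers library."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```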


@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 # Summary of the tasks

-[[open-in-colab]]
-
 This page shows the most frequent use-cases when using the library. The models available allow for many different
 configurations and great versatility in use-cases. The simplest ones are presented here, showcasing usage for
 tasks such as question answering, sequence classification, named entity recognition and others.
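
As a concrete instance of the use-cases listed above, a minimal sketch running two task pipelines; the models used are whatever defaults the library selects for each task.

```python
# Minimal sketch: question answering and named entity recognition through task pipelines.
from transformers import pipeline

question_answerer = pipeline("question-answering")
print(question_answerer(
    question="What does the library provide?",
    context="The 🤗 Transformers library provides pretrained models for many NLP tasks.",
))

ner = pipeline("ner")
print(ner("Hugging Face is based in New York City and Paris."))
```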


@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 # Summary of the tokenizers

-[[open-in-colab]]
-
 On this page, we will have a closer look at tokenization.

 <Youtube id="VFp38yj8h3A"/>
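
To ground the discussion, a minimal sketch showing how a trained subword tokenizer splits text; the `bert-base-uncased` checkpoint (a WordPiece tokenizer) is an arbitrary example.

```python
# Minimal sketch: a WordPiece tokenizer keeps frequent words whole and splits rarer ones
# into pieces prefixed with "##".
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("Tokenization handles uncommon words gracefully."))
```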


@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 # Fine-tuning a pretrained model

-[[open-in-colab]]
-
 In this tutorial, we will show you how to fine-tune a pretrained model from the Transformers library. In TensorFlow,
 models can be directly trained using Keras and the `fit` method. In PyTorch, there is no generic training loop so
 the 🤗 Transformers library provides an API with the class [`Trainer`] to let you fine-tune or train