From 01b8cd59324565a713a736fe77bc2bd9d60494cb Mon Sep 17 00:00:00 2001
From: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Date: Wed, 8 Dec 2021 13:52:31 -0500
Subject: [PATCH] Revert open-in-colab and add perceiver (#14683)

---
 docs/source/_toctree.yml          | 2 ++
 docs/source/benchmarks.mdx        | 2 --
 docs/source/custom_datasets.mdx   | 2 --
 docs/source/multilingual.mdx      | 2 --
 docs/source/perplexity.mdx        | 2 --
 docs/source/preprocessing.mdx     | 2 --
 docs/source/quicktour.mdx         | 2 --
 docs/source/task_summary.mdx      | 2 --
 docs/source/tokenizer_summary.mdx | 2 --
 docs/source/training.mdx          | 2 --
 10 files changed, 2 insertions(+), 18 deletions(-)

diff --git a/docs/source/_toctree.yml b/docs/source/_toctree.yml
index f101b7601e7..0b36829404f 100644
--- a/docs/source/_toctree.yml
+++ b/docs/source/_toctree.yml
@@ -218,6 +218,8 @@
       title: GPT Neo
     - local: model_doc/hubert
       title: Hubert
+    - local: model_doc/perceiver
+      title: Perceiver
     - local: model_doc/pegasus
       title: Pegasus
     - local: model_doc/phobert
diff --git a/docs/source/benchmarks.mdx b/docs/source/benchmarks.mdx
index 4b32c5c629e..8181d19e2d1 100644
--- a/docs/source/benchmarks.mdx
+++ b/docs/source/benchmarks.mdx
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Benchmarks
 
-[[open-in-colab]]
-
 Let's take a look at how 🤗 Transformer models can be benchmarked, best practices, and already available benchmarks.
 
 A notebook explaining in more detail how to benchmark 🤗 Transformer models can be found [here](https://github.com/huggingface/transformers/tree/master/notebooks/05-benchmark.ipynb).
diff --git a/docs/source/custom_datasets.mdx b/docs/source/custom_datasets.mdx
index 4ffcbbcbf91..af6c2e25f9d 100644
--- a/docs/source/custom_datasets.mdx
+++ b/docs/source/custom_datasets.mdx
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # How to fine-tune a model for common downstream tasks
 
-[[open-in-colab]]
-
 This guide will show you how to fine-tune 🤗 Transformers models for common downstream tasks. You will use the 🤗
 Datasets library to quickly load and preprocess the datasets, getting them ready for training with PyTorch and
 TensorFlow.
diff --git a/docs/source/multilingual.mdx b/docs/source/multilingual.mdx
index 49b366b8283..a5c55586d6f 100644
--- a/docs/source/multilingual.mdx
+++ b/docs/source/multilingual.mdx
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Multi-lingual models
 
-[[open-in-colab]]
-
 Most of the models available in this library are mono-lingual models (English, Chinese and German). A few
 multi-lingual models are available and have a different mechanisms than mono-lingual models. This page details the
 usage of these models.
diff --git a/docs/source/perplexity.mdx b/docs/source/perplexity.mdx
index 98a7bdd95d4..0a33b28fbef 100644
--- a/docs/source/perplexity.mdx
+++ b/docs/source/perplexity.mdx
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Perplexity of fixed-length models
 
-[[open-in-colab]]
-
 Perplexity (PPL) is one of the most common metrics for evaluating language models. Before diving in, we should note
 that the metric applies specifically to classical language models (sometimes called autoregressive or causal language
 models) and is not well defined for masked language models like BERT (see [summary of the models](model_summary)).
diff --git a/docs/source/preprocessing.mdx b/docs/source/preprocessing.mdx
index b53bb00731d..ee072af5a19 100644
--- a/docs/source/preprocessing.mdx
+++ b/docs/source/preprocessing.mdx
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Preprocessing data
 
-[[open-in-colab]]
-
 In this tutorial, we'll explore how to preprocess your data using 🤗 Transformers. The main tool for this is what we
 call a [tokenizer](main_classes/tokenizer). You can build one using the tokenizer class associated to the model you
 would like to use, or directly with the [`AutoTokenizer`] class.
diff --git a/docs/source/quicktour.mdx b/docs/source/quicktour.mdx
index 7c2862c74ed..1c282bcd220 100644
--- a/docs/source/quicktour.mdx
+++ b/docs/source/quicktour.mdx
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Quick tour
 
-[[open-in-colab]]
-
 Let's have a quick look at the 🤗 Transformers library features. The library downloads pretrained models for Natural
 Language Understanding (NLU) tasks, such as analyzing the sentiment of a text, and Natural Language Generation (NLG),
 such as completing a prompt with new text or translating in another language.
diff --git a/docs/source/task_summary.mdx b/docs/source/task_summary.mdx
index 02b0f314baa..bdad50b7bb9 100644
--- a/docs/source/task_summary.mdx
+++ b/docs/source/task_summary.mdx
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Summary of the tasks
 
-[[open-in-colab]]
-
 This page shows the most frequent use-cases when using the library. The models available allow for many different
 configurations and a great versatility in use-cases. The most simple ones are presented here, showcasing usage for
 tasks such as question answering, sequence classification, named entity recognition and others.
diff --git a/docs/source/tokenizer_summary.mdx b/docs/source/tokenizer_summary.mdx
index db0f9d95dc5..1fcbed269d3 100644
--- a/docs/source/tokenizer_summary.mdx
+++ b/docs/source/tokenizer_summary.mdx
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Summary of the tokenizers
 
-[[open-in-colab]]
-
 On this page, we will have a closer look at tokenization.
 
diff --git a/docs/source/training.mdx b/docs/source/training.mdx
index 805323df82f..92dd1f61064 100644
--- a/docs/source/training.mdx
+++ b/docs/source/training.mdx
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Fine-tuning a pretrained model
 
-[[open-in-colab]]
-
 In this tutorial, we will show you how to fine-tune a pretrained model from the Transformers library. In TensorFlow,
 models can be directly trained using Keras and the `fit` method. In PyTorch, there is no generic training loop so the
 🤗 Transformers library provides an API with the class [`Trainer`] to let you fine-tune or train