Mirror of https://github.com/huggingface/transformers.git
Revert open-in-colab and add perceiver (#14683)
parent f6b87c5f30
commit 01b8cd5932
@@ -218,6 +218,8 @@
     title: GPT Neo
   - local: model_doc/hubert
     title: Hubert
+  - local: model_doc/perceiver
+    title: Perceiver
   - local: model_doc/pegasus
     title: Pegasus
   - local: model_doc/phobert
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Benchmarks
 
-[[open-in-colab]]
-
 Let's take a look at how 🤗 Transformer models can be benchmarked, along with best practices and the benchmarks that are already available.
 
 A notebook explaining in more detail how to benchmark 🤗 Transformer models can be found [here](https://github.com/huggingface/transformers/tree/master/notebooks/05-benchmark.ipynb).
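The page edited in this hunk documents the library's benchmarking utilities. A minimal sketch of how they are used, assuming the `PyTorchBenchmarkArguments` and `PyTorchBenchmark` classes available at the time of this commit; the model name and input shapes are illustrative:

```py
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

# Measure inference speed and memory for one checkpoint at a few input shapes.
args = PyTorchBenchmarkArguments(
    models=["bert-base-uncased"],
    batch_sizes=[8],
    sequence_lengths=[32, 128],
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()
```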
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # How to fine-tune a model for common downstream tasks
 
-[[open-in-colab]]
-
 This guide will show you how to fine-tune 🤗 Transformers models for common downstream tasks. You will use the 🤗
 Datasets library to quickly load and preprocess the datasets, getting them ready for training with PyTorch and
 TensorFlow.
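The guide changed here leans on 🤗 Datasets for loading and preprocessing. A short sketch of that flow, with `imdb` and `bert-base-uncased` as illustrative choices:

```py
from datasets import load_dataset
from transformers import AutoTokenizer

# Load a small slice of a dataset and tokenize it so it is ready for training.
dataset = load_dataset("imdb", split="train[:1%]")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)
```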
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Multi-lingual models
 
-[[open-in-colab]]
-
 Most of the models available in this library are mono-lingual models (English, Chinese and German). A few multi-lingual
 models are available and have different mechanisms than mono-lingual models. This page details the usage of these
 models.
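One way to illustrate that page's point is that a single multilingual checkpoint handles text in several languages. A small sketch, with `bert-base-multilingual-cased` as an illustrative checkpoint:

```py
from transformers import pipeline

# One multilingual checkpoint serves prompts in many languages.
fill_mask = pipeline("fill-mask", model="bert-base-multilingual-cased")
print(fill_mask("Paris est la [MASK] de la France."))
```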
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Perplexity of fixed-length models
 
-[[open-in-colab]]
-
 Perplexity (PPL) is one of the most common metrics for evaluating language models. Before diving in, we should note
 that the metric applies specifically to classical language models (sometimes called autoregressive or causal language
 models) and is not well defined for masked language models like BERT (see [summary of the models](model_summary)).
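For a causal language model and a tokenized sequence X = (x_1, ..., x_t), the perplexity page defines PPL as the exponentiated average negative log-likelihood:

$$\text{PPL}(X) = \exp\left\{ -\frac{1}{t} \sum_{i=1}^{t} \log p_\theta(x_i \mid x_{<i}) \right\}$$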
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Preprocessing data
 
-[[open-in-colab]]
-
 In this tutorial, we'll explore how to preprocess your data using 🤗 Transformers. The main tool for this is what we
 call a [tokenizer](main_classes/tokenizer). You can build one using the tokenizer class associated with the model
 you would like to use, or directly with the [`AutoTokenizer`] class.
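The preprocessing tutorial centers on [`AutoTokenizer`]. A minimal sketch, with `bert-base-cased` as an illustrative checkpoint:

```py
from transformers import AutoTokenizer

# AutoTokenizer infers the right tokenizer class from the checkpoint name.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoded = tokenizer("Hello, how are you?")
print(encoded["input_ids"])                    # token ids, with special tokens added
print(tokenizer.decode(encoded["input_ids"]))  # roughly "[CLS] Hello, how are you? [SEP]"
```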
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Quick tour
 
-[[open-in-colab]]
-
 Let's have a quick look at the 🤗 Transformers library features. The library downloads pretrained models for Natural
 Language Understanding (NLU) tasks, such as analyzing the sentiment of a text, and Natural Language Generation (NLG),
 such as completing a prompt with new text or translating into another language.
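The quick tour's NLU example boils down to the `pipeline` API. A minimal sketch; the default sentiment model is downloaded automatically:

```py
from transformers import pipeline

# A pipeline downloads a pretrained model and runs inference in one call.
classifier = pipeline("sentiment-analysis")
print(classifier("We are very happy to show you the 🤗 Transformers library."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```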
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Summary of the tasks
 
-[[open-in-colab]]
-
 This page shows the most frequent use-cases when using the library. The models available allow for many different
 configurations and great versatility in use-cases. The simplest ones are presented here, showcasing usage for
 tasks such as question answering, sequence classification, named entity recognition and others.
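For the tasks that page lists, the `pipeline` API is the shortest path. A sketch of extractive question answering, with an illustrative question and context:

```py
from transformers import pipeline

# Extractive question answering: the model selects an answer span from the context.
question_answerer = pipeline("question-answering")
result = question_answerer(
    question="Which tasks does the library cover?",
    context="The library covers question answering, sequence classification and named entity recognition.",
)
print(result["answer"])
```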
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Summary of the tokenizers
 
-[[open-in-colab]]
-
 On this page, we will have a closer look at tokenization.
 
 <Youtube id="VFp38yj8h3A"/>
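The tokenizer summary's central idea is subword tokenization. A small sketch, with `bert-base-uncased` as an illustrative checkpoint; the exact output depends on the vocabulary:

```py
from transformers import AutoTokenizer

# Subword tokenizers split rare words into smaller known pieces.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("I have a new GPU!"))
# e.g. ['i', 'have', 'a', 'new', 'gp', '##u', '!']
```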
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Fine-tuning a pretrained model
 
-[[open-in-colab]]
-
 In this tutorial, we will show you how to fine-tune a pretrained model from the Transformers library. In TensorFlow,
 models can be directly trained using Keras and the `fit` method. In PyTorch, there is no generic training loop so
 the 🤗 Transformers library provides an API with the class [`Trainer`] to let you fine-tune or train
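The training tutorial's PyTorch path goes through [`Trainer`]. A minimal sketch, with `imdb` and `bert-base-uncased` as illustrative choices:

```py
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Tokenize a small slice of a labelled dataset, then fine-tune with Trainer.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
training_args = TrainingArguments(output_dir="test_trainer")
trainer = Trainer(model=model, args=training_args, train_dataset=dataset)
trainer.train()
```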