Adding task guides to resources (#21704)
* added resources: links to task guides that support these models
* minor polishing
* conflict resolved
* link fix
* Update docs/source/en/model_doc/vision-encoder-decoder.mdx

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
This commit is contained in:
parent 03aaac3502
commit 78a53d59cb
@ -56,6 +56,14 @@ Next sentence prediction is replaced by a sentence ordering prediction: in the i
This model was contributed by [lysandre](https://huggingface.co/lysandre). The jax version of this model was contributed by
[kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/google-research/ALBERT).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## AlbertConfig

[[autodoc]] AlbertConfig
@ -47,6 +47,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
|
||||
- A notebook illustrating inference with AST for audio classification can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/AST).
|
||||
- [`ASTForAudioClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb).
|
||||
- See also: [Audio classification](./tasks/audio_classification).
|
||||
|
||||
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
||||
|
@ -109,6 +109,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
- [`TFBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb).
|
||||
- [`FlaxBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/summarization).
|
||||
- [Summarization](https://huggingface.co/course/chapter7/5?fw=pt#summarization) chapter of the 🤗 Hugging Face course.
|
||||
- [Summarization task guide](./tasks/summarization)
|
||||
|
||||
<PipelineTag pipeline="fill-mask"/>
|
||||
|
||||
@ -116,12 +117,19 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
- [`TFBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
|
||||
- [`FlaxBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb).
|
||||
- [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course.
|
||||
- [Masked language modeling task guide](./tasks/masked_language_modeling)
|
||||
|
||||
<PipelineTag pipeline="translation"/>
|
||||
|
||||
- A notebook on how to [finetune mBART using Seq2SeqTrainer for Hindi to English translation](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb). 🌎
|
||||
- [`BartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb).
|
||||
- [`TFBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/translation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb).
|
||||
- [Translation task guide](./tasks/translation)
|
||||
|
||||
See also:
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
- [Causal language modeling task guide](./tasks/language_modeling)
|
||||
|
||||
## BartConfig
|
||||
|
||||
|
@ -74,6 +74,10 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
<PipelineTag pipeline="image-classification"/>
|
||||
|
||||
- [`BeitForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
||||
- See also: [Image classification task guide](./tasks/image_classification)
|
||||
|
||||
**Semantic segmentation**
|
||||
- [Semantic segmentation task guide](./tasks/semantic_segmentation)
|
||||
|
||||
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
||||
|
@ -72,6 +72,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
- [`BertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
|
||||
- [`TFBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb).
|
||||
- [`FlaxBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb).
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
|
||||
<PipelineTag pipeline="token-classification"/>
|
||||
|
||||
@ -81,6 +82,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
- [`TFBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).
|
||||
- [`FlaxBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification).
|
||||
- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course.
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
|
||||
<PipelineTag pipeline="fill-mask"/>
|
||||
|
||||
@ -88,6 +90,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
- [`TFBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
|
||||
- [`FlaxBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb).
|
||||
- [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course.
|
||||
- [Masked language modeling task guide](./tasks/masked_language_modeling)
|
||||
|
||||
<PipelineTag pipeline="question-answering"/>
|
||||
|
||||
@ -95,10 +98,12 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
- [`TFBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb).
|
||||
- [`FlaxBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering).
|
||||
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course.
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
|
||||
**Multiple choice**
|
||||
- [`BertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb).
|
||||
- [`TFBertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb).
|
||||
- [Multiple choice task guide](./tasks/multiple_choice)
|
||||
|
||||
⚡️ **Inference**
|
||||
- A blog post on how to [Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia](https://huggingface.co/blog/bert-inferentia-sagemaker).
|
||||
|
@ -52,6 +52,15 @@ Tips:
This model was contributed by [vasudevgupta](https://huggingface.co/vasudevgupta). The original code can be found
[here](https://github.com/google-research/bigbird).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Causal language modeling task guide](./tasks/language_modeling)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## BigBirdConfig

[[autodoc]] BigBirdConfig
@ -52,6 +52,14 @@ Tips:

The original code can be found [here](https://github.com/google-research/bigbird).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Question answering task guide](./tasks/question_answering)
- [Causal language modeling task guide](./tasks/language_modeling)
- [Translation task guide](./tasks/translation)
- [Summarization task guide](./tasks/summarization)

## BigBirdPegasusConfig

[[autodoc]] BigBirdPegasusConfig
@ -29,6 +29,10 @@ Tips:

This model was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/microsoft/BioGPT).

## Documentation resources

- [Causal language modeling task guide](./tasks/language_modeling)

## BioGptConfig

[[autodoc]] BioGptConfig
@ -37,6 +37,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
<PipelineTag pipeline="image-classification"/>
|
||||
|
||||
- [`BitForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
||||
- See also: [Image classification task guide](./tasks/image_classification)
|
||||
|
||||
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
||||
|
@ -42,7 +42,13 @@ Tips:
the left.

This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The authors' code can be
found [here](https://github.com/facebookresearch/ParlAI).

## Documentation resources

- [Causal language modeling task guide](./tasks/language_modeling)
- [Translation task guide](./tasks/translation)
- [Summarization task guide](./tasks/summarization)

## BlenderbotSmallConfig
@ -66,6 +66,12 @@ Here is an example of model usage:
["<s> That's unfortunate. Are they trying to lose weight or are they just trying to be healthier?</s>"]
```
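
As a rough sketch of the usage pattern shown above (assuming the `facebook/blenderbot-400M-distill` checkpoint; other Blenderbot checkpoints follow the same pattern), a single conversational turn might look like this:

```python
from transformers import BlenderbotForConditionalGeneration, BlenderbotTokenizer

name = "facebook/blenderbot-400M-distill"  # assumed checkpoint
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

utterance = "My friends are cool but they eat too many carbs."
inputs = tokenizer([utterance], return_tensors="pt")
# Generate the bot's reply for this turn.
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```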

## Documentation resources

- [Causal language modeling task guide](./tasks/language_modeling)
- [Translation task guide](./tasks/translation)
- [Summarization task guide](./tasks/summarization)

## BlenderbotConfig

[[autodoc]] BlenderbotConfig
@ -27,13 +27,19 @@ Several smaller versions of the models have been trained on the same dataset. BL
|
||||
|
||||
## Resources
|
||||
|
||||
|
||||
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLOOM. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
||||
<PipelineTag pipeline="text-generation"/>
|
||||
|
||||
- [`BloomForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
|
||||
|
||||
See also:
|
||||
- [Causal language modeling task guide](./tasks/language_modeling)
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
|
||||
|
||||
⚡️ Inference
|
||||
- A blog on [Optimization story: Bloom inference](https://huggingface.co/blog/bloom-inference-optimization).
|
||||
- A blog on [Incredibly Fast BLOOM Inference with DeepSpeed and Accelerate](https://huggingface.co/blog/bloom-inference-pytorch-scripts).
|
||||
|
@ -37,6 +37,15 @@ Tips:

This model was contributed by [camembert](https://huggingface.co/camembert). The original code can be found [here](https://camembert-model.fr/).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Causal language modeling task guide](./tasks/language_modeling)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## CamembertConfig

[[autodoc]] CamembertConfig
@ -92,6 +92,13 @@ sequences to the same length):
>>> sequence_output = outputs.last_hidden_state
```
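
Because CANINE operates directly on characters, you can also skip the tokenizer and feed Unicode code points yourself. A minimal sketch, assuming the `google/canine-c` checkpoint:

```python
import torch
from transformers import CanineModel

model = CanineModel.from_pretrained("google/canine-c")  # assumed checkpoint

text = "hello world"
# CANINE consumes raw characters: map each character to its Unicode code point.
input_ids = torch.tensor([[ord(char) for char in text]])
outputs = model(input_ids)
sequence_output = outputs.last_hidden_state  # one hidden state per character
print(sequence_output.shape)
```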

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Multiple choice task guide](./tasks/multiple_choice)

## CANINE specific outputs

[[autodoc]] models.canine.modeling_canine.CanineModelOutputWithPooling
@ -56,6 +56,10 @@ def hello_world():
hello_world()
```
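
A minimal completion sketch for a prompt like the one above, assuming the `Salesforce/codegen-350M-mono` checkpoint (other CodeGen sizes follow the same pattern):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Salesforce/codegen-350M-mono"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = "def hello_world():"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
# Let the model complete the function body.
generated_ids = model.generate(input_ids, max_length=64)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```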

## Documentation resources

- [Causal language modeling task guide](./tasks/language_modeling)

## CodeGenConfig

[[autodoc]] CodeGenConfig
@ -27,6 +27,9 @@ alt="drawing" width="600"/>

This model was contributed by [DepuMeng](https://huggingface.co/DepuMeng). The original code can be found [here](https://github.com/Atten4Vis/ConditionalDETR).

## Documentation resources

- [Object detection task guide](./tasks/object_detection)

## ConditionalDetrConfig
@ -45,6 +45,14 @@ ConvBERT training tips are similar to those of BERT.
This model was contributed by [abhishek](https://huggingface.co/abhishek). The original implementation can be found
here: https://github.com/yitu-opensource/ConvBert

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## ConvBertConfig

[[autodoc]] ConvBertConfig
@ -47,6 +47,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
<PipelineTag pipeline="image-classification"/>
|
||||
|
||||
- [`ConvNextForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
||||
- See also: [Image classification task guide](./tasks/image_classification)
|
||||
|
||||
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
||||
|
@ -55,6 +55,10 @@ Tips:
This model was contributed by [keskarnitishr](https://huggingface.co/keskarnitishr). The original code can be found
[here](https://github.com/salesforce/ctrl).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Causal language modeling task guide](./tasks/language_modeling)

## CTRLConfig
@ -45,6 +45,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
<PipelineTag pipeline="image-classification"/>
|
||||
|
||||
- [`CvtForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
||||
- See also: [Image classification task guide](./tasks/image_classification)
|
||||
|
||||
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
||||
|
@ -54,6 +54,22 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
- [`Data2VecVisionForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
||||
- To fine-tune [`TFData2VecVisionForImageClassification`] on a custom dataset, see [this notebook](https://colab.research.google.com/github/sayakpaul/TF-2.0-Hacks/blob/master/data2vec_vision_image_classification.ipynb).
|
||||
|
||||
**Data2VecText documentation resources**
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
- [Causal language modeling task guide](./tasks/language_modeling)
|
||||
- [Masked language modeling task guide](./tasks/masked_language_modeling)
|
||||
- [Multiple choice task guide](./tasks/multiple_choice)
|
||||
|
||||
**Data2VecAudio documentation resources**
|
||||
- [Audio classification task guide](./tasks/audio_classification)
|
||||
- [Automatic speech recognition task guide](./tasks/asr)
|
||||
|
||||
**Data2VecVision documentation resources**
|
||||
- [Image classification](./tasks/image_classification)
|
||||
- [Semantic segmentation](./tasks/semantic_segmentation)
|
||||
|
||||
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
||||
## Data2VecTextConfig
|
||||
|
@ -58,6 +58,13 @@ New in v2:
This model was contributed by [DeBERTa](https://huggingface.co/DeBERTa). The TF 2.0 implementation of this model was
contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/microsoft/DeBERTa).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## DebertaV2Config
@ -48,6 +48,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
- A blog post on [Supercharged Customer Service with Machine Learning](https://huggingface.co/blog/supercharge-customer-service-with-machine-learning) with DeBERTa.
|
||||
- [`DebertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
|
||||
- [`TFDebertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb).
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
|
||||
<PipelineTag pipeline="token-classification" />
|
||||
|
||||
@ -55,18 +56,21 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
- [`TFDebertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).
|
||||
- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course.
|
||||
- [Byte-Pair Encoding tokenization](https://huggingface.co/course/chapter6/5?fw=pt) chapter of the 🤗 Hugging Face Course.
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
|
||||
<PipelineTag pipeline="fill-mask"/>
|
||||
|
||||
- [`DebertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
|
||||
- [`TFDebertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
|
||||
- [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course.
|
||||
- [Masked language modeling task guide](./tasks/masked_language_modeling)
|
||||
|
||||
<PipelineTag pipeline="question-answering"/>
|
||||
|
||||
- [`DebertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb).
|
||||
- [`TFDebertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb).
|
||||
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course.
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
|
||||
## DebertaConfig
|
||||
|
||||
|
@ -40,6 +40,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
<PipelineTag pipeline="object-detection"/>
|
||||
|
||||
- Demo notebooks regarding inference + fine-tuning on a custom dataset for [`DeformableDetrForObjectDetection`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Deformable-DETR).
|
||||
- See also: [Object detection task guide](./tasks/object_detection).
|
||||
|
||||
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
||||
|
@ -78,6 +78,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
<PipelineTag pipeline="image-classification"/>
|
||||
|
||||
- [`DeiTForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
||||
- See also: [Image classification task guide](./tasks/image_classification)
|
||||
|
||||
Besides that:
|
||||
|
||||
|
@ -39,6 +39,7 @@ The original code can be found [here](https://github.com/jozhang97/DETA).
|
||||
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETA.
|
||||
|
||||
- Demo notebooks for DETA can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DETA).
|
||||
- See also: [Object detection task guide](./tasks/object_detection)
|
||||
|
||||
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
||||
|
@ -157,6 +157,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
<PipelineTag pipeline="object-detection"/>
|
||||
|
||||
- All example notebooks illustrating fine-tuning [`DetrForObjectDetection`] and [`DetrForSegmentation`] on a custom dataset can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DETR).
|
||||
- See also: [Object detection task guide](./tasks/object_detection)
|
||||
|
||||
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
||||
|
@ -68,6 +68,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
<PipelineTag pipeline="image-classification"/>
|
||||
|
||||
- [`DinatForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
||||
- See also: [Image classification task guide](./tasks/image_classification)
|
||||
|
||||
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
||||
|
@ -75,6 +75,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
- [`DistilBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
|
||||
- [`TFDistilBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb).
|
||||
- [`FlaxDistilBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb).
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
|
||||
|
||||
<PipelineTag pipeline="token-classification"/>
|
||||
@ -83,6 +84,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
- [`TFDistilBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).
|
||||
- [`FlaxDistilBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification).
|
||||
- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course.
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
|
||||
|
||||
<PipelineTag pipeline="fill-mask"/>
|
||||
@ -91,6 +93,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
- [`TFDistilBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
|
||||
- [`FlaxDistilBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb).
|
||||
- [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course.
|
||||
- [Masked language modeling task guide](./tasks/masked_language_modeling)
|
||||
|
||||
<PipelineTag pipeline="question-answering"/>
|
||||
|
||||
@ -98,10 +101,12 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
- [`TFDistilBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb).
|
||||
- [`FlaxDistilBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering).
|
||||
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course.
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
|
||||
**Multiple choice**
|
||||
- [`DistilBertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb).
|
||||
- [`TFDistilBertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb).
|
||||
- [Multiple choice task guide](./tasks/multiple_choice)
|
||||
|
||||
⚗️ Optimization
|
||||
|
||||
|
@ -33,6 +33,7 @@ This model was contributed by [nielsr](https://huggingface.co/nielsr). The origi
|
||||
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DPT.
|
||||
|
||||
- Demo notebooks for [`DPTForDepthEstimation`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DPT).
|
||||
- See also: [Semantic segmentation task guide](./tasks/semantic_segmentation)
|
||||
|
||||
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
||||
|
@ -39,6 +39,9 @@ reach extremely low latency on mobile devices while maintaining high performance
This model was contributed by [novice03](https://huggingface.co/novice03) and [Bearnardd](https://huggingface.co/Bearnardd).
The original code can be found [here](https://github.com/snap-research/EfficientFormer).

## Documentation resources

- [Image classification task guide](./tasks/image_classification)

## EfficientFormerConfig
@ -64,6 +64,14 @@ Tips:
|
||||
|
||||
This model was contributed by [lysandre](https://huggingface.co/lysandre). The original code can be found [here](https://github.com/google-research/electra).
|
||||
|
||||
## Documentation resources
|
||||
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
- [Causal language modeling task guide](./tasks/language_modeling)
|
||||
- [Masked language modeling task guide](./tasks/masked_language_modeling)
|
||||
- [Multiple choice task guide](./tasks/multiple_choice)
|
||||
|
||||
## ElectraConfig
|
||||
|
||||
|
@ -47,6 +47,15 @@ You can find all the supported models from huggingface's model hub: [huggingface
|
||||
repo: [PaddleNLP](https://paddlenlp.readthedocs.io/zh/latest/model_zoo/transformers/ERNIE/contents.html)
|
||||
and [ERNIE](https://github.com/PaddlePaddle/ERNIE/blob/repro).
|
||||
|
||||
## Documentation resources
|
||||
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
- [Causal language modeling task guide](./tasks/language_modeling)
|
||||
- [Masked language modeling task guide](./tasks/masked_language_modeling)
|
||||
- [Multiple choice task guide](./tasks/multiple_choice)
|
||||
|
||||
## ErnieConfig
|
||||
|
||||
[[autodoc]] ErnieConfig
|
||||
|
@ -32,6 +32,13 @@ Tips:
|
||||
|
||||
This model was contributed by [Susnato Dhar](https://huggingface.co/susnato). The original code can be found [here](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/paddlenlp/transformers/ernie_m).
|
||||
|
||||
## Documentation resources
|
||||
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
- [Multiple choice task guide](./tasks/multiple_choice)
|
||||
|
||||
## ErnieMConfig
|
||||
|
||||
[[autodoc]] ErnieMConfig
|
||||
|
@ -86,6 +86,12 @@ help throughout the process!
|
||||
The HuggingFace port of ESMFold uses portions of the [openfold](https://github.com/aqlaboratory/openfold) library.
|
||||
The `openfold` library is licensed under the Apache License 2.0.
|
||||
|
||||
## Documentation resources
|
||||
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
- [Masked language modeling task guide](./tasks/masked_language_modeling)
|
||||
|
||||
## EsmConfig
|
||||
|
||||
[[autodoc]] EsmConfig
|
||||
|
@ -46,7 +46,13 @@ This model was contributed by [formiel](https://huggingface.co/formiel). The ori
|
||||
Tips:
|
||||
- Like RoBERTa, without the sentence ordering prediction (so just trained on the MLM objective).
|
||||
|
||||
## Documentation resources
|
||||
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
- [Masked language modeling task guide](./tasks/masked_language_modeling)
|
||||
- [Multiple choice task guide](./tasks/multiple_choice)
|
||||
|
||||
## FlaubertConfig
|
||||
|
||||
|
@ -41,6 +41,14 @@ Tips on usage:
|
||||
|
||||
This model was contributed by [gchhablani](https://huggingface.co/gchhablani). The original code can be found [here](https://github.com/google-research/google-research/tree/master/f_net).
|
||||
|
||||
## Documentation resources
|
||||
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
- [Masked language modeling task guide](./tasks/masked_language_modeling)
|
||||
- [Multiple choice task guide](./tasks/multiple_choice)
|
||||
|
||||
## FNetConfig
|
||||
|
||||
[[autodoc]] FNetConfig
|
||||
|
@ -60,6 +60,14 @@ Tips:
|
||||
|
||||
This model was contributed by [sgugger](https://huggingface.co/sgugger). The original code can be found [here](https://github.com/laiguokun/Funnel-Transformer).
|
||||
|
||||
## Documentation resources
|
||||
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
- [Masked language modeling task guide](./tasks/masked_language_modeling)
|
||||
- [Multiple choice task guide](./tasks/multiple_choice)
|
||||
|
||||
|
||||
## FunnelConfig
|
||||
|
||||
|
@ -41,6 +41,7 @@ The original code can be found [here](https://github.com/microsoft/GenerativeIma
|
||||
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GIT.
|
||||
|
||||
- Demo notebooks regarding inference + fine-tuning GIT on custom data can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/GIT).
|
||||
- See also: [Causal language modeling task guide](./tasks/language_modeling)
|
||||
|
||||
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
|
||||
The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
@ -48,6 +48,12 @@ Example usage:
|
||||
Träd är fina för att de är färgstarka. Men ibland är det fint
|
||||
```
|
||||
|
||||
## Documentation resources
|
||||
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
- [Causal language modeling task guide](./tasks/language_modeling)
|
||||
|
||||
## GPTSw3Tokenizer
|
||||
|
||||
[[autodoc]] GPTSw3Tokenizer
|
||||
|
@ -73,7 +73,9 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
- [`GPT2LMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling), [text generation example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation), and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
|
||||
- [`TFGPT2LMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
|
||||
- [`FlaxGPT2LMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/causal_language_modeling_flax.ipynb).
|
||||
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
- [Causal language modeling task guide](./tasks/language_modeling)
|
||||
|
||||
## GPT2Config
|
||||
|
||||
|
@ -50,6 +50,11 @@ The `generate()` method can be used to generate text using GPT Neo model.
>>> gen_text = tokenizer.batch_decode(gen_tokens)[0]
```
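
For quick experiments, the same generation can also be run through the `text-generation` pipeline. A minimal sketch, assuming the `EleutherAI/gpt-neo-1.3B` checkpoint:

```python
from transformers import pipeline

# Assumed checkpoint; the smaller EleutherAI/gpt-neo-125M also works for testing.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
outputs = generator(
    "In a shocking finding, scientists discovered a herd of unicorns",
    do_sample=True,
    max_length=50,
)
print(outputs[0]["generated_text"])
```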

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Causal language modeling task guide](./tasks/language_modeling)

## GPTNeoConfig

[[autodoc]] GPTNeoConfig
@ -57,6 +57,10 @@ The `generate()` method can be used to generate text using GPT Neo model.
>>> gen_text = tokenizer.batch_decode(gen_tokens)[0]
```

## Documentation resources

- [Causal language modeling task guide](./tasks/language_modeling)

## GPTNeoXConfig

[[autodoc]] GPTNeoXConfig
@ -47,6 +47,10 @@ The `generate()` method can be used to generate text using GPT NeoX Japanese mod
|
||||
人とAIが協調するためには、AIと人が共存し、AIを正しく理解する必要があります。
|
||||
```
|
||||
|
||||
## Documentation resources
|
||||
|
||||
- [Causal language modeling task guide](./tasks/language_modeling)
|
||||
|
||||
## GPTNeoXJapaneseConfig
|
||||
|
||||
[[autodoc]] GPTNeoXJapaneseConfig
|
||||
|
@ -122,6 +122,11 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
- [`TFGPTJForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
|
||||
- [`FlaxGPTJForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/causal_language_modeling_flax.ipynb).
|
||||
|
||||
**Documentation resources**
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
- [Causal language modeling task guide](./tasks/language_modeling)
|
||||
|
||||
## GPTJConfig
|
||||
|
||||
[[autodoc]] GPTJConfig
|
||||
|
@ -40,6 +40,10 @@ Tips:

This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten).

## Documentation resources

- [Audio classification task guide](./tasks/audio_classification)
- [Automatic speech recognition task guide](./tasks/asr)

## HubertConfig
@ -36,6 +36,13 @@ been open-sourced.*

This model was contributed by [kssteven](https://huggingface.co/kssteven). The original code can be found [here](https://github.com/kssteven418/I-BERT).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## IBertConfig
@ -77,6 +77,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
|
||||
- Demo notebooks for ImageGPT can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ImageGPT).
|
||||
- [`ImageGPTForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
||||
- See also: [Image classification task guide](./tasks/image_classification)
|
||||
|
||||
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
||||
|
@ -88,13 +88,20 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
|
||||
- A notebook on how to [fine-tune LayoutLM on the FUNSD dataset with image embeddings](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Add_image_embeddings_to_LayoutLM.ipynb).
|
||||
|
||||
- See also: [Document question answering task guide](./tasks/document_question_answering)
|
||||
|
||||
<PipelineTag pipeline="text-classification" />
|
||||
|
||||
- A notebook on how to [fine-tune LayoutLM for sequence classification on the RVL-CDIP dataset](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb).
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
|
||||
<PipelineTag pipeline="token-classification" />
|
||||
|
||||
- A notebook on how to [fine-tune LayoutLM for token classification on the FUNSD dataset](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb).
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
|
||||
**Other resources**
|
||||
- [Masked language modeling task guide](./tasks/masked_language_modeling)
|
||||
|
||||
🚀 Deploy
|
||||
|
||||
|
@ -266,6 +266,13 @@ print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image'])
```
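
An encoding like the one above typically comes out of [`LayoutLMv2Processor`]. A minimal sketch, assuming the `microsoft/layoutlmv2-base-uncased` checkpoint and a placeholder `document.png` scan, with Tesseract installed for the default OCR step:

```python
from PIL import Image
from transformers import LayoutLMv2Processor

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")  # assumed checkpoint

# Placeholder file name; any document image works.
image = Image.open("document.png").convert("RGB")
# The processor runs OCR on the image, tokenizes the words and normalizes the bounding boxes.
encoding = processor(image, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image'])
```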

## Documentation resources

- [Document question answering task guide](./tasks/document_question_answering)
- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)

## LayoutLMv2Config

[[autodoc]] LayoutLMv2Config
@ -52,17 +52,22 @@ LayoutLMv3 is nearly identical to LayoutLMv2, so we've also included LayoutLMv2
|
||||
<PipelineTag pipeline="text-classification"/>
|
||||
|
||||
- [`LayoutLMv2ForSequenceClassification`] is supported by this [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/RVL-CDIP/Fine_tuning_LayoutLMv2ForSequenceClassification_on_RVL_CDIP.ipynb).
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
|
||||
<PipelineTag pipeline="token-classification"/>
|
||||
|
||||
- [`LayoutLMv3ForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/layoutlmv3) and [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb).
|
||||
- A [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/Inference_with_LayoutLMv2ForTokenClassification.ipynb) for how to perform inference with [`LayoutLMv2ForTokenClassification`] and a [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/True_inference_with_LayoutLMv2ForTokenClassification_%2B_Gradio_demo.ipynb) for how to perform inference when no labels are available with [`LayoutLMv2ForTokenClassification`].
|
||||
- A [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/Fine_tuning_LayoutLMv2ForTokenClassification_on_FUNSD_using_HuggingFace_Trainer.ipynb) for how to finetune [`LayoutLMv2ForTokenClassification`] with the 🤗 Trainer.
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
|
||||
<PipelineTag pipeline="question-answering"/>
|
||||
|
||||
- [`LayoutLMv2ForQuestionAnswering`] is supported by this [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/DocVQA/Fine_tuning_LayoutLMv2ForQuestionAnswering_on_DocVQA.ipynb).
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
|
||||
**Document question answering**
|
||||
- [Document question answering task guide](./tasks/document_question_answering)
|
||||
|
||||
## LayoutLMv3Config
|
||||
|
||||
|
@ -55,6 +55,12 @@ Tips:
|
||||
|
||||
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten).
|
||||
|
||||
## Documentation resources
|
||||
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
- [Translation task guide](./tasks/translation)
|
||||
- [Summarization task guide](./tasks/summarization)
|
||||
|
||||
## LEDConfig
|
||||
|
||||
|
@ -68,6 +68,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
<PipelineTag pipeline="image-classification"/>
|
||||
|
||||
- [`LevitForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
||||
- See also: [Image classification task guide](./tasks/image_classification)
|
||||
|
||||
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
||||
|
@ -52,6 +52,11 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
|
||||
|
||||
- Demo notebooks for LiLT can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LiLT).
|
||||
|
||||
**Documentation resources**
|
||||
- [Text classification task guide](./tasks/sequence_classification)
|
||||
- [Token classification task guide](./tasks/token_classification)
|
||||
- [Question answering task guide](./tasks/question_answering)
|
||||
|
||||
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
||||
|
||||
## LiltConfig
|
||||
|
@ -89,6 +89,14 @@ mlm_labels = tokenizer.encode("This is a sentence from the training data", retur
loss = model(input_ids, labels=input_ids, masked_lm_labels=mlm_labels)[0]
```
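
With the current API, a masked-token prediction can be sketched as follows (assuming the `allenai/longformer-base-4096` checkpoint), using the model's `logits` rather than the older loss-style call shown above:

```python
import torch
from transformers import LongformerForMaskedLM, LongformerTokenizer

name = "allenai/longformer-base-4096"  # assumed checkpoint
tokenizer = LongformerTokenizer.from_pretrained(name)
model = LongformerForMaskedLM.from_pretrained(name)

text = "Paris is the <mask> of France."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring token at the masked position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```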

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## LongformerConfig

[[autodoc]] LongformerConfig
@ -86,6 +86,10 @@ The complexity of this mechanism is `O(l(r + l/k))`.
This model was contributed by [stancld](https://huggingface.co/stancld).
The original code can be found [here](https://github.com/google-research/longt5).

## Documentation resources

- [Translation task guide](./tasks/translation)
- [Summarization task guide](./tasks/summarization)

## LongT5Config
@ -117,6 +117,13 @@ Example:

This model was contributed by [ikuyamada](https://huggingface.co/ikuyamada) and [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/studio-ousia/luke).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## LukeConfig
@ -51,6 +51,9 @@ Tips:

This model was contributed by [eltoto1219](https://huggingface.co/eltoto1219). The original code can be found [here](https://github.com/airsplay/lxmert).

## Documentation resources

- [Question answering task guide](./tasks/question_answering)

## LxmertConfig
@ -91,6 +91,11 @@ loss = model(**model_inputs).loss # forward pass
"Life is like a box of chocolate."
```
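
A minimal translation sketch, assuming the `facebook/m2m100_418M` checkpoint; M2M100 selects the output language by forcing the first generated token:

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

name = "facebook/m2m100_418M"  # assumed checkpoint
tokenizer = M2M100Tokenizer.from_pretrained(name)
model = M2M100ForConditionalGeneration.from_pretrained(name)

tokenizer.src_lang = "en"
encoded = tokenizer("Life is like a box of chocolates.", return_tensors="pt")
# Force the decoder to start with the French language token.
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("fr"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```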

## Documentation resources

- [Translation task guide](./tasks/translation)
- [Summarization task guide](./tasks/summarization)

## M2M100Config

[[autodoc]] M2M100Config
@ -161,6 +161,12 @@ Example of translating english to many romance languages, using old-style 2 char
'Y esto al español']
```
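
The example above relies on the multilingual checkpoints' language-prefix tokens. A minimal sketch, assuming the `Helsinki-NLP/opus-mt-en-ROMANCE` checkpoint:

```python
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-ROMANCE"  # assumed checkpoint
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

# Multilingual Marian models read the target language from a prefix token in the source text.
src_texts = [">>fr<< this is a sentence in english that we want to translate to french"]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```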

## Documentation resources

- [Translation task guide](./tasks/translation)
- [Summarization task guide](./tasks/summarization)
- [Causal language modeling task guide](./tasks/language_modeling)

## MarianConfig

[[autodoc]] MarianConfig
@ -193,6 +193,12 @@ all nodes and xpaths yourself, you can provide them directly to the processor. M
|
||||
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
|
||||
```
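
When the processor is given raw HTML instead, it extracts the nodes and xpaths itself and returns the same keys; a minimal sketch, assuming the `microsoft/markuplm-base` checkpoint:

```python
from transformers import MarkupLMProcessor

# Illustrative checkpoint; the processor chains the feature extractor (HTML -> nodes/xpaths)
# and the tokenizer (nodes/xpaths -> model inputs).
processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")

html = "<html><body><h1>Hello world</h1><p>Welcome to MarkupLM.</p></body></html>"
encoding = processor(html, return_tensors="pt")
print(encoding.keys())  # includes xpath_tags_seq and xpath_subs_seq
```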

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)

## MarkupLMConfig

[[autodoc]] MarkupLMConfig

@ -152,6 +152,15 @@ tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "The Secretary-General of the United Nations says there is no military solution in Syria."
```
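
The output above comes from the doc's Romanian-to-English example; a minimal sketch of the same flow, assuming the `facebook/mbart-large-en-ro` checkpoint:

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

# Illustrative checkpoint; mBART needs the source language set on the tokenizer
# and the target language code used as the decoder start token.
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="ro_RO")

article = "Şeful ONU declară că nu există o soluţie militară în Siria"
inputs = tokenizer(article, return_tensors="pt")
generated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
```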

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Question answering task guide](./tasks/question_answering)
- [Causal language modeling task guide](./tasks/language_modeling)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Translation task guide](./tasks/translation)
- [Summarization task guide](./tasks/summarization)

## MBartConfig

[[autodoc]] MBartConfig

@ -31,6 +31,9 @@ performance for many languages that also transfers well to LibriSpeech.*

This model was contributed by [cwkeam](https://huggingface.co/cwkeam). The original code can be found [here](https://github.com/flashlight/wav2letter/tree/main/recipes/mling_pl).

## Documentation resources

- [Automatic speech recognition task guide](./tasks/asr)

Tips:

@ -78,6 +78,15 @@ This model was contributed by [jdemouth](https://huggingface.co/jdemouth). The o
Megatron Language models. In particular, it contains a hybrid model parallel approach using "tensor parallel" and
"pipeline parallel" techniques.

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Causal language modeling task guide](./tasks/language_modeling)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## MegatronBertConfig

[[autodoc]] MegatronBertConfig

@ -43,6 +43,14 @@ Tips:

This model was contributed by [vshampor](https://huggingface.co/vshampor). The original code can be found [here](https://github.com/google-research/mobilebert).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## MobileBertConfig

[[autodoc]] MobileBertConfig

@ -51,6 +51,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
<PipelineTag pipeline="image-classification"/>

- [`MobileNetV1ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](./tasks/image_classification)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

@ -55,6 +55,10 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
<PipelineTag pipeline="image-classification"/>

- [`MobileNetV2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](./tasks/image_classification)

**Semantic segmentation**
- [Semantic segmentation task guide](./tasks/semantic_segmentation)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

@ -64,6 +64,10 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
<PipelineTag pipeline="image-classification"/>

- [`MobileViTForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](./tasks/image_classification)

**Semantic segmentation**
- [Semantic segmentation task guide](./tasks/semantic_segmentation)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

@ -40,6 +40,14 @@ Tips:

The original code can be found [here](https://github.com/microsoft/MPNet).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## MPNetConfig

[[autodoc]] MPNetConfig

@ -56,6 +56,11 @@ Google has released the following variants:
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be
found [here](https://github.com/google-research/multilingual-t5).

## Documentation resources

- [Translation task guide](./tasks/translation)
- [Summarization task guide](./tasks/summarization)

## MT5Config

[[autodoc]] MT5Config

@ -100,6 +100,15 @@ For lightweight tuning, *i.e.*, fixing the model and only tuning prompts, you ca
>>> model.set_lightweight_tuning()
```
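
Lightweight tuning presupposes a prompt-enabled model; a minimal sketch, assuming the `RUCAIBox/mvp` checkpoint and its `use_prompt` loading flag (both illustrative here):

```python
from transformers import MvpForConditionalGeneration, MvpTokenizer

# Illustrative checkpoint; use_prompt=True adds the prompt parameters that
# set_lightweight_tuning() keeps trainable while the backbone stays frozen.
tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp", use_prompt=True)
model.set_lightweight_tuning()

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters after lightweight tuning: {trainable}")
```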

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Question answering task guide](./tasks/question_answering)
- [Causal language modeling task guide](./tasks/language_modeling)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Translation task guide](./tasks/translation)
- [Summarization task guide](./tasks/summarization)

## MvpConfig

[[autodoc]] MvpConfig

@ -63,6 +63,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
<PipelineTag pipeline="image-classification"/>

- [`NatForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](./tasks/image_classification)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

@ -31,6 +31,14 @@ and natural language inference (XNLI).*

This model was contributed by [sijunhe](https://huggingface.co/sijunhe). The original code can be found [here](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-PyTorch).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## NezhaConfig

[[autodoc]] NezhaConfig

@ -88,6 +88,11 @@ See example below for a translation from romanian to german:
UN-Chef sagt, es gibt keine militärische Lösung in Syrien
```
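
A minimal sketch of that Romanian-to-German call, assuming the `facebook/nllb-200-distilled-600M` checkpoint; the target id is looked up with `convert_tokens_to_ids`, since the FLORES-200 language codes are part of the tokenizer vocabulary:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Illustrative checkpoint; src_lang sets the source language code prepended by the tokenizer.
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", src_lang="ron_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

article = "Şeful ONU spune că nu există o soluţie militară în Siria"
inputs = tokenizer(article, return_tensors="pt")

# Force German as the first generated token to pick the target language.
translated = model.generate(
    **inputs, forced_bos_token_id=tokenizer.convert_tokens_to_ids("deu_Latn"), max_length=30
)
print(tokenizer.batch_decode(translated, skip_special_tokens=True)[0])
```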

## Documentation resources

- [Translation task guide](./tasks/translation)
- [Summarization task guide](./tasks/summarization)

## NllbTokenizer

[[autodoc]] NllbTokenizer

@ -33,6 +33,14 @@ favorably relative to other efficient self-attention methods. Our code is availa

This model was contributed by [novice03](https://huggingface.co/novice03). The original code can be found [here](https://github.com/mlpen/Nystromformer).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## NystromformerConfig

[[autodoc]] NystromformerConfig

@ -73,6 +73,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
<PipelineTag pipeline="text-classification"/>

- A blog post on [outperforming OpenAI GPT-3 with SetFit for text-classification](https://www.philschmid.de/getting-started-setfit).
- See also: [Text classification task guide](./tasks/sequence_classification)

<PipelineTag pipeline="text-generation"/>

@ -86,6 +87,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
- [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the 🤗 Hugging Face Course.
- [`OpenAIGPTLMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling), [text generation example script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-generation/run_generation.py) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
- [`TFOpenAIGPTLMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
- See also: [Causal language modeling task guide](./tasks/language_modeling)

<PipelineTag pipeline="token-classification"/>

@ -45,7 +45,7 @@ The resource should ideally demonstrate something new instead of duplicating an

<PipelineTag pipeline="text-classification" />

- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Text classification task guide](./tasks/sequence_classification)
- [`OPTForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).

<PipelineTag pipeline="question-answering" />

@ -56,7 +56,7 @@ The resource should ideally demonstrate something new instead of duplicating an

⚡️ Inference

- A blog bost on [How 🤗 Accelerate runs very large models thanks to PyTorch](https://huggingface.co/blog/accelerate-large-models) with OPT.
- A blog post on [How 🤗 Accelerate runs very large models thanks to PyTorch](https://huggingface.co/blog/accelerate-large-models) with OPT.

## OPTConfig

@ -102,6 +102,12 @@ All the [checkpoints](https://huggingface.co/models?search=pegasus) are fine-tun
... )
```
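
Since the checkpoints are already fine-tuned for summarization, they can be used directly with `generate`; a minimal sketch, assuming the `google/pegasus-xsum` checkpoint:

```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# Illustrative checkpoint; any of the fine-tuned PEGASUS checkpoints works the same way.
tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")

text = "PG&E stated it scheduled the blackouts in response to forecasts for high winds."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```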

## Documentation resources

- [Causal language modeling task guide](./tasks/language_modeling)
- [Translation task guide](./tasks/translation)
- [Summarization task guide](./tasks/summarization)

## PegasusConfig

[[autodoc]] PegasusConfig

@ -28,6 +28,11 @@ Tips:

This model was contributed by [zphang](https://huggingface.co/zphang). The original code can be found [here](https://github.com/google-research/pegasus).

## Documentation resources

- [Translation task guide](./tasks/translation)
- [Summarization task guide](./tasks/summarization)

## PegasusXConfig

[[autodoc]] PegasusXConfig

@ -90,6 +90,12 @@ audio classification, video classification, etc.

- Perceiver does **not** work with `torch.nn.DataParallel` due to a bug in PyTorch, see [issue #36035](https://github.com/pytorch/pytorch/issues/36035)

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Image classification task guide](./tasks/image_classification)

## Perceiver specific outputs

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverModelOutput

@ -78,6 +78,13 @@ it's passed with the `text_target` keyword argument.
"Returns the maximum value of a b c."
```

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Causal language modeling task guide](./tasks/language_modeling)
- [Translation task guide](./tasks/translation)
- [Summarization task guide](./tasks/summarization)

## PLBartConfig

[[autodoc]] PLBartConfig

@ -48,6 +48,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
<PipelineTag pipeline="image-classification"/>

- [`PoolFormerForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](./tasks/image_classification)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

@ -53,6 +53,11 @@ Tips:

The Authors' code can be found [here](https://github.com/microsoft/ProphetNet).

## Documentation resources

- [Causal language modeling task guide](./tasks/language_modeling)
- [Translation task guide](./tasks/translation)
- [Summarization task guide](./tasks/summarization)

## ProphetNetConfig

@ -114,6 +114,15 @@ the instructions in [torch.onnx](https://pytorch.org/docs/stable/onnx.html). Exa
>>> torch.onnx.export(...)
```

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Causal language modeling task guide](./tasks/language_modeling)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## QDQBertConfig

[[autodoc]] QDQBertConfig

@ -151,6 +151,13 @@ input_ids = tokenizer.encode("This is a sentence from the training data", return
loss = model(input_ids, labels=input_ids)[0]
```

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Question answering task guide](./tasks/question_answering)
- [Causal language modeling task guide](./tasks/language_modeling)
- [Masked language modeling task guide](./tasks/masked_language_modeling)

## ReformerConfig

[[autodoc]] ReformerConfig

@ -38,6 +38,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
<PipelineTag pipeline="image-classification"/>

- [`RegNetForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](./tasks/image_classification)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

@ -37,6 +37,15 @@ embedding layer. The embeddings are not tied in pre-training, in contrast with B
embeddings (preserved during fine-tuning) and bigger output embeddings (discarded at fine-tuning). The tokenizer is
also similar to the Albert one rather than the BERT one.

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Causal language modeling task guide](./tasks/language_modeling)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## RemBertConfig

[[autodoc]] RemBertConfig

@ -40,6 +40,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
<PipelineTag pipeline="image-classification"/>

- [`ResNetForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](./tasks/image_classification)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

@ -29,6 +29,14 @@ Tips:
This model was contributed by [andreasmaden](https://huggingface.co/andreasmaden).
The original code can be found [here](https://github.com/princeton-nlp/DinkyTrain).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Causal language modeling task guide](./tasks/language_modeling)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## RobertaPreLayerNormConfig

@ -70,6 +70,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
- [`RobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
- [`TFRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb).
- [`FlaxRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb).
- [Text classification task guide](./tasks/sequence_classification)

<PipelineTag pipeline="token-classification"/>

@ -77,6 +78,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
- [`TFRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).
- [`FlaxRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification).
- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Token classification task guide](./tasks/token_classification)

<PipelineTag pipeline="fill-mask"/>

@ -85,6 +87,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
- [`TFRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
- [`FlaxRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb).
- [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Masked language modeling task guide](./tasks/masked_language_modeling)

<PipelineTag pipeline="question-answering"/>

@ -93,10 +96,12 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
- [`TFRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb).
- [`FlaxRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering).
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Question answering task guide](./tasks/question_answering)

**Multiple choice**
- [`RobertaForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb).
- [`TFRobertaForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb).
- [Multiple choice task guide](./tasks/multiple_choice)

## RobertaConfig

@ -31,6 +31,15 @@ in the toxic content detection task under human-made attacks.*

This model was contributed by [weiweishi](https://huggingface.co/weiweishi).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Causal language modeling task guide](./tasks/language_modeling)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## RoCBertConfig

[[autodoc]] RoCBertConfig

@ -37,6 +37,15 @@ Tips:

This model was contributed by [junnyu](https://huggingface.co/junnyu). The original code can be found [here](https://github.com/ZhuiyiTechnology/roformer).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Causal language modeling task guide](./tasks/language_modeling)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## RoFormerConfig

[[autodoc]] RoFormerConfig

@ -91,6 +91,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
<PipelineTag pipeline="image-classification"/>

- [`SegformerForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- [Image classification task guide](./tasks/image_classification)

Semantic segmentation:

@ -98,6 +99,7 @@ Semantic segmentation:
- A blog on fine-tuning SegFormer on a custom dataset can be found [here](https://huggingface.co/blog/fine-tune-segformer).
- More demo notebooks on SegFormer (both inference + fine-tuning on a custom dataset) can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/SegFormer).
- [`TFSegformerForSemanticSegmentation`] is supported by this [example notebook](https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb).
- [Semantic segmentation task guide](./tasks/semantic_segmentation)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

@ -36,6 +36,10 @@ Tips:

This model was contributed by [anton-l](https://huggingface.co/anton-l).

## Documentation resources

- [Audio classification task guide](./tasks/audio_classification)
- [Automatic speech recognition task guide](./tasks/asr)

## SEWDConfig

@ -36,6 +36,10 @@ Tips:

This model was contributed by [anton-l](https://huggingface.co/anton-l).

## Documentation resources

- [Audio classification task guide](./tasks/audio_classification)
- [Automatic speech recognition task guide](./tasks/asr)

## SEWConfig

@ -94,6 +94,9 @@ predicted token ids.

See [model hub](https://huggingface.co/models?filter=speech2text2) to look for Speech2Text2 checkpoints.

## Documentation resources

- [Causal language modeling task guide](./tasks/language_modeling)

## Speech2Text2Config

@ -47,6 +47,10 @@ Tips:

This model was contributed by [yuvalkirstain](https://huggingface.co/yuvalkirstain) and [oriram](https://huggingface.co/oriram). The original code can be found [here](https://github.com/oriram/splinter).

## Documentation resources

- [Question answering task guide](./tasks/question_answering)

## SplinterConfig

[[autodoc]] SplinterConfig

@ -46,6 +46,13 @@ Tips:

This model was contributed by [forresti](https://huggingface.co/forresti).

## Documentation resources

- [Text classification task guide](./tasks/sequence_classification)
- [Token classification task guide](./tasks/token_classification)
- [Question answering task guide](./tasks/question_answering)
- [Masked language modeling task guide](./tasks/masked_language_modeling)
- [Multiple choice task guide](./tasks/multiple_choice)

## SqueezeBertConfig

@ -52,6 +52,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
<PipelineTag pipeline="image-classification"/>

- [`SwinForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](./tasks/image_classification)

Besides that:

@ -33,6 +33,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
<PipelineTag pipeline="image-classification"/>

- [`Swinv2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](./tasks/image_classification)

Besides that:
Some files were not shown because too many files have changed in this diff.