Add clip resources to the transformers documentation (#20190)

* WIP: Added CLIP resources from HuggingFace blog

* ADD: Notebooks documentation to clip

* Add link straight to notebook

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Change notebook links to colab

Co-authored-by: Ambuj Pawar <your_email@abc.example>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
@@ -75,6 +75,25 @@ encode the text and prepare the images. The following example shows how to get t
This model was contributed by [valhalla](https://huggingface.co/valhalla). The original code can be found [here](https://github.com/openai/CLIP).
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIP. If you're
interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
<PipelineTag pipeline="text-to-image"/>
- A blog post on [How to use CLIP to retrieve images from text](https://huggingface.co/blog/fine-tune-clip-rsicd) (a minimal retrieval sketch follows this list).
- A blog post on [How to use CLIP for Japanese text to image generation](https://huggingface.co/blog/japanese-stable-diffusion).
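
To make the retrieval use case above concrete, here is a minimal sketch of text-to-image retrieval using CLIP's shared embedding space; the checkpoint, image URLs, and query are illustrative choices, not taken from the blog posts above.

```python
import torch
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# A toy "index" of candidate images (URLs are illustrative)
urls = [
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    "http://images.cocodataset.org/val2017/000000000139.jpg",
]
images = [Image.open(requests.get(url, stream=True).raw) for url in urls]

# Embed the images and the text query into the shared space
image_inputs = processor(images=images, return_tensors="pt")
text_inputs = processor(text=["two cats sleeping"], return_tensors="pt", padding=True)
with torch.no_grad():
    image_embeds = model.get_image_features(**image_inputs)
    text_embeds = model.get_text_features(**text_inputs)

# Normalize, then rank candidates by cosine similarity to the query
image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
scores = (text_embeds @ image_embeds.T).squeeze(0)
print(urls[scores.argmax().item()])
```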
<PipelineTag pipeline="image-to-text"/>
- A notebook showing [Video-to-text matching with X-CLIP](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/X-CLIP/Video_text_matching_with_X_CLIP.ipynb).
<PipelineTag pipeline="zero-shot-classification"/>
- A notebook showing [Zero-shot video classification with X-CLIP](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/X-CLIP/Zero_shot_classify_a_YouTube_video_with_X_CLIP.ipynb) (a zero-shot sketch with CLIP itself follows below).
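
Complementing the zero-shot resources above, the snippet below is a minimal sketch of zero-shot image classification with CLIP itself, following the processor-and-model pattern shown earlier on this page; the checkpoint, image URL, and candidate labels are illustrative.

```python
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a pretrained CLIP checkpoint and its paired processor
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Fetch an example image (URL is illustrative)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Score the image against candidate text labels
texts = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image has shape (num_images, num_texts); softmax gives per-label probabilities
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(texts, probs[0].tolist())))
```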
## CLIPConfig
[[autodoc]] CLIPConfig