Fix broken code blocks in README.md (#15967)

at transformers/examples/pytorch/contrastive-image-text
Shotaro Ishihara 2022-03-10 01:07:52 +09:00 committed by GitHub
parent 1e8f37992f
commit 8feede229c


@@ -39,13 +39,14 @@ wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
wget http://images.cocodataset.org/annotations/image_info_test2017.zip
cd ..
```
Having downloaded the COCO dataset manually, you should be able to load it with the `ydshieh/coco_dataset_script` dataset loading script:
```py
import datasets

COCO_DIR = "data"  # directory containing the downloaded images and annotation files
ds = datasets.load_dataset("ydshieh/coco_dataset_script", "2017", data_dir=COCO_DIR)
```
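As a quick sanity check (a minimal sketch; the `train` split and the exact column names are assumptions about what this loading script returns), you can inspect one example:
```py
# Each entry pairs an image file path with its caption annotations.
print(ds["train"][0])
```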
### Create a model from a vision encoder model and a text encoder model
Next, we create a [VisionTextDualEncoderModel](https://huggingface.co/docs/transformers/model_doc/vision-text-dual-encoder#visiontextdualencoder).
The `VisionTextDualEncoderModel` class lets you load any vision and text encoder model to create a dual encoder.
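For instance (a minimal sketch; the checkpoints `openai/clip-vit-base-patch32` and `roberta-base` and the output directory `clip-roberta` are illustrative choices, not requirements), you could pair a CLIP vision encoder with a RoBERTa text encoder and save the result for training:
```py
from transformers import (
    AutoFeatureExtractor,
    AutoTokenizer,
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

# Initialize the dual encoder from a pretrained vision encoder and a pretrained text encoder.
model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "openai/clip-vit-base-patch32", "roberta-base"
)

# Bundle the matching feature extractor and tokenizer into a single processor.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
feature_extractor = AutoFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32")
processor = VisionTextDualEncoderProcessor(feature_extractor, tokenizer)

# Save both so a training script can load everything from one directory.
model.save_pretrained("clip-roberta")
processor.save_pretrained("clip-roberta")
```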