Fix broken code blocks in README.md (#15967)
at transformers/examples/pytorch/contrastive-image-text
This commit is contained in:
parent
1e8f37992f
commit
8feede229c
@@ -39,13 +39,14 @@ wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
wget http://images.cocodataset.org/annotations/image_info_test2017.zip
cd ..
```
```suggestion
Having downloaded the COCO dataset manually, you should be able to load it with the `ydshieh/coco_dataset_script` dataset loading script:
```py
import datasets

COCO_DIR = "data"
ds = datasets.load_dataset("ydshieh/coco_dataset_script", "2017", data_dir=COCO_DIR)
```
### Create a model from a vision encoder model and a text encoder model
Next, we create a [VisionTextDualEncoderModel](https://huggingface.co/docs/transformers/model_doc/vision-text-dual-encoder#visiontextdualencoder).
The `VisionTextDualEncoderModel` class lets you load any vision and text encoder model to create a dual encoder.
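For illustration, here is a minimal sketch of creating such a dual encoder; the checkpoints `openai/clip-vit-base-patch32` and `roberta-base`, the output directory `clip-roberta`, and the processor wiring are illustrative assumptions, not something prescribed by this diff:

```py
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

# Pair any pretrained vision encoder with any pretrained text encoder
# (the model choices here are illustrative assumptions).
model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "openai/clip-vit-base-patch32", "roberta-base"
)

# Combine the vision model's image processor with the text model's tokenizer
# so images and captions can be preprocessed together.
image_processor = AutoImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)

# Save both so the directory can be passed to the training script as a local model path.
model.save_pretrained("clip-roberta")
processor.save_pretrained("clip-roberta")
```

A directory saved this way is the kind of local model path the `run_clip.py` command in the next hunk would consume (e.g. via its `--model_name_or_path` argument).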
@@ -95,4 +96,4 @@ python examples/pytorch/contrastive-image-text/run_clip.py \
--learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1 \
--overwrite_output_dir \
--push_to_hub
```
```