Mirror of https://github.com/huggingface/transformers.git, synced 2025-07-31 02:02:21 +06:00
[docs] Update preprocessing.md (#28719)

* Update preprocessing.md: adjust the ImageProcessor link to a working target (the same one used in the lower section of the file)
* Update preprocessing.md

This commit is contained in:
parent 1f47a24aa1
commit 3a46e30dd1
```diff
@@ -22,7 +22,7 @@ Before you can train a model on a dataset, it needs to be preprocessed into the
 * Text, use a [Tokenizer](./main_classes/tokenizer) to convert text into a sequence of tokens, create a numerical representation of the tokens, and assemble them into tensors.
 * Speech and audio, use a [Feature extractor](./main_classes/feature_extractor) to extract sequential features from audio waveforms and convert them into tensors.
-* Image inputs use a [ImageProcessor](./main_classes/image) to convert images into tensors.
+* Image inputs use a [ImageProcessor](./main_classes/image_processor) to convert images into tensors.
 * Multimodal inputs, use a [Processor](./main_classes/processors) to combine a tokenizer and a feature extractor or image processor.

 <Tip>
```
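The bullet list in this hunk maps each input modality to its preprocessing class. As a rough illustration of the text path described there (a toy sketch only, with an invented whitespace tokenizer and vocabulary, not the Transformers `Tokenizer` API), turning text into padded batches of token ids can look like:

```python
# Toy sketch of the tokenizer pipeline: split text into tokens,
# map tokens to numerical ids, and assemble them into a padded batch.
# The vocabulary and special tokens here are invented for the example.

def tokenize(text):
    return text.lower().split()

def build_vocab(corpus, specials=("<pad>", "<unk>")):
    vocab = {tok: i for i, tok in enumerate(specials)}
    for sentence in corpus:
        for tok in tokenize(sentence):
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode_batch(corpus, vocab, pad_id=0, unk_id=1):
    ids = [[vocab.get(t, unk_id) for t in tokenize(s)] for s in corpus]
    width = max(len(row) for row in ids)
    # Pad every row to the longest sequence so the batch is rectangular.
    return [row + [pad_id] * (width - len(row)) for row in ids]

corpus = ["hello world", "hello there friend"]
vocab = build_vocab(corpus)
batch = encode_batch(corpus, vocab)
```

A real tokenizer additionally handles subword splitting, special tokens, and attention masks; this sketch only shows the id-mapping and batching idea.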
````diff
@@ -397,7 +397,7 @@ width are expected, for others only the `shortest_edge` is defined.
 >>> _transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5)])
 ```

-2. The model accepts [`pixel_values`](model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values)
+2. The model accepts [`pixel_values`](model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel.forward.pixel_values)
    as its input. `ImageProcessor` can take care of normalizing the images, and generating appropriate tensors.

 Create a function that combines image augmentation and image preprocessing for a batch of images and generates `pixel_values`:
````
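The text around this hunk says the image processor normalizes images and generates the `pixel_values` tensors the model consumes. A minimal numpy sketch of that normalize-and-batch step (the mean/std values and helper name are illustrative assumptions, not the real `ImageProcessor` class or any checkpoint's statistics):

```python
import numpy as np

# Toy sketch of the normalize-and-batch step an image processor performs.
# MEAN/STD are illustrative placeholders, not values from a real checkpoint.
MEAN = np.array([0.5, 0.5, 0.5])
STD = np.array([0.5, 0.5, 0.5])

def to_pixel_values(images):
    """images: list of HxWx3 uint8 arrays (same size) -> NxCxHxW float32 batch."""
    batch = []
    for img in images:
        arr = img.astype(np.float32) / 255.0  # scale to [0, 1]
        arr = (arr - MEAN) / STD              # channel-wise normalization
        batch.append(arr.transpose(2, 0, 1))  # HWC -> CHW, as models expect
    return np.stack(batch).astype(np.float32) # stack into a batch dimension

imgs = [np.full((4, 4, 3), 255, dtype=np.uint8),
        np.zeros((4, 4, 3), dtype=np.uint8)]
pixel_values = to_pixel_values(imgs)
```

The real class also handles resizing, cropping, and framework-specific tensor types; this only shows why the output has shape `(batch, channels, height, width)`.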