Mirror of https://github.com/huggingface/transformers.git, synced 2025-07-31 02:02:21 +06:00
* add init and base image processing functions
* add add_fast_image_processor to transformers-cli
* add working fast image processor for CLIP
* add fast image processor to docs, with working tests
* remove "to be implemented" SigLIP
* fix unprotected import
* fix unprotected vision import
* update ViTImageProcessorFast
* increase threshold for slow/fast equivalence
* add fast image processor for BLIP
* add fast class in tests with cli
* improve cli
* add fast image processor for convnext
* add LlavaPatchingMixin and fast image processors for llava_next and llava_onevision
* add device kwarg to ImagesKwargs for fast processing on CUDA
* cleanup
* fix unprotected import
* group images by sizes and add batch processing
* add batch equivalence tests, skip when center_crop is used
* cleanup
* update init and cli
* fix-copies
* refactor convnext, clean up base
* fix
* remove patching mixins, add piped torchvision transforms for ViT
* fix unbatched processing
* fix f-strings
* protect imports
* change llava_onevision to class transforms (test)
* fix convnext
* improve formatting (following Pavel's review)
* fix handling of the device arg
* improve cli
* fix
* fix inits
* add distinction between preprocess and _preprocess, and support arbitrary kwargs through valid_extra_kwargs
* uniformize qwen2_vl fast
* fix docstrings
* add fast image processor for llava
* remove min_pixels/max_pixels from accepted size
* nit
* nit
* refactor fast image processor docstrings
* clean up and remove fast class transforms
* update add_fast_image_processor in transformers-cli
* clean up docstring
* uniformize pixtral fast and make _process_image explicit
* fix prepare-image structure for llava_next/onevision
* use typed kwargs instead of explicit args
* nit: fix import of Unpack
* clearly separate pops and gets in base preprocess; use explicit typed kwargs
* make qwen2_vl preprocess arguments hashable
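The "group images by sizes and add batch processing" step above can be sketched as follows. This is a minimal stdlib-only illustration of the technique, not the actual transformers implementation; the function names and the toy nested-list "images" are hypothetical:

```python
from collections import defaultdict

def group_images_by_shape(images):
    """Bucket images by (height, width) so each bucket can be
    stacked and processed as one batch (illustrative helper)."""
    grouped = defaultdict(list)
    for idx, img in enumerate(images):
        h, w = len(img), len(img[0])
        grouped[(h, w)].append((idx, img))
    return grouped

def batch_process(images, fn):
    """Apply `fn` to each same-shape bucket, then restore the
    original image order in the results."""
    results = [None] * len(images)
    for bucket in group_images_by_shape(images).values():
        indices = [i for i, _ in bucket]
        batch = [img for _, img in bucket]
        for i, out in zip(indices, fn(batch)):
            results[i] = out
    return results

# toy "images" as nested lists; fn reports the batch size each image saw
imgs = [[[0] * 4] * 2, [[0] * 3] * 3, [[0] * 4] * 2]
print(batch_process(imgs, lambda b: [len(b)] * len(b)))  # [2, 1, 2]
```

Images 0 and 2 share shape (2, 4), so they land in one bucket of size 2, while image 1 is processed alone; grouping this way lets same-size images go through the transforms as a single stacked tensor instead of one by one.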
internal
main_classes
model_doc
tasks
_toctree.yml
accelerate.md
add_new_model.md
attention.md
autoclass_tutorial.md
bertology.md
big_models.md
chat_templating.md
community.md
create_a_model.md
custom_models.md
custom_tools.md
fast_tokenizers.md
generation_strategies.md
glossary.md
hpo_train.md
index.md
installation.md
llm_tutorial.md
model_memory_anatomy.md
model_sharing.md
model_summary.md
multilingual.md
pad_truncation.md
peft.md
perf_hardware.md
perf_infer_cpu.md
perf_infer_gpu_many.md
perf_infer_gpu_one.md
perf_infer_special.md
perf_torch_compile.md
perf_train_cpu_many.md
perf_train_cpu.md
perf_train_gpu_many.md
perf_train_gpu_one.md
perf_train_special.md
perf_train_tpu_tf.md
perf_train_tpu.md
performance.md
perplexity.md
philosophy.md
pipeline_tutorial.md
pipeline_webserver.md
pr_checks.md
preprocessing.md
quicktour.md
run_scripts.md
serialization.md
task_summary.md
tasks_explained.md
testing.md
tf_xla.md
tflite.md
tokenizer_summary.md
torchscript.md
training.md
transformers_agents.md
troubleshooting.md