Mirror of https://github.com/huggingface/transformers.git
Latest commit:

* SAM
* CLIP
* SigLIP
* GOT-OCR2 (depends on SAM)
* SigLIP2 (depends on SigLIP)
* trigger tests
* Fix SAM
* Fix missed indexing, use named attributes
* Llama
* Aria
* Bamba
* Update llama: missed outputs return type
* (fixup) Aria
* DiffLlama
* Emu3
* Gemma
* Gemma2
* Paligemma
* Fix paligemma
* Gemma3
* GLM
* Helium
* JetMoe
* Jamba
* Mistral
* Mistral
* Mixtral
* Nemotron
* Olmo
* Olmo2
* Persimmon
* Phi
* Phi3
* PhiMoe
* Qwen2
* Qwen2_moe
* StableLM
* Starcoder2
* Add return_dict decorator
* SAM
* Update decorator: compile, export, trace - friendly
* Llama (decorator)
* SAM (decorator)
* Add decorator `can_return_tuple` (sketched below)
* Llama
* Update to decorator
* Update CLIP
* Update decorator to store `_is_top_level_module` in self
* Update decorator to correctly handle compile/export
* Remove is_torchdynamo_compiling constraint, all work fine with self attribute assignment
* Typing
* GPT NeoX
* Fixup
* Fix attribute Granite
* Fix return type mixtral
* Update Gemma3
* Fix Cohere and Cohere2
* Fixup
* Fix corner case for Phi4, when activation is shared
* (fix-copies) deepseekv3, phi4
* Fixup
* Apply to qwen3/qwen3_moe
* Fix
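Several bullets above revolve around the `can_return_tuple` decorator that this commit rolls out across model classes. Below is a minimal sketch of the idea, assuming a `PretrainedConfig`-style `config.use_return_dict` default; the real decorator in `transformers` additionally handles `torch.compile`/export tracing and tracks the top-level module via `_is_top_level_module`, as the commit messages note, which this sketch omits.

```python
from functools import wraps


def can_return_tuple(func):
    """Minimal sketch of a `can_return_tuple`-style decorator.

    The wrapped `forward` always builds and returns a ModelOutput; this
    wrapper converts it to a plain tuple only when the caller asks for one.
    The exact flag handling is an assumption for illustration.
    """

    @wraps(func)
    def wrapper(self, *args, **kwargs):
        # Pull the flag out of kwargs so the forward body itself no longer
        # needs a `return_dict` branch.
        return_dict = kwargs.pop("return_dict", None)
        if return_dict is None:
            # Fall back to the model config; defaulting to True here is an
            # assumption of this sketch.
            return_dict = getattr(self.config, "use_return_dict", True)

        output = func(self, *args, **kwargs)

        # ModelOutput instances expose `to_tuple()`; anything else passes
        # through untouched.
        if not return_dict and hasattr(output, "to_tuple"):
            return output.to_tuple()
        return output

    return wrapper
```

With this pattern, each decorated `forward` can unconditionally return a ModelOutput, and callers passing `return_dict=False` still receive the legacy tuple, which is why the commit can apply the same change mechanically across so many models.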
Contents of the `tests/` directory:

Directories:

* `agents`
* `bettertransformer`
* `deepspeed`
* `extended`
* `fixtures`
* `fsdp`
* `generation`
* `models`
* `optimization`
* `peft_integration`
* `pipelines`
* `quantization`
* `repo_utils`
* `sagemaker`
* `tensor_parallel`
* `tokenization`
* `trainer`
* `utils`

Files:

* `__init__.py`
* `test_backbone_common.py`
* `test_configuration_common.py`
* `test_feature_extraction_common.py`
* `test_image_processing_common.py`
* `test_image_transforms.py`
* `test_modeling_common.py`
* `test_modeling_flax_common.py`
* `test_modeling_tf_common.py`
* `test_pipeline_mixin.py`
* `test_processing_common.py`
* `test_sequence_feature_extraction_common.py`
* `test_tokenization_common.py`
* `test_training_args.py`