Simplify soft dependencies and update the dummy-creation process (#36827)

* Reverse dependency map shouldn't be created when test_all is set

* [test_all] Remove dummies

* Modular fixes

* Update utils/check_repo.py

Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>

* [test_all] Better docs

* [test_all] Update src/transformers/commands/chat.py

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* [test_all] Remove deprecated AdaptiveEmbeddings from the tests

* [test_all] Doc builder

* [test_all] is_dummy

* [test_all] Import utils

* [test_all] Doc building should not require all deps

---------

Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Lysandre Debut, 2025-04-11 11:08:36 +02:00, committed by GitHub
parent 931126b929
commit 54a123f068
295 changed files with 2167 additions and 27702 deletions


@ -1078,6 +1078,8 @@
title: Utilities for Audio processing
- local: internal/file_utils
title: General Utilities
- local: internal/import_utils
title: Importing Utilities
- local: internal/time_series_utils
title: Utilities for Time Series
title: Internal helpers


@ -0,0 +1,91 @@
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Import Utilities

This page goes through the `transformers` utilities that enable lazy and fast object imports.

While we strive for minimal dependencies, some models have specific dependency requirements that cannot be worked
around. We don't want all users of `transformers` to have to install those dependencies just to use other models, so
we mark them as soft dependencies rather than hard dependencies.

The transformers toolkit is not made to error out when importing a model that has a specific unmet dependency;
instead, an object for which you are lacking a dependency will only error out when calling a method on it. As an
example, if `torchvision` isn't installed, the fast image processors will not be available.

This object is still importable:
```python
>>> from transformers import DetrImageProcessorFast
>>> print(DetrImageProcessorFast)
<class 'DetrImageProcessorFast'>
```
However, no method can be called on that object:
```python
>>> DetrImageProcessorFast.from_pretrained()
ImportError:
DetrImageProcessorFast requires the Torchvision library but it was not found in your environment. Checkout the instructions on the
installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment.
Please note that you may need to restart your runtime after installation.
```
Let's now see how to specify the dependencies of a given object.

## Specifying Object Dependencies

### Filename-based
All objects under a given filename have an automatic dependency on the tool linked to that filename:

**TensorFlow**: All files starting with `modeling_tf_` have an automatic TensorFlow dependency.

**Flax**: All files starting with `modeling_flax_` have an automatic Flax dependency.

**PyTorch**: All files starting with `modeling_` that do not match the above (TensorFlow and Flax) have an automatic
PyTorch dependency.

**Tokenizers**: All files starting with `tokenization_` and ending with `_fast` have an automatic `tokenizers` dependency.

**Vision**: All files starting with `image_processing_` have an automatic dependency on the `vision` dependency group;
at the time of writing, this group only contains the `pillow` dependency.

**Vision + Torch + Torchvision**: All files starting with `image_processing_` and ending with `_fast` have an automatic
dependency on `vision`, `torch`, and `torchvision`.

All of these automatic dependencies are added on top of the explicit dependencies detailed below.
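To build intuition, here is a minimal, hypothetical sketch of these filename rules (the helper name
`infer_backends_from_filename` is made up for illustration; the real logic in `transformers` differs):

```python
# Hypothetical sketch of the filename-based rules listed above.
def infer_backends_from_filename(filename: str) -> tuple:
    stem = filename.removesuffix(".py")
    if stem.startswith("modeling_tf_"):
        return ("tf",)
    if stem.startswith("modeling_flax_"):
        return ("flax",)
    if stem.startswith("modeling_"):
        return ("torch",)
    if stem.startswith("tokenization_") and stem.endswith("_fast"):
        return ("tokenizers",)
    if stem.startswith("image_processing_") and stem.endswith("_fast"):
        return ("vision", "torch", "torchvision")
    if stem.startswith("image_processing_"):
        return ("vision",)
    return ()


infer_backends_from_filename("modeling_tf_bert.py")            # ("tf",)
infer_backends_from_filename("image_processing_detr_fast.py")  # ("vision", "torch", "torchvision")
```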
### Explicit Object Dependencies

We add a decorator called `requires` that is used to explicitly specify the dependencies of a given object. As an
example, the `Trainer` class has two hard dependencies: `torch` and `accelerate`. Here is how we specify these
required dependencies:
```python
from .utils.import_utils import requires
@requires(backends=("torch", "accelerate"))
class Trainer:
...
```
Any backend available in the `import_utils.py` module can be specified here.
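For intuition, a simplified `requires`-style decorator could record the backends on the class and only raise once the
object is actually used. This is a sketch under stated assumptions (the `is_backend_available` helper is hypothetical,
and the real decorator guards every method rather than just `__init__`):

```python
import importlib.util


def is_backend_available(backend: str) -> bool:
    # Hypothetical helper: treats the backend name as an importable module.
    return importlib.util.find_spec(backend) is not None


def requires(backends=()):
    def decorator(cls):
        cls._backends = tuple(backends)  # record the soft dependencies on the class
        original_init = cls.__init__

        def checked_init(self, *args, **kwargs):
            missing = [b for b in cls._backends if not is_backend_available(b)]
            if missing:
                raise ImportError(f"{cls.__name__} requires the missing backends: {', '.join(missing)}")
            original_init(self, *args, **kwargs)

        cls.__init__ = checked_init  # importing stays cheap; instantiating errors out
        return cls

    return decorator
```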
## Methods

[[autodoc]] utils.import_utils.define_import_structure

[[autodoc]] utils.import_utils.requires
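For reference, the `__init__.py` pattern that consumes these two utilities throughout the code base looks like the
following (shown here for the Autoformer subpackage, as in the diffs below):

```python
from typing import TYPE_CHECKING

from ...utils import _LazyModule
from ...utils.import_utils import define_import_structure

if TYPE_CHECKING:
    # Type checkers and IDEs resolve the real symbols eagerly...
    from .configuration_autoformer import *
    from .modeling_autoformer import *
else:
    # ...while at runtime the module is swapped for a lazy proxy that defers
    # each submodule import until one of its attributes is first accessed.
    import sys

    _file = globals()["__file__"]
    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
```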


@ -88,6 +88,11 @@ The original code can be found [here](https://github.com/salesforce/BLIP).
[[autodoc]] BlipTextModel
- forward
## BlipTextLMHeadModel
[[autodoc]] BlipTextLMHeadModel
- forward
## BlipVisionModel
[[autodoc]] BlipVisionModel
@ -123,6 +128,11 @@ The original code can be found [here](https://github.com/salesforce/BLIP).
[[autodoc]] TFBlipTextModel
- call
## TFBlipTextLMHeadModel
[[autodoc]] TFBlipTextLMHeadModel
- call
## TFBlipVisionModel
[[autodoc]] TFBlipVisionModel

File diff suppressed because it is too large.


@ -22,6 +22,7 @@ from .image_processing_base import BatchFeature, ImageProcessingMixin
from .image_transforms import center_crop, normalize, rescale
from .image_utils import ChannelDimension, get_image_size
from .utils import logging
from .utils.import_utils import requires
logger = logging.get_logger(__name__)
@ -33,6 +34,7 @@ INIT_SERVICE_KWARGS = [
]
@requires(backends=("vision",))
class BaseImageProcessor(ImageProcessingMixin):
def __init__(self, **kwargs):
super().__init__(**kwargs)


@ -68,6 +68,8 @@ if is_torchvision_available():
from torchvision.transforms.v2 import functional as F
else:
from torchvision.transforms import functional as F
else:
pil_torch_interpolation_mapping = None
logger = logging.get_logger(__name__)


@ -72,6 +72,8 @@ if is_vision_available():
PILImageResampling.BICUBIC: InterpolationMode.BICUBIC,
PILImageResampling.LANCZOS: InterpolationMode.LANCZOS,
}
else:
pil_torch_interpolation_mapping = {}
if TYPE_CHECKING:


@ -20,7 +20,7 @@ import re
from contextlib import contextmanager
from typing import Optional
from transformers.utils.import_utils import export
from transformers.utils.import_utils import requires
from .utils import is_torch_available
@ -225,7 +225,7 @@ def _attach_debugger_logic(model, class_name, debug_path: str):
break # exit the loop after finding one (unsure, but should be just one call.)
@export(backends=("torch",))
@requires(backends=("torch",))
def model_addition_debugger(cls):
"""
# Model addition debugger - a model adder tracer
@ -282,7 +282,7 @@ def model_addition_debugger(cls):
return cls
@export(backends=("torch",))
@requires(backends=("torch",))
@contextmanager
def model_addition_debugger_context(model, debug_path: Optional[str] = None):
"""


@ -11,316 +11,320 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING
from . import (
albert,
align,
altclip,
aria,
audio_spectrogram_transformer,
auto,
autoformer,
aya_vision,
bamba,
bark,
bart,
barthez,
bartpho,
beit,
bert,
bert_generation,
bert_japanese,
bertweet,
big_bird,
bigbird_pegasus,
biogpt,
bit,
blenderbot,
blenderbot_small,
blip,
blip_2,
bloom,
bridgetower,
bros,
byt5,
camembert,
canine,
chameleon,
chinese_clip,
clap,
clip,
clipseg,
clvp,
code_llama,
codegen,
cohere,
cohere2,
colpali,
conditional_detr,
convbert,
convnext,
convnextv2,
cpm,
cpmant,
ctrl,
cvt,
dab_detr,
dac,
data2vec,
dbrx,
deberta,
deberta_v2,
decision_transformer,
deepseek_v3,
deformable_detr,
deit,
deprecated,
depth_anything,
depth_pro,
detr,
dialogpt,
diffllama,
dinat,
dinov2,
dinov2_with_registers,
distilbert,
dit,
donut,
dpr,
dpt,
efficientnet,
electra,
emu3,
encodec,
encoder_decoder,
ernie,
esm,
falcon,
falcon_mamba,
fastspeech2_conformer,
flaubert,
flava,
fnet,
focalnet,
fsmt,
funnel,
fuyu,
gemma,
gemma2,
gemma3,
git,
glm,
glm4,
glpn,
got_ocr2,
gpt2,
gpt_bigcode,
gpt_neo,
gpt_neox,
gpt_neox_japanese,
gpt_sw3,
gptj,
granite,
granitemoe,
granitemoeshared,
grounding_dino,
groupvit,
helium,
herbert,
hiera,
hubert,
ibert,
idefics,
idefics2,
idefics3,
ijepa,
imagegpt,
informer,
instructblip,
instructblipvideo,
jamba,
jetmoe,
kosmos2,
layoutlm,
layoutlmv2,
layoutlmv3,
layoutxlm,
led,
levit,
lilt,
llama,
llama4,
llava,
llava_next,
llava_next_video,
llava_onevision,
longformer,
longt5,
luke,
lxmert,
m2m_100,
mamba,
mamba2,
marian,
markuplm,
mask2former,
maskformer,
mbart,
mbart50,
megatron_bert,
megatron_gpt2,
mgp_str,
mimi,
mistral,
mistral3,
mixtral,
mllama,
mluke,
mobilebert,
mobilenet_v1,
mobilenet_v2,
mobilevit,
mobilevitv2,
modernbert,
moonshine,
moshi,
mpnet,
mpt,
mra,
mt5,
musicgen,
musicgen_melody,
mvp,
myt5,
nemotron,
nllb,
nllb_moe,
nougat,
nystromformer,
olmo,
olmo2,
olmoe,
omdet_turbo,
oneformer,
openai,
opt,
owlv2,
owlvit,
paligemma,
patchtsmixer,
patchtst,
pegasus,
pegasus_x,
perceiver,
persimmon,
phi,
phi3,
phi4_multimodal,
phimoe,
phobert,
pix2struct,
pixtral,
plbart,
poolformer,
pop2piano,
prompt_depth_anything,
prophetnet,
pvt,
pvt_v2,
qwen2,
qwen2_5_vl,
qwen2_audio,
qwen2_moe,
qwen2_vl,
qwen3,
qwen3_moe,
rag,
recurrent_gemma,
reformer,
regnet,
rembert,
resnet,
roberta,
roberta_prelayernorm,
roc_bert,
roformer,
rt_detr,
rt_detr_v2,
rwkv,
sam,
seamless_m4t,
seamless_m4t_v2,
segformer,
seggpt,
sew,
sew_d,
shieldgemma2,
siglip,
siglip2,
smolvlm,
speech_encoder_decoder,
speech_to_text,
speecht5,
splinter,
squeezebert,
stablelm,
starcoder2,
superglue,
superpoint,
swiftformer,
swin,
swin2sr,
swinv2,
switch_transformers,
t5,
table_transformer,
tapas,
textnet,
time_series_transformer,
timesformer,
timm_backbone,
timm_wrapper,
trocr,
tvp,
udop,
umt5,
unispeech,
unispeech_sat,
univnet,
upernet,
video_llava,
videomae,
vilt,
vipllava,
vision_encoder_decoder,
vision_text_dual_encoder,
visual_bert,
vit,
vit_mae,
vit_msn,
vitdet,
vitmatte,
vitpose,
vitpose_backbone,
vits,
vivit,
wav2vec2,
wav2vec2_bert,
wav2vec2_conformer,
wav2vec2_phoneme,
wav2vec2_with_lm,
wavlm,
whisper,
x_clip,
xglm,
xlm,
xlm_roberta,
xlm_roberta_xl,
xlnet,
xmod,
yolos,
yoso,
zamba,
zamba2,
zoedepth,
)
from ..utils import _LazyModule
from ..utils.import_utils import define_import_structure
if TYPE_CHECKING:
from .albert import *
from .align import *
from .altclip import *
from .aria import *
from .audio_spectrogram_transformer import *
from .auto import *
from .autoformer import *
from .aya_vision import *
from .bamba import *
from .bark import *
from .bart import *
from .barthez import *
from .bartpho import *
from .beit import *
from .bert import *
from .bert_generation import *
from .bert_japanese import *
from .bertweet import *
from .big_bird import *
from .bigbird_pegasus import *
from .biogpt import *
from .bit import *
from .blenderbot import *
from .blenderbot_small import *
from .blip import *
from .blip_2 import *
from .bloom import *
from .bridgetower import *
from .bros import *
from .byt5 import *
from .camembert import *
from .canine import *
from .chameleon import *
from .chinese_clip import *
from .clap import *
from .clip import *
from .clipseg import *
from .clvp import *
from .code_llama import *
from .codegen import *
from .cohere import *
from .cohere2 import *
from .colpali import *
from .conditional_detr import *
from .convbert import *
from .convnext import *
from .convnextv2 import *
from .cpm import *
from .cpmant import *
from .ctrl import *
from .cvt import *
from .dab_detr import *
from .dac import *
from .data2vec import *
from .dbrx import *
from .deberta import *
from .deberta_v2 import *
from .decision_transformer import *
from .deformable_detr import *
from .deit import *
from .deprecated import *
from .depth_anything import *
from .depth_pro import *
from .detr import *
from .dialogpt import *
from .diffllama import *
from .dinat import *
from .dinov2 import *
from .dinov2_with_registers import *
from .distilbert import *
from .dit import *
from .donut import *
from .dpr import *
from .dpt import *
from .efficientnet import *
from .electra import *
from .emu3 import *
from .encodec import *
from .encoder_decoder import *
from .ernie import *
from .esm import *
from .falcon import *
from .falcon_mamba import *
from .fastspeech2_conformer import *
from .flaubert import *
from .flava import *
from .fnet import *
from .focalnet import *
from .fsmt import *
from .funnel import *
from .fuyu import *
from .gemma import *
from .gemma2 import *
from .gemma3 import *
from .git import *
from .glm import *
from .glm4 import *
from .glpn import *
from .got_ocr2 import *
from .gpt2 import *
from .gpt_bigcode import *
from .gpt_neo import *
from .gpt_neox import *
from .gpt_neox_japanese import *
from .gpt_sw3 import *
from .gptj import *
from .granite import *
from .granitemoe import *
from .granitemoeshared import *
from .grounding_dino import *
from .groupvit import *
from .helium import *
from .herbert import *
from .hiera import *
from .hubert import *
from .ibert import *
from .idefics import *
from .idefics2 import *
from .idefics3 import *
from .ijepa import *
from .imagegpt import *
from .informer import *
from .instructblip import *
from .instructblipvideo import *
from .jamba import *
from .jetmoe import *
from .kosmos2 import *
from .layoutlm import *
from .layoutlmv2 import *
from .layoutlmv3 import *
from .layoutxlm import *
from .led import *
from .levit import *
from .lilt import *
from .llama import *
from .llama4 import *
from .llava import *
from .llava_next import *
from .llava_next_video import *
from .llava_onevision import *
from .longformer import *
from .longt5 import *
from .luke import *
from .lxmert import *
from .m2m_100 import *
from .mamba import *
from .mamba2 import *
from .marian import *
from .markuplm import *
from .mask2former import *
from .maskformer import *
from .mbart import *
from .mbart50 import *
from .megatron_bert import *
from .megatron_gpt2 import *
from .mgp_str import *
from .mimi import *
from .mistral import *
from .mistral3 import *
from .mixtral import *
from .mllama import *
from .mluke import *
from .mobilebert import *
from .mobilenet_v1 import *
from .mobilenet_v2 import *
from .mobilevit import *
from .mobilevitv2 import *
from .modernbert import *
from .moonshine import *
from .moshi import *
from .mpnet import *
from .mpt import *
from .mra import *
from .mt5 import *
from .musicgen import *
from .musicgen_melody import *
from .mvp import *
from .myt5 import *
from .nemotron import *
from .nllb import *
from .nllb_moe import *
from .nougat import *
from .nystromformer import *
from .olmo import *
from .olmo2 import *
from .olmoe import *
from .omdet_turbo import *
from .oneformer import *
from .openai import *
from .opt import *
from .owlv2 import *
from .owlvit import *
from .paligemma import *
from .patchtsmixer import *
from .patchtst import *
from .pegasus import *
from .pegasus_x import *
from .perceiver import *
from .persimmon import *
from .phi import *
from .phi3 import *
from .phi4_multimodal import *
from .phimoe import *
from .phobert import *
from .pix2struct import *
from .pixtral import *
from .plbart import *
from .poolformer import *
from .pop2piano import *
from .prophetnet import *
from .pvt import *
from .pvt_v2 import *
from .qwen2 import *
from .qwen2_5_vl import *
from .qwen2_audio import *
from .qwen2_moe import *
from .qwen2_vl import *
from .rag import *
from .recurrent_gemma import *
from .reformer import *
from .regnet import *
from .rembert import *
from .resnet import *
from .roberta import *
from .roberta_prelayernorm import *
from .roc_bert import *
from .roformer import *
from .rt_detr import *
from .rt_detr_v2 import *
from .rwkv import *
from .sam import *
from .seamless_m4t import *
from .seamless_m4t_v2 import *
from .segformer import *
from .seggpt import *
from .sew import *
from .sew_d import *
from .siglip import *
from .siglip2 import *
from .smolvlm import *
from .speech_encoder_decoder import *
from .speech_to_text import *
from .speecht5 import *
from .splinter import *
from .squeezebert import *
from .stablelm import *
from .starcoder2 import *
from .superglue import *
from .superpoint import *
from .swiftformer import *
from .swin import *
from .swin2sr import *
from .swinv2 import *
from .switch_transformers import *
from .t5 import *
from .table_transformer import *
from .tapas import *
from .textnet import *
from .time_series_transformer import *
from .timesformer import *
from .timm_backbone import *
from .timm_wrapper import *
from .trocr import *
from .tvp import *
from .udop import *
from .umt5 import *
from .unispeech import *
from .unispeech_sat import *
from .univnet import *
from .upernet import *
from .video_llava import *
from .videomae import *
from .vilt import *
from .vipllava import *
from .vision_encoder_decoder import *
from .vision_text_dual_encoder import *
from .visual_bert import *
from .vit import *
from .vit_mae import *
from .vit_msn import *
from .vitdet import *
from .vitmatte import *
from .vitpose import *
from .vitpose_backbone import *
from .vits import *
from .vivit import *
from .wav2vec2 import *
from .wav2vec2_bert import *
from .wav2vec2_conformer import *
from .wav2vec2_phoneme import *
from .wav2vec2_with_lm import *
from .wavlm import *
from .whisper import *
from .x_clip import *
from .xglm import *
from .xlm import *
from .xlm_roberta import *
from .xlm_roberta_xl import *
from .xlnet import *
from .xmod import *
from .yolos import *
from .yoso import *
from .zamba import *
from .zamba2 import *
from .zoedepth import *
else:
import sys
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)


@ -23,7 +23,7 @@ import sentencepiece as spm
from ...tokenization_utils import AddedToken, PreTrainedTokenizer
from ...utils import logging
from ...utils.import_utils import export
from ...utils.import_utils import requires
logger = logging.get_logger(__name__)
@ -33,7 +33,7 @@ VOCAB_FILES_NAMES = {"vocab_file": "spiece.model"}
SPIECE_UNDERLINE = "▁"
@export(backends=("sentencepiece",))
@requires(backends=("sentencepiece",))
class AlbertTokenizer(PreTrainedTokenizer):
"""
Construct an ALBERT tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).


@ -11,399 +11,24 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING
from ...utils import (
OptionalDependencyNotAvailable,
_LazyModule,
is_flax_available,
is_tf_available,
is_torch_available,
)
_import_structure = {
"auto_factory": ["get_values"],
"configuration_auto": ["CONFIG_MAPPING", "MODEL_NAMES_MAPPING", "AutoConfig"],
"feature_extraction_auto": ["FEATURE_EXTRACTOR_MAPPING", "AutoFeatureExtractor"],
"image_processing_auto": ["IMAGE_PROCESSOR_MAPPING", "AutoImageProcessor"],
"processing_auto": ["PROCESSOR_MAPPING", "AutoProcessor"],
"tokenization_auto": ["TOKENIZER_MAPPING", "AutoTokenizer"],
}
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_auto"] = [
"MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING",
"MODEL_FOR_AUDIO_FRAME_CLASSIFICATION_MAPPING",
"MODEL_FOR_AUDIO_XVECTOR_MAPPING",
"MODEL_FOR_BACKBONE_MAPPING",
"MODEL_FOR_CAUSAL_IMAGE_MODELING_MAPPING",
"MODEL_FOR_CAUSAL_LM_MAPPING",
"MODEL_FOR_CTC_MAPPING",
"MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING",
"MODEL_FOR_DEPTH_ESTIMATION_MAPPING",
"MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING",
"MODEL_FOR_IMAGE_MAPPING",
"MODEL_FOR_IMAGE_SEGMENTATION_MAPPING",
"MODEL_FOR_IMAGE_TO_IMAGE_MAPPING",
"MODEL_FOR_KEYPOINT_DETECTION_MAPPING",
"MODEL_FOR_INSTANCE_SEGMENTATION_MAPPING",
"MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING",
"MODEL_FOR_MASKED_LM_MAPPING",
"MODEL_FOR_MASK_GENERATION_MAPPING",
"MODEL_FOR_MULTIPLE_CHOICE_MAPPING",
"MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING",
"MODEL_FOR_OBJECT_DETECTION_MAPPING",
"MODEL_FOR_PRETRAINING_MAPPING",
"MODEL_FOR_QUESTION_ANSWERING_MAPPING",
"MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING",
"MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING",
"MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING",
"MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING",
"MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING",
"MODEL_FOR_TEXT_ENCODING_MAPPING",
"MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING",
"MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING",
"MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING",
"MODEL_FOR_UNIVERSAL_SEGMENTATION_MAPPING",
"MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING",
"MODEL_FOR_VISION_2_SEQ_MAPPING",
"MODEL_FOR_RETRIEVAL_MAPPING",
"MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING",
"MODEL_FOR_VISUAL_QUESTION_ANSWERING_MAPPING",
"MODEL_MAPPING",
"MODEL_WITH_LM_HEAD_MAPPING",
"MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING",
"MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING",
"MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING",
"MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING",
"AutoModel",
"AutoBackbone",
"AutoModelForAudioClassification",
"AutoModelForAudioFrameClassification",
"AutoModelForAudioXVector",
"AutoModelForCausalLM",
"AutoModelForCTC",
"AutoModelForDepthEstimation",
"AutoModelForImageClassification",
"AutoModelForImageSegmentation",
"AutoModelForImageToImage",
"AutoModelForInstanceSegmentation",
"AutoModelForKeypointDetection",
"AutoModelForMaskGeneration",
"AutoModelForTextEncoding",
"AutoModelForMaskedImageModeling",
"AutoModelForMaskedLM",
"AutoModelForMultipleChoice",
"AutoModelForNextSentencePrediction",
"AutoModelForObjectDetection",
"AutoModelForPreTraining",
"AutoModelForQuestionAnswering",
"AutoModelForSemanticSegmentation",
"AutoModelForSeq2SeqLM",
"AutoModelForSequenceClassification",
"AutoModelForSpeechSeq2Seq",
"AutoModelForTableQuestionAnswering",
"AutoModelForTextToSpectrogram",
"AutoModelForTextToWaveform",
"AutoModelForTokenClassification",
"AutoModelForUniversalSegmentation",
"AutoModelForVideoClassification",
"AutoModelForVision2Seq",
"AutoModelForVisualQuestionAnswering",
"AutoModelForDocumentQuestionAnswering",
"AutoModelWithLMHead",
"AutoModelForZeroShotImageClassification",
"AutoModelForZeroShotObjectDetection",
"AutoModelForImageTextToText",
]
try:
if not is_tf_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_tf_auto"] = [
"TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING",
"TF_MODEL_FOR_CAUSAL_LM_MAPPING",
"TF_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING",
"TF_MODEL_FOR_MASK_GENERATION_MAPPING",
"TF_MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING",
"TF_MODEL_FOR_MASKED_LM_MAPPING",
"TF_MODEL_FOR_MULTIPLE_CHOICE_MAPPING",
"TF_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING",
"TF_MODEL_FOR_PRETRAINING_MAPPING",
"TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING",
"TF_MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING",
"TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING",
"TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING",
"TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING",
"TF_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING",
"TF_MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING",
"TF_MODEL_FOR_TEXT_ENCODING_MAPPING",
"TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING",
"TF_MODEL_FOR_VISION_2_SEQ_MAPPING",
"TF_MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING",
"TF_MODEL_MAPPING",
"TF_MODEL_WITH_LM_HEAD_MAPPING",
"TFAutoModel",
"TFAutoModelForAudioClassification",
"TFAutoModelForCausalLM",
"TFAutoModelForImageClassification",
"TFAutoModelForMaskedImageModeling",
"TFAutoModelForMaskedLM",
"TFAutoModelForMaskGeneration",
"TFAutoModelForMultipleChoice",
"TFAutoModelForNextSentencePrediction",
"TFAutoModelForPreTraining",
"TFAutoModelForDocumentQuestionAnswering",
"TFAutoModelForQuestionAnswering",
"TFAutoModelForSemanticSegmentation",
"TFAutoModelForSeq2SeqLM",
"TFAutoModelForSequenceClassification",
"TFAutoModelForSpeechSeq2Seq",
"TFAutoModelForTableQuestionAnswering",
"TFAutoModelForTextEncoding",
"TFAutoModelForTokenClassification",
"TFAutoModelForVision2Seq",
"TFAutoModelForZeroShotImageClassification",
"TFAutoModelWithLMHead",
]
try:
if not is_flax_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_flax_auto"] = [
"FLAX_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING",
"FLAX_MODEL_FOR_CAUSAL_LM_MAPPING",
"FLAX_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING",
"FLAX_MODEL_FOR_MASKED_LM_MAPPING",
"FLAX_MODEL_FOR_MULTIPLE_CHOICE_MAPPING",
"FLAX_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING",
"FLAX_MODEL_FOR_PRETRAINING_MAPPING",
"FLAX_MODEL_FOR_QUESTION_ANSWERING_MAPPING",
"FLAX_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING",
"FLAX_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING",
"FLAX_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING",
"FLAX_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING",
"FLAX_MODEL_FOR_VISION_2_SEQ_MAPPING",
"FLAX_MODEL_MAPPING",
"FlaxAutoModel",
"FlaxAutoModelForCausalLM",
"FlaxAutoModelForImageClassification",
"FlaxAutoModelForMaskedLM",
"FlaxAutoModelForMultipleChoice",
"FlaxAutoModelForNextSentencePrediction",
"FlaxAutoModelForPreTraining",
"FlaxAutoModelForQuestionAnswering",
"FlaxAutoModelForSeq2SeqLM",
"FlaxAutoModelForSequenceClassification",
"FlaxAutoModelForSpeechSeq2Seq",
"FlaxAutoModelForTokenClassification",
"FlaxAutoModelForVision2Seq",
]
from ...utils import _LazyModule
from ...utils.import_utils import define_import_structure
if TYPE_CHECKING:
from .auto_factory import get_values
from .configuration_auto import CONFIG_MAPPING, MODEL_NAMES_MAPPING, AutoConfig
from .feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING, AutoFeatureExtractor
from .image_processing_auto import IMAGE_PROCESSOR_MAPPING, AutoImageProcessor
from .processing_auto import PROCESSOR_MAPPING, AutoProcessor
from .tokenization_auto import TOKENIZER_MAPPING, AutoTokenizer
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_auto import (
MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING,
MODEL_FOR_AUDIO_FRAME_CLASSIFICATION_MAPPING,
MODEL_FOR_AUDIO_XVECTOR_MAPPING,
MODEL_FOR_BACKBONE_MAPPING,
MODEL_FOR_CAUSAL_IMAGE_MODELING_MAPPING,
MODEL_FOR_CAUSAL_LM_MAPPING,
MODEL_FOR_CTC_MAPPING,
MODEL_FOR_DEPTH_ESTIMATION_MAPPING,
MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING,
MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING,
MODEL_FOR_IMAGE_MAPPING,
MODEL_FOR_IMAGE_SEGMENTATION_MAPPING,
MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING,
MODEL_FOR_IMAGE_TO_IMAGE_MAPPING,
MODEL_FOR_INSTANCE_SEGMENTATION_MAPPING,
MODEL_FOR_KEYPOINT_DETECTION_MAPPING,
MODEL_FOR_MASK_GENERATION_MAPPING,
MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING,
MODEL_FOR_MASKED_LM_MAPPING,
MODEL_FOR_MULTIPLE_CHOICE_MAPPING,
MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING,
MODEL_FOR_OBJECT_DETECTION_MAPPING,
MODEL_FOR_PRETRAINING_MAPPING,
MODEL_FOR_QUESTION_ANSWERING_MAPPING,
MODEL_FOR_RETRIEVAL_MAPPING,
MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING,
MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING,
MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING,
MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING,
MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING,
MODEL_FOR_TEXT_ENCODING_MAPPING,
MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING,
MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING,
MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING,
MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING,
MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING,
MODEL_FOR_UNIVERSAL_SEGMENTATION_MAPPING,
MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING,
MODEL_FOR_VISION_2_SEQ_MAPPING,
MODEL_FOR_VISUAL_QUESTION_ANSWERING_MAPPING,
MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING,
MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING,
MODEL_MAPPING,
MODEL_WITH_LM_HEAD_MAPPING,
AutoBackbone,
AutoModel,
AutoModelForAudioClassification,
AutoModelForAudioFrameClassification,
AutoModelForAudioXVector,
AutoModelForCausalLM,
AutoModelForCTC,
AutoModelForDepthEstimation,
AutoModelForDocumentQuestionAnswering,
AutoModelForImageClassification,
AutoModelForImageSegmentation,
AutoModelForImageTextToText,
AutoModelForImageToImage,
AutoModelForInstanceSegmentation,
AutoModelForKeypointDetection,
AutoModelForMaskedImageModeling,
AutoModelForMaskedLM,
AutoModelForMaskGeneration,
AutoModelForMultipleChoice,
AutoModelForNextSentencePrediction,
AutoModelForObjectDetection,
AutoModelForPreTraining,
AutoModelForQuestionAnswering,
AutoModelForSemanticSegmentation,
AutoModelForSeq2SeqLM,
AutoModelForSequenceClassification,
AutoModelForSpeechSeq2Seq,
AutoModelForTableQuestionAnswering,
AutoModelForTextEncoding,
AutoModelForTextToSpectrogram,
AutoModelForTextToWaveform,
AutoModelForTokenClassification,
AutoModelForUniversalSegmentation,
AutoModelForVideoClassification,
AutoModelForVision2Seq,
AutoModelForVisualQuestionAnswering,
AutoModelForZeroShotImageClassification,
AutoModelForZeroShotObjectDetection,
AutoModelWithLMHead,
)
try:
if not is_tf_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_tf_auto import (
TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING,
TF_MODEL_FOR_CAUSAL_LM_MAPPING,
TF_MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING,
TF_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING,
TF_MODEL_FOR_MASK_GENERATION_MAPPING,
TF_MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING,
TF_MODEL_FOR_MASKED_LM_MAPPING,
TF_MODEL_FOR_MULTIPLE_CHOICE_MAPPING,
TF_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING,
TF_MODEL_FOR_PRETRAINING_MAPPING,
TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING,
TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING,
TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING,
TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING,
TF_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING,
TF_MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING,
TF_MODEL_FOR_TEXT_ENCODING_MAPPING,
TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING,
TF_MODEL_FOR_VISION_2_SEQ_MAPPING,
TF_MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING,
TF_MODEL_MAPPING,
TF_MODEL_WITH_LM_HEAD_MAPPING,
TFAutoModel,
TFAutoModelForAudioClassification,
TFAutoModelForCausalLM,
TFAutoModelForDocumentQuestionAnswering,
TFAutoModelForImageClassification,
TFAutoModelForMaskedImageModeling,
TFAutoModelForMaskedLM,
TFAutoModelForMaskGeneration,
TFAutoModelForMultipleChoice,
TFAutoModelForNextSentencePrediction,
TFAutoModelForPreTraining,
TFAutoModelForQuestionAnswering,
TFAutoModelForSemanticSegmentation,
TFAutoModelForSeq2SeqLM,
TFAutoModelForSequenceClassification,
TFAutoModelForSpeechSeq2Seq,
TFAutoModelForTableQuestionAnswering,
TFAutoModelForTextEncoding,
TFAutoModelForTokenClassification,
TFAutoModelForVision2Seq,
TFAutoModelForZeroShotImageClassification,
TFAutoModelWithLMHead,
)
try:
if not is_flax_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_flax_auto import (
FLAX_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING,
FLAX_MODEL_FOR_CAUSAL_LM_MAPPING,
FLAX_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING,
FLAX_MODEL_FOR_MASKED_LM_MAPPING,
FLAX_MODEL_FOR_MULTIPLE_CHOICE_MAPPING,
FLAX_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING,
FLAX_MODEL_FOR_PRETRAINING_MAPPING,
FLAX_MODEL_FOR_QUESTION_ANSWERING_MAPPING,
FLAX_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING,
FLAX_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING,
FLAX_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING,
FLAX_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING,
FLAX_MODEL_FOR_VISION_2_SEQ_MAPPING,
FLAX_MODEL_MAPPING,
FlaxAutoModel,
FlaxAutoModelForCausalLM,
FlaxAutoModelForImageClassification,
FlaxAutoModelForMaskedLM,
FlaxAutoModelForMultipleChoice,
FlaxAutoModelForNextSentencePrediction,
FlaxAutoModelForPreTraining,
FlaxAutoModelForQuestionAnswering,
FlaxAutoModelForSeq2SeqLM,
FlaxAutoModelForSequenceClassification,
FlaxAutoModelForSpeechSeq2Seq,
FlaxAutoModelForTokenClassification,
FlaxAutoModelForVision2Seq,
)
from .auto_factory import *
from .configuration_auto import *
from .feature_extraction_auto import *
from .image_processing_auto import *
from .modeling_auto import *
from .modeling_flax_auto import *
from .modeling_tf_auto import *
from .processing_auto import *
from .tokenization_auto import *
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)


@ -844,3 +844,6 @@ class _LazyAutoMapping(OrderedDict):
raise ValueError(f"'{key}' is already used by a Transformers model.")
self._extra_content[key] = value
__all__ = ["get_values"]


@ -1173,3 +1173,6 @@ class AutoConfig:
"match!"
)
CONFIG_MAPPING.register(model_type, config, exist_ok=exist_ok)
__all__ = ["CONFIG_MAPPING", "MODEL_NAMES_MAPPING", "AutoConfig"]


@ -406,3 +406,6 @@ class AutoFeatureExtractor:
feature_extractor_class ([`FeatureExtractorMixin`]): The feature extractor to register.
"""
FEATURE_EXTRACTOR_MAPPING.register(config_class, feature_extractor_class, exist_ok=exist_ok)
__all__ = ["FEATURE_EXTRACTOR_MAPPING", "AutoFeatureExtractor"]


@ -36,6 +36,7 @@ from ...utils import (
is_vision_available,
logging,
)
from ...utils.import_utils import requires
from .auto_factory import _LazyAutoMapping
from .configuration_auto import (
CONFIG_MAPPING_NAMES,
@ -324,6 +325,7 @@ def _warning_fast_image_processor_available(fast_class):
)
@requires(backends=("vision", "torchvision"))
class AutoImageProcessor:
r"""
This is a generic image processor class that will be instantiated as one of the image processor classes of the
@ -640,3 +642,6 @@ class AutoImageProcessor:
IMAGE_PROCESSOR_MAPPING.register(
config_class, (slow_image_processor_class, fast_image_processor_class), exist_ok=exist_ok
)
__all__ = ["IMAGE_PROCESSOR_MAPPING", "AutoImageProcessor"]


@ -1955,3 +1955,90 @@ class AutoModelWithLMHead(_AutoModelWithLMHead):
FutureWarning,
)
return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
__all__ = [
"MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING",
"MODEL_FOR_AUDIO_FRAME_CLASSIFICATION_MAPPING",
"MODEL_FOR_AUDIO_XVECTOR_MAPPING",
"MODEL_FOR_BACKBONE_MAPPING",
"MODEL_FOR_CAUSAL_IMAGE_MODELING_MAPPING",
"MODEL_FOR_CAUSAL_LM_MAPPING",
"MODEL_FOR_CTC_MAPPING",
"MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING",
"MODEL_FOR_DEPTH_ESTIMATION_MAPPING",
"MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING",
"MODEL_FOR_IMAGE_MAPPING",
"MODEL_FOR_IMAGE_SEGMENTATION_MAPPING",
"MODEL_FOR_IMAGE_TO_IMAGE_MAPPING",
"MODEL_FOR_KEYPOINT_DETECTION_MAPPING",
"MODEL_FOR_INSTANCE_SEGMENTATION_MAPPING",
"MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING",
"MODEL_FOR_MASKED_LM_MAPPING",
"MODEL_FOR_MASK_GENERATION_MAPPING",
"MODEL_FOR_MULTIPLE_CHOICE_MAPPING",
"MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING",
"MODEL_FOR_OBJECT_DETECTION_MAPPING",
"MODEL_FOR_PRETRAINING_MAPPING",
"MODEL_FOR_QUESTION_ANSWERING_MAPPING",
"MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING",
"MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING",
"MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING",
"MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING",
"MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING",
"MODEL_FOR_TEXT_ENCODING_MAPPING",
"MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING",
"MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING",
"MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING",
"MODEL_FOR_UNIVERSAL_SEGMENTATION_MAPPING",
"MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING",
"MODEL_FOR_VISION_2_SEQ_MAPPING",
"MODEL_FOR_RETRIEVAL_MAPPING",
"MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING",
"MODEL_FOR_VISUAL_QUESTION_ANSWERING_MAPPING",
"MODEL_MAPPING",
"MODEL_WITH_LM_HEAD_MAPPING",
"MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING",
"MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING",
"MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING",
"MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING",
"AutoModel",
"AutoBackbone",
"AutoModelForAudioClassification",
"AutoModelForAudioFrameClassification",
"AutoModelForAudioXVector",
"AutoModelForCausalLM",
"AutoModelForCTC",
"AutoModelForDepthEstimation",
"AutoModelForImageClassification",
"AutoModelForImageSegmentation",
"AutoModelForImageToImage",
"AutoModelForInstanceSegmentation",
"AutoModelForKeypointDetection",
"AutoModelForMaskGeneration",
"AutoModelForTextEncoding",
"AutoModelForMaskedImageModeling",
"AutoModelForMaskedLM",
"AutoModelForMultipleChoice",
"AutoModelForNextSentencePrediction",
"AutoModelForObjectDetection",
"AutoModelForPreTraining",
"AutoModelForQuestionAnswering",
"AutoModelForSemanticSegmentation",
"AutoModelForSeq2SeqLM",
"AutoModelForSequenceClassification",
"AutoModelForSpeechSeq2Seq",
"AutoModelForTableQuestionAnswering",
"AutoModelForTextToSpectrogram",
"AutoModelForTextToWaveform",
"AutoModelForTokenClassification",
"AutoModelForUniversalSegmentation",
"AutoModelForVideoClassification",
"AutoModelForVision2Seq",
"AutoModelForVisualQuestionAnswering",
"AutoModelForDocumentQuestionAnswering",
"AutoModelWithLMHead",
"AutoModelForZeroShotImageClassification",
"AutoModelForZeroShotObjectDetection",
"AutoModelForImageTextToText",
]


@ -381,3 +381,33 @@ class FlaxAutoModelForSpeechSeq2Seq(_BaseAutoModelClass):
FlaxAutoModelForSpeechSeq2Seq = auto_class_update(
FlaxAutoModelForSpeechSeq2Seq, head_doc="sequence-to-sequence speech-to-text modeling"
)
__all__ = [
"FLAX_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING",
"FLAX_MODEL_FOR_CAUSAL_LM_MAPPING",
"FLAX_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING",
"FLAX_MODEL_FOR_MASKED_LM_MAPPING",
"FLAX_MODEL_FOR_MULTIPLE_CHOICE_MAPPING",
"FLAX_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING",
"FLAX_MODEL_FOR_PRETRAINING_MAPPING",
"FLAX_MODEL_FOR_QUESTION_ANSWERING_MAPPING",
"FLAX_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING",
"FLAX_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING",
"FLAX_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING",
"FLAX_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING",
"FLAX_MODEL_FOR_VISION_2_SEQ_MAPPING",
"FLAX_MODEL_MAPPING",
"FlaxAutoModel",
"FlaxAutoModelForCausalLM",
"FlaxAutoModelForImageClassification",
"FlaxAutoModelForMaskedLM",
"FlaxAutoModelForMultipleChoice",
"FlaxAutoModelForNextSentencePrediction",
"FlaxAutoModelForPreTraining",
"FlaxAutoModelForQuestionAnswering",
"FlaxAutoModelForSeq2SeqLM",
"FlaxAutoModelForSequenceClassification",
"FlaxAutoModelForSpeechSeq2Seq",
"FlaxAutoModelForTokenClassification",
"FlaxAutoModelForVision2Seq",
]


@ -726,3 +726,51 @@ class TFAutoModelWithLMHead(_TFAutoModelWithLMHead):
FutureWarning,
)
return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
__all__ = [
"TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING",
"TF_MODEL_FOR_CAUSAL_LM_MAPPING",
"TF_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING",
"TF_MODEL_FOR_MASK_GENERATION_MAPPING",
"TF_MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING",
"TF_MODEL_FOR_MASKED_LM_MAPPING",
"TF_MODEL_FOR_MULTIPLE_CHOICE_MAPPING",
"TF_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING",
"TF_MODEL_FOR_PRETRAINING_MAPPING",
"TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING",
"TF_MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING",
"TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING",
"TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING",
"TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING",
"TF_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING",
"TF_MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING",
"TF_MODEL_FOR_TEXT_ENCODING_MAPPING",
"TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING",
"TF_MODEL_FOR_VISION_2_SEQ_MAPPING",
"TF_MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING",
"TF_MODEL_MAPPING",
"TF_MODEL_WITH_LM_HEAD_MAPPING",
"TFAutoModel",
"TFAutoModelForAudioClassification",
"TFAutoModelForCausalLM",
"TFAutoModelForImageClassification",
"TFAutoModelForMaskedImageModeling",
"TFAutoModelForMaskedLM",
"TFAutoModelForMaskGeneration",
"TFAutoModelForMultipleChoice",
"TFAutoModelForNextSentencePrediction",
"TFAutoModelForPreTraining",
"TFAutoModelForDocumentQuestionAnswering",
"TFAutoModelForQuestionAnswering",
"TFAutoModelForSemanticSegmentation",
"TFAutoModelForSeq2SeqLM",
"TFAutoModelForSequenceClassification",
"TFAutoModelForSpeechSeq2Seq",
"TFAutoModelForTableQuestionAnswering",
"TFAutoModelForTextEncoding",
"TFAutoModelForTokenClassification",
"TFAutoModelForVision2Seq",
"TFAutoModelForZeroShotImageClassification",
"TFAutoModelWithLMHead",
]


@ -389,3 +389,6 @@ class AutoProcessor:
processor_class ([`ProcessorMixin`]): The processor to register.
"""
PROCESSOR_MAPPING.register(config_class, processor_class, exist_ok=exist_ok)
__all__ = ["PROCESSOR_MAPPING", "AutoProcessor"]


@ -1083,3 +1083,6 @@ class AutoTokenizer:
fast_tokenizer_class = existing_fast
TOKENIZER_MAPPING.register(config_class, (slow_tokenizer_class, fast_tokenizer_class), exist_ok=exist_ok)
__all__ = ["TOKENIZER_MAPPING", "AutoTokenizer"]


@ -13,45 +13,15 @@
# limitations under the License.
from typing import TYPE_CHECKING
# rely on isort to merge the imports
from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available
_import_structure = {
"configuration_autoformer": ["AutoformerConfig"],
}
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_autoformer"] = [
"AutoformerForPrediction",
"AutoformerModel",
"AutoformerPreTrainedModel",
]
from ...utils import _LazyModule
from ...utils.import_utils import define_import_structure
if TYPE_CHECKING:
from .configuration_autoformer import (
AutoformerConfig,
)
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_autoformer import (
AutoformerForPrediction,
AutoformerModel,
AutoformerPreTrainedModel,
)
from .configuration_autoformer import *
from .modeling_autoformer import *
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)


@ -240,3 +240,6 @@ class AutoformerConfig(PretrainedConfig):
+ self.num_static_real_features
+ self.input_size * 2 # the log1p(abs(loc)) and log(scale) features
)
__all__ = ["AutoformerConfig"]


@ -2147,3 +2147,6 @@ class AutoformerForPrediction(AutoformerPreTrainedModel):
(-1, num_parallel_samples, self.config.prediction_length) + self.target_shape,
)
)
__all__ = ["AutoformerForPrediction", "AutoformerModel", "AutoformerPreTrainedModel"]


@ -22,6 +22,7 @@ import sentencepiece as spm
from ...tokenization_utils import AddedToken, PreTrainedTokenizer
from ...utils import logging
from ...utils.import_utils import requires
logger = logging.get_logger(__name__)
@ -34,6 +35,7 @@ SPIECE_UNDERLINE = "▁"
# TODO this class is useless. This is the most standard sentencepiece model. Let's find which one is closest and nuke this.
@requires(backends=("sentencepiece",))
class BarthezTokenizer(PreTrainedTokenizer):
"""
Adapted from [`CamembertTokenizer`] and [`BartTokenizer`]. Construct a BARThez tokenizer. Based on


@ -22,6 +22,7 @@ import sentencepiece as spm
from ...tokenization_utils import AddedToken, PreTrainedTokenizer
from ...utils import logging
from ...utils.import_utils import requires
logger = logging.get_logger(__name__)
@ -31,6 +32,7 @@ SPIECE_UNDERLINE = "▁"
VOCAB_FILES_NAMES = {"vocab_file": "sentencepiece.bpe.model", "monolingual_vocab_file": "dict.txt"}
@requires(backends=("sentencepiece",))
class BartphoTokenizer(PreTrainedTokenizer):
"""
Adapted from [`XLMRobertaTokenizer`]. Based on [SentencePiece](https://github.com/google/sentencepiece).


@ -17,12 +17,14 @@
import warnings
from ...utils import logging
from ...utils.import_utils import requires
from .image_processing_beit import BeitImageProcessor
logger = logging.get_logger(__name__)
@requires(backends=("vision",))
class BeitFeatureExtractor(BeitImageProcessor):
def __init__(self, *args, **kwargs) -> None:
warnings.warn(


@ -42,6 +42,7 @@ from ...utils import (
logging,
)
from ...utils.deprecation import deprecate_kwarg
from ...utils.import_utils import requires
if is_vision_available():
@ -54,6 +55,7 @@ if is_torch_available():
logger = logging.get_logger(__name__)
@requires(backends=("vision",))
class BeitImageProcessor(BaseImageProcessor):
r"""
Constructs a BEiT image processor.


@ -6,9 +6,11 @@ from tensorflow_text import BertTokenizer as BertTokenizerLayer
from tensorflow_text import FastBertTokenizer, ShrinkLongestTrimmer, case_fold_utf8, combine_segments, pad_model_inputs
from ...modeling_tf_utils import keras
from ...utils.import_utils import requires
from .tokenization_bert import BertTokenizer
@requires(backends=("tf", "tensorflow_text"))
class TFBertTokenizer(keras.layers.Layer):
"""
This is an in-graph tokenizer for BERT. It should be initialized similarly to other tokenizers, using the


@ -22,6 +22,7 @@ import sentencepiece as spm
from ...tokenization_utils import PreTrainedTokenizer
from ...utils import logging
from ...utils.import_utils import requires
logger = logging.get_logger(__name__)
@ -29,6 +30,7 @@ logger = logging.get_logger(__name__)
VOCAB_FILES_NAMES = {"vocab_file": "spiece.model"}
@requires(backends=("sentencepiece",))
class BertGenerationTokenizer(PreTrainedTokenizer):
"""
Construct a BertGeneration tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).


@ -23,6 +23,7 @@ import sentencepiece as spm
from ...tokenization_utils import AddedToken, PreTrainedTokenizer
from ...utils import logging
from ...utils.import_utils import requires
logger = logging.get_logger(__name__)
@ -30,6 +31,7 @@ logger = logging.get_logger(__name__)
VOCAB_FILES_NAMES = {"vocab_file": "spiece.model"}
@requires(backends=("sentencepiece",))
class BigBirdTokenizer(PreTrainedTokenizer):
"""
Construct a BigBird tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).


@ -22,7 +22,9 @@ if TYPE_CHECKING:
from .image_processing_blip import *
from .image_processing_blip_fast import *
from .modeling_blip import *
from .modeling_blip_text import *
from .modeling_tf_blip import *
from .modeling_tf_blip_text import *
from .processing_blip import *
else:
import sys


@ -955,3 +955,6 @@ class BlipTextLMHeadModel(BlipTextPreTrainedModel, GenerationMixin):
tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past),
)
return reordered_past
__all__ = ["BlipTextModel", "BlipTextLMHeadModel", "BlipTextPreTrainedModel"]


@ -1120,3 +1120,6 @@ class TFBlipTextLMHeadModel(TFBlipTextPreTrainedModel):
if getattr(self, "cls", None) is not None:
with tf.name_scope(self.cls.name):
self.cls.build(None)
__all__ = ["TFBlipTextLMHeadModel", "TFBlipTextModel", "TFBlipTextPreTrainedModel"]


@ -22,6 +22,7 @@ import sentencepiece as spm
from ...tokenization_utils import AddedToken, PreTrainedTokenizer
from ...utils import logging
from ...utils.import_utils import requires
logger = logging.get_logger(__name__)
@ -32,6 +33,7 @@ VOCAB_FILES_NAMES = {"vocab_file": "sentencepiece.bpe.model"}
SPIECE_UNDERLINE = "▁"
@requires(backends=("sentencepiece",))
class CamembertTokenizer(PreTrainedTokenizer):
"""
Adapted from [`RobertaTokenizer`] and [`XLNetTokenizer`]. Construct a CamemBERT tokenizer. Based on


@ -17,12 +17,14 @@
import warnings
from ...utils import logging
from ...utils.import_utils import requires
from .image_processing_chinese_clip import ChineseCLIPImageProcessor
logger = logging.get_logger(__name__)
@requires(backends=("vision",))
class ChineseCLIPFeatureExtractor(ChineseCLIPImageProcessor):
def __init__(self, *args, **kwargs) -> None:
warnings.warn(


@ -41,13 +41,17 @@ from ...image_utils import (
from ...utils import TensorType, filter_out_non_signature_kwargs, is_vision_available, logging
logger = logging.get_logger(__name__)
if is_vision_available():
import PIL
from ...utils.import_utils import requires
logger = logging.get_logger(__name__)
@requires(backends=("vision",))
class ChineseCLIPImageProcessor(BaseImageProcessor):
r"""
Constructs a Chinese-CLIP image processor.


@ -24,11 +24,13 @@ from ...audio_utils import mel_filter_bank, spectrogram, window_function
from ...feature_extraction_sequence_utils import SequenceFeatureExtractor
from ...feature_extraction_utils import BatchFeature
from ...utils import TensorType, logging
from ...utils.import_utils import requires
logger = logging.get_logger(__name__)
@requires(backends=("torch",))
class ClapFeatureExtractor(SequenceFeatureExtractor):
r"""
Constructs a CLAP feature extractor.


@ -17,12 +17,14 @@
import warnings
from ...utils import logging
from ...utils.import_utils import requires
from .image_processing_clip import CLIPImageProcessor
logger = logging.get_logger(__name__)
@requires(backends=("vision",))
class CLIPFeatureExtractor(CLIPImageProcessor):
def __init__(self, *args, **kwargs) -> None:
warnings.warn(


@ -40,6 +40,7 @@ from ...image_utils import (
validate_preprocess_arguments,
)
from ...utils import TensorType, is_vision_available, logging
from ...utils.import_utils import requires
logger = logging.get_logger(__name__)
@ -49,6 +50,7 @@ if is_vision_available():
import PIL
@requires(backends=("vision",))
class CLIPImageProcessor(BaseImageProcessor):
r"""
Constructs a CLIP image processor.


@ -25,6 +25,7 @@ import sentencepiece as spm
from ...convert_slow_tokenizer import import_protobuf
from ...tokenization_utils import AddedToken, PreTrainedTokenizer
from ...utils import logging, requires_backends
from ...utils.import_utils import requires
logger = logging.get_logger(__name__)
@ -46,6 +47,7 @@ correct. If you don't know the answer to a question, please don't share false in
# fmt: on
@requires(backends=("sentencepiece",))
class CodeLlamaTokenizer(PreTrainedTokenizer):
"""
Construct a CodeLlama tokenizer. Based on byte-level Byte-Pair-Encoding. The default padding token is unset as


@ -288,6 +288,5 @@ class ColPaliForRetrieval(ColPaliPreTrainedModel):
__all__ = [
"ColPaliForRetrieval",
"ColPaliForRetrievalOutput",
"ColPaliPreTrainedModel",
]


@ -18,6 +18,7 @@ import warnings
from ...image_transforms import rgb_to_id as _rgb_to_id
from ...utils import logging
from ...utils.import_utils import requires
from .image_processing_conditional_detr import ConditionalDetrImageProcessor
@ -33,6 +34,7 @@ def rgb_to_id(x):
return _rgb_to_id(x)
@requires(backends=("vision",))
class ConditionalDetrFeatureExtractor(ConditionalDetrImageProcessor):
def __init__(self, *args, **kwargs) -> None:
warnings.warn(


@ -64,6 +64,7 @@ from ...utils import (
is_vision_available,
logging,
)
from ...utils.import_utils import requires
if is_torch_available():
@ -801,6 +802,7 @@ def compute_segments(
return segmentation, segments
@requires(backends=("vision",))
class ConditionalDetrImageProcessor(BaseImageProcessor):
r"""
Constructs a Conditional Detr image processor.


@ -17,12 +17,14 @@
import warnings
from ...utils import logging
from ...utils.import_utils import requires
from .image_processing_convnext import ConvNextImageProcessor
logger = logging.get_logger(__name__)
@requires(backends=("vision",))
class ConvNextFeatureExtractor(ConvNextImageProcessor):
def __init__(self, *args, **kwargs) -> None:
warnings.warn(


@ -39,6 +39,7 @@ from ...image_utils import (
validate_preprocess_arguments,
)
from ...utils import TensorType, filter_out_non_signature_kwargs, is_vision_available, logging
from ...utils.import_utils import requires
if is_vision_available():
@ -48,6 +49,7 @@ if is_vision_available():
logger = logging.get_logger(__name__)
@requires(backends=("vision",))
class ConvNextImageProcessor(BaseImageProcessor):
r"""
Constructs a ConvNeXT image processor.


@ -23,6 +23,7 @@ import sentencepiece as spm
from ...tokenization_utils import AddedToken, PreTrainedTokenizer
from ...utils import SPIECE_UNDERLINE, logging
from ...utils.import_utils import requires
logger = logging.get_logger(__name__)
@ -30,6 +31,7 @@ logger = logging.get_logger(__name__)
VOCAB_FILES_NAMES = {"vocab_file": "spiece.model"}
@requires(backends=("sentencepiece",))
class CpmTokenizer(PreTrainedTokenizer):
"""Runs pre-tokenization with Jieba segmentation tool. It is used in CPM models."""


@ -22,6 +22,7 @@ import sentencepiece as sp
from ...tokenization_utils import AddedToken, PreTrainedTokenizer
from ...utils import logging
from ...utils.import_utils import requires
logger = logging.get_logger(__name__)
@ -30,6 +31,7 @@ logger = logging.get_logger(__name__)
VOCAB_FILES_NAMES = {"vocab_file": "spm.model"}
@requires(backends=("sentencepiece",))
class DebertaV2Tokenizer(PreTrainedTokenizer):
r"""
Constructs a DeBERTa-v2 tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).


@ -18,6 +18,7 @@ import warnings
from ...image_transforms import rgb_to_id as _rgb_to_id
from ...utils import logging
from ...utils.import_utils import requires
from .image_processing_deformable_detr import DeformableDetrImageProcessor
@ -33,6 +34,7 @@ def rgb_to_id(x):
return _rgb_to_id(x)
@requires(backends=("vision",))
class DeformableDetrFeatureExtractor(DeformableDetrImageProcessor):
def __init__(self, *args, **kwargs) -> None:
warnings.warn(


@ -64,6 +64,7 @@ from ...utils import (
is_vision_available,
logging,
)
from ...utils.import_utils import requires
if is_torch_available():
@ -799,6 +800,7 @@ def compute_segments(
return segmentation, segments
@requires(backends=("torch", "vision"))
class DeformableDetrImageProcessor(BaseImageProcessor):
r"""
Constructs a Deformable DETR image processor.


@ -39,6 +39,7 @@ from ...utils import (
is_torchvision_v2_available,
logging,
)
from ...utils.import_utils import requires
from .image_processing_deformable_detr import get_size_with_aspect_ratio
@ -288,6 +289,7 @@ def prepare_coco_panoptic_annotation(
Whether to return segmentation masks.
""",
)
@requires(backends=("torchvision", "torch"))
class DeformableDetrImageProcessorFast(BaseImageProcessorFast):
resample = PILImageResampling.BILINEAR
image_mean = IMAGENET_DEFAULT_MEAN


@ -17,12 +17,14 @@
import warnings
from ...utils import logging
from ...utils.import_utils import requires
from .image_processing_deit import DeiTImageProcessor
logger = logging.get_logger(__name__)
@requires(backends=("vision",))
class DeiTFeatureExtractor(DeiTImageProcessor):
def __init__(self, *args, **kwargs) -> None:
warnings.warn(


@ -34,6 +34,7 @@ from ...image_utils import (
validate_preprocess_arguments,
)
from ...utils import TensorType, filter_out_non_signature_kwargs, is_vision_available, logging
from ...utils.import_utils import requires
if is_vision_available():
@ -43,6 +44,7 @@ if is_vision_available():
logger = logging.get_logger(__name__)
@requires(backends=("vision",))
class DeiTImageProcessor(BaseImageProcessor):
r"""
Constructs a DeiT image processor.


@ -0,0 +1,49 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING
from ...utils import _LazyModule
from ...utils.import_utils import define_import_structure
if TYPE_CHECKING:
from .bort import *
from .deta import *
from .efficientformer import *
from .ernie_m import *
from .gptsan_japanese import *
from .graphormer import *
from .jukebox import *
from .mctct import *
from .mega import *
from .mmbt import *
from .nat import *
from .nezha import *
from .open_llama import *
from .qdqbert import *
from .realm import *
from .retribert import *
from .speech_to_text_2 import *
from .tapex import *
from .trajectory_transformer import *
from .transfo_xl import *
from .tvlt import *
from .van import *
from .vit_hybrid import *
from .xlm_prophetnet import *
else:
import sys
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)


@ -11,61 +11,18 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING
from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available, is_vision_available
_import_structure = {
"configuration_deta": ["DetaConfig"],
}
try:
if not is_vision_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["image_processing_deta"] = ["DetaImageProcessor"]
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_deta"] = [
"DetaForObjectDetection",
"DetaModel",
"DetaPreTrainedModel",
]
from ....utils import _LazyModule
from ....utils.import_utils import define_import_structure
if TYPE_CHECKING:
from .configuration_deta import DetaConfig
try:
if not is_vision_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .image_processing_deta import DetaImageProcessor
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_deta import (
DetaForObjectDetection,
DetaModel,
DetaPreTrainedModel,
)
from .configuration_deta import *
from .image_processing_deta import *
from .modeling_deta import *
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)

View File

@ -265,3 +265,6 @@ class DetaConfig(PretrainedConfig):
@property
def hidden_size(self) -> int:
return self.d_model
__all__ = ["DetaConfig"]

View File
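
`define_import_structure` builds that lazy import map from each submodule's explicit `__all__`, which is why every converted file in this diff gains one. Sketch of a submodule's end state (names are illustrative, not from this commit):

```python
# configuration_my_model.py -- hypothetical submodule. Only the names listed
# in __all__ are exposed when the package __init__ star-imports this file
# through the lazy-module machinery.


class MyModelConfig:
    model_type = "my-model"


__all__ = ["MyModelConfig"]
```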

@ -1222,3 +1222,6 @@ class DetaImageProcessor(BaseImageProcessor):
)
return results
__all__ = ["DetaImageProcessor"]

View File

@ -2822,3 +2822,6 @@ class DetaStage1Assigner(nn.Module):
def postprocess_indices(self, pr_inds, gt_inds, iou):
return sample_topk_per_gt(pr_inds, gt_inds, iou, self.k)
__all__ = ["DetaForObjectDetection", "DetaModel", "DetaPreTrainedModel"]

View File
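
With the deta config, image processor, and modeling files converted, a user-facing import goes through the lazy module. A quick usage sketch, assuming only that `DetaConfig`, like most configuration classes, needs no optional backend:

```python
# Importing the config triggers no heavy dependencies by itself;
# modeling_deta would be loaded only if a modeling class were requested.
from transformers.models.deprecated.deta import DetaConfig

config = DetaConfig()
print(config.hidden_size)  # alias for d_model, per the property in the hunk above
```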

@ -13,88 +13,17 @@
# limitations under the License.
from typing import TYPE_CHECKING
from ....utils import (
OptionalDependencyNotAvailable,
_LazyModule,
is_tf_available,
is_torch_available,
is_vision_available,
)
from ....utils import _LazyModule
from ....utils.import_utils import define_import_structure
_import_structure = {"configuration_efficientformer": ["EfficientFormerConfig"]}
try:
if not is_vision_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["image_processing_efficientformer"] = ["EfficientFormerImageProcessor"]
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_efficientformer"] = [
"EfficientFormerForImageClassification",
"EfficientFormerForImageClassificationWithTeacher",
"EfficientFormerModel",
"EfficientFormerPreTrainedModel",
]
try:
if not is_tf_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_tf_efficientformer"] = [
"TFEfficientFormerForImageClassification",
"TFEfficientFormerForImageClassificationWithTeacher",
"TFEfficientFormerModel",
"TFEfficientFormerPreTrainedModel",
]
if TYPE_CHECKING:
from .configuration_efficientformer import EfficientFormerConfig
try:
if not is_vision_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .image_processing_efficientformer import EfficientFormerImageProcessor
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_efficientformer import (
EfficientFormerForImageClassification,
EfficientFormerForImageClassificationWithTeacher,
EfficientFormerModel,
EfficientFormerPreTrainedModel,
)
try:
if not is_tf_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_tf_efficientformer import (
TFEfficientFormerForImageClassification,
TFEfficientFormerForImageClassificationWithTeacher,
TFEfficientFormerModel,
TFEfficientFormerPreTrainedModel,
)
from .configuration_efficientformer import *
from .image_processing_efficientformer import *
from .modeling_efficientformer import *
from .modeling_tf_efficientformer import *
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)

View File

@ -165,3 +165,8 @@ class EfficientFormerConfig(PretrainedConfig):
self.layer_scale_init_value = layer_scale_init_value
self.image_size = image_size
self.batch_norm_eps = batch_norm_eps
__all__ = [
"EfficientFormerConfig",
]

View File

@ -319,3 +319,6 @@ class EfficientFormerImageProcessor(BaseImageProcessor):
data = {"pixel_values": images}
return BatchFeature(data=data, tensor_type=return_tensors)
__all__ = ["EfficientFormerImageProcessor"]

View File

@ -797,3 +797,11 @@ class EfficientFormerForImageClassificationWithTeacher(EfficientFormerPreTrained
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
__all__ = [
"EfficientFormerForImageClassification",
"EfficientFormerForImageClassificationWithTeacher",
"EfficientFormerModel",
"EfficientFormerPreTrainedModel",
]

View File

@ -1188,3 +1188,11 @@ class TFEfficientFormerForImageClassificationWithTeacher(TFEfficientFormerPreTra
if hasattr(self.distillation_classifier, "name"):
with tf.name_scope(self.distillation_classifier.name):
self.distillation_classifier.build([None, None, self.config.hidden_sizes[-1]])
__all__ = [
"TFEfficientFormerForImageClassification",
"TFEfficientFormerForImageClassificationWithTeacher",
"TFEfficientFormerModel",
"TFEfficientFormerPreTrainedModel",
]

View File

@ -13,68 +13,16 @@
# limitations under the License.
from typing import TYPE_CHECKING
# rely on isort to merge the imports
from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_sentencepiece_available, is_torch_available
_import_structure = {
"configuration_ernie_m": ["ErnieMConfig"],
}
try:
if not is_sentencepiece_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["tokenization_ernie_m"] = ["ErnieMTokenizer"]
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_ernie_m"] = [
"ErnieMForMultipleChoice",
"ErnieMForQuestionAnswering",
"ErnieMForSequenceClassification",
"ErnieMForTokenClassification",
"ErnieMModel",
"ErnieMPreTrainedModel",
"ErnieMForInformationExtraction",
]
from ....utils import _LazyModule
from ....utils.import_utils import define_import_structure
if TYPE_CHECKING:
from .configuration_ernie_m import ErnieMConfig
try:
if not is_sentencepiece_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .tokenization_ernie_m import ErnieMTokenizer
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_ernie_m import (
ErnieMForInformationExtraction,
ErnieMForMultipleChoice,
ErnieMForQuestionAnswering,
ErnieMForSequenceClassification,
ErnieMForTokenClassification,
ErnieMModel,
ErnieMPreTrainedModel,
)
from .configuration_ernie_m import *
from .modeling_ernie_m import *
from .tokenization_ernie_m import *
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)

View File

@ -109,3 +109,6 @@ class ErnieMConfig(PretrainedConfig):
self.layer_norm_eps = layer_norm_eps
self.classifier_dropout = classifier_dropout
self.act_dropout = act_dropout
__all__ = ["ErnieMConfig"]

View File

@ -1045,3 +1045,14 @@ class ErnieMForInformationExtraction(ErnieMPreTrainedModel):
hidden_states=result.hidden_states,
attentions=result.attentions,
)
__all__ = [
"ErnieMForMultipleChoice",
"ErnieMForQuestionAnswering",
"ErnieMForSequenceClassification",
"ErnieMForTokenClassification",
"ErnieMModel",
"ErnieMPreTrainedModel",
"ErnieMForInformationExtraction",
]

View File

@ -23,6 +23,7 @@ import sentencepiece as spm
from ....tokenization_utils import PreTrainedTokenizer
from ....utils import logging
from ....utils.import_utils import requires
logger = logging.get_logger(__name__)
@ -38,6 +39,7 @@ RESOURCE_FILES_NAMES = {
# Adapted from paddlenlp.transformers.ernie_m.tokenizer.ErnieMTokenizer
@requires(backends=("sentencepiece",))
class ErnieMTokenizer(PreTrainedTokenizer):
r"""
Constructs a Ernie-M tokenizer. It uses the `sentencepiece` tools to cut the words to sub-words.
@ -403,3 +405,6 @@ class ErnieMTokenizer(PreTrainedTokenizer):
fi.write(content_spiece_model)
return (vocab_file,)
__all__ = ["ErnieMTokenizer"]

View File

@ -11,58 +11,18 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING
from ....utils import (
OptionalDependencyNotAvailable,
_LazyModule,
is_flax_available,
is_tf_available,
is_torch_available,
)
_import_structure = {
"configuration_gptsan_japanese": ["GPTSanJapaneseConfig"],
"tokenization_gptsan_japanese": ["GPTSanJapaneseTokenizer"],
}
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_gptsan_japanese"] = [
"GPTSanJapaneseForConditionalGeneration",
"GPTSanJapaneseModel",
"GPTSanJapanesePreTrainedModel",
]
_import_structure["tokenization_gptsan_japanese"] = [
"GPTSanJapaneseTokenizer",
]
from ....utils import _LazyModule
from ....utils.import_utils import define_import_structure
if TYPE_CHECKING:
from .configuration_gptsan_japanese import GPTSanJapaneseConfig
from .tokenization_gptsan_japanese import GPTSanJapaneseTokenizer
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_gptsan_japanese import (
GPTSanJapaneseForConditionalGeneration,
GPTSanJapaneseModel,
GPTSanJapanesePreTrainedModel,
)
from .tokenization_gptsan_japanese import GPTSanJapaneseTokenizer
from .configuration_gptsan_japanese import *
from .modeling_gptsan_japanese import *
from .tokenization_gptsan_japanese import *
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)

View File

@ -152,3 +152,6 @@ class GPTSanJapaneseConfig(PretrainedConfig):
eos_token_id=eos_token_id,
**kwargs,
)
__all__ = ["GPTSanJapaneseConfig"]

View File

@ -1332,3 +1332,6 @@ class GPTSanJapaneseForConditionalGeneration(GPTSanJapanesePreTrainedModel):
total_router_logits.append(router_logits)
total_expert_indexes.append(expert_indexes)
return torch.cat(total_router_logits, dim=1), torch.cat(total_expert_indexes, dim=1)
__all__ = ["GPTSanJapaneseForConditionalGeneration", "GPTSanJapaneseModel", "GPTSanJapanesePreTrainedModel"]

View File

@ -513,3 +513,6 @@ class SubWordJapaneseTokenizer:
def convert_id_to_token(self, index):
return self.ids_to_tokens[index][0]
__all__ = ["GPTSanJapaneseTokenizer"]

View File

@ -13,43 +13,15 @@
# limitations under the License.
from typing import TYPE_CHECKING
from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_tokenizers_available, is_torch_available
_import_structure = {
"configuration_graphormer": ["GraphormerConfig"],
}
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_graphormer"] = [
"GraphormerForGraphClassification",
"GraphormerModel",
"GraphormerPreTrainedModel",
]
from ....utils import _LazyModule
from ....utils.import_utils import define_import_structure
if TYPE_CHECKING:
from .configuration_graphormer import GraphormerConfig
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_graphormer import (
GraphormerForGraphClassification,
GraphormerModel,
GraphormerPreTrainedModel,
)
from .configuration_graphormer import *
from .modeling_graphormer import *
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)

View File

@ -215,3 +215,6 @@ class GraphormerConfig(PretrainedConfig):
eos_token_id=eos_token_id,
**kwargs,
)
__all__ = ["GraphormerConfig"]

View File

@ -906,3 +906,6 @@ class GraphormerForGraphClassification(GraphormerPreTrainedModel):
if not return_dict:
return tuple(x for x in [loss, logits, hidden_states] if x is not None)
return SequenceClassifierOutput(loss=loss, logits=logits, hidden_states=hidden_states, attentions=None)
__all__ = ["GraphormerForGraphClassification", "GraphormerModel", "GraphormerPreTrainedModel"]

View File

@ -11,56 +11,18 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING
from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available
from ....utils import _LazyModule
from ....utils.import_utils import define_import_structure
_import_structure = {
"configuration_jukebox": [
"JukeboxConfig",
"JukeboxPriorConfig",
"JukeboxVQVAEConfig",
],
"tokenization_jukebox": ["JukeboxTokenizer"],
}
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_jukebox"] = [
"JukeboxModel",
"JukeboxPreTrainedModel",
"JukeboxVQVAE",
"JukeboxPrior",
]
if TYPE_CHECKING:
from .configuration_jukebox import (
JukeboxConfig,
JukeboxPriorConfig,
JukeboxVQVAEConfig,
)
from .tokenization_jukebox import JukeboxTokenizer
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_jukebox import (
JukeboxModel,
JukeboxPreTrainedModel,
JukeboxPrior,
JukeboxVQVAE,
)
from .configuration_jukebox import *
from .modeling_jukebox import *
from .tokenization_jukebox import *
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)

View File

@ -608,3 +608,6 @@ class JukeboxConfig(PretrainedConfig):
result = super().to_dict()
result["prior_config_list"] = [config.to_dict() for config in result.pop("prior_configs")]
return result
__all__ = ["JukeboxConfig", "JukeboxPriorConfig", "JukeboxVQVAEConfig"]

View File

@ -2665,3 +2665,6 @@ class JukeboxModel(JukeboxPreTrainedModel):
)
music_tokens = self._sample(music_tokens, labels, sample_levels, **sampling_kwargs)
return music_tokens
__all__ = ["JukeboxModel", "JukeboxPreTrainedModel", "JukeboxVQVAE", "JukeboxPrior"]

View File

@ -402,3 +402,6 @@ class JukeboxTokenizer(PreTrainedTokenizer):
genres = [self.genres_decoder.get(genre) for genre in genres_index]
lyrics = [self.lyrics_decoder.get(character) for character in lyric_index]
return artist, genres, lyrics
__all__ = ["JukeboxTokenizer"]

View File

@ -13,43 +13,17 @@
# limitations under the License.
from typing import TYPE_CHECKING
from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available
_import_structure = {
"configuration_mctct": ["MCTCTConfig"],
"feature_extraction_mctct": ["MCTCTFeatureExtractor"],
"processing_mctct": ["MCTCTProcessor"],
}
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_mctct"] = [
"MCTCTForCTC",
"MCTCTModel",
"MCTCTPreTrainedModel",
]
from ....utils import _LazyModule
from ....utils.import_utils import define_import_structure
if TYPE_CHECKING:
from .configuration_mctct import MCTCTConfig
from .feature_extraction_mctct import MCTCTFeatureExtractor
from .processing_mctct import MCTCTProcessor
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_mctct import MCTCTForCTC, MCTCTModel, MCTCTPreTrainedModel
from .configuration_mctct import *
from .feature_extraction_mctct import *
from .modeling_mctct import *
from .processing_mctct import *
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)

View File

@ -179,3 +179,6 @@ class MCTCTConfig(PretrainedConfig):
f"but is `len(config.conv_kernel) = {len(self.conv_kernel)}`, "
f"`config.num_conv_layers = {self.num_conv_layers}`."
)
__all__ = ["MCTCTConfig"]

View File

@ -286,3 +286,6 @@ class MCTCTFeatureExtractor(SequenceFeatureExtractor):
padded_inputs = padded_inputs.convert_to_tensors(return_tensors)
return padded_inputs
__all__ = ["MCTCTFeatureExtractor"]

View File

@ -786,3 +786,6 @@ class MCTCTForCTC(MCTCTPreTrainedModel):
return CausalLMOutput(
loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions
)
__all__ = ["MCTCTForCTC", "MCTCTModel", "MCTCTPreTrainedModel"]

View File

@ -141,3 +141,6 @@ class MCTCTProcessor(ProcessorMixin):
yield
self.current_processor = self.feature_extractor
self._in_target_context_manager = False
__all__ = ["MCTCTProcessor"]

View File

@ -11,58 +11,17 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING
from ....utils import (
OptionalDependencyNotAvailable,
_LazyModule,
is_torch_available,
)
from ....utils import _LazyModule
from ....utils.import_utils import define_import_structure
_import_structure = {
"configuration_mega": ["MegaConfig", "MegaOnnxConfig"],
}
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_mega"] = [
"MegaForCausalLM",
"MegaForMaskedLM",
"MegaForMultipleChoice",
"MegaForQuestionAnswering",
"MegaForSequenceClassification",
"MegaForTokenClassification",
"MegaModel",
"MegaPreTrainedModel",
]
if TYPE_CHECKING:
from .configuration_mega import MegaConfig, MegaOnnxConfig
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_mega import (
MegaForCausalLM,
MegaForMaskedLM,
MegaForMultipleChoice,
MegaForQuestionAnswering,
MegaForSequenceClassification,
MegaForTokenClassification,
MegaModel,
MegaPreTrainedModel,
)
from .configuration_mega import *
from .modeling_mega import *
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)

View File

@ -238,3 +238,6 @@ class MegaOnnxConfig(OnnxConfig):
("attention_mask", dynamic_axis),
]
)
__all__ = ["MegaConfig", "MegaOnnxConfig"]

View File

@ -2271,3 +2271,15 @@ class MegaForQuestionAnswering(MegaPreTrainedModel):
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
__all__ = [
"MegaForCausalLM",
"MegaForMaskedLM",
"MegaForMultipleChoice",
"MegaForQuestionAnswering",
"MegaForSequenceClassification",
"MegaForTokenClassification",
"MegaModel",
"MegaPreTrainedModel",
]

View File

@ -11,35 +11,17 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING
from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available
_import_structure = {"configuration_mmbt": ["MMBTConfig"]}
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_mmbt"] = ["MMBTForClassification", "MMBTModel", "ModalEmbeddings"]
from ....utils import _LazyModule
from ....utils.import_utils import define_import_structure
if TYPE_CHECKING:
from .configuration_mmbt import MMBTConfig
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_mmbt import MMBTForClassification, MMBTModel, ModalEmbeddings
from .configuration_mmbt import *
from .modeling_mmbt import *
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)

View File

@ -40,3 +40,6 @@ class MMBTConfig:
self.modal_hidden_size = modal_hidden_size
if num_labels:
self.num_labels = num_labels
__all__ = ["MMBTConfig"]

View File

@ -405,3 +405,6 @@ class MMBTForClassification(nn.Module):
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
__all__ = ["MMBTForClassification", "MMBTModel", "ModalEmbeddings"]

View File

@ -13,42 +13,15 @@
# limitations under the License.
from typing import TYPE_CHECKING
from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available
from ....utils import _LazyModule
from ....utils.import_utils import define_import_structure
_import_structure = {"configuration_nat": ["NatConfig"]}
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_nat"] = [
"NatForImageClassification",
"NatModel",
"NatPreTrainedModel",
"NatBackbone",
]
if TYPE_CHECKING:
from .configuration_nat import NatConfig
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_nat import (
NatBackbone,
NatForImageClassification,
NatModel,
NatPreTrainedModel,
)
from .configuration_nat import *
from .modeling_nat import *
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)

View File

@ -143,3 +143,6 @@ class NatConfig(BackboneConfigMixin, PretrainedConfig):
self._out_features, self._out_indices = get_aligned_output_features_output_indices(
out_features=out_features, out_indices=out_indices, stage_names=self.stage_names
)
__all__ = ["NatConfig"]

View File

@ -948,3 +948,6 @@ class NatBackbone(NatPreTrainedModel, BackboneMixin):
hidden_states=outputs.hidden_states if output_hidden_states else None,
attentions=outputs.attentions,
)
__all__ = ["NatForImageClassification", "NatModel", "NatPreTrainedModel", "NatBackbone"]

View File

@ -13,55 +13,15 @@
# limitations under the License.
from typing import TYPE_CHECKING
from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_tokenizers_available, is_torch_available
_import_structure = {
"configuration_nezha": ["NezhaConfig"],
}
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_nezha"] = [
"NezhaForNextSentencePrediction",
"NezhaForMaskedLM",
"NezhaForPreTraining",
"NezhaForMultipleChoice",
"NezhaForQuestionAnswering",
"NezhaForSequenceClassification",
"NezhaForTokenClassification",
"NezhaModel",
"NezhaPreTrainedModel",
]
from ....utils import _LazyModule
from ....utils.import_utils import define_import_structure
if TYPE_CHECKING:
from .configuration_nezha import NezhaConfig
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_nezha import (
NezhaForMaskedLM,
NezhaForMultipleChoice,
NezhaForNextSentencePrediction,
NezhaForPreTraining,
NezhaForQuestionAnswering,
NezhaForSequenceClassification,
NezhaForTokenClassification,
NezhaModel,
NezhaPreTrainedModel,
)
from .configuration_nezha import *
from .modeling_nezha import *
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)

View File

@ -100,3 +100,6 @@ class NezhaConfig(PretrainedConfig):
self.layer_norm_eps = layer_norm_eps
self.classifier_dropout = classifier_dropout
self.use_cache = use_cache
__all__ = ["NezhaConfig"]

View File

@ -1682,3 +1682,16 @@ class NezhaForQuestionAnswering(NezhaPreTrainedModel):
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
__all__ = [
"NezhaForNextSentencePrediction",
"NezhaForMaskedLM",
"NezhaForPreTraining",
"NezhaForMultipleChoice",
"NezhaForQuestionAnswering",
"NezhaForSequenceClassification",
"NezhaForTokenClassification",
"NezhaModel",
"NezhaPreTrainedModel",
]

View File

@ -13,83 +13,15 @@
# limitations under the License.
from typing import TYPE_CHECKING
from ....utils import (
OptionalDependencyNotAvailable,
_LazyModule,
is_sentencepiece_available,
is_tokenizers_available,
is_torch_available,
)
_import_structure = {
"configuration_open_llama": ["OpenLlamaConfig"],
}
try:
if not is_sentencepiece_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["tokenization_open_llama"] = ["LlamaTokenizer"]
try:
if not is_tokenizers_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["tokenization_open_llama_fast"] = ["LlamaTokenizerFast"]
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_open_llama"] = [
"OpenLlamaForCausalLM",
"OpenLlamaModel",
"OpenLlamaPreTrainedModel",
"OpenLlamaForSequenceClassification",
]
from ....utils import _LazyModule
from ....utils.import_utils import define_import_structure
if TYPE_CHECKING:
from .configuration_open_llama import OpenLlamaConfig
try:
if not is_sentencepiece_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from transformers import LlamaTokenizer
try:
if not is_tokenizers_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from transformers import LlamaTokenizerFast
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_open_llama import (
OpenLlamaForCausalLM,
OpenLlamaForSequenceClassification,
OpenLlamaModel,
OpenLlamaPreTrainedModel,
)
from .configuration_open_llama import *
from .modeling_open_llama import *
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)

View File

@ -164,3 +164,6 @@ class OpenLlamaConfig(PretrainedConfig):
)
if rope_scaling_factor is None or not isinstance(rope_scaling_factor, float) or rope_scaling_factor <= 1.0:
raise ValueError(f"`rope_scaling`'s factor field must be a float > 1, got {rope_scaling_factor}")
__all__ = ["OpenLlamaConfig"]

View File

@ -970,3 +970,6 @@ class OpenLlamaForSequenceClassification(OpenLlamaPreTrainedModel):
hidden_states=transformer_outputs.hidden_states,
attentions=transformer_outputs.attentions,
)
__all__ = ["OpenLlamaPreTrainedModel", "OpenLlamaModel", "OpenLlamaForCausalLM", "OpenLlamaForSequenceClassification"]

View File

@ -13,57 +13,15 @@
# limitations under the License.
from typing import TYPE_CHECKING
from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available
_import_structure = {"configuration_qdqbert": ["QDQBertConfig"]}
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_qdqbert"] = [
"QDQBertForMaskedLM",
"QDQBertForMultipleChoice",
"QDQBertForNextSentencePrediction",
"QDQBertForQuestionAnswering",
"QDQBertForSequenceClassification",
"QDQBertForTokenClassification",
"QDQBertLayer",
"QDQBertLMHeadModel",
"QDQBertModel",
"QDQBertPreTrainedModel",
"load_tf_weights_in_qdqbert",
]
from ....utils import _LazyModule
from ....utils.import_utils import define_import_structure
if TYPE_CHECKING:
from .configuration_qdqbert import QDQBertConfig
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_qdqbert import (
QDQBertForMaskedLM,
QDQBertForMultipleChoice,
QDQBertForNextSentencePrediction,
QDQBertForQuestionAnswering,
QDQBertForSequenceClassification,
QDQBertForTokenClassification,
QDQBertLayer,
QDQBertLMHeadModel,
QDQBertModel,
QDQBertPreTrainedModel,
load_tf_weights_in_qdqbert,
)
from .configuration_qdqbert import *
from .modeling_qdqbert import *
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)

View File

@ -118,3 +118,6 @@ class QDQBertConfig(PretrainedConfig):
self.type_vocab_size = type_vocab_size
self.layer_norm_eps = layer_norm_eps
self.use_cache = use_cache
__all__ = ["QDQBertConfig"]

View File

@ -1732,3 +1732,18 @@ class QDQBertForQuestionAnswering(QDQBertPreTrainedModel):
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
__all__ = [
"QDQBertForMaskedLM",
"QDQBertForMultipleChoice",
"QDQBertForNextSentencePrediction",
"QDQBertForQuestionAnswering",
"QDQBertForSequenceClassification",
"QDQBertForTokenClassification",
"QDQBertLayer",
"QDQBertLMHeadModel",
"QDQBertModel",
"QDQBertPreTrainedModel",
"load_tf_weights_in_qdqbert",
]

Some files were not shown because too many files have changed in this diff.