mirror of https://github.com/huggingface/transformers.git
synced 2025-07-03 12:50:06 +06:00
Simplify soft dependencies and update the dummy-creation process (#36827)
* Reverse dependency map shouldn't be created when test_all is set
* [test_all] Remove dummies
* Modular fixes
* Update utils/check_repo.py
  Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
* [test_all] Better docs
* [test_all] Update src/transformers/commands/chat.py
  Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* [test_all] Remove deprecated AdaptiveEmbeddings from the tests
* [test_all] Doc builder
* [test_all] is_dummy
* [test_all] Import utils
* [test_all] Doc building should not require all deps
---------
Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
This commit is contained in:
parent 931126b929
commit 54a123f068
docs/source/en/_toctree.yml
@@ -1078,6 +1078,8 @@
       title: Utilities for Audio processing
     - local: internal/file_utils
       title: General Utilities
+    - local: internal/import_utils
+      title: Importing Utilities
     - local: internal/time_series_utils
       title: Utilities for Time Series
   title: Internal helpers
docs/source/en/internal/import_utils.md (new file, 91 lines)
@@ -0,0 +1,91 @@
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Import Utilities

This page goes through the transformers utilities that enable lazy and fast object imports.

While we strive for minimal dependencies, some models have specific dependency requirements that cannot be worked
around. We don't want all users of `transformers` to have to install those dependencies just to use other models,
so we mark them as soft dependencies rather than hard dependencies.

The transformers toolkit is not designed to error out on import of a model whose specific dependency is missing;
instead, an object for which you are lacking a dependency will only error out when you call a method on it. As an
example, if `torchvision` isn't installed, the fast image processors will not be available.

This object is still importable:

```python
>>> from transformers import DetrImageProcessorFast
>>> print(DetrImageProcessorFast)
<class 'DetrImageProcessorFast'>
```

However, no method can be called on that object:

```python
>>> DetrImageProcessorFast.from_pretrained()
ImportError:
DetrImageProcessorFast requires the Torchvision library but it was not found in your environment. Checkout the instructions on the
installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment.
Please note that you may need to restart your runtime after installation.
```
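If you'd rather detect a missing soft dependency up front than catch the `ImportError` lazily, `transformers.utils` exposes `is_<backend>_available()` helpers such as `is_torchvision_available`. A minimal sketch (the checkpoint name is only an example):

```python
from transformers.utils import is_torchvision_available

if is_torchvision_available():
    from transformers import DetrImageProcessorFast

    # Safe to use: torchvision is installed, so methods won't raise ImportError.
    processor = DetrImageProcessorFast.from_pretrained("facebook/detr-resnet-50")
else:
    print("torchvision is missing; fall back to the slow image processor or install torchvision")
```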
Let's see how to specify specific object dependencies.

## Specifying Object Dependencies

### Filename-based

All objects under a given filename have an automatic dependency on the tool linked to that filename:

**TensorFlow**: All files starting with `modeling_tf_` have an automatic TensorFlow dependency.

**Flax**: All files starting with `modeling_flax_` have an automatic Flax dependency.

**PyTorch**: All files starting with `modeling_` that don't match the two patterns above (TensorFlow and Flax) have an
automatic PyTorch dependency.

**Tokenizers**: All files starting with `tokenization_` and ending with `_fast` have an automatic `tokenizers` dependency.

**Vision**: All files starting with `image_processing_` have an automatic dependency on the `vision` dependency group;
at the time of writing, this group only contains the `pillow` dependency.

**Vision + Torch + Torchvision**: All files starting with `image_processing_` and ending with `_fast` have an automatic
dependency on `vision`, `torch`, and `torchvision`.

All of these automatic dependencies are added on top of the explicit dependencies that are detailed below; the sketch after this list restates the rules in code.
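To make the precedence of the rules concrete, here is an illustrative restatement of them as a hypothetical helper. This is not the actual implementation in `import_utils.py` (the function and the exact backend names are assumptions for illustration); it only mirrors the rules above, checking the more specific `_fast` patterns before the general ones:

```python
# Hypothetical helper restating the filename rules above; not transformers' real code.
def automatic_backends(filename: str) -> tuple[str, ...]:
    name = filename.removesuffix(".py")
    if name.startswith("modeling_tf_"):
        return ("tf",)
    if name.startswith("modeling_flax_"):
        return ("flax",)
    if name.startswith("modeling_"):
        return ("torch",)
    if name.startswith("tokenization_") and name.endswith("_fast"):
        return ("tokenizers",)
    if name.startswith("image_processing_") and name.endswith("_fast"):
        # The more specific `_fast` rule wins over the plain `image_processing_` rule.
        return ("vision", "torch", "torchvision")
    if name.startswith("image_processing_"):
        return ("vision",)
    return ()


assert automatic_backends("modeling_tf_bert.py") == ("tf",)
assert automatic_backends("image_processing_detr_fast.py") == ("vision", "torch", "torchvision")
```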
### Explicit Object Dependencies

We provide a `requires` decorator that explicitly specifies the dependencies of a given object. As an
example, the `Trainer` class has two hard dependencies: `torch` and `accelerate`. Here is how we specify these
required dependencies:

```python
from .utils.import_utils import requires


@requires(backends=("torch", "accelerate"))
class Trainer:
    ...
```

Backends that can be added here are all the backends that are available in the `import_utils.py` module.
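As a rough mental model only (the real implementation lives in `src/transformers/utils/import_utils.py` and is more involved), a decorator like this records the required backends on the object at import time and defers the availability check until the object is actually used, which is what makes the lazy `ImportError` behavior shown earlier possible:

```python
# Simplified, assumed sketch of a requires-style decorator. Note that backend
# names such as "vision" are dependency groups rather than importable modules,
# so the real availability check cannot be a bare find_spec as it is here.
import importlib.util


def sketch_requires(backends=()):
    def decorator(cls):
        cls._backends = backends  # recorded at import time, checked at use time
        original_init = cls.__init__

        def checked_init(self, *args, **kwargs):
            missing = [b for b in backends if importlib.util.find_spec(b) is None]
            if missing:
                raise ImportError(f"{cls.__name__} requires the following backends: {missing}")
            original_init(self, *args, **kwargs)

        cls.__init__ = checked_init
        return cls

    return decorator
```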
## Methods

[[autodoc]] utils.import_utils.define_import_structure

[[autodoc]] utils.import_utils.requires
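For context on how `define_import_structure` is consumed in practice, every `__init__.py` touched by this commit converges on the same boilerplate (copied here from the `autoformer` diff further below): eagerly re-export everything for type checkers, and install a `_LazyModule` at runtime. Each submodule then declares an explicit `__all__`, which is why this commit adds `__all__` lists throughout:

```python
from typing import TYPE_CHECKING

from ...utils import _LazyModule
from ...utils.import_utils import define_import_structure

if TYPE_CHECKING:
    # Type checkers see everything eagerly.
    from .configuration_autoformer import *
    from .modeling_autoformer import *
else:
    import sys

    # At runtime, replace this module with a lazy proxy built from the file layout.
    _file = globals()["__file__"]
    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
```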
docs/source/en/model_doc/blip.md
@@ -88,6 +88,11 @@ The original code can be found [here](https://github.com/salesforce/BLIP).
 [[autodoc]] BlipTextModel
     - forward
 
+## BlipTextLMHeadModel
+
+[[autodoc]] BlipTextLMHeadModel
+    - forward
+
 ## BlipVisionModel
 
 [[autodoc]] BlipVisionModel
@@ -123,6 +128,11 @@ The original code can be found [here](https://github.com/salesforce/BLIP).
 [[autodoc]] TFBlipTextModel
     - call
 
+## TFBlipTextLMHeadModel
+
+[[autodoc]] TFBlipTextLMHeadModel
+    - forward
+
 ## TFBlipVisionModel
 
 [[autodoc]] TFBlipVisionModel
(File diff suppressed because it is too large.)
src/transformers/image_processing_utils.py
@@ -22,6 +22,7 @@ from .image_processing_base import BatchFeature, ImageProcessingMixin
 from .image_transforms import center_crop, normalize, rescale
 from .image_utils import ChannelDimension, get_image_size
 from .utils import logging
+from .utils.import_utils import requires
 
 
 logger = logging.get_logger(__name__)
@@ -33,6 +34,7 @@ INIT_SERVICE_KWARGS = [
 ]
 
 
+@requires(backends=("vision",))
 class BaseImageProcessor(ImageProcessingMixin):
     def __init__(self, **kwargs):
         super().__init__(**kwargs)
src/transformers/image_processing_utils_fast.py
@@ -68,6 +68,8 @@ if is_torchvision_available():
         from torchvision.transforms.v2 import functional as F
     else:
         from torchvision.transforms import functional as F
+else:
+    pil_torch_interpolation_mapping = None
 
 
 logger = logging.get_logger(__name__)
@@ -72,6 +72,8 @@ if is_vision_available():
         PILImageResampling.BICUBIC: InterpolationMode.BICUBIC,
         PILImageResampling.LANCZOS: InterpolationMode.LANCZOS,
     }
+else:
+    pil_torch_interpolation_mapping = {}
 
 
 if TYPE_CHECKING:
src/transformers/model_debugging_utils.py
@@ -20,7 +20,7 @@ import re
 from contextlib import contextmanager
 from typing import Optional
 
-from transformers.utils.import_utils import export
+from transformers.utils.import_utils import requires
 
 from .utils import is_torch_available
 
@@ -225,7 +225,7 @@ def _attach_debugger_logic(model, class_name, debug_path: str):
             break  # exit the loop after finding one (unsure, but should be just one call.)
 
 
-@export(backends=("torch",))
+@requires(backends=("torch",))
 def model_addition_debugger(cls):
     """
     # Model addition debugger - a model adder tracer
@@ -282,7 +282,7 @@ def model_addition_debugger(cls):
     return cls
 
 
-@export(backends=("torch",))
+@requires(backends=("torch",))
 @contextmanager
 def model_addition_debugger_context(model, debug_path: Optional[str] = None):
     """
src/transformers/models/__init__.py
@@ -11,316 +11,320 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+from typing import TYPE_CHECKING
 
-from . import (
-    albert,
-    align,
-    altclip,
-    aria,
-    audio_spectrogram_transformer,
-    auto,
-    autoformer,
-    aya_vision,
-    bamba,
-    bark,
-    bart,
-    barthez,
-    bartpho,
-    beit,
-    bert,
-    bert_generation,
-    bert_japanese,
-    bertweet,
-    big_bird,
-    bigbird_pegasus,
-    biogpt,
-    bit,
-    blenderbot,
-    blenderbot_small,
-    blip,
-    blip_2,
-    bloom,
-    bridgetower,
-    bros,
-    byt5,
-    camembert,
-    canine,
-    chameleon,
-    chinese_clip,
-    clap,
-    clip,
-    clipseg,
-    clvp,
-    code_llama,
-    codegen,
-    cohere,
-    cohere2,
-    colpali,
-    conditional_detr,
-    convbert,
-    convnext,
-    convnextv2,
-    cpm,
-    cpmant,
-    ctrl,
-    cvt,
-    dab_detr,
-    dac,
-    data2vec,
-    dbrx,
-    deberta,
-    deberta_v2,
-    decision_transformer,
-    deepseek_v3,
-    deformable_detr,
-    deit,
-    deprecated,
-    depth_anything,
-    depth_pro,
-    detr,
-    dialogpt,
-    diffllama,
-    dinat,
-    dinov2,
-    dinov2_with_registers,
-    distilbert,
-    dit,
-    donut,
-    dpr,
-    dpt,
-    efficientnet,
-    electra,
-    emu3,
-    encodec,
-    encoder_decoder,
-    ernie,
-    esm,
-    falcon,
-    falcon_mamba,
-    fastspeech2_conformer,
-    flaubert,
-    flava,
-    fnet,
-    focalnet,
-    fsmt,
-    funnel,
-    fuyu,
-    gemma,
-    gemma2,
-    gemma3,
-    git,
-    glm,
-    glm4,
-    glpn,
-    got_ocr2,
-    gpt2,
-    gpt_bigcode,
-    gpt_neo,
-    gpt_neox,
-    gpt_neox_japanese,
-    gpt_sw3,
-    gptj,
-    granite,
-    granitemoe,
-    granitemoeshared,
-    grounding_dino,
-    groupvit,
-    helium,
-    herbert,
-    hiera,
-    hubert,
-    ibert,
-    idefics,
-    idefics2,
-    idefics3,
-    ijepa,
-    imagegpt,
-    informer,
-    instructblip,
-    instructblipvideo,
-    jamba,
-    jetmoe,
-    kosmos2,
-    layoutlm,
-    layoutlmv2,
-    layoutlmv3,
-    layoutxlm,
-    led,
-    levit,
-    lilt,
-    llama,
-    llama4,
-    llava,
-    llava_next,
-    llava_next_video,
-    llava_onevision,
-    longformer,
-    longt5,
-    luke,
-    lxmert,
-    m2m_100,
-    mamba,
-    mamba2,
-    marian,
-    markuplm,
-    mask2former,
-    maskformer,
-    mbart,
-    mbart50,
-    megatron_bert,
-    megatron_gpt2,
-    mgp_str,
-    mimi,
-    mistral,
-    mistral3,
-    mixtral,
-    mllama,
-    mluke,
-    mobilebert,
-    mobilenet_v1,
-    mobilenet_v2,
-    mobilevit,
-    mobilevitv2,
-    modernbert,
-    moonshine,
-    moshi,
-    mpnet,
-    mpt,
-    mra,
-    mt5,
-    musicgen,
-    musicgen_melody,
-    mvp,
-    myt5,
-    nemotron,
-    nllb,
-    nllb_moe,
-    nougat,
-    nystromformer,
-    olmo,
-    olmo2,
-    olmoe,
-    omdet_turbo,
-    oneformer,
-    openai,
-    opt,
-    owlv2,
-    owlvit,
-    paligemma,
-    patchtsmixer,
-    patchtst,
-    pegasus,
-    pegasus_x,
-    perceiver,
-    persimmon,
-    phi,
-    phi3,
-    phi4_multimodal,
-    phimoe,
-    phobert,
-    pix2struct,
-    pixtral,
-    plbart,
-    poolformer,
-    pop2piano,
-    prompt_depth_anything,
-    prophetnet,
-    pvt,
-    pvt_v2,
-    qwen2,
-    qwen2_5_vl,
-    qwen2_audio,
-    qwen2_moe,
-    qwen2_vl,
-    qwen3,
-    qwen3_moe,
-    rag,
-    recurrent_gemma,
-    reformer,
-    regnet,
-    rembert,
-    resnet,
-    roberta,
-    roberta_prelayernorm,
-    roc_bert,
-    roformer,
-    rt_detr,
-    rt_detr_v2,
-    rwkv,
-    sam,
-    seamless_m4t,
-    seamless_m4t_v2,
-    segformer,
-    seggpt,
-    sew,
-    sew_d,
-    shieldgemma2,
-    siglip,
-    siglip2,
-    smolvlm,
-    speech_encoder_decoder,
-    speech_to_text,
-    speecht5,
-    splinter,
-    squeezebert,
-    stablelm,
-    starcoder2,
-    superglue,
-    superpoint,
-    swiftformer,
-    swin,
-    swin2sr,
-    swinv2,
-    switch_transformers,
-    t5,
-    table_transformer,
-    tapas,
-    textnet,
-    time_series_transformer,
-    timesformer,
-    timm_backbone,
-    timm_wrapper,
-    trocr,
-    tvp,
-    udop,
-    umt5,
-    unispeech,
-    unispeech_sat,
-    univnet,
-    upernet,
-    video_llava,
-    videomae,
-    vilt,
-    vipllava,
-    vision_encoder_decoder,
-    vision_text_dual_encoder,
-    visual_bert,
-    vit,
-    vit_mae,
-    vit_msn,
-    vitdet,
-    vitmatte,
-    vitpose,
-    vitpose_backbone,
-    vits,
-    vivit,
-    wav2vec2,
-    wav2vec2_bert,
-    wav2vec2_conformer,
-    wav2vec2_phoneme,
-    wav2vec2_with_lm,
-    wavlm,
-    whisper,
-    x_clip,
-    xglm,
-    xlm,
-    xlm_roberta,
-    xlm_roberta_xl,
-    xlnet,
-    xmod,
-    yolos,
-    yoso,
-    zamba,
-    zamba2,
-    zoedepth,
-)
+from ..utils import _LazyModule
+from ..utils.import_utils import define_import_structure
+
+
+if TYPE_CHECKING:
+    from .albert import *
+    from .align import *
+    from .altclip import *
+    from .aria import *
+    from .audio_spectrogram_transformer import *
+    from .auto import *
+    from .autoformer import *
+    from .aya_vision import *
+    from .bamba import *
+    from .bark import *
+    from .bart import *
+    from .barthez import *
+    from .bartpho import *
+    from .beit import *
+    from .bert import *
+    from .bert_generation import *
+    from .bert_japanese import *
+    from .bertweet import *
+    from .big_bird import *
+    from .bigbird_pegasus import *
+    from .biogpt import *
+    from .bit import *
+    from .blenderbot import *
+    from .blenderbot_small import *
+    from .blip import *
+    from .blip_2 import *
+    from .bloom import *
+    from .bridgetower import *
+    from .bros import *
+    from .byt5 import *
+    from .camembert import *
+    from .canine import *
+    from .chameleon import *
+    from .chinese_clip import *
+    from .clap import *
+    from .clip import *
+    from .clipseg import *
+    from .clvp import *
+    from .code_llama import *
+    from .codegen import *
+    from .cohere import *
+    from .cohere2 import *
+    from .colpali import *
+    from .conditional_detr import *
+    from .convbert import *
+    from .convnext import *
+    from .convnextv2 import *
+    from .cpm import *
+    from .cpmant import *
+    from .ctrl import *
+    from .cvt import *
+    from .dab_detr import *
+    from .dac import *
+    from .data2vec import *
+    from .dbrx import *
+    from .deberta import *
+    from .deberta_v2 import *
+    from .decision_transformer import *
+    from .deepseek_v3 import *
+    from .deformable_detr import *
+    from .deit import *
+    from .deprecated import *
+    from .depth_anything import *
+    from .depth_pro import *
+    from .detr import *
+    from .dialogpt import *
+    from .diffllama import *
+    from .dinat import *
+    from .dinov2 import *
+    from .dinov2_with_registers import *
+    from .distilbert import *
+    from .dit import *
+    from .donut import *
+    from .dpr import *
+    from .dpt import *
+    from .efficientnet import *
+    from .electra import *
+    from .emu3 import *
+    from .encodec import *
+    from .encoder_decoder import *
+    from .ernie import *
+    from .esm import *
+    from .falcon import *
+    from .falcon_mamba import *
+    from .fastspeech2_conformer import *
+    from .flaubert import *
+    from .flava import *
+    from .fnet import *
+    from .focalnet import *
+    from .fsmt import *
+    from .funnel import *
+    from .fuyu import *
+    from .gemma import *
+    from .gemma2 import *
+    from .gemma3 import *
+    from .git import *
+    from .glm import *
+    from .glm4 import *
+    from .glpn import *
+    from .got_ocr2 import *
+    from .gpt2 import *
+    from .gpt_bigcode import *
+    from .gpt_neo import *
+    from .gpt_neox import *
+    from .gpt_neox_japanese import *
+    from .gpt_sw3 import *
+    from .gptj import *
+    from .granite import *
+    from .granitemoe import *
+    from .granitemoeshared import *
+    from .grounding_dino import *
+    from .groupvit import *
+    from .helium import *
+    from .herbert import *
+    from .hiera import *
+    from .hubert import *
+    from .ibert import *
+    from .idefics import *
+    from .idefics2 import *
+    from .idefics3 import *
+    from .ijepa import *
+    from .imagegpt import *
+    from .informer import *
+    from .instructblip import *
+    from .instructblipvideo import *
+    from .jamba import *
+    from .jetmoe import *
+    from .kosmos2 import *
+    from .layoutlm import *
+    from .layoutlmv2 import *
+    from .layoutlmv3 import *
+    from .layoutxlm import *
+    from .led import *
+    from .levit import *
+    from .lilt import *
+    from .llama import *
+    from .llama4 import *
+    from .llava import *
+    from .llava_next import *
+    from .llava_next_video import *
+    from .llava_onevision import *
+    from .longformer import *
+    from .longt5 import *
+    from .luke import *
+    from .lxmert import *
+    from .m2m_100 import *
+    from .mamba import *
+    from .mamba2 import *
+    from .marian import *
+    from .markuplm import *
+    from .mask2former import *
+    from .maskformer import *
+    from .mbart import *
+    from .mbart50 import *
+    from .megatron_bert import *
+    from .megatron_gpt2 import *
+    from .mgp_str import *
+    from .mimi import *
+    from .mistral import *
+    from .mistral3 import *
+    from .mixtral import *
+    from .mllama import *
+    from .mluke import *
+    from .mobilebert import *
+    from .mobilenet_v1 import *
+    from .mobilenet_v2 import *
+    from .mobilevit import *
+    from .mobilevitv2 import *
+    from .modernbert import *
+    from .moonshine import *
+    from .moshi import *
+    from .mpnet import *
+    from .mpt import *
+    from .mra import *
+    from .mt5 import *
+    from .musicgen import *
+    from .musicgen_melody import *
+    from .mvp import *
+    from .myt5 import *
+    from .nemotron import *
+    from .nllb import *
+    from .nllb_moe import *
+    from .nougat import *
+    from .nystromformer import *
+    from .olmo import *
+    from .olmo2 import *
+    from .olmoe import *
+    from .omdet_turbo import *
+    from .oneformer import *
+    from .openai import *
+    from .opt import *
+    from .owlv2 import *
+    from .owlvit import *
+    from .paligemma import *
+    from .patchtsmixer import *
+    from .patchtst import *
+    from .pegasus import *
+    from .pegasus_x import *
+    from .perceiver import *
+    from .persimmon import *
+    from .phi import *
+    from .phi3 import *
+    from .phi4_multimodal import *
+    from .phimoe import *
+    from .phobert import *
+    from .pix2struct import *
+    from .pixtral import *
+    from .plbart import *
+    from .poolformer import *
+    from .pop2piano import *
+    from .prompt_depth_anything import *
+    from .prophetnet import *
+    from .pvt import *
+    from .pvt_v2 import *
+    from .qwen2 import *
+    from .qwen2_5_vl import *
+    from .qwen2_audio import *
+    from .qwen2_moe import *
+    from .qwen2_vl import *
+    from .qwen3 import *
+    from .qwen3_moe import *
+    from .rag import *
+    from .recurrent_gemma import *
+    from .reformer import *
+    from .regnet import *
+    from .rembert import *
+    from .resnet import *
+    from .roberta import *
+    from .roberta_prelayernorm import *
+    from .roc_bert import *
+    from .roformer import *
+    from .rt_detr import *
+    from .rt_detr_v2 import *
+    from .rwkv import *
+    from .sam import *
+    from .seamless_m4t import *
+    from .seamless_m4t_v2 import *
+    from .segformer import *
+    from .seggpt import *
+    from .sew import *
+    from .sew_d import *
+    from .shieldgemma2 import *
+    from .siglip import *
+    from .siglip2 import *
+    from .smolvlm import *
+    from .speech_encoder_decoder import *
+    from .speech_to_text import *
+    from .speecht5 import *
+    from .splinter import *
+    from .squeezebert import *
+    from .stablelm import *
+    from .starcoder2 import *
+    from .superglue import *
+    from .superpoint import *
+    from .swiftformer import *
+    from .swin import *
+    from .swin2sr import *
+    from .swinv2 import *
+    from .switch_transformers import *
+    from .t5 import *
+    from .table_transformer import *
+    from .tapas import *
+    from .textnet import *
+    from .time_series_transformer import *
+    from .timesformer import *
+    from .timm_backbone import *
+    from .timm_wrapper import *
+    from .trocr import *
+    from .tvp import *
+    from .udop import *
+    from .umt5 import *
+    from .unispeech import *
+    from .unispeech_sat import *
+    from .univnet import *
+    from .upernet import *
+    from .video_llava import *
+    from .videomae import *
+    from .vilt import *
+    from .vipllava import *
+    from .vision_encoder_decoder import *
+    from .vision_text_dual_encoder import *
+    from .visual_bert import *
+    from .vit import *
+    from .vit_mae import *
+    from .vit_msn import *
+    from .vitdet import *
+    from .vitmatte import *
+    from .vitpose import *
+    from .vitpose_backbone import *
+    from .vits import *
+    from .vivit import *
+    from .wav2vec2 import *
+    from .wav2vec2_bert import *
+    from .wav2vec2_conformer import *
+    from .wav2vec2_phoneme import *
+    from .wav2vec2_with_lm import *
+    from .wavlm import *
+    from .whisper import *
+    from .x_clip import *
+    from .xglm import *
+    from .xlm import *
+    from .xlm_roberta import *
+    from .xlm_roberta_xl import *
+    from .xlnet import *
+    from .xmod import *
+    from .yolos import *
+    from .yoso import *
+    from .zamba import *
+    from .zamba2 import *
+    from .zoedepth import *
+else:
+    import sys
+
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
src/transformers/models/albert/tokenization_albert.py
@@ -23,7 +23,7 @@ import sentencepiece as spm
 
 from ...tokenization_utils import AddedToken, PreTrainedTokenizer
 from ...utils import logging
-from ...utils.import_utils import export
+from ...utils.import_utils import requires
 
 
 logger = logging.get_logger(__name__)
@@ -33,7 +33,7 @@ VOCAB_FILES_NAMES = {"vocab_file": "spiece.model"}
 SPIECE_UNDERLINE = "▁"
 
 
-@export(backends=("sentencepiece",))
+@requires(backends=("sentencepiece",))
 class AlbertTokenizer(PreTrainedTokenizer):
     """
     Construct an ALBERT tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).
src/transformers/models/auto/__init__.py
@@ -11,399 +11,24 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
 from typing import TYPE_CHECKING
 
-from ...utils import (
-    OptionalDependencyNotAvailable,
-    _LazyModule,
-    is_flax_available,
-    is_tf_available,
-    is_torch_available,
-)
-
-
-_import_structure = {
-    "auto_factory": ["get_values"],
-    "configuration_auto": ["CONFIG_MAPPING", "MODEL_NAMES_MAPPING", "AutoConfig"],
-    "feature_extraction_auto": ["FEATURE_EXTRACTOR_MAPPING", "AutoFeatureExtractor"],
-    "image_processing_auto": ["IMAGE_PROCESSOR_MAPPING", "AutoImageProcessor"],
-    "processing_auto": ["PROCESSOR_MAPPING", "AutoProcessor"],
-    "tokenization_auto": ["TOKENIZER_MAPPING", "AutoTokenizer"],
-}
-
-try:
-    if not is_torch_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_auto"] = [
-        "MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING",
-        "MODEL_FOR_AUDIO_FRAME_CLASSIFICATION_MAPPING",
-        "MODEL_FOR_AUDIO_XVECTOR_MAPPING",
-        "MODEL_FOR_BACKBONE_MAPPING",
-        "MODEL_FOR_CAUSAL_IMAGE_MODELING_MAPPING",
-        "MODEL_FOR_CAUSAL_LM_MAPPING",
-        "MODEL_FOR_CTC_MAPPING",
-        "MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING",
-        "MODEL_FOR_DEPTH_ESTIMATION_MAPPING",
-        "MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING",
-        "MODEL_FOR_IMAGE_MAPPING",
-        "MODEL_FOR_IMAGE_SEGMENTATION_MAPPING",
-        "MODEL_FOR_IMAGE_TO_IMAGE_MAPPING",
-        "MODEL_FOR_KEYPOINT_DETECTION_MAPPING",
-        "MODEL_FOR_INSTANCE_SEGMENTATION_MAPPING",
-        "MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING",
-        "MODEL_FOR_MASKED_LM_MAPPING",
-        "MODEL_FOR_MASK_GENERATION_MAPPING",
-        "MODEL_FOR_MULTIPLE_CHOICE_MAPPING",
-        "MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING",
-        "MODEL_FOR_OBJECT_DETECTION_MAPPING",
-        "MODEL_FOR_PRETRAINING_MAPPING",
-        "MODEL_FOR_QUESTION_ANSWERING_MAPPING",
-        "MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING",
-        "MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING",
-        "MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING",
-        "MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING",
-        "MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING",
-        "MODEL_FOR_TEXT_ENCODING_MAPPING",
-        "MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING",
-        "MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING",
-        "MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING",
-        "MODEL_FOR_UNIVERSAL_SEGMENTATION_MAPPING",
-        "MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING",
-        "MODEL_FOR_VISION_2_SEQ_MAPPING",
-        "MODEL_FOR_RETRIEVAL_MAPPING",
-        "MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING",
-        "MODEL_FOR_VISUAL_QUESTION_ANSWERING_MAPPING",
-        "MODEL_MAPPING",
-        "MODEL_WITH_LM_HEAD_MAPPING",
-        "MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING",
-        "MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING",
-        "MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING",
-        "MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING",
-        "AutoModel",
-        "AutoBackbone",
-        "AutoModelForAudioClassification",
-        "AutoModelForAudioFrameClassification",
-        "AutoModelForAudioXVector",
-        "AutoModelForCausalLM",
-        "AutoModelForCTC",
-        "AutoModelForDepthEstimation",
-        "AutoModelForImageClassification",
-        "AutoModelForImageSegmentation",
-        "AutoModelForImageToImage",
-        "AutoModelForInstanceSegmentation",
-        "AutoModelForKeypointDetection",
-        "AutoModelForMaskGeneration",
-        "AutoModelForTextEncoding",
-        "AutoModelForMaskedImageModeling",
-        "AutoModelForMaskedLM",
-        "AutoModelForMultipleChoice",
-        "AutoModelForNextSentencePrediction",
-        "AutoModelForObjectDetection",
-        "AutoModelForPreTraining",
-        "AutoModelForQuestionAnswering",
-        "AutoModelForSemanticSegmentation",
-        "AutoModelForSeq2SeqLM",
-        "AutoModelForSequenceClassification",
-        "AutoModelForSpeechSeq2Seq",
-        "AutoModelForTableQuestionAnswering",
-        "AutoModelForTextToSpectrogram",
-        "AutoModelForTextToWaveform",
-        "AutoModelForTokenClassification",
-        "AutoModelForUniversalSegmentation",
-        "AutoModelForVideoClassification",
-        "AutoModelForVision2Seq",
-        "AutoModelForVisualQuestionAnswering",
-        "AutoModelForDocumentQuestionAnswering",
-        "AutoModelWithLMHead",
-        "AutoModelForZeroShotImageClassification",
-        "AutoModelForZeroShotObjectDetection",
-        "AutoModelForImageTextToText",
-    ]
-
-try:
-    if not is_tf_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_tf_auto"] = [
-        "TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING",
-        "TF_MODEL_FOR_CAUSAL_LM_MAPPING",
-        "TF_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING",
-        "TF_MODEL_FOR_MASK_GENERATION_MAPPING",
-        "TF_MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING",
-        "TF_MODEL_FOR_MASKED_LM_MAPPING",
-        "TF_MODEL_FOR_MULTIPLE_CHOICE_MAPPING",
-        "TF_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING",
-        "TF_MODEL_FOR_PRETRAINING_MAPPING",
-        "TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING",
-        "TF_MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING",
-        "TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING",
-        "TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING",
-        "TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING",
-        "TF_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING",
-        "TF_MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING",
-        "TF_MODEL_FOR_TEXT_ENCODING_MAPPING",
-        "TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING",
-        "TF_MODEL_FOR_VISION_2_SEQ_MAPPING",
-        "TF_MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING",
-        "TF_MODEL_MAPPING",
-        "TF_MODEL_WITH_LM_HEAD_MAPPING",
-        "TFAutoModel",
-        "TFAutoModelForAudioClassification",
-        "TFAutoModelForCausalLM",
-        "TFAutoModelForImageClassification",
-        "TFAutoModelForMaskedImageModeling",
-        "TFAutoModelForMaskedLM",
-        "TFAutoModelForMaskGeneration",
-        "TFAutoModelForMultipleChoice",
-        "TFAutoModelForNextSentencePrediction",
-        "TFAutoModelForPreTraining",
-        "TFAutoModelForDocumentQuestionAnswering",
-        "TFAutoModelForQuestionAnswering",
-        "TFAutoModelForSemanticSegmentation",
-        "TFAutoModelForSeq2SeqLM",
-        "TFAutoModelForSequenceClassification",
-        "TFAutoModelForSpeechSeq2Seq",
-        "TFAutoModelForTableQuestionAnswering",
-        "TFAutoModelForTextEncoding",
-        "TFAutoModelForTokenClassification",
-        "TFAutoModelForVision2Seq",
-        "TFAutoModelForZeroShotImageClassification",
-        "TFAutoModelWithLMHead",
-    ]
-
-try:
-    if not is_flax_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_flax_auto"] = [
-        "FLAX_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING",
-        "FLAX_MODEL_FOR_CAUSAL_LM_MAPPING",
-        "FLAX_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING",
-        "FLAX_MODEL_FOR_MASKED_LM_MAPPING",
-        "FLAX_MODEL_FOR_MULTIPLE_CHOICE_MAPPING",
-        "FLAX_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING",
-        "FLAX_MODEL_FOR_PRETRAINING_MAPPING",
-        "FLAX_MODEL_FOR_QUESTION_ANSWERING_MAPPING",
-        "FLAX_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING",
-        "FLAX_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING",
-        "FLAX_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING",
-        "FLAX_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING",
-        "FLAX_MODEL_FOR_VISION_2_SEQ_MAPPING",
-        "FLAX_MODEL_MAPPING",
-        "FlaxAutoModel",
-        "FlaxAutoModelForCausalLM",
-        "FlaxAutoModelForImageClassification",
-        "FlaxAutoModelForMaskedLM",
-        "FlaxAutoModelForMultipleChoice",
-        "FlaxAutoModelForNextSentencePrediction",
-        "FlaxAutoModelForPreTraining",
-        "FlaxAutoModelForQuestionAnswering",
-        "FlaxAutoModelForSeq2SeqLM",
-        "FlaxAutoModelForSequenceClassification",
-        "FlaxAutoModelForSpeechSeq2Seq",
-        "FlaxAutoModelForTokenClassification",
-        "FlaxAutoModelForVision2Seq",
-    ]
+from ...utils import _LazyModule
+from ...utils.import_utils import define_import_structure
 
 
 if TYPE_CHECKING:
-    from .auto_factory import get_values
-    from .configuration_auto import CONFIG_MAPPING, MODEL_NAMES_MAPPING, AutoConfig
-    from .feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING, AutoFeatureExtractor
-    from .image_processing_auto import IMAGE_PROCESSOR_MAPPING, AutoImageProcessor
-    from .processing_auto import PROCESSOR_MAPPING, AutoProcessor
-    from .tokenization_auto import TOKENIZER_MAPPING, AutoTokenizer
-
-    try:
-        if not is_torch_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_auto import (
-            MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING,
-            MODEL_FOR_AUDIO_FRAME_CLASSIFICATION_MAPPING,
-            MODEL_FOR_AUDIO_XVECTOR_MAPPING,
-            MODEL_FOR_BACKBONE_MAPPING,
-            MODEL_FOR_CAUSAL_IMAGE_MODELING_MAPPING,
-            MODEL_FOR_CAUSAL_LM_MAPPING,
-            MODEL_FOR_CTC_MAPPING,
-            MODEL_FOR_DEPTH_ESTIMATION_MAPPING,
-            MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING,
-            MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING,
-            MODEL_FOR_IMAGE_MAPPING,
-            MODEL_FOR_IMAGE_SEGMENTATION_MAPPING,
-            MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING,
-            MODEL_FOR_IMAGE_TO_IMAGE_MAPPING,
-            MODEL_FOR_INSTANCE_SEGMENTATION_MAPPING,
-            MODEL_FOR_KEYPOINT_DETECTION_MAPPING,
-            MODEL_FOR_MASK_GENERATION_MAPPING,
-            MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING,
-            MODEL_FOR_MASKED_LM_MAPPING,
-            MODEL_FOR_MULTIPLE_CHOICE_MAPPING,
-            MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING,
-            MODEL_FOR_OBJECT_DETECTION_MAPPING,
-            MODEL_FOR_PRETRAINING_MAPPING,
-            MODEL_FOR_QUESTION_ANSWERING_MAPPING,
-            MODEL_FOR_RETRIEVAL_MAPPING,
-            MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING,
-            MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING,
-            MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING,
-            MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING,
-            MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING,
-            MODEL_FOR_TEXT_ENCODING_MAPPING,
-            MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING,
-            MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING,
-            MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING,
-            MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING,
-            MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING,
-            MODEL_FOR_UNIVERSAL_SEGMENTATION_MAPPING,
-            MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING,
-            MODEL_FOR_VISION_2_SEQ_MAPPING,
-            MODEL_FOR_VISUAL_QUESTION_ANSWERING_MAPPING,
-            MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING,
-            MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING,
-            MODEL_MAPPING,
-            MODEL_WITH_LM_HEAD_MAPPING,
-            AutoBackbone,
-            AutoModel,
-            AutoModelForAudioClassification,
-            AutoModelForAudioFrameClassification,
-            AutoModelForAudioXVector,
-            AutoModelForCausalLM,
-            AutoModelForCTC,
-            AutoModelForDepthEstimation,
-            AutoModelForDocumentQuestionAnswering,
-            AutoModelForImageClassification,
-            AutoModelForImageSegmentation,
-            AutoModelForImageTextToText,
-            AutoModelForImageToImage,
-            AutoModelForInstanceSegmentation,
-            AutoModelForKeypointDetection,
-            AutoModelForMaskedImageModeling,
-            AutoModelForMaskedLM,
-            AutoModelForMaskGeneration,
-            AutoModelForMultipleChoice,
-            AutoModelForNextSentencePrediction,
-            AutoModelForObjectDetection,
-            AutoModelForPreTraining,
-            AutoModelForQuestionAnswering,
-            AutoModelForSemanticSegmentation,
-            AutoModelForSeq2SeqLM,
-            AutoModelForSequenceClassification,
-            AutoModelForSpeechSeq2Seq,
-            AutoModelForTableQuestionAnswering,
-            AutoModelForTextEncoding,
-            AutoModelForTextToSpectrogram,
-            AutoModelForTextToWaveform,
-            AutoModelForTokenClassification,
-            AutoModelForUniversalSegmentation,
-            AutoModelForVideoClassification,
-            AutoModelForVision2Seq,
-            AutoModelForVisualQuestionAnswering,
-            AutoModelForZeroShotImageClassification,
-            AutoModelForZeroShotObjectDetection,
-            AutoModelWithLMHead,
-        )
-
-    try:
-        if not is_tf_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_tf_auto import (
-            TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING,
-            TF_MODEL_FOR_CAUSAL_LM_MAPPING,
-            TF_MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING,
-            TF_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING,
-            TF_MODEL_FOR_MASK_GENERATION_MAPPING,
-            TF_MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING,
-            TF_MODEL_FOR_MASKED_LM_MAPPING,
-            TF_MODEL_FOR_MULTIPLE_CHOICE_MAPPING,
-            TF_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING,
-            TF_MODEL_FOR_PRETRAINING_MAPPING,
-            TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING,
-            TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING,
-            TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING,
-            TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING,
-            TF_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING,
-            TF_MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING,
-            TF_MODEL_FOR_TEXT_ENCODING_MAPPING,
-            TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING,
-            TF_MODEL_FOR_VISION_2_SEQ_MAPPING,
-            TF_MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING,
-            TF_MODEL_MAPPING,
-            TF_MODEL_WITH_LM_HEAD_MAPPING,
-            TFAutoModel,
-            TFAutoModelForAudioClassification,
-            TFAutoModelForCausalLM,
-            TFAutoModelForDocumentQuestionAnswering,
-            TFAutoModelForImageClassification,
-            TFAutoModelForMaskedImageModeling,
-            TFAutoModelForMaskedLM,
-            TFAutoModelForMaskGeneration,
-            TFAutoModelForMultipleChoice,
-            TFAutoModelForNextSentencePrediction,
-            TFAutoModelForPreTraining,
-            TFAutoModelForQuestionAnswering,
-            TFAutoModelForSemanticSegmentation,
-            TFAutoModelForSeq2SeqLM,
-            TFAutoModelForSequenceClassification,
-            TFAutoModelForSpeechSeq2Seq,
-            TFAutoModelForTableQuestionAnswering,
-            TFAutoModelForTextEncoding,
-            TFAutoModelForTokenClassification,
-            TFAutoModelForVision2Seq,
-            TFAutoModelForZeroShotImageClassification,
-            TFAutoModelWithLMHead,
-        )
-
-    try:
-        if not is_flax_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_flax_auto import (
-            FLAX_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING,
-            FLAX_MODEL_FOR_CAUSAL_LM_MAPPING,
-            FLAX_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING,
-            FLAX_MODEL_FOR_MASKED_LM_MAPPING,
-            FLAX_MODEL_FOR_MULTIPLE_CHOICE_MAPPING,
-            FLAX_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING,
-            FLAX_MODEL_FOR_PRETRAINING_MAPPING,
-            FLAX_MODEL_FOR_QUESTION_ANSWERING_MAPPING,
-            FLAX_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING,
-            FLAX_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING,
-            FLAX_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING,
-            FLAX_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING,
-            FLAX_MODEL_FOR_VISION_2_SEQ_MAPPING,
-            FLAX_MODEL_MAPPING,
-            FlaxAutoModel,
-            FlaxAutoModelForCausalLM,
-            FlaxAutoModelForImageClassification,
-            FlaxAutoModelForMaskedLM,
-            FlaxAutoModelForMultipleChoice,
-            FlaxAutoModelForNextSentencePrediction,
-            FlaxAutoModelForPreTraining,
-            FlaxAutoModelForQuestionAnswering,
-            FlaxAutoModelForSeq2SeqLM,
-            FlaxAutoModelForSequenceClassification,
-            FlaxAutoModelForSpeechSeq2Seq,
-            FlaxAutoModelForTokenClassification,
-            FlaxAutoModelForVision2Seq,
-        )
+    from .auto_factory import *
+    from .configuration_auto import *
+    from .feature_extraction_auto import *
+    from .image_processing_auto import *
+    from .modeling_auto import *
+    from .modeling_flax_auto import *
+    from .modeling_tf_auto import *
+    from .processing_auto import *
+    from .tokenization_auto import *
 else:
     import sys
 
-    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
src/transformers/models/auto/auto_factory.py
@@ -844,3 +844,6 @@ class _LazyAutoMapping(OrderedDict):
             raise ValueError(f"'{key}' is already used by a Transformers model.")
 
         self._extra_content[key] = value
+
+
+__all__ = ["get_values"]
src/transformers/models/auto/configuration_auto.py
@@ -1173,3 +1173,6 @@ class AutoConfig:
                 "match!"
             )
         CONFIG_MAPPING.register(model_type, config, exist_ok=exist_ok)
+
+
+__all__ = ["CONFIG_MAPPING", "MODEL_NAMES_MAPPING", "AutoConfig"]
src/transformers/models/auto/feature_extraction_auto.py
@@ -406,3 +406,6 @@ class AutoFeatureExtractor:
             feature_extractor_class ([`FeatureExtractorMixin`]): The feature extractor to register.
         """
         FEATURE_EXTRACTOR_MAPPING.register(config_class, feature_extractor_class, exist_ok=exist_ok)
+
+
+__all__ = ["FEATURE_EXTRACTOR_MAPPING", "AutoFeatureExtractor"]
src/transformers/models/auto/image_processing_auto.py
@@ -36,6 +36,7 @@ from ...utils import (
     is_vision_available,
     logging,
 )
+from ...utils.import_utils import requires
 from .auto_factory import _LazyAutoMapping
 from .configuration_auto import (
     CONFIG_MAPPING_NAMES,
@@ -324,6 +325,7 @@ def _warning_fast_image_processor_available(fast_class):
     )
 
 
+@requires(backends=("vision", "torchvision"))
 class AutoImageProcessor:
     r"""
     This is a generic image processor class that will be instantiated as one of the image processor classes of the
@@ -640,3 +642,6 @@ class AutoImageProcessor:
         IMAGE_PROCESSOR_MAPPING.register(
             config_class, (slow_image_processor_class, fast_image_processor_class), exist_ok=exist_ok
         )
+
+
+__all__ = ["IMAGE_PROCESSOR_MAPPING", "AutoImageProcessor"]
src/transformers/models/auto/modeling_auto.py
@@ -1955,3 +1955,90 @@ class AutoModelWithLMHead(_AutoModelWithLMHead):
             FutureWarning,
         )
         return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
+
+
+__all__ = [
+    "MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING",
+    "MODEL_FOR_AUDIO_FRAME_CLASSIFICATION_MAPPING",
+    "MODEL_FOR_AUDIO_XVECTOR_MAPPING",
+    "MODEL_FOR_BACKBONE_MAPPING",
+    "MODEL_FOR_CAUSAL_IMAGE_MODELING_MAPPING",
+    "MODEL_FOR_CAUSAL_LM_MAPPING",
+    "MODEL_FOR_CTC_MAPPING",
+    "MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING",
+    "MODEL_FOR_DEPTH_ESTIMATION_MAPPING",
+    "MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING",
+    "MODEL_FOR_IMAGE_MAPPING",
+    "MODEL_FOR_IMAGE_SEGMENTATION_MAPPING",
+    "MODEL_FOR_IMAGE_TO_IMAGE_MAPPING",
+    "MODEL_FOR_KEYPOINT_DETECTION_MAPPING",
+    "MODEL_FOR_INSTANCE_SEGMENTATION_MAPPING",
+    "MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING",
+    "MODEL_FOR_MASKED_LM_MAPPING",
+    "MODEL_FOR_MASK_GENERATION_MAPPING",
+    "MODEL_FOR_MULTIPLE_CHOICE_MAPPING",
+    "MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING",
+    "MODEL_FOR_OBJECT_DETECTION_MAPPING",
+    "MODEL_FOR_PRETRAINING_MAPPING",
+    "MODEL_FOR_QUESTION_ANSWERING_MAPPING",
+    "MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING",
+    "MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING",
+    "MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING",
+    "MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING",
+    "MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING",
+    "MODEL_FOR_TEXT_ENCODING_MAPPING",
+    "MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING",
+    "MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING",
+    "MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING",
+    "MODEL_FOR_UNIVERSAL_SEGMENTATION_MAPPING",
+    "MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING",
+    "MODEL_FOR_VISION_2_SEQ_MAPPING",
+    "MODEL_FOR_RETRIEVAL_MAPPING",
+    "MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING",
+    "MODEL_FOR_VISUAL_QUESTION_ANSWERING_MAPPING",
+    "MODEL_MAPPING",
+    "MODEL_WITH_LM_HEAD_MAPPING",
+    "MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING",
+    "MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING",
+    "MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING",
+    "MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING",
+    "AutoModel",
+    "AutoBackbone",
+    "AutoModelForAudioClassification",
+    "AutoModelForAudioFrameClassification",
+    "AutoModelForAudioXVector",
+    "AutoModelForCausalLM",
+    "AutoModelForCTC",
+    "AutoModelForDepthEstimation",
+    "AutoModelForImageClassification",
+    "AutoModelForImageSegmentation",
+    "AutoModelForImageToImage",
+    "AutoModelForInstanceSegmentation",
+    "AutoModelForKeypointDetection",
+    "AutoModelForMaskGeneration",
+    "AutoModelForTextEncoding",
+    "AutoModelForMaskedImageModeling",
+    "AutoModelForMaskedLM",
+    "AutoModelForMultipleChoice",
+    "AutoModelForNextSentencePrediction",
+    "AutoModelForObjectDetection",
+    "AutoModelForPreTraining",
+    "AutoModelForQuestionAnswering",
+    "AutoModelForSemanticSegmentation",
+    "AutoModelForSeq2SeqLM",
+    "AutoModelForSequenceClassification",
+    "AutoModelForSpeechSeq2Seq",
+    "AutoModelForTableQuestionAnswering",
+    "AutoModelForTextToSpectrogram",
+    "AutoModelForTextToWaveform",
+    "AutoModelForTokenClassification",
+    "AutoModelForUniversalSegmentation",
+    "AutoModelForVideoClassification",
+    "AutoModelForVision2Seq",
+    "AutoModelForVisualQuestionAnswering",
+    "AutoModelForDocumentQuestionAnswering",
+    "AutoModelWithLMHead",
+    "AutoModelForZeroShotImageClassification",
+    "AutoModelForZeroShotObjectDetection",
+    "AutoModelForImageTextToText",
+]
src/transformers/models/auto/modeling_flax_auto.py
@@ -381,3 +381,33 @@ class FlaxAutoModelForSpeechSeq2Seq(_BaseAutoModelClass):
 FlaxAutoModelForSpeechSeq2Seq = auto_class_update(
     FlaxAutoModelForSpeechSeq2Seq, head_doc="sequence-to-sequence speech-to-text modeling"
 )
+
+
+__all__ = [
+    "FLAX_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING",
+    "FLAX_MODEL_FOR_CAUSAL_LM_MAPPING",
+    "FLAX_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING",
+    "FLAX_MODEL_FOR_MASKED_LM_MAPPING",
+    "FLAX_MODEL_FOR_MULTIPLE_CHOICE_MAPPING",
+    "FLAX_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING",
+    "FLAX_MODEL_FOR_PRETRAINING_MAPPING",
+    "FLAX_MODEL_FOR_QUESTION_ANSWERING_MAPPING",
+    "FLAX_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING",
+    "FLAX_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING",
+    "FLAX_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING",
+    "FLAX_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING",
+    "FLAX_MODEL_FOR_VISION_2_SEQ_MAPPING",
+    "FLAX_MODEL_MAPPING",
+    "FlaxAutoModel",
+    "FlaxAutoModelForCausalLM",
+    "FlaxAutoModelForImageClassification",
+    "FlaxAutoModelForMaskedLM",
+    "FlaxAutoModelForMultipleChoice",
+    "FlaxAutoModelForNextSentencePrediction",
+    "FlaxAutoModelForPreTraining",
+    "FlaxAutoModelForQuestionAnswering",
+    "FlaxAutoModelForSeq2SeqLM",
+    "FlaxAutoModelForSequenceClassification",
+    "FlaxAutoModelForSpeechSeq2Seq",
+    "FlaxAutoModelForTokenClassification",
+    "FlaxAutoModelForVision2Seq",
+]
src/transformers/models/auto/modeling_tf_auto.py
@@ -726,3 +726,51 @@ class TFAutoModelWithLMHead(_TFAutoModelWithLMHead):
             FutureWarning,
         )
         return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
+
+
+__all__ = [
+    "TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING",
+    "TF_MODEL_FOR_CAUSAL_LM_MAPPING",
+    "TF_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING",
+    "TF_MODEL_FOR_MASK_GENERATION_MAPPING",
+    "TF_MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING",
+    "TF_MODEL_FOR_MASKED_LM_MAPPING",
+    "TF_MODEL_FOR_MULTIPLE_CHOICE_MAPPING",
+    "TF_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING",
+    "TF_MODEL_FOR_PRETRAINING_MAPPING",
+    "TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING",
+    "TF_MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING",
+    "TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING",
+    "TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING",
+    "TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING",
+    "TF_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING",
+    "TF_MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING",
+    "TF_MODEL_FOR_TEXT_ENCODING_MAPPING",
+    "TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING",
+    "TF_MODEL_FOR_VISION_2_SEQ_MAPPING",
+    "TF_MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING",
+    "TF_MODEL_MAPPING",
+    "TF_MODEL_WITH_LM_HEAD_MAPPING",
+    "TFAutoModel",
+    "TFAutoModelForAudioClassification",
+    "TFAutoModelForCausalLM",
+    "TFAutoModelForImageClassification",
+    "TFAutoModelForMaskedImageModeling",
+    "TFAutoModelForMaskedLM",
+    "TFAutoModelForMaskGeneration",
+    "TFAutoModelForMultipleChoice",
+    "TFAutoModelForNextSentencePrediction",
+    "TFAutoModelForPreTraining",
+    "TFAutoModelForDocumentQuestionAnswering",
+    "TFAutoModelForQuestionAnswering",
+    "TFAutoModelForSemanticSegmentation",
+    "TFAutoModelForSeq2SeqLM",
+    "TFAutoModelForSequenceClassification",
+    "TFAutoModelForSpeechSeq2Seq",
+    "TFAutoModelForTableQuestionAnswering",
+    "TFAutoModelForTextEncoding",
+    "TFAutoModelForTokenClassification",
+    "TFAutoModelForVision2Seq",
+    "TFAutoModelForZeroShotImageClassification",
+    "TFAutoModelWithLMHead",
+]
src/transformers/models/auto/processing_auto.py
@@ -389,3 +389,6 @@ class AutoProcessor:
             processor_class ([`ProcessorMixin`]): The processor to register.
         """
         PROCESSOR_MAPPING.register(config_class, processor_class, exist_ok=exist_ok)
+
+
+__all__ = ["PROCESSOR_MAPPING", "AutoProcessor"]
src/transformers/models/auto/tokenization_auto.py
@@ -1083,3 +1083,6 @@ class AutoTokenizer:
             fast_tokenizer_class = existing_fast
 
         TOKENIZER_MAPPING.register(config_class, (slow_tokenizer_class, fast_tokenizer_class), exist_ok=exist_ok)
+
+
+__all__ = ["TOKENIZER_MAPPING", "AutoTokenizer"]
@ -13,45 +13,15 @@
|
|||||||
# limitations under the License.
|
# limitations under the License.
|
||||||
from typing import TYPE_CHECKING
|
from typing import TYPE_CHECKING
|
||||||
|
|
||||||
# rely on isort to merge the imports
|
from ...utils import _LazyModule
|
||||||
from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available
|
from ...utils.import_utils import define_import_structure
|
||||||
|
|
||||||
|
|
||||||
_import_structure = {
|
|
||||||
"configuration_autoformer": ["AutoformerConfig"],
|
|
||||||
}
|
|
||||||
|
|
||||||
try:
|
|
||||||
if not is_torch_available():
|
|
||||||
raise OptionalDependencyNotAvailable()
|
|
||||||
except OptionalDependencyNotAvailable:
|
|
||||||
pass
|
|
||||||
else:
|
|
||||||
_import_structure["modeling_autoformer"] = [
|
|
||||||
"AutoformerForPrediction",
|
|
||||||
"AutoformerModel",
|
|
||||||
"AutoformerPreTrainedModel",
|
|
||||||
]
|
|
||||||
|
|
||||||
|
|
||||||
if TYPE_CHECKING:
|
if TYPE_CHECKING:
|
||||||
from .configuration_autoformer import (
|
from .configuration_autoformer import *
|
||||||
AutoformerConfig,
|
from .modeling_autoformer import *
|
||||||
)
|
|
||||||
|
|
||||||
try:
|
|
||||||
if not is_torch_available():
|
|
||||||
raise OptionalDependencyNotAvailable()
|
|
||||||
except OptionalDependencyNotAvailable:
|
|
||||||
pass
|
|
||||||
else:
|
|
||||||
from .modeling_autoformer import (
|
|
||||||
AutoformerForPrediction,
|
|
||||||
AutoformerModel,
|
|
||||||
AutoformerPreTrainedModel,
|
|
||||||
)
|
|
||||||
|
|
||||||
else:
|
else:
|
||||||
import sys
|
import sys
|
||||||
|
|
||||||
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
|
_file = globals()["__file__"]
|
||||||
|
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
|
||||||
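The rewritten `__init__.py` above installs a `_LazyModule` in `sys.modules`, so submodules are only imported when one of their attributes is first touched. A stripped-down sketch of the general mechanism (PEP 562-style attribute resolution); this is an illustration, not the actual `_LazyModule` implementation:

```python
# lazy_module_sketch.py -- illustrative only
import importlib
import types


class LazyModule(types.ModuleType):
    def __init__(self, name: str, import_structure: dict[str, str]):
        super().__init__(name)
        # Maps an attribute to the submodule that defines it,
        # e.g. {"AutoformerConfig": "configuration_autoformer"}.
        self._import_structure = import_structure

    def __getattr__(self, attr: str):
        submodule_name = self._import_structure.get(attr)
        if submodule_name is None:
            raise AttributeError(f"module {self.__name__!r} has no attribute {attr!r}")
        submodule = importlib.import_module(f"{self.__name__}.{submodule_name}")
        value = getattr(submodule, attr)
        setattr(self, attr, value)  # cache: the real import happens only once
        return value
```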
@@ -240,3 +240,6 @@ class AutoformerConfig(PretrainedConfig):
             + self.num_static_real_features
             + self.input_size * 2  # the log1p(abs(loc)) and log(scale) features
         )
+
+
+__all__ = ["AutoformerConfig"]

@@ -2147,3 +2147,6 @@ class AutoformerForPrediction(AutoformerPreTrainedModel):
                 (-1, num_parallel_samples, self.config.prediction_length) + self.target_shape,
             )
         )
+
+
+__all__ = ["AutoformerForPrediction", "AutoformerModel", "AutoformerPreTrainedModel"]
@@ -22,6 +22,7 @@ import sentencepiece as spm

 from ...tokenization_utils import AddedToken, PreTrainedTokenizer
 from ...utils import logging
+from ...utils.import_utils import requires


 logger = logging.get_logger(__name__)

@@ -34,6 +35,7 @@ SPIECE_UNDERLINE = "▁"
 # TODO this class is useless. This is the most standard sentencpiece model. Let's find which one is closest and nuke this.


+@requires(backends=("sentencepiece",))
 class BarthezTokenizer(PreTrainedTokenizer):
     """
     Adapted from [`CamembertTokenizer`] and [`BartTokenizer`]. Construct a BARThez tokenizer. Based on

@@ -22,6 +22,7 @@ import sentencepiece as spm

 from ...tokenization_utils import AddedToken, PreTrainedTokenizer
 from ...utils import logging
+from ...utils.import_utils import requires


 logger = logging.get_logger(__name__)

@@ -31,6 +32,7 @@ SPIECE_UNDERLINE = "▁"
 VOCAB_FILES_NAMES = {"vocab_file": "sentencepiece.bpe.model", "monolingual_vocab_file": "dict.txt"}


+@requires(backends=("sentencepiece",))
 class BartphoTokenizer(PreTrainedTokenizer):
     """
     Adapted from [`XLMRobertaTokenizer`]. Based on [SentencePiece](https://github.com/google/sentencepiece).
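`@requires(backends=("sentencepiece",))` tags the tokenizer with its soft dependency instead of hiding the class behind an availability check. As a rough sketch of the shape of such a decorator (the real one lives in `src/transformers/utils/import_utils.py` and does more than this):

```python
# requires_sketch.py -- a minimal sketch of a backend-requirement decorator
import importlib.util


def requires(backends: tuple = ()):
    def decorator(obj):
        # Attach the declared soft dependencies to the object so that
        # tooling and error messages can inspect them later.
        obj._backends = tuple(backends)
        return obj

    return decorator


def backend_available(module_name: str) -> bool:
    """True when the backing package is importable (no import is performed)."""
    return importlib.util.find_spec(module_name) is not None
```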
@@ -17,12 +17,14 @@
 import warnings

 from ...utils import logging
+from ...utils.import_utils import requires
 from .image_processing_beit import BeitImageProcessor


 logger = logging.get_logger(__name__)


+@requires(backends=("vision",))
 class BeitFeatureExtractor(BeitImageProcessor):
     def __init__(self, *args, **kwargs) -> None:
         warnings.warn(

@@ -42,6 +42,7 @@ from ...utils import (
     logging,
 )
 from ...utils.deprecation import deprecate_kwarg
+from ...utils.import_utils import requires


 if is_vision_available():

@@ -54,6 +55,7 @@ if is_torch_available():
 logger = logging.get_logger(__name__)


+@requires(backends=("vision",))
 class BeitImageProcessor(BaseImageProcessor):
     r"""
     Constructs a BEiT image processor.
@@ -6,9 +6,11 @@ from tensorflow_text import BertTokenizer as BertTokenizerLayer
 from tensorflow_text import FastBertTokenizer, ShrinkLongestTrimmer, case_fold_utf8, combine_segments, pad_model_inputs

 from ...modeling_tf_utils import keras
+from ...utils.import_utils import requires
 from .tokenization_bert import BertTokenizer


+@requires(backends=("tf", "tensorflow_text"))
 class TFBertTokenizer(keras.layers.Layer):
     """
     This is an in-graph tokenizer for BERT. It should be initialized similarly to other tokenizers, using the
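`backends` takes any number of entries; the in-graph BERT tokenizer above declares both TensorFlow and `tensorflow_text`. A hypothetical pre-flight check over such a tuple, reusing the `backend_available` helper sketched earlier (note that a key like `"tf"` is a library-internal identifier; the importable module is `tensorflow`):

```python
# Hypothetical check before instantiating a class that declares two backends.
missing = [name for name in ("tensorflow", "tensorflow_text") if not backend_available(name)]
if missing:
    raise ImportError(f"TFBertTokenizer needs the missing backend(s): {', '.join(missing)}")
```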
@@ -22,6 +22,7 @@ import sentencepiece as spm

 from ...tokenization_utils import PreTrainedTokenizer
 from ...utils import logging
+from ...utils.import_utils import requires


 logger = logging.get_logger(__name__)

@@ -29,6 +30,7 @@ logger = logging.get_logger(__name__)
 VOCAB_FILES_NAMES = {"vocab_file": "spiece.model"}


+@requires(backends=("sentencepiece",))
 class BertGenerationTokenizer(PreTrainedTokenizer):
     """
     Construct a BertGeneration tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).

@@ -23,6 +23,7 @@ import sentencepiece as spm

 from ...tokenization_utils import AddedToken, PreTrainedTokenizer
 from ...utils import logging
+from ...utils.import_utils import requires


 logger = logging.get_logger(__name__)

@@ -30,6 +31,7 @@ logger = logging.get_logger(__name__)
 VOCAB_FILES_NAMES = {"vocab_file": "spiece.model"}


+@requires(backends=("sentencepiece",))
 class BigBirdTokenizer(PreTrainedTokenizer):
     """
     Construct a BigBird tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).
@@ -22,7 +22,9 @@ if TYPE_CHECKING:
     from .image_processing_blip import *
     from .image_processing_blip_fast import *
     from .modeling_blip import *
+    from .modeling_blip_text import *
     from .modeling_tf_blip import *
+    from .modeling_tf_blip_text import *
     from .processing_blip import *
 else:
     import sys
@@ -955,3 +955,6 @@ class BlipTextLMHeadModel(BlipTextPreTrainedModel, GenerationMixin):
                 tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past),
             )
         return reordered_past
+
+
+__all__ = ["BlipTextModel", "BlipTextLMHeadModel", "BlipTextPreTrainedModel"]

@@ -1120,3 +1120,6 @@ class TFBlipTextLMHeadModel(TFBlipTextPreTrainedModel):
         if getattr(self, "cls", None) is not None:
             with tf.name_scope(self.cls.name):
                 self.cls.build(None)
+
+
+__all__ = ["TFBlipTextLMHeadModel", "TFBlipTextModel", "TFBlipTextPreTrainedModel"]
@@ -22,6 +22,7 @@ import sentencepiece as spm

 from ...tokenization_utils import AddedToken, PreTrainedTokenizer
 from ...utils import logging
+from ...utils.import_utils import requires


 logger = logging.get_logger(__name__)

@@ -32,6 +33,7 @@ VOCAB_FILES_NAMES = {"vocab_file": "sentencepiece.bpe.model"}
 SPIECE_UNDERLINE = "▁"


+@requires(backends=("sentencepiece",))
 class CamembertTokenizer(PreTrainedTokenizer):
     """
     Adapted from [`RobertaTokenizer`] and [`XLNetTokenizer`]. Construct a CamemBERT tokenizer. Based on
@@ -17,12 +17,14 @@
 import warnings

 from ...utils import logging
+from ...utils.import_utils import requires
 from .image_processing_chinese_clip import ChineseCLIPImageProcessor


 logger = logging.get_logger(__name__)


+@requires(backends=("vision",))
 class ChineseCLIPFeatureExtractor(ChineseCLIPImageProcessor):
     def __init__(self, *args, **kwargs) -> None:
         warnings.warn(

@@ -41,13 +41,17 @@ from ...image_utils import (
 from ...utils import TensorType, filter_out_non_signature_kwargs, is_vision_available, logging


-logger = logging.get_logger(__name__)
-
-
 if is_vision_available():
     import PIL


+from ...utils.import_utils import requires
+
+
+logger = logging.get_logger(__name__)
+
+
+@requires(backends=("vision",))
 class ChineseCLIPImageProcessor(BaseImageProcessor):
     r"""
     Constructs a Chinese-CLIP image processor.
@@ -24,11 +24,13 @@ from ...audio_utils import mel_filter_bank, spectrogram, window_function
 from ...feature_extraction_sequence_utils import SequenceFeatureExtractor
 from ...feature_extraction_utils import BatchFeature
 from ...utils import TensorType, logging
+from ...utils.import_utils import requires


 logger = logging.get_logger(__name__)


+@requires(backends=("torch",))
 class ClapFeatureExtractor(SequenceFeatureExtractor):
     r"""
     Constructs a CLAP feature extractor.
@@ -17,12 +17,14 @@
 import warnings

 from ...utils import logging
+from ...utils.import_utils import requires
 from .image_processing_clip import CLIPImageProcessor


 logger = logging.get_logger(__name__)


+@requires(backends=("vision",))
 class CLIPFeatureExtractor(CLIPImageProcessor):
     def __init__(self, *args, **kwargs) -> None:
         warnings.warn(

@@ -40,6 +40,7 @@ from ...image_utils import (
     validate_preprocess_arguments,
 )
 from ...utils import TensorType, is_vision_available, logging
+from ...utils.import_utils import requires


 logger = logging.get_logger(__name__)

@@ -49,6 +50,7 @@ if is_vision_available():
     import PIL


+@requires(backends=("vision",))
 class CLIPImageProcessor(BaseImageProcessor):
     r"""
     Constructs a CLIP image processor.
@@ -25,6 +25,7 @@ import sentencepiece as spm
 from ...convert_slow_tokenizer import import_protobuf
 from ...tokenization_utils import AddedToken, PreTrainedTokenizer
 from ...utils import logging, requires_backends
+from ...utils.import_utils import requires


 logger = logging.get_logger(__name__)

@@ -46,6 +47,7 @@ correct. If you don't know the answer to a question, please don't share false in
 # fmt: on


+@requires(backends=("sentencepiece",))
 class CodeLlamaTokenizer(PreTrainedTokenizer):
     """
     Construct a CodeLlama tokenizer. Based on byte-level Byte-Pair-Encoding. The default padding token is unset as
@@ -288,6 +288,5 @@ class ColPaliForRetrieval(ColPaliPreTrainedModel):

 __all__ = [
     "ColPaliForRetrieval",
-    "ColPaliForRetrievalOutput",
     "ColPaliPreTrainedModel",
 ]
@@ -18,6 +18,7 @@ import warnings

 from ...image_transforms import rgb_to_id as _rgb_to_id
 from ...utils import logging
+from ...utils.import_utils import requires
 from .image_processing_conditional_detr import ConditionalDetrImageProcessor


@@ -33,6 +34,7 @@ def rgb_to_id(x):
     return _rgb_to_id(x)


+@requires(backends=("vision",))
 class ConditionalDetrFeatureExtractor(ConditionalDetrImageProcessor):
     def __init__(self, *args, **kwargs) -> None:
         warnings.warn(

@@ -64,6 +64,7 @@ from ...utils import (
     is_vision_available,
     logging,
 )
+from ...utils.import_utils import requires


 if is_torch_available():

@@ -801,6 +802,7 @@ def compute_segments(
     return segmentation, segments


+@requires(backends=("vision",))
 class ConditionalDetrImageProcessor(BaseImageProcessor):
     r"""
     Constructs a Conditional Detr image processor.
@@ -17,12 +17,14 @@
 import warnings

 from ...utils import logging
+from ...utils.import_utils import requires
 from .image_processing_convnext import ConvNextImageProcessor


 logger = logging.get_logger(__name__)


+@requires(backends=("vision",))
 class ConvNextFeatureExtractor(ConvNextImageProcessor):
     def __init__(self, *args, **kwargs) -> None:
         warnings.warn(

@@ -39,6 +39,7 @@ from ...image_utils import (
     validate_preprocess_arguments,
 )
 from ...utils import TensorType, filter_out_non_signature_kwargs, is_vision_available, logging
+from ...utils.import_utils import requires


 if is_vision_available():

@@ -48,6 +49,7 @@ if is_vision_available():
 logger = logging.get_logger(__name__)


+@requires(backends=("vision",))
 class ConvNextImageProcessor(BaseImageProcessor):
     r"""
     Constructs a ConvNeXT image processor.
@@ -23,6 +23,7 @@ import sentencepiece as spm

 from ...tokenization_utils import AddedToken, PreTrainedTokenizer
 from ...utils import SPIECE_UNDERLINE, logging
+from ...utils.import_utils import requires


 logger = logging.get_logger(__name__)

@@ -30,6 +31,7 @@ logger = logging.get_logger(__name__)
 VOCAB_FILES_NAMES = {"vocab_file": "spiece.model"}


+@requires(backends=("sentencepiece",))
 class CpmTokenizer(PreTrainedTokenizer):
     """Runs pre-tokenization with Jieba segmentation tool. It is used in CPM models."""

@@ -22,6 +22,7 @@ import sentencepiece as sp

 from ...tokenization_utils import AddedToken, PreTrainedTokenizer
 from ...utils import logging
+from ...utils.import_utils import requires


 logger = logging.get_logger(__name__)

@@ -30,6 +31,7 @@ logger = logging.get_logger(__name__)
 VOCAB_FILES_NAMES = {"vocab_file": "spm.model"}


+@requires(backends=("sentencepiece",))
 class DebertaV2Tokenizer(PreTrainedTokenizer):
     r"""
     Constructs a DeBERTa-v2 tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).
@@ -18,6 +18,7 @@ import warnings

 from ...image_transforms import rgb_to_id as _rgb_to_id
 from ...utils import logging
+from ...utils.import_utils import requires
 from .image_processing_deformable_detr import DeformableDetrImageProcessor


@@ -33,6 +34,7 @@ def rgb_to_id(x):
     return _rgb_to_id(x)


+@requires(backends=("vision",))
 class DeformableDetrFeatureExtractor(DeformableDetrImageProcessor):
     def __init__(self, *args, **kwargs) -> None:
         warnings.warn(

@@ -64,6 +64,7 @@ from ...utils import (
     is_vision_available,
     logging,
 )
+from ...utils.import_utils import requires


 if is_torch_available():

@@ -799,6 +800,7 @@ def compute_segments(
     return segmentation, segments


+@requires(backends=("torch", "vision"))
 class DeformableDetrImageProcessor(BaseImageProcessor):
     r"""
     Constructs a Deformable DETR image processor.
@@ -39,6 +39,7 @@ from ...utils import (
     is_torchvision_v2_available,
     logging,
 )
+from ...utils.import_utils import requires
 from .image_processing_deformable_detr import get_size_with_aspect_ratio


@@ -288,6 +289,7 @@ def prepare_coco_panoptic_annotation(
         Whether to return segmentation masks.
     """,
 )
+@requires(backends=("torchvision", "torch"))
 class DeformableDetrImageProcessorFast(BaseImageProcessorFast):
     resample = PILImageResampling.BILINEAR
     image_mean = IMAGENET_DEFAULT_MEAN
@@ -17,12 +17,14 @@
 import warnings

 from ...utils import logging
+from ...utils.import_utils import requires
 from .image_processing_deit import DeiTImageProcessor


 logger = logging.get_logger(__name__)


+@requires(backends=("vision",))
 class DeiTFeatureExtractor(DeiTImageProcessor):
     def __init__(self, *args, **kwargs) -> None:
         warnings.warn(

@@ -34,6 +34,7 @@ from ...image_utils import (
     validate_preprocess_arguments,
 )
 from ...utils import TensorType, filter_out_non_signature_kwargs, is_vision_available, logging
+from ...utils.import_utils import requires


 if is_vision_available():

@@ -43,6 +44,7 @@ if is_vision_available():
 logger = logging.get_logger(__name__)


+@requires(backends=("vision",))
 class DeiTImageProcessor(BaseImageProcessor):
     r"""
     Constructs a DeiT image processor.
@@ -0,0 +1,49 @@
+# Copyright 2020 The HuggingFace Team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from typing import TYPE_CHECKING
+
+from ...utils import _LazyModule
+from ...utils.import_utils import define_import_structure
+
+
+if TYPE_CHECKING:
+    from .bort import *
+    from .deta import *
+    from .efficientformer import *
+    from .ernie_m import *
+    from .gptsan_japanese import *
+    from .graphormer import *
+    from .jukebox import *
+    from .mctct import *
+    from .mega import *
+    from .mmbt import *
+    from .nat import *
+    from .nezha import *
+    from .open_llama import *
+    from .qdqbert import *
+    from .realm import *
+    from .retribert import *
+    from .speech_to_text_2 import *
+    from .tapex import *
+    from .trajectory_transformer import *
+    from .transfo_xl import *
+    from .tvlt import *
+    from .van import *
+    from .vit_hybrid import *
+    from .xlm_prophetnet import *
+else:
+    import sys
+
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
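The new file above is the template every converted `__init__.py` in this diff follows. `define_import_structure` plausibly derives the attribute-to-submodule map from the package's files themselves (their `__all__` lists) rather than from a hand-written `_import_structure` dict. A sketch under that assumption; the real helper in `utils/import_utils.py` is more elaborate:

```python
# define_import_structure_sketch.py -- illustrative only
import ast
from pathlib import Path


def define_import_structure_sketch(init_file: str) -> dict[str, str]:
    """Map each public name to the sibling module whose __all__ declares it."""
    structure: dict[str, str] = {}
    for py_file in Path(init_file).parent.glob("*.py"):
        if py_file.name == "__init__.py":
            continue
        tree = ast.parse(py_file.read_text(encoding="utf-8"))
        for node in tree.body:
            is_all = isinstance(node, ast.Assign) and any(
                isinstance(target, ast.Name) and target.id == "__all__" for target in node.targets
            )
            if is_all:
                for name in ast.literal_eval(node.value):
                    structure[name] = py_file.stem
    return structure
```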
@@ -11,61 +11,18 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

 from typing import TYPE_CHECKING

-from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available, is_vision_available
+from ....utils import _LazyModule
+from ....utils.import_utils import define_import_structure


-_import_structure = {
-    "configuration_deta": ["DetaConfig"],
-}
-
-try:
-    if not is_vision_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["image_processing_deta"] = ["DetaImageProcessor"]
-
-try:
-    if not is_torch_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_deta"] = [
-        "DetaForObjectDetection",
-        "DetaModel",
-        "DetaPreTrainedModel",
-    ]
-
 if TYPE_CHECKING:
-    from .configuration_deta import DetaConfig
-
-    try:
-        if not is_vision_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .image_processing_deta import DetaImageProcessor
-
-    try:
-        if not is_torch_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_deta import (
-            DetaForObjectDetection,
-            DetaModel,
-            DetaPreTrainedModel,
-        )
-
+    from .configuration_deta import *
+    from .image_processing_deta import *
+    from .modeling_deta import *
 else:
     import sys

-    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
@@ -265,3 +265,6 @@ class DetaConfig(PretrainedConfig):
     @property
     def hidden_size(self) -> int:
         return self.d_model
+
+
+__all__ = ["DetaConfig"]

@@ -1222,3 +1222,6 @@ class DetaImageProcessor(BaseImageProcessor):
             )

         return results
+
+
+__all__ = ["DetaImageProcessor"]

@@ -2822,3 +2822,6 @@ class DetaStage1Assigner(nn.Module):

     def postprocess_indices(self, pr_inds, gt_inds, iou):
         return sample_topk_per_gt(pr_inds, gt_inds, iou, self.k)
+
+
+__all__ = ["DetaForObjectDetection", "DetaModel", "DetaPreTrainedModel"]
@@ -13,88 +13,17 @@
 # limitations under the License.
 from typing import TYPE_CHECKING

-from ....utils import (
-    OptionalDependencyNotAvailable,
-    _LazyModule,
-    is_tf_available,
-    is_torch_available,
-    is_vision_available,
-)
+from ....utils import _LazyModule
+from ....utils.import_utils import define_import_structure


-_import_structure = {"configuration_efficientformer": ["EfficientFormerConfig"]}
-
-try:
-    if not is_vision_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["image_processing_efficientformer"] = ["EfficientFormerImageProcessor"]
-
-try:
-    if not is_torch_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_efficientformer"] = [
-        "EfficientFormerForImageClassification",
-        "EfficientFormerForImageClassificationWithTeacher",
-        "EfficientFormerModel",
-        "EfficientFormerPreTrainedModel",
-    ]
-
-try:
-    if not is_tf_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_tf_efficientformer"] = [
-        "TFEfficientFormerForImageClassification",
-        "TFEfficientFormerForImageClassificationWithTeacher",
-        "TFEfficientFormerModel",
-        "TFEfficientFormerPreTrainedModel",
-    ]
-
 if TYPE_CHECKING:
-    from .configuration_efficientformer import EfficientFormerConfig
-
-    try:
-        if not is_vision_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .image_processing_efficientformer import EfficientFormerImageProcessor
-
-    try:
-        if not is_torch_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_efficientformer import (
-            EfficientFormerForImageClassification,
-            EfficientFormerForImageClassificationWithTeacher,
-            EfficientFormerModel,
-            EfficientFormerPreTrainedModel,
-        )
-    try:
-        if not is_tf_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_tf_efficientformer import (
-            TFEfficientFormerForImageClassification,
-            TFEfficientFormerForImageClassificationWithTeacher,
-            TFEfficientFormerModel,
-            TFEfficientFormerPreTrainedModel,
-        )
-
+    from .configuration_efficientformer import *
+    from .image_processing_efficientformer import *
+    from .modeling_efficientformer import *
+    from .modeling_tf_efficientformer import *
 else:
     import sys

-    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
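Note what disappears in the rewrite above: the `try`/`except OptionalDependencyNotAvailable` blocks that used to drop names from `_import_structure` whenever `torch`, TensorFlow, or vision was missing. In the new layout every name is always part of the import structure, and the requirement travels with the object itself via the `@requires` metadata added throughout this diff. A hypothetical use-time guard, reusing the helpers sketched earlier, shows where such a check could now run:

```python
# Illustrative only: a use-time guard instead of an import-time one.
def ensure_backends(obj):
    missing = [b for b in getattr(obj, "_backends", ()) if not backend_available(b)]
    if missing:
        raise ImportError(f"{obj.__name__} requires missing backend(s): {', '.join(missing)}")
```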
@@ -165,3 +165,8 @@ class EfficientFormerConfig(PretrainedConfig):
         self.layer_scale_init_value = layer_scale_init_value
         self.image_size = image_size
         self.batch_norm_eps = batch_norm_eps
+
+
+__all__ = [
+    "EfficientFormerConfig",
+]

@@ -319,3 +319,6 @@ class EfficientFormerImageProcessor(BaseImageProcessor):

         data = {"pixel_values": images}
         return BatchFeature(data=data, tensor_type=return_tensors)
+
+
+__all__ = ["EfficientFormerImageProcessor"]

@@ -797,3 +797,11 @@ class EfficientFormerForImageClassificationWithTeacher(EfficientFormerPreTrainedModel):
             hidden_states=outputs.hidden_states,
             attentions=outputs.attentions,
         )
+
+
+__all__ = [
+    "EfficientFormerForImageClassification",
+    "EfficientFormerForImageClassificationWithTeacher",
+    "EfficientFormerModel",
+    "EfficientFormerPreTrainedModel",
+]

@@ -1188,3 +1188,11 @@ class TFEfficientFormerForImageClassificationWithTeacher(TFEfficientFormerPreTrainedModel):
         if hasattr(self.distillation_classifier, "name"):
             with tf.name_scope(self.distillation_classifier.name):
                 self.distillation_classifier.build([None, None, self.config.hidden_sizes[-1]])
+
+
+__all__ = [
+    "TFEfficientFormerForImageClassification",
+    "TFEfficientFormerForImageClassificationWithTeacher",
+    "TFEfficientFormerModel",
+    "TFEfficientFormerPreTrainedModel",
+]
@@ -13,68 +13,16 @@
 # limitations under the License.
 from typing import TYPE_CHECKING

-# rely on isort to merge the imports
-from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_sentencepiece_available, is_torch_available
+from ....utils import _LazyModule
+from ....utils.import_utils import define_import_structure


-_import_structure = {
-    "configuration_ernie_m": ["ErnieMConfig"],
-}
-
-try:
-    if not is_sentencepiece_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["tokenization_ernie_m"] = ["ErnieMTokenizer"]
-
-try:
-    if not is_torch_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_ernie_m"] = [
-        "ErnieMForMultipleChoice",
-        "ErnieMForQuestionAnswering",
-        "ErnieMForSequenceClassification",
-        "ErnieMForTokenClassification",
-        "ErnieMModel",
-        "ErnieMPreTrainedModel",
-        "ErnieMForInformationExtraction",
-    ]
-
 if TYPE_CHECKING:
-    from .configuration_ernie_m import ErnieMConfig
-
-    try:
-        if not is_sentencepiece_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .tokenization_ernie_m import ErnieMTokenizer
-
-    try:
-        if not is_torch_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_ernie_m import (
-            ErnieMForInformationExtraction,
-            ErnieMForMultipleChoice,
-            ErnieMForQuestionAnswering,
-            ErnieMForSequenceClassification,
-            ErnieMForTokenClassification,
-            ErnieMModel,
-            ErnieMPreTrainedModel,
-        )
-
+    from .configuration_ernie_m import *
+    from .modeling_ernie_m import *
+    from .tokenization_ernie_m import *
 else:
     import sys

-    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
@@ -109,3 +109,6 @@ class ErnieMConfig(PretrainedConfig):
         self.layer_norm_eps = layer_norm_eps
         self.classifier_dropout = classifier_dropout
         self.act_dropout = act_dropout
+
+
+__all__ = ["ErnieMConfig"]

@@ -1045,3 +1045,14 @@ class ErnieMForInformationExtraction(ErnieMPreTrainedModel):
             hidden_states=result.hidden_states,
             attentions=result.attentions,
         )
+
+
+__all__ = [
+    "ErnieMForMultipleChoice",
+    "ErnieMForQuestionAnswering",
+    "ErnieMForSequenceClassification",
+    "ErnieMForTokenClassification",
+    "ErnieMModel",
+    "ErnieMPreTrainedModel",
+    "ErnieMForInformationExtraction",
+]

@@ -23,6 +23,7 @@ import sentencepiece as spm

 from ....tokenization_utils import PreTrainedTokenizer
 from ....utils import logging
+from ....utils.import_utils import requires


 logger = logging.get_logger(__name__)

@@ -38,6 +39,7 @@ RESOURCE_FILES_NAMES = {


 # Adapted from paddlenlp.transformers.ernie_m.tokenizer.ErnieMTokenizer
+@requires(backends=("sentencepiece",))
 class ErnieMTokenizer(PreTrainedTokenizer):
     r"""
     Constructs a Ernie-M tokenizer. It uses the `sentencepiece` tools to cut the words to sub-words.

@@ -403,3 +405,6 @@ class ErnieMTokenizer(PreTrainedTokenizer):
             fi.write(content_spiece_model)

         return (vocab_file,)
+
+
+__all__ = ["ErnieMTokenizer"]
@@ -11,58 +11,18 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

 from typing import TYPE_CHECKING

-from ....utils import (
-    OptionalDependencyNotAvailable,
-    _LazyModule,
-    is_flax_available,
-    is_tf_available,
-    is_torch_available,
-)
+from ....utils import _LazyModule
+from ....utils.import_utils import define_import_structure


-_import_structure = {
-    "configuration_gptsan_japanese": ["GPTSanJapaneseConfig"],
-    "tokenization_gptsan_japanese": ["GPTSanJapaneseTokenizer"],
-}
-
-try:
-    if not is_torch_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_gptsan_japanese"] = [
-        "GPTSanJapaneseForConditionalGeneration",
-        "GPTSanJapaneseModel",
-        "GPTSanJapanesePreTrainedModel",
-    ]
-    _import_structure["tokenization_gptsan_japanese"] = [
-        "GPTSanJapaneseTokenizer",
-    ]
-
 if TYPE_CHECKING:
-    from .configuration_gptsan_japanese import GPTSanJapaneseConfig
-    from .tokenization_gptsan_japanese import GPTSanJapaneseTokenizer
-
-    try:
-        if not is_torch_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_gptsan_japanese import (
-            GPTSanJapaneseForConditionalGeneration,
-            GPTSanJapaneseModel,
-            GPTSanJapanesePreTrainedModel,
-        )
-        from .tokenization_gptsan_japanese import GPTSanJapaneseTokenizer
-
+    from .configuration_gptsan_japanese import *
+    from .modeling_gptsan_japanese import *
+    from .tokenization_gptsan_japanese import *
 else:
     import sys

-    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
@@ -152,3 +152,6 @@ class GPTSanJapaneseConfig(PretrainedConfig):
             eos_token_id=eos_token_id,
             **kwargs,
         )
+
+
+__all__ = ["GPTSanJapaneseConfig"]

@@ -1332,3 +1332,6 @@ class GPTSanJapaneseForConditionalGeneration(GPTSanJapanesePreTrainedModel):
             total_router_logits.append(router_logits)
             total_expert_indexes.append(expert_indexes)
         return torch.cat(total_router_logits, dim=1), torch.cat(total_expert_indexes, dim=1)
+
+
+__all__ = ["GPTSanJapaneseForConditionalGeneration", "GPTSanJapaneseModel", "GPTSanJapanesePreTrainedModel"]

@@ -513,3 +513,6 @@ class SubWordJapaneseTokenizer:

     def convert_id_to_token(self, index):
         return self.ids_to_tokens[index][0]
+
+
+__all__ = ["GPTSanJapaneseTokenizer"]
@@ -13,43 +13,15 @@
 # limitations under the License.
 from typing import TYPE_CHECKING

-from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_tokenizers_available, is_torch_available
+from ....utils import _LazyModule
+from ....utils.import_utils import define_import_structure


-_import_structure = {
-    "configuration_graphormer": ["GraphormerConfig"],
-}
-
-try:
-    if not is_torch_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_graphormer"] = [
-        "GraphormerForGraphClassification",
-        "GraphormerModel",
-        "GraphormerPreTrainedModel",
-    ]
-
 if TYPE_CHECKING:
-    from .configuration_graphormer import GraphormerConfig
-
-    try:
-        if not is_torch_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_graphormer import (
-            GraphormerForGraphClassification,
-            GraphormerModel,
-            GraphormerPreTrainedModel,
-        )
-
+    from .configuration_graphormer import *
+    from .modeling_graphormer import *
 else:
     import sys

-    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
@@ -215,3 +215,6 @@ class GraphormerConfig(PretrainedConfig):
             eos_token_id=eos_token_id,
             **kwargs,
         )
+
+
+__all__ = ["GraphormerConfig"]

@@ -906,3 +906,6 @@ class GraphormerForGraphClassification(GraphormerPreTrainedModel):
         if not return_dict:
             return tuple(x for x in [loss, logits, hidden_states] if x is not None)
         return SequenceClassifierOutput(loss=loss, logits=logits, hidden_states=hidden_states, attentions=None)
+
+
+__all__ = ["GraphormerForGraphClassification", "GraphormerModel", "GraphormerPreTrainedModel"]
@@ -11,56 +11,18 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

 from typing import TYPE_CHECKING

-from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available
+from ....utils import _LazyModule
+from ....utils.import_utils import define_import_structure


-_import_structure = {
-    "configuration_jukebox": [
-        "JukeboxConfig",
-        "JukeboxPriorConfig",
-        "JukeboxVQVAEConfig",
-    ],
-    "tokenization_jukebox": ["JukeboxTokenizer"],
-}
-
-try:
-    if not is_torch_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_jukebox"] = [
-        "JukeboxModel",
-        "JukeboxPreTrainedModel",
-        "JukeboxVQVAE",
-        "JukeboxPrior",
-    ]
-
 if TYPE_CHECKING:
-    from .configuration_jukebox import (
-        JukeboxConfig,
-        JukeboxPriorConfig,
-        JukeboxVQVAEConfig,
-    )
-    from .tokenization_jukebox import JukeboxTokenizer
-
-    try:
-        if not is_torch_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_jukebox import (
-            JukeboxModel,
-            JukeboxPreTrainedModel,
-            JukeboxPrior,
-            JukeboxVQVAE,
-        )
-
+    from .configuration_jukebox import *
+    from .modeling_jukebox import *
+    from .tokenization_jukebox import *
 else:
     import sys

-    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
|
@ -608,3 +608,6 @@ class JukeboxConfig(PretrainedConfig):
|
|||||||
result = super().to_dict()
|
result = super().to_dict()
|
||||||
result["prior_config_list"] = [config.to_dict() for config in result.pop("prior_configs")]
|
result["prior_config_list"] = [config.to_dict() for config in result.pop("prior_configs")]
|
||||||
return result
|
return result
|
||||||
|
|
||||||
|
|
||||||
|
__all__ = ["JukeboxConfig", "JukeboxPriorConfig", "JukeboxVQVAEConfig"]
|
||||||
|
@ -2665,3 +2665,6 @@ class JukeboxModel(JukeboxPreTrainedModel):
|
|||||||
)
|
)
|
||||||
music_tokens = self._sample(music_tokens, labels, sample_levels, **sampling_kwargs)
|
music_tokens = self._sample(music_tokens, labels, sample_levels, **sampling_kwargs)
|
||||||
return music_tokens
|
return music_tokens
|
||||||
|
|
||||||
|
|
||||||
|
__all__ = ["JukeboxModel", "JukeboxPreTrainedModel", "JukeboxVQVAE", "JukeboxPrior"]
|
||||||
|
@ -402,3 +402,6 @@ class JukeboxTokenizer(PreTrainedTokenizer):
|
|||||||
genres = [self.genres_decoder.get(genre) for genre in genres_index]
|
genres = [self.genres_decoder.get(genre) for genre in genres_index]
|
||||||
lyrics = [self.lyrics_decoder.get(character) for character in lyric_index]
|
lyrics = [self.lyrics_decoder.get(character) for character in lyric_index]
|
||||||
return artist, genres, lyrics
|
return artist, genres, lyrics
|
||||||
|
|
||||||
|
|
||||||
|
__all__ = ["JukeboxTokenizer"]
|
||||||
|
@@ -13,43 +13,17 @@
 # limitations under the License.
 from typing import TYPE_CHECKING

-from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available
+from ....utils import _LazyModule
+from ....utils.import_utils import define_import_structure


-_import_structure = {
-    "configuration_mctct": ["MCTCTConfig"],
-    "feature_extraction_mctct": ["MCTCTFeatureExtractor"],
-    "processing_mctct": ["MCTCTProcessor"],
-}
-
-try:
-    if not is_torch_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_mctct"] = [
-        "MCTCTForCTC",
-        "MCTCTModel",
-        "MCTCTPreTrainedModel",
-    ]
-
 if TYPE_CHECKING:
-    from .configuration_mctct import MCTCTConfig
-    from .feature_extraction_mctct import MCTCTFeatureExtractor
-    from .processing_mctct import MCTCTProcessor
-
-    try:
-        if not is_torch_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_mctct import MCTCTForCTC, MCTCTModel, MCTCTPreTrainedModel
-
+    from .configuration_mctct import *
+    from .feature_extraction_mctct import *
+    from .modeling_mctct import *
+    from .processing_mctct import *
 else:
     import sys

-    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
@ -179,3 +179,6 @@ class MCTCTConfig(PretrainedConfig):
|
|||||||
f"but is `len(config.conv_kernel) = {len(self.conv_kernel)}`, "
|
f"but is `len(config.conv_kernel) = {len(self.conv_kernel)}`, "
|
||||||
f"`config.num_conv_layers = {self.num_conv_layers}`."
|
f"`config.num_conv_layers = {self.num_conv_layers}`."
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
|
__all__ = ["MCTCTConfig"]
|
||||||
src/transformers/models/deprecated/mctct/feature_extraction_mctct.py
@@ -286,3 +286,6 @@ class MCTCTFeatureExtractor(SequenceFeatureExtractor):
         padded_inputs = padded_inputs.convert_to_tensors(return_tensors)
 
         return padded_inputs
+
+
+__all__ = ["MCTCTFeatureExtractor"]
src/transformers/models/deprecated/mctct/modeling_mctct.py
@@ -786,3 +786,6 @@ class MCTCTForCTC(MCTCTPreTrainedModel):
         return CausalLMOutput(
             loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions
         )
+
+
+__all__ = ["MCTCTForCTC", "MCTCTModel", "MCTCTPreTrainedModel"]
src/transformers/models/deprecated/mctct/processing_mctct.py
@@ -141,3 +141,6 @@ class MCTCTProcessor(ProcessorMixin):
         yield
         self.current_processor = self.feature_extractor
         self._in_target_context_manager = False
+
+
+__all__ = ["MCTCTProcessor"]
src/transformers/models/deprecated/mega/__init__.py
@@ -11,58 +11,17 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
 from typing import TYPE_CHECKING
 
-from ....utils import (
-    OptionalDependencyNotAvailable,
-    _LazyModule,
-    is_torch_available,
-)
-
-
-_import_structure = {
-    "configuration_mega": ["MegaConfig", "MegaOnnxConfig"],
-}
-
-try:
-    if not is_torch_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_mega"] = [
-        "MegaForCausalLM",
-        "MegaForMaskedLM",
-        "MegaForMultipleChoice",
-        "MegaForQuestionAnswering",
-        "MegaForSequenceClassification",
-        "MegaForTokenClassification",
-        "MegaModel",
-        "MegaPreTrainedModel",
-    ]
+from ....utils import _LazyModule
+from ....utils.import_utils import define_import_structure
 
 if TYPE_CHECKING:
-    from .configuration_mega import MegaConfig, MegaOnnxConfig
-
-    try:
-        if not is_torch_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_mega import (
-            MegaForCausalLM,
-            MegaForMaskedLM,
-            MegaForMultipleChoice,
-            MegaForQuestionAnswering,
-            MegaForSequenceClassification,
-            MegaForTokenClassification,
-            MegaModel,
-            MegaPreTrainedModel,
-        )
-
+    from .configuration_mega import *
+    from .modeling_mega import *
 else:
     import sys
 
-    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
src/transformers/models/deprecated/mega/configuration_mega.py
@@ -238,3 +238,6 @@ class MegaOnnxConfig(OnnxConfig):
                 ("attention_mask", dynamic_axis),
             ]
         )
+
+
+__all__ = ["MegaConfig", "MegaOnnxConfig"]
src/transformers/models/deprecated/mega/modeling_mega.py
@@ -2271,3 +2271,15 @@ class MegaForQuestionAnswering(MegaPreTrainedModel):
             hidden_states=outputs.hidden_states,
             attentions=outputs.attentions,
         )
+
+
+__all__ = [
+    "MegaForCausalLM",
+    "MegaForMaskedLM",
+    "MegaForMultipleChoice",
+    "MegaForQuestionAnswering",
+    "MegaForSequenceClassification",
+    "MegaForTokenClassification",
+    "MegaModel",
+    "MegaPreTrainedModel",
+]
src/transformers/models/deprecated/mmbt/__init__.py
@@ -11,35 +11,17 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
 from typing import TYPE_CHECKING
 
-from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available
-
-
-_import_structure = {"configuration_mmbt": ["MMBTConfig"]}
-
-try:
-    if not is_torch_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_mmbt"] = ["MMBTForClassification", "MMBTModel", "ModalEmbeddings"]
-
+from ....utils import _LazyModule
+from ....utils.import_utils import define_import_structure
 
 if TYPE_CHECKING:
-    from .configuration_mmbt import MMBTConfig
-
-    try:
-        if not is_torch_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_mmbt import MMBTForClassification, MMBTModel, ModalEmbeddings
-
+    from .configuration_mmbt import *
+    from .modeling_mmbt import *
 else:
     import sys
 
-    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
src/transformers/models/deprecated/mmbt/configuration_mmbt.py
@@ -40,3 +40,6 @@ class MMBTConfig:
         self.modal_hidden_size = modal_hidden_size
         if num_labels:
             self.num_labels = num_labels
+
+
+__all__ = ["MMBTConfig"]
src/transformers/models/deprecated/mmbt/modeling_mmbt.py
@@ -405,3 +405,6 @@ class MMBTForClassification(nn.Module):
             hidden_states=outputs.hidden_states,
             attentions=outputs.attentions,
         )
+
+
+__all__ = ["MMBTForClassification", "MMBTModel", "ModalEmbeddings"]
src/transformers/models/deprecated/nat/__init__.py
@@ -13,42 +13,15 @@
 # limitations under the License.
 from typing import TYPE_CHECKING
 
-from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available
-
-
-_import_structure = {"configuration_nat": ["NatConfig"]}
-
-
-try:
-    if not is_torch_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_nat"] = [
-        "NatForImageClassification",
-        "NatModel",
-        "NatPreTrainedModel",
-        "NatBackbone",
-    ]
-
+from ....utils import _LazyModule
+from ....utils.import_utils import define_import_structure
 
 
 if TYPE_CHECKING:
-    from .configuration_nat import NatConfig
-
-    try:
-        if not is_torch_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_nat import (
-            NatBackbone,
-            NatForImageClassification,
-            NatModel,
-            NatPreTrainedModel,
-        )
-
+    from .configuration_nat import *
+    from .modeling_nat import *
 else:
     import sys
 
-    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
src/transformers/models/deprecated/nat/configuration_nat.py
@@ -143,3 +143,6 @@ class NatConfig(BackboneConfigMixin, PretrainedConfig):
         self._out_features, self._out_indices = get_aligned_output_features_output_indices(
             out_features=out_features, out_indices=out_indices, stage_names=self.stage_names
         )
+
+
+__all__ = ["NatConfig"]
src/transformers/models/deprecated/nat/modeling_nat.py
@@ -948,3 +948,6 @@ class NatBackbone(NatPreTrainedModel, BackboneMixin):
             hidden_states=outputs.hidden_states if output_hidden_states else None,
             attentions=outputs.attentions,
         )
+
+
+__all__ = ["NatForImageClassification", "NatModel", "NatPreTrainedModel", "NatBackbone"]
src/transformers/models/deprecated/nezha/__init__.py
@@ -13,55 +13,15 @@
 # limitations under the License.
 from typing import TYPE_CHECKING
 
-from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_tokenizers_available, is_torch_available
-
-
-_import_structure = {
-    "configuration_nezha": ["NezhaConfig"],
-}
-
-try:
-    if not is_torch_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_nezha"] = [
-        "NezhaForNextSentencePrediction",
-        "NezhaForMaskedLM",
-        "NezhaForPreTraining",
-        "NezhaForMultipleChoice",
-        "NezhaForQuestionAnswering",
-        "NezhaForSequenceClassification",
-        "NezhaForTokenClassification",
-        "NezhaModel",
-        "NezhaPreTrainedModel",
-    ]
-
+from ....utils import _LazyModule
+from ....utils.import_utils import define_import_structure
 
 
 if TYPE_CHECKING:
-    from .configuration_nezha import NezhaConfig
-
-    try:
-        if not is_torch_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_nezha import (
-            NezhaForMaskedLM,
-            NezhaForMultipleChoice,
-            NezhaForNextSentencePrediction,
-            NezhaForPreTraining,
-            NezhaForQuestionAnswering,
-            NezhaForSequenceClassification,
-            NezhaForTokenClassification,
-            NezhaModel,
-            NezhaPreTrainedModel,
-        )
-
-
+    from .configuration_nezha import *
+    from .modeling_nezha import *
 else:
     import sys
 
-    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
src/transformers/models/deprecated/nezha/configuration_nezha.py
@@ -100,3 +100,6 @@ class NezhaConfig(PretrainedConfig):
         self.layer_norm_eps = layer_norm_eps
         self.classifier_dropout = classifier_dropout
         self.use_cache = use_cache
+
+
+__all__ = ["NezhaConfig"]
src/transformers/models/deprecated/nezha/modeling_nezha.py
@@ -1682,3 +1682,16 @@ class NezhaForQuestionAnswering(NezhaPreTrainedModel):
             hidden_states=outputs.hidden_states,
             attentions=outputs.attentions,
         )
+
+
+__all__ = [
+    "NezhaForNextSentencePrediction",
+    "NezhaForMaskedLM",
+    "NezhaForPreTraining",
+    "NezhaForMultipleChoice",
+    "NezhaForQuestionAnswering",
+    "NezhaForSequenceClassification",
+    "NezhaForTokenClassification",
+    "NezhaModel",
+    "NezhaPreTrainedModel",
+]
src/transformers/models/deprecated/open_llama/__init__.py
@@ -13,83 +13,15 @@
 # limitations under the License.
 from typing import TYPE_CHECKING
 
-from ....utils import (
-    OptionalDependencyNotAvailable,
-    _LazyModule,
-    is_sentencepiece_available,
-    is_tokenizers_available,
-    is_torch_available,
-)
-
-
-_import_structure = {
-    "configuration_open_llama": ["OpenLlamaConfig"],
-}
-
-try:
-    if not is_sentencepiece_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["tokenization_open_llama"] = ["LlamaTokenizer"]
-
-try:
-    if not is_tokenizers_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["tokenization_open_llama_fast"] = ["LlamaTokenizerFast"]
-
-try:
-    if not is_torch_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_open_llama"] = [
-        "OpenLlamaForCausalLM",
-        "OpenLlamaModel",
-        "OpenLlamaPreTrainedModel",
-        "OpenLlamaForSequenceClassification",
-    ]
-
+from ....utils import _LazyModule
+from ....utils.import_utils import define_import_structure
 
 
 if TYPE_CHECKING:
-    from .configuration_open_llama import OpenLlamaConfig
-
-    try:
-        if not is_sentencepiece_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from transformers import LlamaTokenizer
-
-    try:
-        if not is_tokenizers_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from transformers import LlamaTokenizerFast
-
-    try:
-        if not is_torch_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_open_llama import (
-            OpenLlamaForCausalLM,
-            OpenLlamaForSequenceClassification,
-            OpenLlamaModel,
-            OpenLlamaPreTrainedModel,
-        )
-
+    from .configuration_open_llama import *
+    from .modeling_open_llama import *
 else:
     import sys
 
-    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
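`open_llama` shows the biggest simplification: three separate dependency guards (sentencepiece, tokenizers, torch) disappear, because lazy resolution makes per-dependency branching in the `__init__.py` unnecessary. The guards themselves are thin probes; in simplified form, an `is_*_available()` check amounts to the sketch below (an assumption-laden simplification: the real helpers in `transformers.utils.import_utils` also consult package metadata and cache their results):

```python
# Simplified sketch of a soft-dependency probe; the real
# is_torch_available()-style helpers do more (metadata checks, caching).
import importlib.util


def is_available(package_name: str) -> bool:
    # find_spec returns None when the package cannot be found.
    return importlib.util.find_spec(package_name) is not None


print(is_available("torch"), is_available("surely_not_installed"))
```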
src/transformers/models/deprecated/open_llama/configuration_open_llama.py
@@ -164,3 +164,6 @@ class OpenLlamaConfig(PretrainedConfig):
             )
         if rope_scaling_factor is None or not isinstance(rope_scaling_factor, float) or rope_scaling_factor <= 1.0:
             raise ValueError(f"`rope_scaling`'s factor field must be a float > 1, got {rope_scaling_factor}")
+
+
+__all__ = ["OpenLlamaConfig"]
src/transformers/models/deprecated/open_llama/modeling_open_llama.py
@@ -970,3 +970,6 @@ class OpenLlamaForSequenceClassification(OpenLlamaPreTrainedModel):
             hidden_states=transformer_outputs.hidden_states,
             attentions=transformer_outputs.attentions,
         )
+
+
+__all__ = ["OpenLlamaPreTrainedModel", "OpenLlamaModel", "OpenLlamaForCausalLM", "OpenLlamaForSequenceClassification"]
src/transformers/models/deprecated/qdqbert/__init__.py
@@ -13,57 +13,15 @@
 # limitations under the License.
 from typing import TYPE_CHECKING
 
-from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available
-
-
-_import_structure = {"configuration_qdqbert": ["QDQBertConfig"]}
-
-
-try:
-    if not is_torch_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    pass
-else:
-    _import_structure["modeling_qdqbert"] = [
-        "QDQBertForMaskedLM",
-        "QDQBertForMultipleChoice",
-        "QDQBertForNextSentencePrediction",
-        "QDQBertForQuestionAnswering",
-        "QDQBertForSequenceClassification",
-        "QDQBertForTokenClassification",
-        "QDQBertLayer",
-        "QDQBertLMHeadModel",
-        "QDQBertModel",
-        "QDQBertPreTrainedModel",
-        "load_tf_weights_in_qdqbert",
-    ]
-
+from ....utils import _LazyModule
+from ....utils.import_utils import define_import_structure
 
 
 if TYPE_CHECKING:
-    from .configuration_qdqbert import QDQBertConfig
-
-    try:
-        if not is_torch_available():
-            raise OptionalDependencyNotAvailable()
-    except OptionalDependencyNotAvailable:
-        pass
-    else:
-        from .modeling_qdqbert import (
-            QDQBertForMaskedLM,
-            QDQBertForMultipleChoice,
-            QDQBertForNextSentencePrediction,
-            QDQBertForQuestionAnswering,
-            QDQBertForSequenceClassification,
-            QDQBertForTokenClassification,
-            QDQBertLayer,
-            QDQBertLMHeadModel,
-            QDQBertModel,
-            QDQBertPreTrainedModel,
-            load_tf_weights_in_qdqbert,
-        )
-
+    from .configuration_qdqbert import *
+    from .modeling_qdqbert import *
 else:
     import sys
 
-    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
+    _file = globals()["__file__"]
+    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
src/transformers/models/deprecated/qdqbert/configuration_qdqbert.py
@@ -118,3 +118,6 @@ class QDQBertConfig(PretrainedConfig):
         self.type_vocab_size = type_vocab_size
         self.layer_norm_eps = layer_norm_eps
         self.use_cache = use_cache
+
+
+__all__ = ["QDQBertConfig"]
src/transformers/models/deprecated/qdqbert/modeling_qdqbert.py
@@ -1732,3 +1732,18 @@ class QDQBertForQuestionAnswering(QDQBertPreTrainedModel):
             hidden_states=outputs.hidden_states,
             attentions=outputs.attentions,
         )
+
+
+__all__ = [
+    "QDQBertForMaskedLM",
+    "QDQBertForMultipleChoice",
+    "QDQBertForNextSentencePrediction",
+    "QDQBertForQuestionAnswering",
+    "QDQBertForSequenceClassification",
+    "QDQBertForTokenClassification",
+    "QDQBertLayer",
+    "QDQBertLMHeadModel",
+    "QDQBertModel",
+    "QDQBertPreTrainedModel",
+    "load_tf_weights_in_qdqbert",
+]
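Net effect for callers: every name listed in a module's `__all__` remains importable from its package exactly as before, only resolved lazily. For example, assuming a transformers build that includes this change:

```python
# Configuration classes carry no torch dependency, so this import should
# work even without torch installed; a model class from the same package
# would only raise once actually instantiated or called.
from transformers.models.deprecated.qdqbert import QDQBertConfig

config = QDQBertConfig()
print(config.model_type)  # "qdqbert"
```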
Some files were not shown because too many files have changed in this diff.