* Added pytests for pvt-v2, all passed
* Added pvt_v2 to docs/source/en/model_doc
* Ran fix-copies and fixup. All checks passed
* Added additional ReLU for linear attention mode (see the illustrative sketch after this list)
* pvt_v2_b2_linear converted and working
* copied models/pvt to adapt to pvt_v2
* First commit of pvt_v2
* PvT-v2 now works in AutoModel (see the usage sketch after this list)
* Reverted batch eval changes for PR
* Expanded type support for Pvt-v2 config
* Fixed config docstring. Added channels property
* Fixed model names in tests
* Fixed config backbone compat. Added additional type support for image size in config
* Fixed config backbone compat
* Allowed for batching of eval metrics
* Set key and value layers to use separate linear modules. Fixed pruning function
* Set AvgPool to 7
* Fixed issue in init
* Successful conversion of pretrained weights for PVT-v2
* Successful conversion of pretrained weights for PVT-v2 models
* Updated index.md
* Ran fix-copies
* Fixed PvtV2Backbone tests
* Added TFRegNet to OBJECTS_TO_IGNORE in check_docstrings.py
* Fixed backbone stuff and fixed tests: all passing
* Ran make fixup
* Made modifications for code checks
* Remove ONNX config from configuration_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Use explicit image size dict in test_modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Make image_size optional in test_modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Remove _ntuple use in modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Remove reference to fp16_enabled
* Model modules now take config as first argument even when not used
* Replaced abbreviations for "SR" and "AP" with explicit "spatial_reduction" and "average_pooling"
* All LayerNorm now instantiates with config.layer_norm_eps
* Added docstring for depth-wise conv layer
* PvtV2Config now only takes Union[int, Tuple[int, int]] for image size
* Refactored PVTv2 in prep for gradient checkpointing
* Gradient checkpointing ready to test
* Removed override of _set_gradient_checkpointing
* Cleaned out old code
* Applied code fixup
* Began debug of pvt_v2 tests
* Leave handling of num_labels to base pretrained config class
* Deactivated gradient checkpointing tests until it is fixed
* Removed PvtV2ImageProcessor which duped PvtImageProcessor
* Fixed issue from rebase
* Set tests for gradient checkpointing to skip those using reentrant since it isn't supported (see the sketch after this list)
* Changed model name in docs
* Removed duplicate PvtV2Backbone
* Work around type switching issue in tests
* Fix model name in config comments
* Update docs/source/en/model_doc/pvt_v2.md
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Changed name of variable from 'attn_reduce' to 'sr_type'
* Changed from using 'sr_type' to 'linear_attention' for clarity
* Update src/transformers/models/pvt_v2/modeling_pvt_v2.py
Removed old code
* Fixed Class names to be more descriptive
* Update src/transformers/models/pvt_v2/modeling_pvt_v2.py
Removed outdated code
* Moved paper abstract to single line in pvt_v2.md
* Added usage tips to pvt_v2.md
* Simplified module inits by passing layer_idx
* Fixed typing for hidden_act in PvtV2Config
* Removed unused import
* Add pvt_v2 to docs/source/en/_toctree.yml
* Updated documentation in docs/source/en/model_doc/pvt_v2.md to be more comprehensive.
* Update src/transformers/models/pvt_v2/modeling_pvt_v2.py
Move function parameters to single line
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/pvt_v2/modeling_pvt_v2.py
Update year of copyright to 2024
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/pvt_v2/modeling_pvt_v2.py
Make code more explicit
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Updated sr_ratio to be more explicit spatial_reduction_ratio
* Removed excess type hints in modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Move params to single line in modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Removed needless comment in modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update copyright date in pvt_v2.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Moved params to single line in modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Updated copyright date in configuration_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Cleaned comments in modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Renamed spatial_reduction Conv2D operation
* Revert "Update src/transformers/models/pvt_v2/modeling_pvt_v2.py
"
This reverts commit
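
The "PvT-v2 now works in AutoModel" and config backbone-compat items above correspond roughly to the usage sketched below. This is a hedged example, not code from this PR: the `OpenGVLab/pvt_v2_b0` checkpoint id and the random 224x224 input are assumptions made for illustration.

```python
# Hedged usage sketch for the AutoModel/backbone integration noted above.
# The checkpoint id "OpenGVLab/pvt_v2_b0" and the dummy 224x224 input are assumptions.
import torch
from transformers import AutoBackbone, AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("OpenGVLab/pvt_v2_b0")
pixel_values = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image batch
with torch.no_grad():
    logits = model(pixel_values=pixel_values).logits  # shape: (1, num_labels)

# The backbone-compat fixes are what let PVT-v2 feed dense-prediction heads.
backbone = AutoBackbone.from_pretrained("OpenGVLab/pvt_v2_b0")
feature_maps = backbone(pixel_values).feature_maps  # tuple of per-stage feature maps
```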
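Several items cover the linear attention mode (pvt_v2_b2_linear): keys and values are average-pooled to a fixed 7x7 grid before attention, with an additional ReLU as noted in the commits. The module below is an illustrative sketch only, with hypothetical names; it is not the code merged by this PR.

```python
import torch
from torch import nn


class LinearSpatialReduction(nn.Module):
    """Illustrative sketch of linear-attention spatial reduction (hypothetical names):
    pool the token grid to 7x7 so the key/value length is constant at any resolution."""

    def __init__(self, hidden_size: int, layer_norm_eps: float = 1e-6):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(7)  # "Set AvgPool to 7"
        self.reduction = nn.Conv2d(hidden_size, hidden_size, kernel_size=1)
        self.norm = nn.LayerNorm(hidden_size, eps=layer_norm_eps)  # eps from config.layer_norm_eps
        self.act = nn.ReLU()  # the "additional ReLU" for linear attention mode

    def forward(self, hidden_states: torch.Tensor, height: int, width: int) -> torch.Tensor:
        batch_size, _, channels = hidden_states.shape
        # (batch, seq_len, channels) -> (batch, channels, height, width)
        spatial = hidden_states.transpose(1, 2).reshape(batch_size, channels, height, width)
        spatial = self.reduction(self.pool(spatial))
        # back to (batch, 49, channels) for the key/value projections
        reduced = spatial.reshape(batch_size, channels, -1).transpose(1, 2)
        return self.act(self.norm(reduced))
```

With this reduction, attention is computed between the full query sequence and only 49 key/value tokens, which is what makes the variant "linear" in the input size.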
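The gradient-checkpointing items (refactor, removing the `_set_gradient_checkpointing` override, skipping reentrant tests) suggest the non-reentrant path is the supported one. The snippet below is a hedged example of enabling it through the standard `transformers` API; the checkpoint id and batch shapes are again assumptions.

```python
import torch
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("OpenGVLab/pvt_v2_b0")
# Reentrant checkpointing is skipped in the tests above, so opt into the non-reentrant variant.
model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False})
model.train()

pixel_values = torch.randn(2, 3, 224, 224)
labels = torch.randint(0, model.config.num_labels, (2,))
loss = model(pixel_values=pixel_values, labels=labels).loss
loss.backward()  # stage activations are recomputed here instead of being stored in memory
```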