Mirror of https://github.com/huggingface/transformers.git (synced 2025-07-06 06:10:04 +06:00)
* Added pytests for pvt-v2, all passed
* Added pvt_v2 to docs/source/en/model_doc
* Ran fix-copies and fixup. All checks passed
* Added additional ReLU for linear attention mode
* pvt_v2_b2_linear converted and working
* copied models/pvt to adapt to pvt_v2
* First commit of pvt_v2
* PvT-v2 now works in AutoModel
* Reverted batch eval changes for PR
* Expanded type support for Pvt-v2 config
* Fixed config docstring. Added channels property
* Fixed model names in tests
* Fixed config backbone compat. Added additional type support for image size in config
* Fixed config backbone compat
* Allowed for batching of eval metrics
* Set key and value layers to use separate linear modules. Fixed pruning function
* Set AvgPool to 7
* Fixed issue in init
* Successful conversion of pretrained weights for PVT-v2
* Successful conversion of pretrained weights for PVT-v2 models
* Updated index.md
* Ran fix-copies
* Fixed PvtV2Backbone tests
* Added TFRegNet to OBJECTS_TO_IGNORE in check_docstrings.py
* Fixed backbone stuff and fixed tests: all passing
* Ran make fixup
* Made modifications for code checks
* Remove ONNX config from configuration_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Use explicit image size dict in test_modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Make image_size optional in test_modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Remove _ntuple use in modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Remove reference to fp16_enabled
* Model modules now take config as first argument even when not used
* Replaced abbreviations "SR" and "AP" with explicit "spatial_reduction" and "average_pooling"
* All LayerNorm now instantiates with config.layer_norm_eps
* Added docstring for depth-wise conv layer
* PvtV2Config now only takes Union[int, Tuple[int, int]] for image size
* Refactored PVTv2 in prep for gradient checkpointing
* Gradient checkpointing ready to test
* Removed override of _set_gradient_checkpointing
* Cleaned out old code
* Applied code fixup
* Began debug of pvt_v2 tests
* Leave handling of num_labels to base pretrained config class
* Deactivated gradient checkpointing tests until it is fixed
* Removed PvtV2ImageProcessor which duped PvtImageProcessor
* Fixed issue from rebase
* Set tests for gradient checkpointing to skip those using reentrant since it isn't supported
* Changed model name in docs
* Removed duplicate PvtV2Backbone
* Work around type switching issue in tests
* Fix model name in config comments
* Update docs/source/en/model_doc/pvt_v2.md
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Changed name of variable from 'attn_reduce' to 'sr_type'
* Changed from using 'sr_type' to 'linear_attention' for clarity
* Update src/transformers/models/pvt_v2/modeling_pvt_v2.py
Removed old code
* Fixed Class names to be more descriptive
* Update src/transformers/models/pvt_v2/modeling_pvt_v2.py
Removed outdated code
* Moved paper abstract to single line in pvt_v2.md
* Added usage tips to pvt_v2.md
* Simplified module inits by passing layer_idx
* Fixed typing for hidden_act in PvtV2Config
* Removed unused import
* Add pvt_v2 to docs/source/en/_toctree.yml
* Updated documentation in docs/source/en/model_doc/pvt_v2.md to be more comprehensive.
* Update src/transformers/models/pvt_v2/modeling_pvt_v2.py
Move function parameters to single line
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/pvt_v2/modeling_pvt_v2.py
Update year of copyright to 2024
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/pvt_v2/modeling_pvt_v2.py
Make code more explicit
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Updated sr_ratio to be more explicit spatial_reduction_ratio
* Removed excess type hints in modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Move params to single line in modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Removed needless comment in modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update copyright date in pvt_v2.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Moved params to single line in modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Updated copyright date in configuration_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Cleaned comments in modeling_pvt_v2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Renamed spatial_reduction Conv2D operation
* Revert "Update src/transformers/models/pvt_v2/modeling_pvt_v2.py"
This reverts commit
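Several commits above touch the linear attention mode ("Set AvgPool to 7", "Added additional ReLU for linear attention mode", "Updated sr_ratio to be more explicit spatial_reduction_ratio"). A minimal standalone sketch of what such a spatial-reduction step looks like, assuming a simplified module in plain PyTorch (the class and field names here are hypothetical, not the library's own):

```python
import torch
import torch.nn as nn

class LinearSpatialReduction(nn.Module):
    """Sketch of linear-attention spatial reduction: keys/values are
    average-pooled to a fixed 7x7 grid (independent of input resolution),
    projected, normalized, and passed through the extra ReLU the
    commits mention. Names are illustrative, not the library API."""

    def __init__(self, dim, layer_norm_eps=1e-6):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(7)          # fixed 7x7 output
        self.spatial_reduction = nn.Conv2d(dim, dim, kernel_size=1)
        self.layer_norm = nn.LayerNorm(dim, eps=layer_norm_eps)
        self.act = nn.ReLU()

    def forward(self, hidden_states):                # (B, C, H, W)
        x = self.spatial_reduction(self.pool(hidden_states))  # (B, C, 7, 7)
        x = x.flatten(2).transpose(1, 2)             # (B, 49, C) token layout
        return self.act(self.layer_norm(x))

feat = torch.randn(2, 64, 56, 56)                    # stage-1-like feature map
reduced = LinearSpatialReduction(64)(feat)
print(reduced.shape)                                 # torch.Size([2, 49, 64])
```

Because the pooled grid is always 7x7, attention over the reduced keys/values costs O(49·N) rather than O(N²) in the sequence length, which is the point of the linear mode.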