Adding grounding dino (#26087)
* Fixed typo when converting weights to GroundingDINO vision backbone
* Final modifications on modeling
* Removed unnecessary class
* Fixed convert structure
* Added image processing
* make fixup partially completed
* Now text_backbone_config has its own class
* Modified convert script
* Removed unnecessary config attribute
* Added new function to generate sub sentence mask
* Renamed parameters with gamma in the name as it's currently not allowed
* Removed tokenization and image_processing scripts since we'll map from existing models
* Fixed some issues with configuration
* Just some modifications on conversion script
* Other modifications
* Copied deformable detr
* First commit
* Added bert to model
* Bert validated
* Created Text and Fusion layers for Encoder
* Adapted Encoder layer
* Fixed typos
* Adjusted Encoder
* Converted encoder to hf
* Modified Decoder Layer
* Modified main decoder class
* Removed copy comments
* Fixed forward from GroundingDINOModel and GroundingDINODecoder
* Added all necessary layers, configurations and forward logic up to GroundingDINOModel
* Added all layers to conversion
* Fixed outputs for GroundingDINOModel and GroundingDINOForObjectDetection
* Fixed mask input to encoders and fixed nn.MultiheadAttention batch first and attn output
* Fixed forward from GroundingDINOTextEnhancerLayer
* Fixed output bug with GroundingDINODeformableLayer
* Fixed bugs that prevent GroundingDINOForObjectDetection to run forward method
* Fixed attentions to be passed correctly
* Passing temperature arg when creating Sine position embedding
* Removed copy comments
* Added temperature argument for position embedding
* Fixed typo when converting weights to GroundingDINO vision backbone
* Final modifications on modeling
* Removed unnecessary class
* Fixed convert structure
* Added image processing
* make fixup partially completed
* Now text_backbone_config has its own class
* Modified convert script
* Removed unnecessary config attribute
* Added new function to generate sub sentence mask
* Renamed parameters with gamma in the name as it's currently not allowed
* Removed tokenization and image_processing scripts since we'll map from existing models
* Fixed some issues with configuration
* Just some modifications on conversion script
* Other modifications
* Fix style
* Improve fixup
* Improve conversion script
* Improve conversion script
* Add GroundingDINOProcessor
* More improvements
* Return token type ids
* something
* Fix more tests
* More improvements
* More cleanup
* More improvements
* Fixed tests, improved modeling and config
* More improvements and fixing tests
* Improved tests and modeling
* Improved tests and added image processor
* Improved tests inference
* More improvements
* More test improvements
* Fixed last test
* Improved docstrings and comments
* Fix style
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Better naming
* Better naming
* Added Copied statement
* Added Copied statement
* Moved param init from GroundingDINOBiMultiHeadAttention
* Better naming
* Fixing clamp style
* Better naming
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/configuration_grounding_dino.py Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/convert_grounding_dino_to_hf.py Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Improving conversion script
* Improved config
* Improved naming
* Improved naming again
* Improved grounding-dino.md
* Moved grounding dino to multimodal
* Update src/transformers/models/grounding_dino/convert_grounding_dino_to_hf.py Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Fixed docstrings and style
* Fix docstrings
* Remove timm attributes
* Reorder imports
* More improvements
* Add Grounding DINO to pipeline
* Remove model from check_repo
* Added grounded post_process to GroundingDINOProcessor
* Fixed style
* Fixed GroundingDINOTextPrenetConfig docstrings
* Aligned inputs.keys() when both image and text are passed with model_input_names
* Added tests for GroundingDINOImageProcessor and GroundingDINOProcessor
* Testing post_process_grounded_object_detection from GroundingDINOProcessor at test_inference_object_detection_head
* Fixed order
* Marked test with require_torch
* Temporarily changed repo_id
* More improvements
* Fix style
* Final improvements
* Improve annotators
* Fix style
* Add is_torch_available
* Remove type hints
* vocab_tokens as one liner
* Removed print statements
* Renamed GroundingDINOTextPrenetConfig to GroundingDINOTextConfig
* remove unnecessary comments
* Removed unnecessary tests on conversion script
* Renamed GroundingDINO to camel case GroundingDino
* Fixed GroundingDinoProcessor docstrings
* loading MSDA kernels in the modeling file
* Fix copies
* Replace nn.multiheadattention
* Replace nn.multiheadattention
* Fixed inputs for GroundingDinoMultiheadAttention & order of modules
* Fixed processing to avoid messing with inputs
* Added more tips for GroundingDino
* Make style
* Changing name to align with SAM
* Replace final nn.multiheadattention
* Fix model tests
* Update year, remove GenerationTesterMixin
* Address comments
* Address more comments
* Rename TextPrenet to TextModel
* Rename hidden_states
* Address more comments
* Address more comments
* Address comment
* Address more comments
* Address merge
* Address comment
* Address comment
* Address comment
* Make style
* Added layer norm eps to layer norms
* Address more comments
* More fixes
* Fixed equivalence
* Make fixup
* Remove print statements
* Address comments
* Address comments
* Address comments
* Address comments
* Address comments
* Address comments
* Add comment
* Address comment
* Remove overwriting of test
* Fix bbox_embed
* Improve decoder_bbox_embed_share
* Simplify outputs
* Updated post_process_grounded_object_detection
* Renamed sources to feature_maps
* Improved tests for Grounding Dino ImageProcessor and Processor
* Fixed test requirements and imports
* Fixed image_processing
* Fixed processor tests
* Fixed imports for image processing tests
* Fix copies
* Updated modeling
* Fix style
* Moved functions to correct position
* Fixed copy issues
* Update src/transformers/models/deformable_detr/modeling_deformable_detr.py Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>
* Keeping consistency custom cuda kernels for MSDA
* Make GroundingDinoProcessor logic clearer
* Updated Grounding DINO checkpoints
* Changed tests to correct structure
* Updated gpu-cpu equivalence test
* fix copies
* Update src/transformers/models/grounding_dino/processing_grounding_dino.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/processing_grounding_dino.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/configuration_grounding_dino.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Fixed errors and style
* Fix copies
* Removed inheritance from PreTrainedModel from GroundingDinoTextModel
* Fixed GroundingDinoTextModel
* Fixed type of default backbone config
* Fixed missing methods for GroundingDinoTextModel and Added timm support for GroundingDinoConvEncoder
* Addressed comments
* Addressed batched image processing tests
* Addressed zero shot test comment
* Addressed tip comment
* Removed GroundingDinoTextModel from check_repo
* Removed inplace masking
* Addressed comments
* Addressed comments
* Addressed comments
* Fix copies
* Fixing timm test
* Fixed batching equivalence test
* Update docs/source/en/model_doc/grounding-dino.md Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>
* Update docs/source/en/model_doc/grounding-dino.md Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>
* Update docs/source/en/model_doc/grounding-dino.md Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>
* Addressed more comments
* Added a new comment
* Reduced image size
* Addressed more comments
* Nits
* Nits
* Changed the way text_config is initialized
* Update src/transformers/models/grounding_dino/processing_grounding_dino.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

---------

Co-authored-by: Niels <niels.rogge1@gmail.com>
Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Eduardo Pacheco <eduardo.pacheco@limehome.com>
Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>
parent a5e5c92aea
commit b752ad3019
@ -389,6 +389,7 @@ Current number of checkpoints: ** (from BigCode) released with the paper [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[Grounding DINO](https://huggingface.co/docs/transformers/main/model_doc/grounding-dino)** (from Institute for AI, Tsinghua-Bosch Joint Center for ML, Tsinghua University, IDEA Research and others) released with the paper [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang.
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (from Allegro.pl, AGH University of Science and Technology) released with the paper [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
@ -385,6 +385,7 @@ Aktuelle Anzahl der Checkpoints: ** (from BigCode) released with the paper [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[Grounding DINO](https://huggingface.co/docs/transformers/main/model_doc/grounding-dino)** (from Institute for AI, Tsinghua-Bosch Joint Center for ML, Tsinghua University, IDEA Research and others) released with the paper [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang.
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (from Allegro.pl, AGH University of Science and Technology) released with the paper [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
@ -362,6 +362,7 @@ Número actual de puntos de control: ** (from BigCode) released with the paper [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[Grounding DINO](https://huggingface.co/docs/transformers/main/model_doc/grounding-dino)** (from Institute for AI, Tsinghua-Bosch Joint Center for ML, Tsinghua University, IDEA Research and others) released with the paper [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang.
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (from Allegro.pl, AGH University of Science and Technology) released with the paper [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
@ -383,6 +383,7 @@ Nombre actuel de points de contrôle : ** (de BigCode) a été publié dans l'article [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) par Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** a été publié dans le dépôt [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) par Toshiyuki Sakamoto (tanreinama).
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (de Microsoft) a été publié dans l'article [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) par Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[Grounding DINO](https://huggingface.co/docs/transformers/main/model_doc/grounding-dino)** (de Institute for AI, Tsinghua-Bosch Joint Center for ML, Tsinghua University, IDEA Research and others) publié dans l'article [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) par Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang.
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (de l'UCSD, NVIDIA) a été publié dans l'article [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) par Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (d'Allegro.pl, AGH University of Science and Technology) a été publié dans l'article [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) par Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (de Facebook) a été publié dans l'article [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) par Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
@ -336,6 +336,7 @@ conda install conda-forge::transformers
1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (BigCode से) Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra. द्वाराअनुसंधान पत्र [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) के साथ जारी किया गया
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[Grounding DINO](https://huggingface.co/docs/transformers/main/model_doc/grounding-dino)** (Institute for AI, Tsinghua-Bosch Joint Center for ML, Tsinghua University, IDEA Research and others से) Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang. द्वाराअनुसंधान पत्र [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) के साथ जारी किया गया
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (UCSD, NVIDIA से) साथ में कागज [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) जियारुई जू, शालिनी डी मेलो, सिफ़ी लियू, वोनमिन बायन, थॉमस ब्रेउएल, जान कौट्ज़, ज़ियाओलोंग वांग द्वारा।
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (Allegro.pl, AGH University of Science and Technology से) Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik. द्वाराअनुसंधान पत्र [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) के साथ जारी किया गया
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (फेसबुक से) साथ में पेपर [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) वेई-निंग सू, बेंजामिन बोल्टे, याओ-हंग ह्यूबर्ट त्साई, कुशाल लखोटिया, रुस्लान सालाखुतदीनोव, अब्देलरहमान मोहम्मद द्वारा।
@ -396,6 +396,7 @@ Flax、PyTorch、TensorFlowをcondaでインストールする方法は、それ
1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (BigCode から) Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra. から公開された研究論文 [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988)
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) 坂本俊之(tanreinama)からリリースされました.
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (Microsoft から) Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu から公開された研究論文: [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234).
1. **[Grounding DINO](https://huggingface.co/docs/transformers/main/model_doc/grounding-dino)** (Institute for AI, Tsinghua-Bosch Joint Center for ML, Tsinghua University, IDEA Research and others から) Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang. から公開された研究論文 [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499)
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (UCSD, NVIDIA から) Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang から公開された研究論文: [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094)
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (Allegro.pl, AGH University of Science and Technology から) Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik. から公開された研究論文 [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf)
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (Facebook から) Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed から公開された研究論文: [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447)
@ -311,6 +311,7 @@ Flax, PyTorch, TensorFlow 설치 페이지에서 이들을 conda로 설치하는
1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (BigCode 에서 제공)은 Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.의 [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988)논문과 함께 발표했습니다.
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu 의 [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) 논문과 함께 발표했습니다.
1. **[Grounding DINO](https://huggingface.co/docs/transformers/main/model_doc/grounding-dino)** (Institute for AI, Tsinghua-Bosch Joint Center for ML, Tsinghua University, IDEA Research and others 에서 제공)은 Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang.의 [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499)논문과 함께 발표했습니다.
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (UCSD, NVIDIA 에서) Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang 의 [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) 논문과 함께 발표했습니다.
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (Allegro.pl, AGH University of Science and Technology 에서 제공)은 Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.의 [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf)논문과 함께 발표했습니다.
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (Facebook 에서) Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 의 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 논문과 함께 발표했습니다.
@ -394,6 +394,7 @@ Número atual de pontos de verificação: ** (from BigCode) released with the paper [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[Grounding DINO](https://huggingface.co/docs/transformers/main/model_doc/grounding-dino)** (from Institute for AI, Tsinghua-Bosch Joint Center for ML, Tsinghua University, IDEA Research and others) released with the paper [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang.
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (from Allegro.pl, AGH University of Science and Technology) released with the paper [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
@ -384,6 +384,7 @@ conda install conda-forge::transformers
1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (from BigCode) released with the paper [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[Grounding DINO](https://huggingface.co/docs/transformers/main/model_doc/grounding-dino)** (from Institute for AI, Tsinghua-Bosch Joint Center for ML, Tsinghua University, IDEA Research and others) released with the paper [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang.
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (from Allegro.pl, AGH University of Science and Technology) released with the paper [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
@ -386,6 +386,7 @@ Flax, PyTorch లేదా TensorFlow యొక్క ఇన్స్టా
1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (from BigCode) released with the paper [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[Grounding DINO](https://huggingface.co/docs/transformers/main/model_doc/grounding-dino)** (from Institute for AI, Tsinghua-Bosch Joint Center for ML, Tsinghua University, IDEA Research and others) released with the paper [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang.
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (from Allegro.pl, AGH University of Science and Technology) released with the paper [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
@ -385,6 +385,7 @@ Số lượng điểm kiểm tra hiện tại: ** (từ BigCode) được phát hành với bài báo [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (từ Microsoft) được phát hành với bài báo [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[Grounding DINO](https://huggingface.co/docs/transformers/main/model_doc/grounding-dino)** (từ Institute for AI, Tsinghua-Bosch Joint Center for ML, Tsinghua University, IDEA Research and others) được phát hành với bài báo [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang.
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (từ UCSD, NVIDIA) được phát hành với bài báo [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (từ Allegro.pl, AGH University of Science and Technology) được phát hành với bài báo [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (từ Facebook) được phát hành với bài báo [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
@ -335,6 +335,7 @@ conda install conda-forge::transformers
1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (来自 BigCode) 伴随论文 [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) 由 Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra 发布。
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by 坂本俊之(tanreinama).
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[Grounding DINO](https://huggingface.co/docs/transformers/main/model_doc/grounding-dino)** (来自 Institute for AI, Tsinghua-Bosch Joint Center for ML, Tsinghua University, IDEA Research and others) 伴随论文 [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) 由 Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang 发布。
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (来自 UCSD, NVIDIA) 伴随论文 [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) 由 Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang 发布。
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (来自 Allegro.pl, AGH University of Science and Technology) 伴随论文 [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) 由 Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik 发布。
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (来自 Facebook) 伴随论文 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 由 Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 发布。
@ -347,6 +347,7 @@ conda install conda-forge::transformers
1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (from BigCode) released with the paper [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by 坂本俊之(tanreinama).
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[Grounding DINO](https://huggingface.co/docs/transformers/main/model_doc/grounding-dino)** (from Institute for AI, Tsinghua-Bosch Joint Center for ML, Tsinghua University, IDEA Research and others) released with the paper [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang.
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (from Allegro.pl, AGH University of Science and Technology) released with the paper [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
@ -732,6 +732,8 @@
    title: FLAVA
  - local: model_doc/git
    title: GIT
  - local: model_doc/grounding-dino
    title: Grounding DINO
  - local: model_doc/groupvit
    title: GroupViT
  - local: model_doc/idefics
@ -154,6 +154,7 @@ Flax), PyTorch, and/or TensorFlow.
| [GPTBigCode](model_doc/gpt_bigcode) | ✅ | ❌ | ❌ |
| [GPTSAN-japanese](model_doc/gptsan-japanese) | ✅ | ❌ | ❌ |
| [Graphormer](model_doc/graphormer) | ✅ | ❌ | ❌ |
| [Grounding DINO](model_doc/grounding-dino) | ✅ | ❌ | ❌ |
| [GroupViT](model_doc/groupvit) | ✅ | ✅ | ❌ |
| [HerBERT](model_doc/herbert) | ✅ | ✅ | ✅ |
| [Hubert](model_doc/hubert) | ✅ | ✅ | ❌ |
docs/source/en/model_doc/grounding-dino.md (new file, 97 lines added)
@ -0,0 +1,97 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
# Grounding DINO
## Overview

The Grounding DINO model was proposed in [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang. Grounding DINO extends a closed-set object detection model with a text encoder, enabling open-set object detection. The model achieves remarkable results, such as 52.5 AP on COCO zero-shot.

The abstract from the paper is the following:

*In this paper, we present an open-set object detector, called Grounding DINO, by marrying Transformer-based detector DINO with grounded pre-training, which can detect arbitrary objects with human inputs such as category names or referring expressions. The key solution of open-set object detection is introducing language to a closed-set detector for open-set concept generalization. To effectively fuse language and vision modalities, we conceptually divide a closed-set detector into three phases and propose a tight fusion solution, which includes a feature enhancer, a language-guided query selection, and a cross-modality decoder for cross-modality fusion. While previous works mainly evaluate open-set object detection on novel categories, we propose to also perform evaluations on referring expression comprehension for objects specified with attributes. Grounding DINO performs remarkably well on all three settings, including benchmarks on COCO, LVIS, ODinW, and RefCOCO/+/g. Grounding DINO achieves a 52.5 AP on the COCO detection zero-shot transfer benchmark, i.e., without any training data from COCO. It sets a new record on the ODinW zero-shot benchmark with a mean 26.1 AP.*

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/grouding_dino_architecture.png"
alt="drawing" width="600"/>

<small> Grounding DINO overview. Taken from the <a href="https://arxiv.org/abs/2303.05499">original paper</a>. </small>

This model was contributed by [EduardoPacheco](https://huggingface.co/EduardoPacheco) and [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/IDEA-Research/GroundingDINO).
## Usage tips

- One can use [`GroundingDinoProcessor`] to prepare image-text pairs for the model.
- To separate classes in the text use a period, e.g. "a cat. a dog." (a short sketch of assembling such a prompt follows these tips).
- When using multiple classes (e.g. `"a cat. a dog."`), use `post_process_grounded_object_detection` from [`GroundingDinoProcessor`] to post-process the outputs, since the labels returned from `post_process_object_detection` only represent the indices from the model dimension where prob > threshold.
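As referenced in the tips above, the text prompt can be assembled from a plain list of class names. A minimal sketch (the `labels` list and the lowercasing are illustrative assumptions, not part of the library API):

```python
# Illustrative helper: build a Grounding DINO text prompt from class names,
# separating the classes with periods as described in the tips above.
labels = ["a cat", "a remote control"]  # assumed example classes
text = ". ".join(label.strip().lower() for label in labels) + "."
print(text)  # -> "a cat. a remote control."
```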
Here's how to use the model for zero-shot object detection:
```python
import requests

import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

model_id = "IDEA-Research/grounding-dino-tiny"
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id).to(device)

image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)
# Check for cats and remote controls
text = "a cat. a remote control."

inputs = processor(images=image, text=text, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)

results = processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    box_threshold=0.4,
    text_threshold=0.3,
    target_sizes=[image.size[::-1]]
)
```
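For reference, here is a hedged sketch of reading the post-processed results above; it assumes the usual list-of-dicts layout returned by the processor's post-processing, with `scores`, `labels` and `boxes` entries per image:

```python
# Continuing the snippet above: `results` is assumed to hold one dict per image,
# with tensor scores/boxes and the matched text labels.
result = results[0]
for score, label, box in zip(result["scores"], result["labels"], result["boxes"]):
    box = [round(coord, 2) for coord in box.tolist()]
    print(f"Detected {label} with confidence {round(score.item(), 3)} at location {box}")
```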
## GroundingDinoImageProcessor

[[autodoc]] GroundingDinoImageProcessor
    - preprocess
    - post_process_object_detection

## GroundingDinoProcessor

[[autodoc]] GroundingDinoProcessor
    - post_process_grounded_object_detection

## GroundingDinoConfig

[[autodoc]] GroundingDinoConfig

## GroundingDinoModel

[[autodoc]] GroundingDinoModel
    - forward

## GroundingDinoForObjectDetection

[[autodoc]] GroundingDinoForObjectDetection
    - forward
@ -488,9 +488,11 @@ _import_structure = {
"GPTSanJapaneseConfig",
"GPTSanJapaneseTokenizer",
],
"models.graphormer": [
"GRAPHORMER_PRETRAINED_CONFIG_ARCHIVE_MAP",
"GraphormerConfig",
"models.graphormer": ["GRAPHORMER_PRETRAINED_CONFIG_ARCHIVE_MAP", "GraphormerConfig"],
"models.grounding_dino": [
"GROUNDING_DINO_PRETRAINED_CONFIG_ARCHIVE_MAP",
"GroundingDinoConfig",
"GroundingDinoProcessor",
],
"models.groupvit": [
"GROUPVIT_PRETRAINED_CONFIG_ARCHIVE_MAP",
@ -1331,6 +1333,7 @@ else:
_import_structure["models.flava"].extend(["FlavaFeatureExtractor", "FlavaImageProcessor", "FlavaProcessor"])
_import_structure["models.fuyu"].extend(["FuyuImageProcessor", "FuyuProcessor"])
_import_structure["models.glpn"].extend(["GLPNFeatureExtractor", "GLPNImageProcessor"])
_import_structure["models.grounding_dino"].extend(["GroundingDinoImageProcessor"])
_import_structure["models.idefics"].extend(["IdeficsImageProcessor"])
_import_structure["models.imagegpt"].extend(["ImageGPTFeatureExtractor", "ImageGPTImageProcessor"])
_import_structure["models.layoutlmv2"].extend(["LayoutLMv2FeatureExtractor", "LayoutLMv2ImageProcessor"])
@ -2391,6 +2394,14 @@ else:
|
||||
"GraphormerPreTrainedModel",
|
||||
]
|
||||
)
|
||||
_import_structure["models.grounding_dino"].extend(
|
||||
[
|
||||
"GROUNDING_DINO_PRETRAINED_MODEL_ARCHIVE_LIST",
|
||||
"GroundingDinoForObjectDetection",
|
||||
"GroundingDinoModel",
|
||||
"GroundingDinoPreTrainedModel",
|
||||
]
|
||||
)
|
||||
_import_structure["models.groupvit"].extend(
|
||||
[
|
||||
"GROUPVIT_PRETRAINED_MODEL_ARCHIVE_LIST",
|
||||
@ -5380,9 +5391,11 @@ if TYPE_CHECKING:
|
||||
GPTSanJapaneseConfig,
|
||||
GPTSanJapaneseTokenizer,
|
||||
)
|
||||
from .models.graphormer import (
|
||||
GRAPHORMER_PRETRAINED_CONFIG_ARCHIVE_MAP,
|
||||
GraphormerConfig,
|
||||
from .models.graphormer import GRAPHORMER_PRETRAINED_CONFIG_ARCHIVE_MAP, GraphormerConfig
|
||||
from .models.grounding_dino import (
|
||||
GROUNDING_DINO_PRETRAINED_CONFIG_ARCHIVE_MAP,
|
||||
GroundingDinoConfig,
|
||||
GroundingDinoProcessor,
|
||||
)
|
||||
from .models.groupvit import (
|
||||
GROUPVIT_PRETRAINED_CONFIG_ARCHIVE_MAP,
|
||||
@ -6195,6 +6208,7 @@ if TYPE_CHECKING:
|
||||
)
|
||||
from .models.fuyu import FuyuImageProcessor, FuyuProcessor
|
||||
from .models.glpn import GLPNFeatureExtractor, GLPNImageProcessor
|
||||
from .models.grounding_dino import GroundingDinoImageProcessor
|
||||
from .models.idefics import IdeficsImageProcessor
|
||||
from .models.imagegpt import ImageGPTFeatureExtractor, ImageGPTImageProcessor
|
||||
from .models.layoutlmv2 import (
|
||||
@ -7112,6 +7126,12 @@ if TYPE_CHECKING:
|
||||
GraphormerModel,
|
||||
GraphormerPreTrainedModel,
|
||||
)
|
||||
from .models.grounding_dino import (
|
||||
GROUNDING_DINO_PRETRAINED_MODEL_ARCHIVE_LIST,
|
||||
GroundingDinoForObjectDetection,
|
||||
GroundingDinoModel,
|
||||
GroundingDinoPreTrainedModel,
|
||||
)
|
||||
from .models.groupvit import (
|
||||
GROUPVIT_PRETRAINED_MODEL_ARCHIVE_LIST,
|
||||
GroupViTModel,
|
||||
|
@ -105,6 +105,7 @@ from . import (
|
||||
gptj,
|
||||
gptsan_japanese,
|
||||
graphormer,
|
||||
grounding_dino,
|
||||
groupvit,
|
||||
herbert,
|
||||
hubert,
|
||||
|
@ -120,6 +120,7 @@ CONFIG_MAPPING_NAMES = OrderedDict(
|
||||
("gptj", "GPTJConfig"),
|
||||
("gptsan-japanese", "GPTSanJapaneseConfig"),
|
||||
("graphormer", "GraphormerConfig"),
|
||||
("grounding-dino", "GroundingDinoConfig"),
|
||||
("groupvit", "GroupViTConfig"),
|
||||
("hubert", "HubertConfig"),
|
||||
("ibert", "IBertConfig"),
|
||||
@ -383,6 +384,7 @@ MODEL_NAMES_MAPPING = OrderedDict(
|
||||
("gptj", "GPT-J"),
|
||||
("gptsan-japanese", "GPTSAN-japanese"),
|
||||
("graphormer", "Graphormer"),
|
||||
("grounding-dino", "Grounding DINO"),
|
||||
("groupvit", "GroupViT"),
|
||||
("herbert", "HerBERT"),
|
||||
("hubert", "Hubert"),
|
||||
|
@ -68,6 +68,7 @@ IMAGE_PROCESSOR_MAPPING_NAMES = OrderedDict(
|
||||
("fuyu", "FuyuImageProcessor"),
|
||||
("git", "CLIPImageProcessor"),
|
||||
("glpn", "GLPNImageProcessor"),
|
||||
("grounding-dino", "GroundingDinoImageProcessor"),
|
||||
("groupvit", "CLIPImageProcessor"),
|
||||
("idefics", "IdeficsImageProcessor"),
|
||||
("imagegpt", "ImageGPTImageProcessor"),
|
||||
|
@ -115,6 +115,7 @@ MODEL_MAPPING_NAMES = OrderedDict(
|
||||
("gptj", "GPTJModel"),
|
||||
("gptsan-japanese", "GPTSanJapaneseForConditionalGeneration"),
|
||||
("graphormer", "GraphormerModel"),
|
||||
("grounding-dino", "GroundingDinoModel"),
|
||||
("groupvit", "GroupViTModel"),
|
||||
("hubert", "HubertModel"),
|
||||
("ibert", "IBertModel"),
|
||||
@ -753,6 +754,7 @@ MODEL_FOR_OBJECT_DETECTION_MAPPING_NAMES = OrderedDict(
|
||||
MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING_NAMES = OrderedDict(
|
||||
[
|
||||
# Model for Zero Shot Object Detection mapping
|
||||
("grounding-dino", "GroundingDinoForObjectDetection"),
|
||||
("owlv2", "Owlv2ForObjectDetection"),
|
||||
("owlvit", "OwlViTForObjectDetection"),
|
||||
]
|
||||
|
@ -195,6 +195,7 @@ else:
|
||||
("gpt_neox_japanese", ("GPTNeoXJapaneseTokenizer", None)),
|
||||
("gptj", ("GPT2Tokenizer", "GPT2TokenizerFast" if is_tokenizers_available() else None)),
|
||||
("gptsan-japanese", ("GPTSanJapaneseTokenizer", None)),
|
||||
("grounding-dino", ("BertTokenizer", "BertTokenizerFast" if is_tokenizers_available() else None)),
|
||||
("groupvit", ("CLIPTokenizer", "CLIPTokenizerFast" if is_tokenizers_available() else None)),
|
||||
("herbert", ("HerbertTokenizer", "HerbertTokenizerFast" if is_tokenizers_available() else None)),
|
||||
("hubert", ("Wav2Vec2CTCTokenizer", None)),
|
||||
|
@ -710,13 +710,14 @@ class DeformableDetrMultiscaleDeformableAttention(nn.Module):
|
||||
batch_size, num_queries, self.n_heads, self.n_levels, self.n_points
|
||||
)
|
||||
# batch_size, num_queries, n_heads, n_levels, n_points, 2
|
||||
if reference_points.shape[-1] == 2:
|
||||
num_coordinates = reference_points.shape[-1]
|
||||
if num_coordinates == 2:
|
||||
offset_normalizer = torch.stack([spatial_shapes[..., 1], spatial_shapes[..., 0]], -1)
|
||||
sampling_locations = (
|
||||
reference_points[:, :, None, :, None, :]
|
||||
+ sampling_offsets / offset_normalizer[None, None, None, :, None, :]
|
||||
)
|
||||
elif reference_points.shape[-1] == 4:
|
||||
elif num_coordinates == 4:
|
||||
sampling_locations = (
|
||||
reference_points[:, :, None, :, None, :2]
|
||||
+ sampling_offsets / self.n_points * reference_points[:, :, None, :, None, 2:] * 0.5
|
||||
@ -1401,14 +1402,15 @@ class DeformableDetrDecoder(DeformableDetrPreTrainedModel):
|
||||
intermediate_reference_points = ()
|
||||
|
||||
for idx, decoder_layer in enumerate(self.layers):
|
||||
if reference_points.shape[-1] == 4:
|
||||
num_coordinates = reference_points.shape[-1]
|
||||
if num_coordinates == 4:
|
||||
reference_points_input = (
|
||||
reference_points[:, :, None] * torch.cat([valid_ratios, valid_ratios], -1)[:, None]
|
||||
)
|
||||
else:
|
||||
if reference_points.shape[-1] != 2:
|
||||
raise ValueError("Reference points' last dimension must be of size 2")
|
||||
elif reference_points.shape[-1] == 2:
|
||||
reference_points_input = reference_points[:, :, None] * valid_ratios[:, None]
|
||||
else:
|
||||
raise ValueError("Reference points' last dimension must be of size 2")
|
||||
|
||||
if output_hidden_states:
|
||||
all_hidden_states += (hidden_states,)
|
||||
@ -1442,17 +1444,18 @@ class DeformableDetrDecoder(DeformableDetrPreTrainedModel):
|
||||
# hack implementation for iterative bounding box refinement
|
||||
if self.bbox_embed is not None:
|
||||
tmp = self.bbox_embed[idx](hidden_states)
|
||||
if reference_points.shape[-1] == 4:
|
||||
num_coordinates = reference_points.shape[-1]
|
||||
if num_coordinates == 4:
|
||||
new_reference_points = tmp + inverse_sigmoid(reference_points)
|
||||
new_reference_points = new_reference_points.sigmoid()
|
||||
else:
|
||||
if reference_points.shape[-1] != 2:
|
||||
raise ValueError(
|
||||
f"Reference points' last dimension must be of size 2, but is {reference_points.shape[-1]}"
|
||||
)
|
||||
elif num_coordinates == 2:
|
||||
new_reference_points = tmp
|
||||
new_reference_points[..., :2] = tmp[..., :2] + inverse_sigmoid(reference_points)
|
||||
new_reference_points = new_reference_points.sigmoid()
|
||||
else:
|
||||
raise ValueError(
|
||||
f"Last dim of reference_points must be 2 or 4, but got {reference_points.shape[-1]}"
|
||||
)
|
||||
reference_points = new_reference_points.detach()
|
||||
|
||||
intermediate += (hidden_states,)
|
||||
|
@ -682,13 +682,14 @@ class DetaMultiscaleDeformableAttention(nn.Module):
|
||||
batch_size, num_queries, self.n_heads, self.n_levels, self.n_points
|
||||
)
|
||||
# batch_size, num_queries, n_heads, n_levels, n_points, 2
|
||||
if reference_points.shape[-1] == 2:
|
||||
num_coordinates = reference_points.shape[-1]
|
||||
if num_coordinates == 2:
|
||||
offset_normalizer = torch.stack([spatial_shapes[..., 1], spatial_shapes[..., 0]], -1)
|
||||
sampling_locations = (
|
||||
reference_points[:, :, None, :, None, :]
|
||||
+ sampling_offsets / offset_normalizer[None, None, None, :, None, :]
|
||||
)
|
||||
elif reference_points.shape[-1] == 4:
|
||||
elif num_coordinates == 4:
|
||||
sampling_locations = (
|
||||
reference_points[:, :, None, :, None, :2]
|
||||
+ sampling_offsets / self.n_points * reference_points[:, :, None, :, None, 2:] * 0.5
|
||||
|
src/transformers/models/grounding_dino/__init__.py (new file, 81 lines)
@ -0,0 +1,81 @@
|
||||
# Copyright 2024 The HuggingFace Team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from typing import TYPE_CHECKING
|
||||
|
||||
from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available, is_vision_available
|
||||
|
||||
|
||||
_import_structure = {
|
||||
"configuration_grounding_dino": [
|
||||
"GROUNDING_DINO_PRETRAINED_CONFIG_ARCHIVE_MAP",
|
||||
"GroundingDinoConfig",
|
||||
],
|
||||
"processing_grounding_dino": ["GroundingDinoProcessor"],
|
||||
}
|
||||
|
||||
try:
|
||||
if not is_torch_available():
|
||||
raise OptionalDependencyNotAvailable()
|
||||
except OptionalDependencyNotAvailable:
|
||||
pass
|
||||
else:
|
||||
_import_structure["modeling_grounding_dino"] = [
|
||||
"GROUNDING_DINO_PRETRAINED_MODEL_ARCHIVE_LIST",
|
||||
"GroundingDinoForObjectDetection",
|
||||
"GroundingDinoModel",
|
||||
"GroundingDinoPreTrainedModel",
|
||||
]
|
||||
|
||||
try:
|
||||
if not is_vision_available():
|
||||
raise OptionalDependencyNotAvailable()
|
||||
except OptionalDependencyNotAvailable:
|
||||
pass
|
||||
else:
|
||||
_import_structure["image_processing_grounding_dino"] = ["GroundingDinoImageProcessor"]
|
||||
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from .configuration_grounding_dino import (
|
||||
GROUNDING_DINO_PRETRAINED_CONFIG_ARCHIVE_MAP,
|
||||
GroundingDinoConfig,
|
||||
)
|
||||
from .processing_grounding_dino import GroundingDinoProcessor
|
||||
|
||||
try:
|
||||
if not is_torch_available():
|
||||
raise OptionalDependencyNotAvailable()
|
||||
except OptionalDependencyNotAvailable:
|
||||
pass
|
||||
else:
|
||||
from .modeling_grounding_dino import (
|
||||
GROUNDING_DINO_PRETRAINED_MODEL_ARCHIVE_LIST,
|
||||
GroundingDinoForObjectDetection,
|
||||
GroundingDinoModel,
|
||||
GroundingDinoPreTrainedModel,
|
||||
)
|
||||
|
||||
try:
|
||||
if not is_vision_available():
|
||||
raise OptionalDependencyNotAvailable()
|
||||
except OptionalDependencyNotAvailable:
|
||||
pass
|
||||
else:
|
||||
from .image_processing_grounding_dino import GroundingDinoImageProcessor
|
||||
|
||||
else:
|
||||
import sys
|
||||
|
||||
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
|
@ -0,0 +1,301 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
""" Grounding DINO model configuration"""
|
||||
|
||||
from ...configuration_utils import PretrainedConfig
|
||||
from ...utils import logging
|
||||
from ..auto import CONFIG_MAPPING
|
||||
|
||||
|
||||
logger = logging.get_logger(__name__)
|
||||
|
||||
GROUNDING_DINO_PRETRAINED_CONFIG_ARCHIVE_MAP = {
|
||||
"IDEA-Research/grounding-dino-tiny": "https://huggingface.co/IDEA-Research/grounding-dino-tiny/resolve/main/config.json",
|
||||
}
|
||||
|
||||
|
||||
class GroundingDinoConfig(PretrainedConfig):
|
||||
r"""
|
||||
This is the configuration class to store the configuration of a [`GroundingDinoModel`]. It is used to instantiate a
|
||||
Grounding DINO model according to the specified arguments, defining the model architecture. Instantiating a
|
||||
configuration with the defaults will yield a similar configuration to that of the Grounding DINO
|
||||
[IDEA-Research/grounding-dino-tiny](https://huggingface.co/IDEA-Research/grounding-dino-tiny) architecture.
|
||||
|
||||
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
||||
documentation from [`PretrainedConfig`] for more information.
|
||||
|
||||
Args:
|
||||
backbone_config (`PretrainedConfig` or `dict`, *optional*, defaults to `ResNetConfig()`):
|
||||
The configuration of the backbone model.
|
||||
backbone (`str`, *optional*):
|
||||
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
|
||||
will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone`
|
||||
is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights.
|
||||
use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
|
||||
Whether to use pretrained weights for the backbone.
|
||||
use_timm_backbone (`bool`, *optional*, defaults to `False`):
|
||||
Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers
|
||||
library.
|
||||
backbone_kwargs (`dict`, *optional*):
|
||||
Keyword arguments to be passed to AutoBackbone when loading from a checkpoint
|
||||
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
|
||||
text_config (`Union[AutoConfig, dict]`, *optional*, defaults to `BertConfig`):
|
||||
The config object or dictionary of the text backbone.
|
||||
num_queries (`int`, *optional*, defaults to 900):
|
||||
Number of object queries, i.e. detection slots. This is the maximal number of objects
|
||||
[`GroundingDinoModel`] can detect in a single image.
|
||||
encoder_layers (`int`, *optional*, defaults to 6):
|
||||
Number of encoder layers.
|
||||
encoder_ffn_dim (`int`, *optional*, defaults to 2048):
|
||||
Dimension of the "intermediate" (often named feed-forward) layer in decoder.
|
||||
encoder_attention_heads (`int`, *optional*, defaults to 8):
|
||||
Number of attention heads for each attention layer in the Transformer encoder.
|
||||
decoder_layers (`int`, *optional*, defaults to 6):
|
||||
Number of decoder layers.
|
||||
decoder_ffn_dim (`int`, *optional*, defaults to 2048):
|
||||
Dimension of the "intermediate" (often named feed-forward) layer in decoder.
|
||||
decoder_attention_heads (`int`, *optional*, defaults to 8):
|
||||
Number of attention heads for each attention layer in the Transformer decoder.
|
||||
is_encoder_decoder (`bool`, *optional*, defaults to `True`):
|
||||
Whether the model is used as an encoder/decoder or not.
|
||||
activation_function (`str` or `function`, *optional*, defaults to `"relu"`):
|
||||
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
|
||||
`"relu"`, `"silu"` and `"gelu_new"` are supported.
|
||||
d_model (`int`, *optional*, defaults to 256):
|
||||
Dimension of the layers.
|
||||
dropout (`float`, *optional*, defaults to 0.1):
|
||||
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
||||
attention_dropout (`float`, *optional*, defaults to 0.0):
|
||||
The dropout ratio for the attention probabilities.
|
||||
activation_dropout (`float`, *optional*, defaults to 0.0):
|
||||
The dropout ratio for activations inside the fully connected layer.
|
||||
auxiliary_loss (`bool`, *optional*, defaults to `False`):
|
||||
Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
|
||||
position_embedding_type (`str`, *optional*, defaults to `"sine"`):
|
||||
Type of position embeddings to be used on top of the image features. One of `"sine"` or `"learned"`.
|
||||
num_feature_levels (`int`, *optional*, defaults to 4):
|
||||
The number of input feature levels.
|
||||
encoder_n_points (`int`, *optional*, defaults to 4):
|
||||
The number of sampled keys in each feature level for each attention head in the encoder.
|
||||
decoder_n_points (`int`, *optional*, defaults to 4):
|
||||
The number of sampled keys in each feature level for each attention head in the decoder.
|
||||
two_stage (`bool`, *optional*, defaults to `True`):
|
||||
Whether to apply a two-stage deformable DETR, where the region proposals are also generated by a variant of
|
||||
Grounding DINO, which are further fed into the decoder for iterative bounding box refinement.
|
||||
class_cost (`float`, *optional*, defaults to 1.0):
|
||||
Relative weight of the classification error in the Hungarian matching cost.
|
||||
bbox_cost (`float`, *optional*, defaults to 5.0):
|
||||
Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost.
|
||||
giou_cost (`float`, *optional*, defaults to 2.0):
|
||||
Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost.
|
||||
bbox_loss_coefficient (`float`, *optional*, defaults to 5.0):
|
||||
Relative weight of the L1 bounding box loss in the object detection loss.
|
||||
giou_loss_coefficient (`float`, *optional*, defaults to 2.0):
|
||||
Relative weight of the generalized IoU loss in the object detection loss.
|
||||
focal_alpha (`float`, *optional*, defaults to 0.25):
|
||||
Alpha parameter in the focal loss.
|
||||
disable_custom_kernels (`bool`, *optional*, defaults to `False`):
|
||||
Disable the use of custom CUDA and CPU kernels. This option is necessary for the ONNX export, as custom
|
||||
kernels are not supported by PyTorch ONNX export.
|
||||
max_text_len (`int`, *optional*, defaults to 256):
|
||||
The maximum length of the text input.
|
||||
text_enhancer_dropout (`float`, *optional*, defaults to 0.0):
|
||||
The dropout ratio for the text enhancer.
|
||||
fusion_droppath (`float`, *optional*, defaults to 0.1):
|
||||
The droppath ratio for the fusion module.
|
||||
fusion_dropout (`float`, *optional*, defaults to 0.0):
|
||||
The dropout ratio for the fusion module.
|
||||
embedding_init_target (`bool`, *optional*, defaults to `True`):
|
||||
Whether to initialize the target with Embedding weights.
|
||||
query_dim (`int`, *optional*, defaults to 4):
|
||||
The dimension of the query vector.
|
||||
decoder_bbox_embed_share (`bool`, *optional*, defaults to `True`):
|
||||
Whether to share the bbox regression head for all decoder layers.
|
||||
two_stage_bbox_embed_share (`bool`, *optional*, defaults to `False`):
|
||||
Whether to share the bbox embedding between the two-stage bbox generator and the region proposal
|
||||
generation.
|
||||
positional_embedding_temperature (`float`, *optional*, defaults to 20):
|
||||
The temperature of the sine positional embedding used together with the vision backbone.
|
||||
init_std (`float`, *optional*, defaults to 0.02):
|
||||
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
||||
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
|
||||
The epsilon used by the layer normalization layers.
|
||||
|
||||
Examples:
|
||||
|
||||
```python
|
||||
>>> from transformers import GroundingDinoConfig, GroundingDinoModel
|
||||
|
||||
>>> # Initializing a Grounding DINO IDEA-Research/grounding-dino-tiny style configuration
|
||||
>>> configuration = GroundingDinoConfig()
|
||||
|
||||
>>> # Initializing a model (with random weights) from the IDEA-Research/grounding-dino-tiny style configuration
|
||||
>>> model = GroundingDinoModel(configuration)
|
||||
|
||||
>>> # Accessing the model configuration
|
||||
>>> configuration = model.config
|
||||
```"""
|
||||
|
||||
model_type = "grounding-dino"
|
||||
attribute_map = {
|
||||
"hidden_size": "d_model",
|
||||
"num_attention_heads": "encoder_attention_heads",
|
||||
}
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
backbone_config=None,
|
||||
backbone=None,
|
||||
use_pretrained_backbone=False,
|
||||
use_timm_backbone=False,
|
||||
backbone_kwargs=None,
|
||||
text_config=None,
|
||||
num_queries=900,
|
||||
encoder_layers=6,
|
||||
encoder_ffn_dim=2048,
|
||||
encoder_attention_heads=8,
|
||||
decoder_layers=6,
|
||||
decoder_ffn_dim=2048,
|
||||
decoder_attention_heads=8,
|
||||
is_encoder_decoder=True,
|
||||
activation_function="relu",
|
||||
d_model=256,
|
||||
dropout=0.1,
|
||||
attention_dropout=0.0,
|
||||
activation_dropout=0.0,
|
||||
auxiliary_loss=False,
|
||||
position_embedding_type="sine",
|
||||
num_feature_levels=4,
|
||||
encoder_n_points=4,
|
||||
decoder_n_points=4,
|
||||
two_stage=True,
|
||||
class_cost=1.0,
|
||||
bbox_cost=5.0,
|
||||
giou_cost=2.0,
|
||||
bbox_loss_coefficient=5.0,
|
||||
giou_loss_coefficient=2.0,
|
||||
focal_alpha=0.25,
|
||||
disable_custom_kernels=False,
|
||||
# other parameters
|
||||
max_text_len=256,
|
||||
text_enhancer_dropout=0.0,
|
||||
fusion_droppath=0.1,
|
||||
fusion_dropout=0.0,
|
||||
embedding_init_target=True,
|
||||
query_dim=4,
|
||||
decoder_bbox_embed_share=True,
|
||||
two_stage_bbox_embed_share=False,
|
||||
positional_embedding_temperature=20,
|
||||
init_std=0.02,
|
||||
layer_norm_eps=1e-5,
|
||||
**kwargs,
|
||||
):
|
||||
if not use_timm_backbone and use_pretrained_backbone:
|
||||
raise ValueError(
|
||||
"Loading pretrained backbone weights from the transformers library is not supported yet. `use_timm_backbone` must be set to `True` when `use_pretrained_backbone=True`"
|
||||
)
|
||||
|
||||
if backbone_config is not None and backbone is not None:
|
||||
raise ValueError("You can't specify both `backbone` and `backbone_config`.")
|
||||
|
||||
if backbone_config is None and backbone is None:
|
||||
logger.info("`backbone_config` is `None`. Initializing the config with the default `Swin` backbone.")
|
||||
backbone_config = CONFIG_MAPPING["swin"](
|
||||
window_size=7,
|
||||
image_size=224,
|
||||
embed_dim=96,
|
||||
depths=[2, 2, 6, 2],
|
||||
num_heads=[3, 6, 12, 24],
|
||||
out_indices=[2, 3, 4],
|
||||
)
|
||||
elif isinstance(backbone_config, dict):
|
||||
backbone_model_type = backbone_config.pop("model_type")
|
||||
config_class = CONFIG_MAPPING[backbone_model_type]
|
||||
backbone_config = config_class.from_dict(backbone_config)
|
||||
|
||||
if backbone_kwargs is not None and backbone_kwargs and backbone_config is not None:
|
||||
raise ValueError("You can't specify both `backbone_kwargs` and `backbone_config`.")
|
||||
|
||||
if text_config is None:
|
||||
text_config = {}
|
||||
logger.info("text_config is None. Initializing the text config with default values (`BertConfig`).")
|
||||
|
||||
self.backbone_config = backbone_config
|
||||
self.backbone = backbone
|
||||
self.use_pretrained_backbone = use_pretrained_backbone
|
||||
self.use_timm_backbone = use_timm_backbone
|
||||
self.backbone_kwargs = backbone_kwargs
|
||||
self.num_queries = num_queries
|
||||
self.d_model = d_model
|
||||
self.encoder_ffn_dim = encoder_ffn_dim
|
||||
self.encoder_layers = encoder_layers
|
||||
self.encoder_attention_heads = encoder_attention_heads
|
||||
self.decoder_ffn_dim = decoder_ffn_dim
|
||||
self.decoder_layers = decoder_layers
|
||||
self.decoder_attention_heads = decoder_attention_heads
|
||||
self.dropout = dropout
|
||||
self.attention_dropout = attention_dropout
|
||||
self.activation_dropout = activation_dropout
|
||||
self.activation_function = activation_function
|
||||
self.auxiliary_loss = auxiliary_loss
|
||||
self.position_embedding_type = position_embedding_type
|
||||
# deformable attributes
|
||||
self.num_feature_levels = num_feature_levels
|
||||
self.encoder_n_points = encoder_n_points
|
||||
self.decoder_n_points = decoder_n_points
|
||||
self.two_stage = two_stage
|
||||
# Hungarian matcher
|
||||
self.class_cost = class_cost
|
||||
self.bbox_cost = bbox_cost
|
||||
self.giou_cost = giou_cost
|
||||
# Loss coefficients
|
||||
self.bbox_loss_coefficient = bbox_loss_coefficient
|
||||
self.giou_loss_coefficient = giou_loss_coefficient
|
||||
self.focal_alpha = focal_alpha
|
||||
self.disable_custom_kernels = disable_custom_kernels
|
||||
# Text backbone
|
||||
if isinstance(text_config, dict):
|
||||
text_config["model_type"] = text_config["model_type"] if "model_type" in text_config else "bert"
|
||||
text_config = CONFIG_MAPPING[text_config["model_type"]](**text_config)
|
||||
elif text_config is None:
|
||||
text_config = CONFIG_MAPPING["bert"]()
|
||||
|
||||
self.text_config = text_config
|
||||
self.max_text_len = max_text_len
|
||||
|
||||
# Text Enhancer
|
||||
self.text_enhancer_dropout = text_enhancer_dropout
|
||||
# Fusion
|
||||
self.fusion_droppath = fusion_droppath
|
||||
self.fusion_dropout = fusion_dropout
|
||||
# Others
|
||||
self.embedding_init_target = embedding_init_target
|
||||
self.query_dim = query_dim
|
||||
self.decoder_bbox_embed_share = decoder_bbox_embed_share
|
||||
self.two_stage_bbox_embed_share = two_stage_bbox_embed_share
|
||||
if two_stage_bbox_embed_share and not decoder_bbox_embed_share:
|
||||
raise ValueError("If two_stage_bbox_embed_share is True, decoder_bbox_embed_share must be True.")
|
||||
self.positional_embedding_temperature = positional_embedding_temperature
|
||||
self.init_std = init_std
|
||||
self.layer_norm_eps = layer_norm_eps
|
||||
super().__init__(is_encoder_decoder=is_encoder_decoder, **kwargs)
|
||||
|
||||
@property
|
||||
def num_attention_heads(self) -> int:
|
||||
return self.encoder_attention_heads
|
||||
|
||||
@property
|
||||
def hidden_size(self) -> int:
|
||||
return self.d_model
|
@ -0,0 +1,491 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2024 The HuggingFace Inc. team.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""Convert Grounding DINO checkpoints from the original repository.
|
||||
|
||||
URL: https://github.com/IDEA-Research/GroundingDINO"""
|
||||
|
||||
import argparse
|
||||
|
||||
import requests
|
||||
import torch
|
||||
from PIL import Image
|
||||
from torchvision import transforms as T
|
||||
|
||||
from transformers import (
|
||||
AutoTokenizer,
|
||||
GroundingDinoConfig,
|
||||
GroundingDinoForObjectDetection,
|
||||
GroundingDinoImageProcessor,
|
||||
GroundingDinoProcessor,
|
||||
SwinConfig,
|
||||
)
|
||||
|
||||
|
||||
IMAGENET_MEAN = [0.485, 0.456, 0.406]
|
||||
IMAGENET_STD = [0.229, 0.224, 0.225]
|
||||
|
||||
|
||||
def get_grounding_dino_config(model_name):
|
||||
if "tiny" in model_name:
|
||||
window_size = 7
|
||||
embed_dim = 96
|
||||
depths = (2, 2, 6, 2)
|
||||
num_heads = (3, 6, 12, 24)
|
||||
image_size = 224
|
||||
elif "base" in model_name:
|
||||
window_size = 12
|
||||
embed_dim = 128
|
||||
depths = (2, 2, 18, 2)
|
||||
num_heads = (4, 8, 16, 32)
|
||||
image_size = 384
|
||||
else:
|
||||
raise ValueError("Model not supported, only supports base and large variants")
|
||||
|
||||
backbone_config = SwinConfig(
|
||||
window_size=window_size,
|
||||
image_size=image_size,
|
||||
embed_dim=embed_dim,
|
||||
depths=depths,
|
||||
num_heads=num_heads,
|
||||
out_indices=[2, 3, 4],
|
||||
)
|
||||
|
||||
config = GroundingDinoConfig(backbone_config=backbone_config)
|
||||
|
||||
return config
|
||||
|
||||
|
||||
def create_rename_keys(state_dict, config):
|
||||
rename_keys = []
|
||||
# fmt: off
|
||||
########################################## VISION BACKBONE - START
|
||||
# patch embedding layer
|
||||
rename_keys.append(("backbone.0.patch_embed.proj.weight",
|
||||
"model.backbone.conv_encoder.model.embeddings.patch_embeddings.projection.weight"))
|
||||
rename_keys.append(("backbone.0.patch_embed.proj.bias",
|
||||
"model.backbone.conv_encoder.model.embeddings.patch_embeddings.projection.bias"))
|
||||
rename_keys.append(("backbone.0.patch_embed.norm.weight",
|
||||
"model.backbone.conv_encoder.model.embeddings.norm.weight"))
|
||||
rename_keys.append(("backbone.0.patch_embed.norm.bias",
|
||||
"model.backbone.conv_encoder.model.embeddings.norm.bias"))
|
||||
|
||||
for layer, depth in enumerate(config.backbone_config.depths):
|
||||
for block in range(depth):
|
||||
# layernorms
|
||||
rename_keys.append((f"backbone.0.layers.{layer}.blocks.{block}.norm1.weight",
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.layernorm_before.weight"))
|
||||
rename_keys.append((f"backbone.0.layers.{layer}.blocks.{block}.norm1.bias",
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.layernorm_before.bias"))
|
||||
|
||||
rename_keys.append((f"backbone.0.layers.{layer}.blocks.{block}.norm2.weight",
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.layernorm_after.weight"))
|
||||
rename_keys.append((f"backbone.0.layers.{layer}.blocks.{block}.norm2.bias",
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.layernorm_after.bias"))
|
||||
# attention
|
||||
rename_keys.append((f"backbone.0.layers.{layer}.blocks.{block}.attn.relative_position_bias_table",
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.attention.self.relative_position_bias_table"))
|
||||
rename_keys.append((f"backbone.0.layers.{layer}.blocks.{block}.attn.proj.weight",
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.attention.output.dense.weight"))
|
||||
rename_keys.append((f"backbone.0.layers.{layer}.blocks.{block}.attn.proj.bias",
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.attention.output.dense.bias"))
|
||||
# intermediate
|
||||
rename_keys.append((f"backbone.0.layers.{layer}.blocks.{block}.mlp.fc1.weight",
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.intermediate.dense.weight"))
|
||||
rename_keys.append((f"backbone.0.layers.{layer}.blocks.{block}.mlp.fc1.bias",
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.intermediate.dense.bias"))
|
||||
|
||||
# output
|
||||
rename_keys.append((f"backbone.0.layers.{layer}.blocks.{block}.mlp.fc2.weight",
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.output.dense.weight"))
|
||||
rename_keys.append((f"backbone.0.layers.{layer}.blocks.{block}.mlp.fc2.bias",
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.output.dense.bias"))
|
||||
|
||||
# downsample
|
||||
if layer!=len(config.backbone_config.depths)-1:
|
||||
rename_keys.append((f"backbone.0.layers.{layer}.downsample.reduction.weight",
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.downsample.reduction.weight"))
|
||||
rename_keys.append((f"backbone.0.layers.{layer}.downsample.norm.weight",
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.downsample.norm.weight"))
|
||||
rename_keys.append((f"backbone.0.layers.{layer}.downsample.norm.bias",
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.downsample.norm.bias"))
|
||||
|
||||
for out_indice in config.backbone_config.out_indices:
|
||||
# Grounding DINO implementation of out_indices isn't aligned with transformers
|
||||
rename_keys.append((f"backbone.0.norm{out_indice-1}.weight",
|
||||
f"model.backbone.conv_encoder.model.hidden_states_norms.stage{out_indice}.weight"))
|
||||
rename_keys.append((f"backbone.0.norm{out_indice-1}.bias",
|
||||
f"model.backbone.conv_encoder.model.hidden_states_norms.stage{out_indice}.bias"))
|
||||
|
||||
########################################## VISION BACKBONE - END
|
||||
|
||||
########################################## ENCODER - START
|
||||
deformable_key_mappings = {
|
||||
'self_attn.sampling_offsets.weight': 'deformable_layer.self_attn.sampling_offsets.weight',
|
||||
'self_attn.sampling_offsets.bias': 'deformable_layer.self_attn.sampling_offsets.bias',
|
||||
'self_attn.attention_weights.weight': 'deformable_layer.self_attn.attention_weights.weight',
|
||||
'self_attn.attention_weights.bias': 'deformable_layer.self_attn.attention_weights.bias',
|
||||
'self_attn.value_proj.weight': 'deformable_layer.self_attn.value_proj.weight',
|
||||
'self_attn.value_proj.bias': 'deformable_layer.self_attn.value_proj.bias',
|
||||
'self_attn.output_proj.weight': 'deformable_layer.self_attn.output_proj.weight',
|
||||
'self_attn.output_proj.bias': 'deformable_layer.self_attn.output_proj.bias',
|
||||
'norm1.weight': 'deformable_layer.self_attn_layer_norm.weight',
|
||||
'norm1.bias': 'deformable_layer.self_attn_layer_norm.bias',
|
||||
'linear1.weight': 'deformable_layer.fc1.weight',
|
||||
'linear1.bias': 'deformable_layer.fc1.bias',
|
||||
'linear2.weight': 'deformable_layer.fc2.weight',
|
||||
'linear2.bias': 'deformable_layer.fc2.bias',
|
||||
'norm2.weight': 'deformable_layer.final_layer_norm.weight',
|
||||
'norm2.bias': 'deformable_layer.final_layer_norm.bias',
|
||||
}
|
||||
text_enhancer_key_mappings = {
|
||||
'self_attn.in_proj_weight': 'text_enhancer_layer.self_attn.in_proj_weight',
|
||||
'self_attn.in_proj_bias': 'text_enhancer_layer.self_attn.in_proj_bias',
|
||||
'self_attn.out_proj.weight': 'text_enhancer_layer.self_attn.out_proj.weight',
|
||||
'self_attn.out_proj.bias': 'text_enhancer_layer.self_attn.out_proj.bias',
|
||||
'linear1.weight': 'text_enhancer_layer.fc1.weight',
|
||||
'linear1.bias': 'text_enhancer_layer.fc1.bias',
|
||||
'linear2.weight': 'text_enhancer_layer.fc2.weight',
|
||||
'linear2.bias': 'text_enhancer_layer.fc2.bias',
|
||||
'norm1.weight': 'text_enhancer_layer.layer_norm_before.weight',
|
||||
'norm1.bias': 'text_enhancer_layer.layer_norm_before.bias',
|
||||
'norm2.weight': 'text_enhancer_layer.layer_norm_after.weight',
|
||||
'norm2.bias': 'text_enhancer_layer.layer_norm_after.bias',
|
||||
}
|
||||
fusion_key_mappings = {
|
||||
'gamma_v': 'fusion_layer.vision_param',
|
||||
'gamma_l': 'fusion_layer.text_param',
|
||||
'layer_norm_v.weight': 'fusion_layer.layer_norm_vision.weight',
|
||||
'layer_norm_v.bias': 'fusion_layer.layer_norm_vision.bias',
|
||||
'layer_norm_l.weight': 'fusion_layer.layer_norm_text.weight',
|
||||
'layer_norm_l.bias': 'fusion_layer.layer_norm_text.bias',
|
||||
'attn.v_proj.weight': 'fusion_layer.attn.vision_proj.weight',
|
||||
'attn.v_proj.bias': 'fusion_layer.attn.vision_proj.bias',
|
||||
'attn.l_proj.weight': 'fusion_layer.attn.text_proj.weight',
|
||||
'attn.l_proj.bias': 'fusion_layer.attn.text_proj.bias',
|
||||
'attn.values_v_proj.weight': 'fusion_layer.attn.values_vision_proj.weight',
|
||||
'attn.values_v_proj.bias': 'fusion_layer.attn.values_vision_proj.bias',
|
||||
'attn.values_l_proj.weight': 'fusion_layer.attn.values_text_proj.weight',
|
||||
'attn.values_l_proj.bias': 'fusion_layer.attn.values_text_proj.bias',
|
||||
'attn.out_v_proj.weight': 'fusion_layer.attn.out_vision_proj.weight',
|
||||
'attn.out_v_proj.bias': 'fusion_layer.attn.out_vision_proj.bias',
|
||||
'attn.out_l_proj.weight': 'fusion_layer.attn.out_text_proj.weight',
|
||||
'attn.out_l_proj.bias': 'fusion_layer.attn.out_text_proj.bias',
|
||||
}
|
||||
for layer in range(config.encoder_layers):
|
||||
# deformable
|
||||
for src, dest in deformable_key_mappings.items():
|
||||
rename_keys.append((f"transformer.encoder.layers.{layer}.{src}",
|
||||
f"model.encoder.layers.{layer}.{dest}"))
|
||||
# text enhance
|
||||
for src, dest in text_enhancer_key_mappings.items():
|
||||
rename_keys.append((f"transformer.encoder.text_layers.{layer}.{src}",
|
||||
f"model.encoder.layers.{layer}.{dest}"))
|
||||
# fusion layers
|
||||
for src, dest in fusion_key_mappings.items():
|
||||
rename_keys.append((f"transformer.encoder.fusion_layers.{layer}.{src}",
|
||||
f"model.encoder.layers.{layer}.{dest}"))
|
||||
########################################## ENCODER - END
|
||||
|
||||
########################################## DECODER - START
|
||||
key_mappings_decoder = {
|
||||
'cross_attn.sampling_offsets.weight': 'encoder_attn.sampling_offsets.weight',
|
||||
'cross_attn.sampling_offsets.bias': 'encoder_attn.sampling_offsets.bias',
|
||||
'cross_attn.attention_weights.weight': 'encoder_attn.attention_weights.weight',
|
||||
'cross_attn.attention_weights.bias': 'encoder_attn.attention_weights.bias',
|
||||
'cross_attn.value_proj.weight': 'encoder_attn.value_proj.weight',
|
||||
'cross_attn.value_proj.bias': 'encoder_attn.value_proj.bias',
|
||||
'cross_attn.output_proj.weight': 'encoder_attn.output_proj.weight',
|
||||
'cross_attn.output_proj.bias': 'encoder_attn.output_proj.bias',
|
||||
'norm1.weight': 'encoder_attn_layer_norm.weight',
|
||||
'norm1.bias': 'encoder_attn_layer_norm.bias',
|
||||
'ca_text.in_proj_weight': 'encoder_attn_text.in_proj_weight',
|
||||
'ca_text.in_proj_bias': 'encoder_attn_text.in_proj_bias',
|
||||
'ca_text.out_proj.weight': 'encoder_attn_text.out_proj.weight',
|
||||
'ca_text.out_proj.bias': 'encoder_attn_text.out_proj.bias',
|
||||
'catext_norm.weight': 'encoder_attn_text_layer_norm.weight',
|
||||
'catext_norm.bias': 'encoder_attn_text_layer_norm.bias',
|
||||
'self_attn.in_proj_weight': 'self_attn.in_proj_weight',
|
||||
'self_attn.in_proj_bias': 'self_attn.in_proj_bias',
|
||||
'self_attn.out_proj.weight': 'self_attn.out_proj.weight',
|
||||
'self_attn.out_proj.bias': 'self_attn.out_proj.bias',
|
||||
'norm2.weight': 'self_attn_layer_norm.weight',
|
||||
'norm2.bias': 'self_attn_layer_norm.bias',
|
||||
'linear1.weight': 'fc1.weight',
|
||||
'linear1.bias': 'fc1.bias',
|
||||
'linear2.weight': 'fc2.weight',
|
||||
'linear2.bias': 'fc2.bias',
|
||||
'norm3.weight': 'final_layer_norm.weight',
|
||||
'norm3.bias': 'final_layer_norm.bias',
|
||||
}
|
||||
for layer_num in range(config.decoder_layers):
|
||||
source_prefix_decoder = f'transformer.decoder.layers.{layer_num}.'
|
||||
target_prefix_decoder = f'model.decoder.layers.{layer_num}.'
|
||||
|
||||
for source_name, target_name in key_mappings_decoder.items():
|
||||
rename_keys.append((source_prefix_decoder + source_name,
|
||||
target_prefix_decoder + target_name))
|
||||
########################################## DECODER - END
|
||||
|
||||
########################################## Additional - START
|
||||
for layer_name, params in state_dict.items():
|
||||
#### TEXT BACKBONE
|
||||
if "bert" in layer_name:
|
||||
rename_keys.append((layer_name, layer_name.replace("bert", "model.text_backbone")))
|
||||
#### INPUT PROJ - PROJECT OUTPUT FEATURES FROM VISION BACKBONE
|
||||
if "input_proj" in layer_name:
|
||||
rename_keys.append((layer_name, layer_name.replace("input_proj", "model.input_proj_vision")))
|
||||
#### INPUT PROJ - PROJECT OUTPUT FEATURES FROM TEXT BACKBONE
|
||||
if "feat_map" in layer_name:
|
||||
rename_keys.append((layer_name, layer_name.replace("feat_map", "model.text_projection")))
|
||||
#### DECODER REFERENCE POINT HEAD
|
||||
if "transformer.decoder.ref_point_head" in layer_name:
|
||||
rename_keys.append((layer_name, layer_name.replace("transformer.decoder.ref_point_head",
|
||||
"model.decoder.reference_points_head")))
|
||||
#### DECODER BBOX EMBED
|
||||
if "transformer.decoder.bbox_embed" in layer_name:
|
||||
rename_keys.append((layer_name, layer_name.replace("transformer.decoder.bbox_embed",
|
||||
"model.decoder.bbox_embed")))
|
||||
if "transformer.enc_output" in layer_name:
|
||||
rename_keys.append((layer_name, layer_name.replace("transformer", "model")))
|
||||
|
||||
if "transformer.enc_out_bbox_embed" in layer_name:
|
||||
rename_keys.append((layer_name, layer_name.replace("transformer.enc_out_bbox_embed",
|
||||
"model.encoder_output_bbox_embed")))
|
||||
|
||||
rename_keys.append(("transformer.level_embed", "model.level_embed"))
|
||||
rename_keys.append(("transformer.decoder.norm.weight", "model.decoder.layer_norm.weight"))
|
||||
rename_keys.append(("transformer.decoder.norm.bias", "model.decoder.layer_norm.bias"))
|
||||
rename_keys.append(("transformer.tgt_embed.weight", "model.query_position_embeddings.weight"))
|
||||
########################################## Additional - END
|
||||
|
||||
# fmt: on
|
||||
return rename_keys
|
||||
|
||||
|
||||
def rename_key(dct, old, new):
|
||||
val = dct.pop(old)
|
||||
dct[new] = val
|
||||
|
||||
|
||||
# we split up the matrix of each encoder layer into queries, keys and values
|
||||
def read_in_q_k_v_encoder(state_dict, config):
|
||||
########################################## VISION BACKBONE - START
|
||||
embed_dim = config.backbone_config.embed_dim
|
||||
for layer, depth in enumerate(config.backbone_config.depths):
|
||||
hidden_size = embed_dim * 2**layer
|
||||
for block in range(depth):
|
||||
# read in weights + bias of input projection layer (in timm, this is a single matrix + bias)
|
||||
in_proj_weight = state_dict.pop(f"backbone.0.layers.{layer}.blocks.{block}.attn.qkv.weight")
|
||||
in_proj_bias = state_dict.pop(f"backbone.0.layers.{layer}.blocks.{block}.attn.qkv.bias")
|
||||
# next, add query, keys and values (in that order) to the state dict
|
||||
state_dict[
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.attention.self.query.weight"
|
||||
] = in_proj_weight[:hidden_size, :]
|
||||
state_dict[
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.attention.self.query.bias"
|
||||
] = in_proj_bias[:hidden_size]
|
||||
|
||||
state_dict[
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.attention.self.key.weight"
|
||||
] = in_proj_weight[hidden_size : hidden_size * 2, :]
|
||||
state_dict[
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.attention.self.key.bias"
|
||||
] = in_proj_bias[hidden_size : hidden_size * 2]
|
||||
|
||||
state_dict[
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.attention.self.value.weight"
|
||||
] = in_proj_weight[-hidden_size:, :]
|
||||
state_dict[
|
||||
f"model.backbone.conv_encoder.model.encoder.layers.{layer}.blocks.{block}.attention.self.value.bias"
|
||||
] = in_proj_bias[-hidden_size:]
|
||||
########################################## VISION BACKBONE - END
|
||||
|
||||
|
||||
def read_in_q_k_v_text_enhancer(state_dict, config):
|
||||
hidden_size = config.hidden_size
|
||||
for idx in range(config.encoder_layers):
|
||||
# read in weights + bias of input projection layer (in original implementation, this is a single matrix + bias)
|
||||
in_proj_weight = state_dict.pop(f"model.encoder.layers.{idx}.text_enhancer_layer.self_attn.in_proj_weight")
|
||||
in_proj_bias = state_dict.pop(f"model.encoder.layers.{idx}.text_enhancer_layer.self_attn.in_proj_bias")
|
||||
# next, add query, keys and values (in that order) to the state dict
|
||||
state_dict[f"model.encoder.layers.{idx}.text_enhancer_layer.self_attn.query.weight"] = in_proj_weight[
|
||||
:hidden_size, :
|
||||
]
|
||||
state_dict[f"model.encoder.layers.{idx}.text_enhancer_layer.self_attn.query.bias"] = in_proj_bias[:hidden_size]
|
||||
|
||||
state_dict[f"model.encoder.layers.{idx}.text_enhancer_layer.self_attn.key.weight"] = in_proj_weight[
|
||||
hidden_size : hidden_size * 2, :
|
||||
]
|
||||
state_dict[f"model.encoder.layers.{idx}.text_enhancer_layer.self_attn.key.bias"] = in_proj_bias[
|
||||
hidden_size : hidden_size * 2
|
||||
]
|
||||
|
||||
state_dict[f"model.encoder.layers.{idx}.text_enhancer_layer.self_attn.value.weight"] = in_proj_weight[
|
||||
-hidden_size:, :
|
||||
]
|
||||
state_dict[f"model.encoder.layers.{idx}.text_enhancer_layer.self_attn.value.bias"] = in_proj_bias[
|
||||
-hidden_size:
|
||||
]
|
||||
|
||||
|
||||
def read_in_q_k_v_decoder(state_dict, config):
|
||||
hidden_size = config.hidden_size
|
||||
for idx in range(config.decoder_layers):
|
||||
# read in weights + bias of input projection layer (in original implementation, this is a single matrix + bias)
|
||||
in_proj_weight = state_dict.pop(f"model.decoder.layers.{idx}.self_attn.in_proj_weight")
|
||||
in_proj_bias = state_dict.pop(f"model.decoder.layers.{idx}.self_attn.in_proj_bias")
|
||||
# next, add query, keys and values (in that order) to the state dict
|
||||
state_dict[f"model.decoder.layers.{idx}.self_attn.query.weight"] = in_proj_weight[:hidden_size, :]
|
||||
state_dict[f"model.decoder.layers.{idx}.self_attn.query.bias"] = in_proj_bias[:hidden_size]
|
||||
|
||||
state_dict[f"model.decoder.layers.{idx}.self_attn.key.weight"] = in_proj_weight[
|
||||
hidden_size : hidden_size * 2, :
|
||||
]
|
||||
state_dict[f"model.decoder.layers.{idx}.self_attn.key.bias"] = in_proj_bias[hidden_size : hidden_size * 2]
|
||||
|
||||
state_dict[f"model.decoder.layers.{idx}.self_attn.value.weight"] = in_proj_weight[-hidden_size:, :]
|
||||
state_dict[f"model.decoder.layers.{idx}.self_attn.value.bias"] = in_proj_bias[-hidden_size:]
|
||||
|
||||
# read in weights + bias of cross-attention
|
||||
in_proj_weight = state_dict.pop(f"model.decoder.layers.{idx}.encoder_attn_text.in_proj_weight")
|
||||
in_proj_bias = state_dict.pop(f"model.decoder.layers.{idx}.encoder_attn_text.in_proj_bias")
|
||||
|
||||
# next, add query, keys and values (in that order) to the state dict
|
||||
state_dict[f"model.decoder.layers.{idx}.encoder_attn_text.query.weight"] = in_proj_weight[:hidden_size, :]
|
||||
state_dict[f"model.decoder.layers.{idx}.encoder_attn_text.query.bias"] = in_proj_bias[:hidden_size]
|
||||
|
||||
state_dict[f"model.decoder.layers.{idx}.encoder_attn_text.key.weight"] = in_proj_weight[
|
||||
hidden_size : hidden_size * 2, :
|
||||
]
|
||||
state_dict[f"model.decoder.layers.{idx}.encoder_attn_text.key.bias"] = in_proj_bias[
|
||||
hidden_size : hidden_size * 2
|
||||
]
|
||||
|
||||
state_dict[f"model.decoder.layers.{idx}.encoder_attn_text.value.weight"] = in_proj_weight[-hidden_size:, :]
|
||||
state_dict[f"model.decoder.layers.{idx}.encoder_attn_text.value.bias"] = in_proj_bias[-hidden_size:]
|
||||
|
||||
|
||||
# We will verify our results on an image of cute cats
|
||||
def prepare_img():
|
||||
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
|
||||
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
|
||||
return image
|
||||
|
||||
|
||||
def preprocess_caption(caption: str) -> str:
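    # e.g. preprocess_caption("A cat. A remote control") -> "a cat. a remote control."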
|
||||
result = caption.lower().strip()
|
||||
if result.endswith("."):
|
||||
return result
|
||||
return result + "."
|
||||
|
||||
|
||||
@torch.no_grad()
|
||||
def convert_grounding_dino_checkpoint(args):
|
||||
model_name = args.model_name
|
||||
pytorch_dump_folder_path = args.pytorch_dump_folder_path
|
||||
push_to_hub = args.push_to_hub
|
||||
verify_logits = args.verify_logits
|
||||
|
||||
checkpoint_mapping = {
|
||||
"grounding-dino-tiny": "https://huggingface.co/ShilongLiu/GroundingDino/resolve/main/groundingdino_swint_ogc.pth",
|
||||
"grounding-dino-base": "https://huggingface.co/ShilongLiu/GroundingDino/resolve/main/groundingdino_swinb_cogcoor.pth",
|
||||
}
|
||||
# Define default GroundingDino configuration
|
||||
config = get_grounding_dino_config(model_name)
|
||||
|
||||
# Load original checkpoint
|
||||
checkpoint_url = checkpoint_mapping[model_name]
|
||||
original_state_dict = torch.hub.load_state_dict_from_url(checkpoint_url, map_location="cpu")["model"]
|
||||
original_state_dict = {k.replace("module.", ""): v for k, v in original_state_dict.items()}
|
||||
|
||||
for name, param in original_state_dict.items():
|
||||
print(name, param.shape)
|
||||
|
||||
# Rename keys
|
||||
new_state_dict = original_state_dict.copy()
|
||||
rename_keys = create_rename_keys(original_state_dict, config)
|
||||
|
||||
for src, dest in rename_keys:
|
||||
rename_key(new_state_dict, src, dest)
|
||||
read_in_q_k_v_encoder(new_state_dict, config)
|
||||
read_in_q_k_v_text_enhancer(new_state_dict, config)
|
||||
read_in_q_k_v_decoder(new_state_dict, config)
|
||||
|
||||
# Load HF model
|
||||
model = GroundingDinoForObjectDetection(config)
|
||||
model.eval()
|
||||
missing_keys, unexpected_keys = model.load_state_dict(new_state_dict, strict=False)
|
||||
print("Missing keys:", missing_keys)
|
||||
print("Unexpected keys:", unexpected_keys)
|
||||
|
||||
# Load and process test image
|
||||
image = prepare_img()
|
||||
transforms = T.Compose([T.Resize(size=800, max_size=1333), T.ToTensor(), T.Normalize(IMAGENET_MEAN, IMAGENET_STD)])
|
||||
original_pixel_values = transforms(image).unsqueeze(0)
|
||||
|
||||
image_processor = GroundingDinoImageProcessor()
|
||||
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
|
||||
processor = GroundingDinoProcessor(image_processor=image_processor, tokenizer=tokenizer)
|
||||
|
||||
text = "a cat"
|
||||
inputs = processor(images=image, text=preprocess_caption(text), return_tensors="pt")
|
||||
|
||||
assert torch.allclose(original_pixel_values, inputs.pixel_values, atol=1e-4)
|
||||
|
||||
if verify_logits:
|
||||
# Running forward
|
||||
with torch.no_grad():
|
||||
outputs = model(**inputs)
|
||||
|
||||
print(outputs.logits[0, :3, :3])
|
||||
|
||||
expected_slice = torch.tensor(
|
||||
[[-4.8913, -0.1900, -0.2161], [-4.9653, -0.3719, -0.3950], [-5.9599, -3.3765, -3.3104]]
|
||||
)
|
||||
|
||||
assert torch.allclose(outputs.logits[0, :3, :3], expected_slice, atol=1e-4)
|
||||
print("Looks ok!")
|
||||
|
||||
if pytorch_dump_folder_path is not None:
|
||||
model.save_pretrained(pytorch_dump_folder_path)
|
||||
processor.save_pretrained(pytorch_dump_folder_path)
|
||||
|
||||
if push_to_hub:
|
||||
model.push_to_hub(f"EduardoPacheco/{model_name}")
|
||||
processor.push_to_hub(f"EduardoPacheco/{model_name}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
parser = argparse.ArgumentParser()
|
||||
# Required parameters
|
||||
parser.add_argument(
|
||||
"--model_name",
|
||||
default="grounding-dino-tiny",
|
||||
type=str,
|
||||
choices=["grounding-dino-tiny", "grounding-dino-base"],
|
||||
help="Name of the GroundingDino model you'd like to convert.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--pytorch_dump_folder_path", default=None, type=str, help="Path to the output PyTorch model directory."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--push_to_hub", action="store_true", help="Whether or not to push the converted model to the 🤗 hub."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--verify_logits", action="store_false", help="Whether or not to verify logits after conversion."
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
convert_grounding_dino_checkpoint(args)
|
src/transformers/models/grounding_dino/modeling_grounding_dino.py (new file, 3132 lines; diff suppressed because it is too large)
@ -0,0 +1,228 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2024 The HuggingFace Inc. team.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""
|
||||
Processor class for Grounding DINO.
|
||||
"""
|
||||
|
||||
from typing import List, Optional, Tuple, Union
|
||||
|
||||
from ...image_processing_utils import BatchFeature
|
||||
from ...image_transforms import center_to_corners_format
|
||||
from ...image_utils import ImageInput
|
||||
from ...processing_utils import ProcessorMixin
|
||||
from ...tokenization_utils_base import BatchEncoding, PaddingStrategy, PreTokenizedInput, TextInput, TruncationStrategy
|
||||
from ...utils import TensorType, is_torch_available
|
||||
|
||||
|
||||
if is_torch_available():
|
||||
import torch
|
||||
|
||||
|
||||
def get_phrases_from_posmap(posmaps, input_ids):
|
||||
"""Get token ids of phrases from posmaps and input_ids.
|
||||
|
||||
Args:
|
||||
posmaps (`torch.BoolTensor` of shape `(num_boxes, hidden_size)`):
|
||||
A boolean tensor of text-thresholded logits related to the detected bounding boxes.
|
||||
input_ids (`torch.LongTensor` of shape `(sequence_length,)`):
|
||||
A tensor of token ids.
|
||||
"""
|
||||
left_idx = 0
|
||||
right_idx = posmaps.shape[-1] - 1
|
||||
|
||||
# Avoiding altering the input tensor
|
||||
posmaps = posmaps.clone()
|
||||
|
||||
posmaps[:, 0 : left_idx + 1] = False
|
||||
posmaps[:, right_idx:] = False
|
||||
|
||||
token_ids = []
|
||||
for posmap in posmaps:
|
||||
non_zero_idx = posmap.nonzero(as_tuple=True)[0].tolist()
|
||||
token_ids.append([input_ids[i] for i in non_zero_idx])
|
||||
|
||||
return token_ids
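# Illustrative usage of `get_phrases_from_posmap` (variable names and shapes below are
# assumptions for the sketch, not definitions from this module):
#
#     logits = outputs.logits.sigmoid()[0]              # (num_queries, max_text_len)
#     keep = logits.max(dim=-1).values > box_threshold  # boolean mask over the queries
#     posmaps = logits[keep] > text_threshold           # (num_boxes, max_text_len), bool
#     label_ids = get_phrases_from_posmap(posmaps, input_ids[0])
#     labels = tokenizer.batch_decode(label_ids)        # one phrase string per kept box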
|
||||
|
||||
|
||||
class GroundingDinoProcessor(ProcessorMixin):
|
||||
r"""
|
||||
Constructs a Grounding DINO processor which wraps a Deformable DETR image processor and a BERT tokenizer into a
|
||||
single processor.
|
||||
|
||||
[`GroundingDinoProcessor`] offers all the functionalities of [`GroundingDinoImageProcessor`] and
|
||||
[`AutoTokenizer`]. See the docstring of [`~GroundingDinoProcessor.__call__`] and [`~GroundingDinoProcessor.decode`]
|
||||
for more information.
|
||||
|
||||
Args:
|
||||
image_processor (`GroundingDinoImageProcessor`):
|
||||
An instance of [`GroundingDinoImageProcessor`]. The image processor is a required input.
|
||||
tokenizer (`AutoTokenizer`):
|
||||
An instance of [`PreTrainedTokenizer`]. The tokenizer is a required input.
|
||||
"""
|
||||
|
||||
attributes = ["image_processor", "tokenizer"]
|
||||
image_processor_class = "GroundingDinoImageProcessor"
|
||||
tokenizer_class = "AutoTokenizer"
|
||||
|
||||
def __init__(self, image_processor, tokenizer):
|
||||
super().__init__(image_processor, tokenizer)
|
||||
|
||||
def __call__(
|
||||
self,
|
||||
images: ImageInput = None,
|
||||
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None,
|
||||
add_special_tokens: bool = True,
|
||||
padding: Union[bool, str, PaddingStrategy] = False,
|
||||
truncation: Union[bool, str, TruncationStrategy] = None,
|
||||
max_length: Optional[int] = None,
|
||||
stride: int = 0,
|
||||
pad_to_multiple_of: Optional[int] = None,
|
||||
return_attention_mask: Optional[bool] = None,
|
||||
return_overflowing_tokens: bool = False,
|
||||
return_special_tokens_mask: bool = False,
|
||||
return_offsets_mapping: bool = False,
|
||||
return_token_type_ids: bool = True,
|
||||
return_length: bool = False,
|
||||
verbose: bool = True,
|
||||
return_tensors: Optional[Union[str, TensorType]] = None,
|
||||
**kwargs,
|
||||
) -> BatchEncoding:
|
||||
"""
|
||||
This method uses [`GroundingDinoImageProcessor.__call__`] method to prepare image(s) for the model, and
|
||||
[`BertTokenizerFast.__call__`] to prepare text for the model.
|
||||
|
||||
Please refer to the docstring of the above two methods for more information.
|
||||
"""
|
||||
if images is None and text is None:
|
||||
raise ValueError("You have to specify either images or text.")
|
||||
|
||||
# Get only text
|
||||
if images is not None:
|
||||
encoding_image_processor = self.image_processor(images, return_tensors=return_tensors)
|
||||
else:
|
||||
encoding_image_processor = BatchFeature()
|
||||
|
||||
if text is not None:
|
||||
text_encoding = self.tokenizer(
|
||||
text=text,
|
||||
add_special_tokens=add_special_tokens,
|
||||
padding=padding,
|
||||
truncation=truncation,
|
||||
max_length=max_length,
|
||||
stride=stride,
|
||||
pad_to_multiple_of=pad_to_multiple_of,
|
||||
return_attention_mask=return_attention_mask,
|
||||
return_overflowing_tokens=return_overflowing_tokens,
|
||||
return_special_tokens_mask=return_special_tokens_mask,
|
||||
return_offsets_mapping=return_offsets_mapping,
|
||||
return_token_type_ids=return_token_type_ids,
|
||||
return_length=return_length,
|
||||
verbose=verbose,
|
||||
return_tensors=return_tensors,
|
||||
**kwargs,
|
||||
)
|
||||
else:
|
||||
text_encoding = BatchEncoding()
|
||||
|
||||
text_encoding.update(encoding_image_processor)
|
||||
|
||||
return text_encoding
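For context, a minimal usage sketch of the combined processor (not part of the diff; it assumes the `IDEA-Research/grounding-dino-tiny` checkpoint used in the integration tests below and the standard COCO cats image):

```python
import requests
from PIL import Image

from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Text queries are short phrases ending with a dot, e.g. "a cat." as in the tests below
inputs = processor(images=image, text="a cat.", return_tensors="pt")
print(inputs.keys())  # pixel_values, pixel_mask, input_ids, token_type_ids, attention_mask
```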
|
||||
|
||||
# Copied from transformers.models.blip.processing_blip.BlipProcessor.batch_decode with BertTokenizerFast->PreTrainedTokenizer
|
||||
def batch_decode(self, *args, **kwargs):
|
||||
"""
|
||||
This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.batch_decode`]. Please
|
||||
refer to the docstring of this method for more information.
|
||||
"""
|
||||
return self.tokenizer.batch_decode(*args, **kwargs)
|
||||
|
||||
# Copied from transformers.models.blip.processing_blip.BlipProcessor.decode with BertTokenizerFast->PreTrainedTokenizer
|
||||
def decode(self, *args, **kwargs):
|
||||
"""
|
||||
This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.decode`]. Please refer to
|
||||
the docstring of this method for more information.
|
||||
"""
|
||||
return self.tokenizer.decode(*args, **kwargs)
|
||||
|
||||
@property
|
||||
# Copied from transformers.models.blip.processing_blip.BlipProcessor.model_input_names
|
||||
def model_input_names(self):
|
||||
tokenizer_input_names = self.tokenizer.model_input_names
|
||||
image_processor_input_names = self.image_processor.model_input_names
|
||||
return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
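The `dict.fromkeys` call above is simply an order-preserving union of the two name lists; a small sketch:

```python
tokenizer_input_names = ["input_ids", "token_type_ids", "attention_mask"]
image_processor_input_names = ["pixel_values", "pixel_mask"]

# dict.fromkeys keeps the first occurrence of each name and preserves insertion order
merged = list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
print(merged)  # ['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask']
```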
|
||||
|
||||
def post_process_grounded_object_detection(
|
||||
self,
|
||||
outputs,
|
||||
input_ids,
|
||||
box_threshold: float = 0.25,
|
||||
text_threshold: float = 0.25,
|
||||
target_sizes: Union[TensorType, List[Tuple]] = None,
|
||||
):
|
||||
"""
|
||||
Converts the raw output of [`GroundingDinoForObjectDetection`] into final bounding boxes in (top_left_x, top_left_y,
|
||||
bottom_right_x, bottom_right_y) format and gets the associated text labels.
|
||||
|
||||
Args:
|
||||
outputs ([`GroundingDinoObjectDetectionOutput`]):
|
||||
Raw outputs of the model.
|
||||
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
|
||||
The token ids of the input text.
|
||||
box_threshold (`float`, *optional*, defaults to 0.25):
|
||||
Score threshold to keep object detection predictions.
|
||||
text_threshold (`float`, *optional*, defaults to 0.25):
|
||||
Score threshold to keep text detection predictions.
|
||||
target_sizes (`torch.Tensor` or `List[Tuple[int, int]]`, *optional*):
|
||||
Tensor of shape `(batch_size, 2)` or list of tuples (`Tuple[int, int]`) containing the target size
|
||||
`(height, width)` of each image in the batch. If unset, predictions will not be resized.
|
||||
Returns:
|
||||
`List[Dict]`: A list of dictionaries, each dictionary containing the scores, labels and boxes for an image
|
||||
in the batch as predicted by the model.
|
||||
"""
|
||||
logits, boxes = outputs.logits, outputs.pred_boxes
|
||||
|
||||
if target_sizes is not None:
|
||||
if len(logits) != len(target_sizes):
|
||||
raise ValueError(
|
||||
"Make sure that you pass in as many target sizes as the batch dimension of the logits"
|
||||
)
|
||||
|
||||
probs = torch.sigmoid(logits) # (batch_size, num_queries, 256)
|
||||
scores = torch.max(probs, dim=-1)[0] # (batch_size, num_queries)
|
||||
|
||||
# Convert to [x0, y0, x1, y1] format
|
||||
boxes = center_to_corners_format(boxes)
|
||||
|
||||
# Convert from relative [0, 1] to absolute [0, height] coordinates
|
||||
if target_sizes is not None:
|
||||
if isinstance(target_sizes, List):
|
||||
img_h = torch.Tensor([i[0] for i in target_sizes])
|
||||
img_w = torch.Tensor([i[1] for i in target_sizes])
|
||||
else:
|
||||
img_h, img_w = target_sizes.unbind(1)
|
||||
|
||||
scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1).to(boxes.device)
|
||||
boxes = boxes * scale_fct[:, None, :]
|
||||
|
||||
results = []
|
||||
for idx, (s, b, p) in enumerate(zip(scores, boxes, probs)):
|
||||
score = s[s > box_threshold]
|
||||
box = b[s > box_threshold]
|
||||
prob = p[s > box_threshold]
|
||||
label_ids = get_phrases_from_posmap(prob > text_threshold, input_ids[idx])
|
||||
label = self.batch_decode(label_ids)
|
||||
results.append({"scores": score, "labels": label, "boxes": box})
|
||||
|
||||
return results
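A sketch of how the post-processing above is meant to be called end to end, mirroring the integration test further down (the checkpoint name, thresholds and `target_sizes` convention are taken from that test):

```python
import requests
import torch
from PIL import Image

from transformers import AutoProcessor, GroundingDinoForObjectDetection

processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
model = GroundingDinoForObjectDetection.from_pretrained("IDEA-Research/grounding-dino-tiny")

image = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
)
inputs = processor(images=image, text="a cat.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

results = processor.post_process_grounded_object_detection(
    outputs=outputs,
    input_ids=inputs.input_ids,
    box_threshold=0.35,
    text_threshold=0.3,
    target_sizes=[image.size[::-1]],  # (height, width)
)[0]
# results["scores"], results["labels"] (decoded phrases) and results["boxes"] in (x0, y0, x1, y1)
```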
|
@@ -4236,6 +4236,30 @@ class GraphormerPreTrainedModel(metaclass=DummyObject):
|
||||
requires_backends(self, ["torch"])
|
||||
|
||||
|
||||
GROUNDING_DINO_PRETRAINED_MODEL_ARCHIVE_LIST = None
|
||||
|
||||
|
||||
class GroundingDinoForObjectDetection(metaclass=DummyObject):
|
||||
_backends = ["torch"]
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
requires_backends(self, ["torch"])
|
||||
|
||||
|
||||
class GroundingDinoModel(metaclass=DummyObject):
|
||||
_backends = ["torch"]
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
requires_backends(self, ["torch"])
|
||||
|
||||
|
||||
class GroundingDinoPreTrainedModel(metaclass=DummyObject):
|
||||
_backends = ["torch"]
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
requires_backends(self, ["torch"])
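These generated dummies follow the library's backend-guard pattern: the symbol can always be imported, but instantiating it without the backend raises an informative error. A minimal sketch of the same pattern (assuming `DummyObject` and `requires_backends` are exposed from `transformers.utils`, as they are for the generated dummy files):

```python
from transformers.utils import DummyObject, requires_backends


class MyTorchOnlyModel(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        # Raises with an installation hint if torch is not available
        requires_backends(self, ["torch"])
```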
|
||||
|
||||
|
||||
GROUPVIT_PRETRAINED_MODEL_ARCHIVE_LIST = None
|
||||
|
||||
|
||||
|
@@ -247,6 +247,13 @@ class GLPNImageProcessor(metaclass=DummyObject):
|
||||
requires_backends(self, ["vision"])
|
||||
|
||||
|
||||
class GroundingDinoImageProcessor(metaclass=DummyObject):
|
||||
_backends = ["vision"]
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
requires_backends(self, ["vision"])
|
||||
|
||||
|
||||
class IdeficsImageProcessor(metaclass=DummyObject):
|
||||
_backends = ["vision"]
|
||||
|
||||
|
0
tests/models/grounding_dino/__init__.py
Normal file
@@ -0,0 +1,530 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2024 HuggingFace Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
import json
|
||||
import pathlib
|
||||
import unittest
|
||||
|
||||
from transformers.testing_utils import require_torch, require_vision, slow
|
||||
from transformers.utils import is_torch_available, is_vision_available
|
||||
|
||||
from ...test_image_processing_common import AnnotationFormatTestMixin, ImageProcessingTestMixin, prepare_image_inputs
|
||||
|
||||
|
||||
if is_torch_available():
|
||||
import torch
|
||||
|
||||
from transformers.models.grounding_dino.modeling_grounding_dino import GroundingDinoObjectDetectionOutput
|
||||
|
||||
if is_vision_available():
|
||||
from PIL import Image
|
||||
|
||||
from transformers import GroundingDinoImageProcessor
|
||||
|
||||
|
||||
class GroundingDinoImageProcessingTester(unittest.TestCase):
|
||||
def __init__(
|
||||
self,
|
||||
parent,
|
||||
batch_size=7,
|
||||
num_channels=3,
|
||||
min_resolution=30,
|
||||
max_resolution=400,
|
||||
do_resize=True,
|
||||
size=None,
|
||||
do_normalize=True,
|
||||
image_mean=[0.5, 0.5, 0.5],
|
||||
image_std=[0.5, 0.5, 0.5],
|
||||
do_rescale=True,
|
||||
rescale_factor=1 / 255,
|
||||
do_pad=True,
|
||||
):
|
||||
# by setting size["longest_edge"] > max_resolution we're effectively not testing this :p
|
||||
size = size if size is not None else {"shortest_edge": 18, "longest_edge": 1333}
|
||||
self.parent = parent
|
||||
self.batch_size = batch_size
|
||||
self.num_channels = num_channels
|
||||
self.min_resolution = min_resolution
|
||||
self.max_resolution = max_resolution
|
||||
self.do_resize = do_resize
|
||||
self.size = size
|
||||
self.do_normalize = do_normalize
|
||||
self.image_mean = image_mean
|
||||
self.image_std = image_std
|
||||
self.do_rescale = do_rescale
|
||||
self.rescale_factor = rescale_factor
|
||||
self.do_pad = do_pad
|
||||
self.num_queries = 5
|
||||
self.embed_dim = 5
|
||||
|
||||
# Copied from tests.models.deformable_detr.test_image_processing_deformable_detr.DeformableDetrImageProcessingTester.prepare_image_processor_dict with DeformableDetr->GroundingDino
|
||||
def prepare_image_processor_dict(self):
|
||||
return {
|
||||
"do_resize": self.do_resize,
|
||||
"size": self.size,
|
||||
"do_normalize": self.do_normalize,
|
||||
"image_mean": self.image_mean,
|
||||
"image_std": self.image_std,
|
||||
"do_rescale": self.do_rescale,
|
||||
"rescale_factor": self.rescale_factor,
|
||||
"do_pad": self.do_pad,
|
||||
}
|
||||
|
||||
# Copied from tests.models.deformable_detr.test_image_processing_deformable_detr.DeformableDetrImageProcessingTester.get_expected_values with DeformableDetr->GroundingDino
|
||||
def get_expected_values(self, image_inputs, batched=False):
|
||||
"""
|
||||
This function computes the expected height and width when providing images to GroundingDinoImageProcessor,
|
||||
assuming do_resize is set to True with a scalar size.
|
||||
"""
|
||||
if not batched:
|
||||
image = image_inputs[0]
|
||||
if isinstance(image, Image.Image):
|
||||
w, h = image.size
|
||||
else:
|
||||
h, w = image.shape[1], image.shape[2]
|
||||
if w < h:
|
||||
expected_height = int(self.size["shortest_edge"] * h / w)
|
||||
expected_width = self.size["shortest_edge"]
|
||||
elif w > h:
|
||||
expected_height = self.size["shortest_edge"]
|
||||
expected_width = int(self.size["shortest_edge"] * w / h)
|
||||
else:
|
||||
expected_height = self.size["shortest_edge"]
|
||||
expected_width = self.size["shortest_edge"]
|
||||
|
||||
else:
|
||||
expected_values = []
|
||||
for image in image_inputs:
|
||||
expected_height, expected_width = self.get_expected_values([image])
|
||||
expected_values.append((expected_height, expected_width))
|
||||
expected_height = max(expected_values, key=lambda item: item[0])[0]
|
||||
expected_width = max(expected_values, key=lambda item: item[1])[1]
|
||||
|
||||
return expected_height, expected_width
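A worked example of the resize rule encoded above, using the tester's default `size = {"shortest_edge": 18, "longest_edge": 1333}` (the longest edge never binds at these resolutions):

```python
shortest_edge = 18

w, h = 30, 60  # portrait image: width < height
if w < h:
    expected_height = int(shortest_edge * h / w)  # 18 * 60 / 30 = 36
    expected_width = shortest_edge                # 18
elif w > h:
    expected_height = shortest_edge
    expected_width = int(shortest_edge * w / h)
else:
    expected_height = expected_width = shortest_edge

print(expected_height, expected_width)  # 36 18
```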
|
||||
|
||||
# Copied from tests.models.deformable_detr.test_image_processing_deformable_detr.DeformableDetrImageProcessingTester.expected_output_image_shape with DeformableDetr->GroundingDino
|
||||
def expected_output_image_shape(self, images):
|
||||
height, width = self.get_expected_values(images, batched=True)
|
||||
return self.num_channels, height, width
|
||||
|
||||
def get_fake_grounding_dino_output(self):
|
||||
torch.manual_seed(42)
|
||||
return GroundingDinoObjectDetectionOutput(
|
||||
pred_boxes=torch.rand(self.batch_size, self.num_queries, 4),
|
||||
logits=torch.rand(self.batch_size, self.num_queries, self.embed_dim),
|
||||
)
|
||||
|
||||
# Copied from tests.models.deformable_detr.test_image_processing_deformable_detr.DeformableDetrImageProcessingTester.prepare_image_inputs with DeformableDetr->GroundingDino
|
||||
def prepare_image_inputs(self, equal_resolution=False, numpify=False, torchify=False):
|
||||
return prepare_image_inputs(
|
||||
batch_size=self.batch_size,
|
||||
num_channels=self.num_channels,
|
||||
min_resolution=self.min_resolution,
|
||||
max_resolution=self.max_resolution,
|
||||
equal_resolution=equal_resolution,
|
||||
numpify=numpify,
|
||||
torchify=torchify,
|
||||
)
|
||||
|
||||
|
||||
@require_torch
|
||||
@require_vision
|
||||
class GroundingDinoImageProcessingTest(AnnotationFormatTestMixin, ImageProcessingTestMixin, unittest.TestCase):
|
||||
image_processing_class = GroundingDinoImageProcessor if is_vision_available() else None
|
||||
|
||||
def setUp(self):
|
||||
self.image_processor_tester = GroundingDinoImageProcessingTester(self)
|
||||
|
||||
@property
|
||||
def image_processor_dict(self):
|
||||
return self.image_processor_tester.prepare_image_processor_dict()
|
||||
|
||||
# Copied from tests.models.deformable_detr.test_image_processing_deformable_detr.DeformableDetrImageProcessingTest.test_image_processor_properties with DeformableDetr->GroundingDino
|
||||
def test_image_processor_properties(self):
|
||||
image_processing = self.image_processing_class(**self.image_processor_dict)
|
||||
self.assertTrue(hasattr(image_processing, "image_mean"))
|
||||
self.assertTrue(hasattr(image_processing, "image_std"))
|
||||
self.assertTrue(hasattr(image_processing, "do_normalize"))
|
||||
self.assertTrue(hasattr(image_processing, "do_resize"))
|
||||
self.assertTrue(hasattr(image_processing, "do_rescale"))
|
||||
self.assertTrue(hasattr(image_processing, "do_pad"))
|
||||
self.assertTrue(hasattr(image_processing, "size"))
|
||||
|
||||
# Copied from tests.models.deformable_detr.test_image_processing_deformable_detr.DeformableDetrImageProcessingTest.test_image_processor_from_dict_with_kwargs with DeformableDetr->GroundingDino
|
||||
def test_image_processor_from_dict_with_kwargs(self):
|
||||
image_processor = self.image_processing_class.from_dict(self.image_processor_dict)
|
||||
self.assertEqual(image_processor.size, {"shortest_edge": 18, "longest_edge": 1333})
|
||||
self.assertEqual(image_processor.do_pad, True)
|
||||
|
||||
image_processor = self.image_processing_class.from_dict(
|
||||
self.image_processor_dict, size=42, max_size=84, pad_and_return_pixel_mask=False
|
||||
)
|
||||
self.assertEqual(image_processor.size, {"shortest_edge": 42, "longest_edge": 84})
|
||||
self.assertEqual(image_processor.do_pad, False)
|
||||
|
||||
def test_post_process_object_detection(self):
|
||||
image_processor = self.image_processing_class(**self.image_processor_dict)
|
||||
outputs = self.image_processor_tester.get_fake_grounding_dino_output()
|
||||
results = image_processor.post_process_object_detection(outputs, threshold=0.0)
|
||||
|
||||
self.assertEqual(len(results), self.image_processor_tester.batch_size)
|
||||
self.assertEqual(list(results[0].keys()), ["scores", "labels", "boxes"])
|
||||
self.assertEqual(results[0]["boxes"].shape, (self.image_processor_tester.num_queries, 4))
|
||||
self.assertEqual(results[0]["scores"].shape, (self.image_processor_tester.num_queries,))
|
||||
|
||||
expected_scores = torch.tensor([0.7050, 0.7222, 0.7222, 0.6829, 0.7220])
|
||||
self.assertTrue(torch.allclose(results[0]["scores"], expected_scores, atol=1e-4))
|
||||
|
||||
expected_box_slice = torch.tensor([0.6908, 0.4354, 1.0737, 1.3947])
|
||||
self.assertTrue(torch.allclose(results[0]["boxes"][0], expected_box_slice, atol=1e-4))
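For reference, the per-box scores checked here come from a sigmoid over the text dimension followed by a max, the same reduction used in `post_process_grounded_object_detection` earlier in the diff; a small sketch with the tester's shapes:

```python
import torch

torch.manual_seed(42)
logits = torch.rand(7, 5, 5)       # (batch_size, num_queries, text_dim)
probs = torch.sigmoid(logits)
scores = probs.max(dim=-1).values  # one confidence per query box
print(scores.shape)                # torch.Size([7, 5])
```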
|
||||
|
||||
@slow
|
||||
# Copied from tests.models.deformable_detr.test_image_processing_deformable_detr.DeformableDetrImageProcessingTest.test_call_pytorch_with_coco_detection_annotations with DeformableDetr->GroundingDino
|
||||
def test_call_pytorch_with_coco_detection_annotations(self):
|
||||
# prepare image and target
|
||||
image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png")
|
||||
with open("./tests/fixtures/tests_samples/COCO/coco_annotations.txt", "r") as f:
|
||||
target = json.loads(f.read())
|
||||
|
||||
target = {"image_id": 39769, "annotations": target}
|
||||
|
||||
# encode them
|
||||
image_processing = GroundingDinoImageProcessor()
|
||||
encoding = image_processing(images=image, annotations=target, return_tensors="pt")
|
||||
|
||||
# verify pixel values
|
||||
expected_shape = torch.Size([1, 3, 800, 1066])
|
||||
self.assertEqual(encoding["pixel_values"].shape, expected_shape)
|
||||
|
||||
expected_slice = torch.tensor([0.2796, 0.3138, 0.3481])
|
||||
self.assertTrue(torch.allclose(encoding["pixel_values"][0, 0, 0, :3], expected_slice, atol=1e-4))
|
||||
|
||||
# verify area
|
||||
expected_area = torch.tensor([5887.9600, 11250.2061, 489353.8438, 837122.7500, 147967.5156, 165732.3438])
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["area"], expected_area))
|
||||
# verify boxes
|
||||
expected_boxes_shape = torch.Size([6, 4])
|
||||
self.assertEqual(encoding["labels"][0]["boxes"].shape, expected_boxes_shape)
|
||||
expected_boxes_slice = torch.tensor([0.5503, 0.2765, 0.0604, 0.2215])
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["boxes"][0], expected_boxes_slice, atol=1e-3))
|
||||
# verify image_id
|
||||
expected_image_id = torch.tensor([39769])
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["image_id"], expected_image_id))
|
||||
# verify is_crowd
|
||||
expected_is_crowd = torch.tensor([0, 0, 0, 0, 0, 0])
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["iscrowd"], expected_is_crowd))
|
||||
# verify class_labels
|
||||
expected_class_labels = torch.tensor([75, 75, 63, 65, 17, 17])
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["class_labels"], expected_class_labels))
|
||||
# verify orig_size
|
||||
expected_orig_size = torch.tensor([480, 640])
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["orig_size"], expected_orig_size))
|
||||
# verify size
|
||||
expected_size = torch.tensor([800, 1066])
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["size"], expected_size))
|
||||
|
||||
@slow
|
||||
# Copied from tests.models.detr.test_image_processing_detr.DetrImageProcessingTest.test_batched_coco_detection_annotations with Detr->GroundingDino
|
||||
def test_batched_coco_detection_annotations(self):
|
||||
image_0 = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png")
|
||||
image_1 = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png").resize((800, 800))
|
||||
|
||||
with open("./tests/fixtures/tests_samples/COCO/coco_annotations.txt", "r") as f:
|
||||
target = json.loads(f.read())
|
||||
|
||||
annotations_0 = {"image_id": 39769, "annotations": target}
|
||||
annotations_1 = {"image_id": 39769, "annotations": target}
|
||||
|
||||
# Adjust the bounding boxes for the resized image
|
||||
w_0, h_0 = image_0.size
|
||||
w_1, h_1 = image_1.size
|
||||
for i in range(len(annotations_1["annotations"])):
|
||||
coords = annotations_1["annotations"][i]["bbox"]
|
||||
new_bbox = [
|
||||
coords[0] * w_1 / w_0,
|
||||
coords[1] * h_1 / h_0,
|
||||
coords[2] * w_1 / w_0,
|
||||
coords[3] * h_1 / h_0,
|
||||
]
|
||||
annotations_1["annotations"][i]["bbox"] = new_bbox
|
||||
|
||||
images = [image_0, image_1]
|
||||
annotations = [annotations_0, annotations_1]
|
||||
|
||||
image_processing = GroundingDinoImageProcessor()
|
||||
encoding = image_processing(
|
||||
images=images,
|
||||
annotations=annotations,
|
||||
return_segmentation_masks=True,
|
||||
return_tensors="pt", # do_convert_annotations=True
|
||||
)
|
||||
|
||||
# Check the pixel values have been padded
|
||||
postprocessed_height, postprocessed_width = 800, 1066
|
||||
expected_shape = torch.Size([2, 3, postprocessed_height, postprocessed_width])
|
||||
self.assertEqual(encoding["pixel_values"].shape, expected_shape)
|
||||
|
||||
# Check the bounding boxes have been adjusted for padded images
|
||||
self.assertEqual(encoding["labels"][0]["boxes"].shape, torch.Size([6, 4]))
|
||||
self.assertEqual(encoding["labels"][1]["boxes"].shape, torch.Size([6, 4]))
|
||||
expected_boxes_0 = torch.tensor(
|
||||
[
|
||||
[0.6879, 0.4609, 0.0755, 0.3691],
|
||||
[0.2118, 0.3359, 0.2601, 0.1566],
|
||||
[0.5011, 0.5000, 0.9979, 1.0000],
|
||||
[0.5010, 0.5020, 0.9979, 0.9959],
|
||||
[0.3284, 0.5944, 0.5884, 0.8112],
|
||||
[0.8394, 0.5445, 0.3213, 0.9110],
|
||||
]
|
||||
)
|
||||
expected_boxes_1 = torch.tensor(
|
||||
[
|
||||
[0.4130, 0.2765, 0.0453, 0.2215],
|
||||
[0.1272, 0.2016, 0.1561, 0.0940],
|
||||
[0.3757, 0.4933, 0.7488, 0.9865],
|
||||
[0.3759, 0.5002, 0.7492, 0.9955],
|
||||
[0.1971, 0.5456, 0.3532, 0.8646],
|
||||
[0.5790, 0.4115, 0.3430, 0.7161],
|
||||
]
|
||||
)
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["boxes"], expected_boxes_0, rtol=1e-3))
|
||||
self.assertTrue(torch.allclose(encoding["labels"][1]["boxes"], expected_boxes_1, rtol=1e-3))
|
||||
|
||||
# Check the masks have also been padded
|
||||
self.assertEqual(encoding["labels"][0]["masks"].shape, torch.Size([6, 800, 1066]))
|
||||
self.assertEqual(encoding["labels"][1]["masks"].shape, torch.Size([6, 800, 1066]))
|
||||
|
||||
# Check that, when do_convert_annotations=False, the annotations are not converted to the
# centre_x, centre_y, width, height format and are not normalized to the range [0, 1]
|
||||
encoding = image_processing(
|
||||
images=images,
|
||||
annotations=annotations,
|
||||
return_segmentation_masks=True,
|
||||
do_convert_annotations=False,
|
||||
return_tensors="pt",
|
||||
)
|
||||
self.assertEqual(encoding["labels"][0]["boxes"].shape, torch.Size([6, 4]))
|
||||
self.assertEqual(encoding["labels"][1]["boxes"].shape, torch.Size([6, 4]))
|
||||
# Convert to absolute coordinates
|
||||
unnormalized_boxes_0 = torch.vstack(
|
||||
[
|
||||
expected_boxes_0[:, 0] * postprocessed_width,
|
||||
expected_boxes_0[:, 1] * postprocessed_height,
|
||||
expected_boxes_0[:, 2] * postprocessed_width,
|
||||
expected_boxes_0[:, 3] * postprocessed_height,
|
||||
]
|
||||
).T
|
||||
unnormalized_boxes_1 = torch.vstack(
|
||||
[
|
||||
expected_boxes_1[:, 0] * postprocessed_width,
|
||||
expected_boxes_1[:, 1] * postprocessed_height,
|
||||
expected_boxes_1[:, 2] * postprocessed_width,
|
||||
expected_boxes_1[:, 3] * postprocessed_height,
|
||||
]
|
||||
).T
|
||||
# Convert from centre_x, centre_y, width, height to x_min, y_min, x_max, y_max
|
||||
expected_boxes_0 = torch.vstack(
|
||||
[
|
||||
unnormalized_boxes_0[:, 0] - unnormalized_boxes_0[:, 2] / 2,
|
||||
unnormalized_boxes_0[:, 1] - unnormalized_boxes_0[:, 3] / 2,
|
||||
unnormalized_boxes_0[:, 0] + unnormalized_boxes_0[:, 2] / 2,
|
||||
unnormalized_boxes_0[:, 1] + unnormalized_boxes_0[:, 3] / 2,
|
||||
]
|
||||
).T
|
||||
expected_boxes_1 = torch.vstack(
|
||||
[
|
||||
unnormalized_boxes_1[:, 0] - unnormalized_boxes_1[:, 2] / 2,
|
||||
unnormalized_boxes_1[:, 1] - unnormalized_boxes_1[:, 3] / 2,
|
||||
unnormalized_boxes_1[:, 0] + unnormalized_boxes_1[:, 2] / 2,
|
||||
unnormalized_boxes_1[:, 1] + unnormalized_boxes_1[:, 3] / 2,
|
||||
]
|
||||
).T
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["boxes"], expected_boxes_0, rtol=1))
|
||||
self.assertTrue(torch.allclose(encoding["labels"][1]["boxes"], expected_boxes_1, rtol=1))
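The block above expands normalized (centre_x, centre_y, width, height) boxes into absolute (x_min, y_min, x_max, y_max) corners; a compact sketch of the same transform for a single box:

```python
import torch

height, width = 800, 1066
box_cxcywh = torch.tensor([0.6879, 0.4609, 0.0755, 0.3691])  # first expected box above

cx, cy, w, h = box_cxcywh * torch.tensor([width, height, width, height])
box_xyxy = torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
print(box_xyxy)  # absolute corner coordinates in pixels
```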
|
||||
|
||||
@slow
|
||||
# Copied from tests.models.deformable_detr.test_image_processing_deformable_detr.DeformableDetrImageProcessingTest.test_call_pytorch_with_coco_panoptic_annotations with DeformableDetr->GroundingDino
|
||||
def test_call_pytorch_with_coco_panoptic_annotations(self):
|
||||
# prepare image, target and masks_path
|
||||
image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png")
|
||||
with open("./tests/fixtures/tests_samples/COCO/coco_panoptic_annotations.txt", "r") as f:
|
||||
target = json.loads(f.read())
|
||||
|
||||
target = {"file_name": "000000039769.png", "image_id": 39769, "segments_info": target}
|
||||
|
||||
masks_path = pathlib.Path("./tests/fixtures/tests_samples/COCO/coco_panoptic")
|
||||
|
||||
# encode them
|
||||
image_processing = GroundingDinoImageProcessor(format="coco_panoptic")
|
||||
encoding = image_processing(images=image, annotations=target, masks_path=masks_path, return_tensors="pt")
|
||||
|
||||
# verify pixel values
|
||||
expected_shape = torch.Size([1, 3, 800, 1066])
|
||||
self.assertEqual(encoding["pixel_values"].shape, expected_shape)
|
||||
|
||||
expected_slice = torch.tensor([0.2796, 0.3138, 0.3481])
|
||||
self.assertTrue(torch.allclose(encoding["pixel_values"][0, 0, 0, :3], expected_slice, atol=1e-4))
|
||||
|
||||
# verify area
|
||||
expected_area = torch.tensor([147979.6875, 165527.0469, 484638.5938, 11292.9375, 5879.6562, 7634.1147])
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["area"], expected_area))
|
||||
# verify boxes
|
||||
expected_boxes_shape = torch.Size([6, 4])
|
||||
self.assertEqual(encoding["labels"][0]["boxes"].shape, expected_boxes_shape)
|
||||
expected_boxes_slice = torch.tensor([0.2625, 0.5437, 0.4688, 0.8625])
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["boxes"][0], expected_boxes_slice, atol=1e-3))
|
||||
# verify image_id
|
||||
expected_image_id = torch.tensor([39769])
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["image_id"], expected_image_id))
|
||||
# verify is_crowd
|
||||
expected_is_crowd = torch.tensor([0, 0, 0, 0, 0, 0])
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["iscrowd"], expected_is_crowd))
|
||||
# verify class_labels
|
||||
expected_class_labels = torch.tensor([17, 17, 63, 75, 75, 93])
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["class_labels"], expected_class_labels))
|
||||
# verify masks
|
||||
expected_masks_sum = 822873
|
||||
self.assertEqual(encoding["labels"][0]["masks"].sum().item(), expected_masks_sum)
|
||||
# verify orig_size
|
||||
expected_orig_size = torch.tensor([480, 640])
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["orig_size"], expected_orig_size))
|
||||
# verify size
|
||||
expected_size = torch.tensor([800, 1066])
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["size"], expected_size))
|
||||
|
||||
@slow
|
||||
# Copied from tests.models.detr.test_image_processing_detr.DetrImageProcessingTest.test_batched_coco_panoptic_annotations with Detr->GroundingDino
|
||||
def test_batched_coco_panoptic_annotations(self):
|
||||
# prepare image, target and masks_path
|
||||
image_0 = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png")
|
||||
image_1 = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png").resize((800, 800))
|
||||
|
||||
with open("./tests/fixtures/tests_samples/COCO/coco_panoptic_annotations.txt", "r") as f:
|
||||
target = json.loads(f.read())
|
||||
|
||||
annotation_0 = {"file_name": "000000039769.png", "image_id": 39769, "segments_info": target}
|
||||
annotation_1 = {"file_name": "000000039769.png", "image_id": 39769, "segments_info": target}
|
||||
|
||||
w_0, h_0 = image_0.size
|
||||
w_1, h_1 = image_1.size
|
||||
for i in range(len(annotation_1["segments_info"])):
|
||||
coords = annotation_1["segments_info"][i]["bbox"]
|
||||
new_bbox = [
|
||||
coords[0] * w_1 / w_0,
|
||||
coords[1] * h_1 / h_0,
|
||||
coords[2] * w_1 / w_0,
|
||||
coords[3] * h_1 / h_0,
|
||||
]
|
||||
annotation_1["segments_info"][i]["bbox"] = new_bbox
|
||||
|
||||
masks_path = pathlib.Path("./tests/fixtures/tests_samples/COCO/coco_panoptic")
|
||||
|
||||
images = [image_0, image_1]
|
||||
annotations = [annotation_0, annotation_1]
|
||||
|
||||
# encode them
|
||||
image_processing = GroundingDinoImageProcessor(format="coco_panoptic")
|
||||
encoding = image_processing(
|
||||
images=images,
|
||||
annotations=annotations,
|
||||
masks_path=masks_path,
|
||||
return_tensors="pt",
|
||||
return_segmentation_masks=True,
|
||||
)
|
||||
|
||||
# Check the pixel values have been padded
|
||||
postprocessed_height, postprocessed_width = 800, 1066
|
||||
expected_shape = torch.Size([2, 3, postprocessed_height, postprocessed_width])
|
||||
self.assertEqual(encoding["pixel_values"].shape, expected_shape)
|
||||
|
||||
# Check the bounding boxes have been adjusted for padded images
|
||||
self.assertEqual(encoding["labels"][0]["boxes"].shape, torch.Size([6, 4]))
|
||||
self.assertEqual(encoding["labels"][1]["boxes"].shape, torch.Size([6, 4]))
|
||||
expected_boxes_0 = torch.tensor(
|
||||
[
|
||||
[0.2625, 0.5437, 0.4688, 0.8625],
|
||||
[0.7719, 0.4104, 0.4531, 0.7125],
|
||||
[0.5000, 0.4927, 0.9969, 0.9854],
|
||||
[0.1688, 0.2000, 0.2063, 0.0917],
|
||||
[0.5492, 0.2760, 0.0578, 0.2187],
|
||||
[0.4992, 0.4990, 0.9984, 0.9979],
|
||||
]
|
||||
)
|
||||
expected_boxes_1 = torch.tensor(
|
||||
[
|
||||
[0.1576, 0.3262, 0.2814, 0.5175],
|
||||
[0.4634, 0.2463, 0.2720, 0.4275],
|
||||
[0.3002, 0.2956, 0.5985, 0.5913],
|
||||
[0.1013, 0.1200, 0.1238, 0.0550],
|
||||
[0.3297, 0.1656, 0.0347, 0.1312],
|
||||
[0.2997, 0.2994, 0.5994, 0.5987],
|
||||
]
|
||||
)
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["boxes"], expected_boxes_0, rtol=1e-3))
|
||||
self.assertTrue(torch.allclose(encoding["labels"][1]["boxes"], expected_boxes_1, rtol=1e-3))
|
||||
|
||||
# Check the masks have also been padded
|
||||
self.assertEqual(encoding["labels"][0]["masks"].shape, torch.Size([6, 800, 1066]))
|
||||
self.assertEqual(encoding["labels"][1]["masks"].shape, torch.Size([6, 800, 1066]))
|
||||
|
||||
# Check that, when do_convert_annotations=False, the annotations are not converted to the
# centre_x, centre_y, width, height format and are not normalized to the range [0, 1]
|
||||
encoding = image_processing(
|
||||
images=images,
|
||||
annotations=annotations,
|
||||
masks_path=masks_path,
|
||||
return_segmentation_masks=True,
|
||||
do_convert_annotations=False,
|
||||
return_tensors="pt",
|
||||
)
|
||||
self.assertEqual(encoding["labels"][0]["boxes"].shape, torch.Size([6, 4]))
|
||||
self.assertEqual(encoding["labels"][1]["boxes"].shape, torch.Size([6, 4]))
|
||||
# Convert to absolute coordinates
|
||||
unnormalized_boxes_0 = torch.vstack(
|
||||
[
|
||||
expected_boxes_0[:, 0] * postprocessed_width,
|
||||
expected_boxes_0[:, 1] * postprocessed_height,
|
||||
expected_boxes_0[:, 2] * postprocessed_width,
|
||||
expected_boxes_0[:, 3] * postprocessed_height,
|
||||
]
|
||||
).T
|
||||
unnormalized_boxes_1 = torch.vstack(
|
||||
[
|
||||
expected_boxes_1[:, 0] * postprocessed_width,
|
||||
expected_boxes_1[:, 1] * postprocessed_height,
|
||||
expected_boxes_1[:, 2] * postprocessed_width,
|
||||
expected_boxes_1[:, 3] * postprocessed_height,
|
||||
]
|
||||
).T
|
||||
# Convert from centre_x, centre_y, width, height to x_min, y_min, x_max, y_max
|
||||
expected_boxes_0 = torch.vstack(
|
||||
[
|
||||
unnormalized_boxes_0[:, 0] - unnormalized_boxes_0[:, 2] / 2,
|
||||
unnormalized_boxes_0[:, 1] - unnormalized_boxes_0[:, 3] / 2,
|
||||
unnormalized_boxes_0[:, 0] + unnormalized_boxes_0[:, 2] / 2,
|
||||
unnormalized_boxes_0[:, 1] + unnormalized_boxes_0[:, 3] / 2,
|
||||
]
|
||||
).T
|
||||
expected_boxes_1 = torch.vstack(
|
||||
[
|
||||
unnormalized_boxes_1[:, 0] - unnormalized_boxes_1[:, 2] / 2,
|
||||
unnormalized_boxes_1[:, 1] - unnormalized_boxes_1[:, 3] / 2,
|
||||
unnormalized_boxes_1[:, 0] + unnormalized_boxes_1[:, 2] / 2,
|
||||
unnormalized_boxes_1[:, 1] + unnormalized_boxes_1[:, 3] / 2,
|
||||
]
|
||||
).T
|
||||
self.assertTrue(torch.allclose(encoding["labels"][0]["boxes"], expected_boxes_0, rtol=1))
|
||||
self.assertTrue(torch.allclose(encoding["labels"][1]["boxes"], expected_boxes_1, rtol=1))
|
689
tests/models/grounding_dino/test_modeling_grounding_dino.py
Normal file
@@ -0,0 +1,689 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
""" Testing suite for the PyTorch Grounding DINO model. """
|
||||
|
||||
import collections
|
||||
import inspect
|
||||
import math
|
||||
import re
|
||||
import unittest
|
||||
|
||||
from transformers import (
|
||||
GroundingDinoConfig,
|
||||
SwinConfig,
|
||||
is_torch_available,
|
||||
is_vision_available,
|
||||
)
|
||||
from transformers.file_utils import cached_property
|
||||
from transformers.testing_utils import (
|
||||
require_timm,
|
||||
require_torch,
|
||||
require_torch_gpu,
|
||||
require_vision,
|
||||
slow,
|
||||
torch_device,
|
||||
)
|
||||
|
||||
from ...test_configuration_common import ConfigTester
|
||||
from ...test_modeling_common import ModelTesterMixin, _config_zero_init, floats_tensor, ids_tensor
|
||||
from ...test_pipeline_mixin import PipelineTesterMixin
|
||||
|
||||
|
||||
if is_torch_available():
|
||||
import torch
|
||||
|
||||
from transformers import GroundingDinoForObjectDetection, GroundingDinoModel
|
||||
from transformers.pytorch_utils import id_tensor_storage
|
||||
|
||||
|
||||
if is_vision_available():
|
||||
from PIL import Image
|
||||
|
||||
from transformers import AutoProcessor
|
||||
|
||||
|
||||
class GroundingDinoModelTester:
|
||||
def __init__(
|
||||
self,
|
||||
parent,
|
||||
batch_size=4,
|
||||
is_training=True,
|
||||
use_labels=True,
|
||||
hidden_size=32,
|
||||
num_hidden_layers=2,
|
||||
num_attention_heads=4,
|
||||
intermediate_size=4,
|
||||
hidden_act="gelu",
|
||||
hidden_dropout_prob=0.1,
|
||||
attention_probs_dropout_prob=0.1,
|
||||
num_queries=2,
|
||||
num_channels=3,
|
||||
image_size=98,
|
||||
n_targets=8,
|
||||
num_labels=3,
|
||||
num_feature_levels=4,
|
||||
encoder_n_points=2,
|
||||
decoder_n_points=6,
|
||||
max_text_len=7,
|
||||
):
|
||||
self.parent = parent
|
||||
self.batch_size = batch_size
|
||||
self.is_training = is_training
|
||||
self.use_labels = use_labels
|
||||
self.hidden_size = hidden_size
|
||||
self.num_hidden_layers = num_hidden_layers
|
||||
self.num_attention_heads = num_attention_heads
|
||||
self.intermediate_size = intermediate_size
|
||||
self.hidden_act = hidden_act
|
||||
self.hidden_dropout_prob = hidden_dropout_prob
|
||||
self.attention_probs_dropout_prob = attention_probs_dropout_prob
|
||||
self.num_queries = num_queries
|
||||
self.num_channels = num_channels
|
||||
self.image_size = image_size
|
||||
self.n_targets = n_targets
|
||||
self.num_labels = num_labels
|
||||
self.num_feature_levels = num_feature_levels
|
||||
self.encoder_n_points = encoder_n_points
|
||||
self.decoder_n_points = decoder_n_points
|
||||
self.max_text_len = max_text_len
|
||||
|
||||
# we also set the expected seq length for both encoder and decoder
|
||||
self.encoder_seq_length_vision = (
|
||||
math.ceil(self.image_size / 8) ** 2
|
||||
+ math.ceil(self.image_size / 16) ** 2
|
||||
+ math.ceil(self.image_size / 32) ** 2
|
||||
+ math.ceil(self.image_size / 64) ** 2
|
||||
)
|
||||
|
||||
self.encoder_seq_length_text = self.max_text_len
|
||||
|
||||
self.decoder_seq_length = self.num_queries
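The vision encoder sequence length set above is just the total number of multi-scale feature positions (strides 8, 16, 32 and 64); a quick check for the tester's `image_size=98`:

```python
import math

image_size = 98
encoder_seq_length_vision = sum(math.ceil(image_size / stride) ** 2 for stride in (8, 16, 32, 64))
# 13**2 + 7**2 + 4**2 + 2**2 = 169 + 49 + 16 + 4
print(encoder_seq_length_vision)  # 238
```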
|
||||
|
||||
def prepare_config_and_inputs(self):
|
||||
pixel_values = floats_tensor([self.batch_size, self.num_channels, self.image_size, self.image_size])
|
||||
pixel_mask = torch.ones([self.batch_size, self.image_size, self.image_size], device=torch_device)
|
||||
|
||||
input_ids = ids_tensor([self.batch_size, self.max_text_len], self.num_labels)
|
||||
|
||||
labels = None
|
||||
if self.use_labels:
|
||||
# labels is a list of Dict (each Dict being the labels for a given example in the batch)
|
||||
labels = []
|
||||
for i in range(self.batch_size):
|
||||
target = {}
|
||||
target["class_labels"] = torch.randint(
|
||||
high=self.num_labels, size=(self.n_targets,), device=torch_device
|
||||
)
|
||||
target["boxes"] = torch.rand(self.n_targets, 4, device=torch_device)
|
||||
target["masks"] = torch.rand(self.n_targets, self.image_size, self.image_size, device=torch_device)
|
||||
labels.append(target)
|
||||
|
||||
config = self.get_config()
|
||||
return config, pixel_values, pixel_mask, input_ids, labels
|
||||
|
||||
def get_config(self):
|
||||
swin_config = SwinConfig(
|
||||
window_size=7,
|
||||
embed_dim=8,
|
||||
depths=[1, 1, 1, 1],
|
||||
num_heads=[1, 1, 1, 1],
|
||||
image_size=self.image_size,
|
||||
out_features=["stage2", "stage3", "stage4"],
|
||||
out_indices=[2, 3, 4],
|
||||
)
|
||||
text_backbone = {
|
||||
"hidden_size": 8,
|
||||
"num_hidden_layers": 2,
|
||||
"num_attention_heads": 2,
|
||||
"intermediate_size": 8,
|
||||
"max_position_embeddings": 8,
|
||||
"model_type": "bert",
|
||||
}
|
||||
return GroundingDinoConfig(
|
||||
d_model=self.hidden_size,
|
||||
encoder_layers=self.num_hidden_layers,
|
||||
decoder_layers=self.num_hidden_layers,
|
||||
encoder_attention_heads=self.num_attention_heads,
|
||||
decoder_attention_heads=self.num_attention_heads,
|
||||
encoder_ffn_dim=self.intermediate_size,
|
||||
decoder_ffn_dim=self.intermediate_size,
|
||||
dropout=self.hidden_dropout_prob,
|
||||
attention_dropout=self.attention_probs_dropout_prob,
|
||||
num_queries=self.num_queries,
|
||||
num_labels=self.num_labels,
|
||||
num_feature_levels=self.num_feature_levels,
|
||||
encoder_n_points=self.encoder_n_points,
|
||||
decoder_n_points=self.decoder_n_points,
|
||||
use_timm_backbone=False,
|
||||
backbone_config=swin_config,
|
||||
max_text_len=self.max_text_len,
|
||||
text_config=text_backbone,
|
||||
)
|
||||
|
||||
def prepare_config_and_inputs_for_common(self):
|
||||
config, pixel_values, pixel_mask, input_ids, labels = self.prepare_config_and_inputs()
|
||||
inputs_dict = {"pixel_values": pixel_values, "pixel_mask": pixel_mask, "input_ids": input_ids}
|
||||
return config, inputs_dict
|
||||
|
||||
def create_and_check_model(self, config, pixel_values, pixel_mask, input_ids, labels):
|
||||
model = GroundingDinoModel(config=config)
|
||||
model.to(torch_device)
|
||||
model.eval()
|
||||
|
||||
result = model(pixel_values=pixel_values, pixel_mask=pixel_mask, input_ids=input_ids)
|
||||
|
||||
self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.num_queries, self.hidden_size))
|
||||
|
||||
def create_and_check_object_detection_head_model(self, config, pixel_values, pixel_mask, input_ids, labels):
|
||||
model = GroundingDinoForObjectDetection(config=config)
|
||||
model.to(torch_device)
|
||||
model.eval()
|
||||
|
||||
result = model(pixel_values=pixel_values, pixel_mask=pixel_mask, input_ids=input_ids)
|
||||
|
||||
self.parent.assertEqual(result.logits.shape, (self.batch_size, self.num_queries, config.max_text_len))
|
||||
self.parent.assertEqual(result.pred_boxes.shape, (self.batch_size, self.num_queries, 4))
|
||||
|
||||
result = model(pixel_values=pixel_values, pixel_mask=pixel_mask, input_ids=input_ids, labels=labels)
|
||||
|
||||
self.parent.assertEqual(result.loss.shape, ())
|
||||
self.parent.assertEqual(result.logits.shape, (self.batch_size, self.num_queries, config.max_text_len))
|
||||
self.parent.assertEqual(result.pred_boxes.shape, (self.batch_size, self.num_queries, 4))
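For reference, the `labels` argument used above is a list with one dict per image, in the usual DETR-style format; a minimal sketch with the tester's `n_targets=8` and `num_labels=3` (the optional `masks` entry is omitted here):

```python
import torch

batch_size, n_targets, num_labels = 4, 8, 3
labels = [
    {
        "class_labels": torch.randint(high=num_labels, size=(n_targets,)),
        "boxes": torch.rand(n_targets, 4),  # normalized (cx, cy, w, h)
    }
    for _ in range(batch_size)
]
```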
|
||||
|
||||
|
||||
@require_torch
|
||||
class GroundingDinoModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
|
||||
all_model_classes = (GroundingDinoModel, GroundingDinoForObjectDetection) if is_torch_available() else ()
|
||||
is_encoder_decoder = True
|
||||
test_torchscript = False
|
||||
test_pruning = False
|
||||
test_head_masking = False
|
||||
test_missing_keys = False
|
||||
pipeline_model_mapping = (
|
||||
{"image-feature-extraction": GroundingDinoModel, "zero-shot-object-detection": GroundingDinoForObjectDetection}
|
||||
if is_torch_available()
|
||||
else {}
|
||||
)
|
||||
|
||||
# special case for head models
|
||||
def _prepare_for_class(self, inputs_dict, model_class, return_labels=False):
|
||||
inputs_dict = super()._prepare_for_class(inputs_dict, model_class, return_labels=return_labels)
|
||||
|
||||
if return_labels:
|
||||
if model_class.__name__ == "GroundingDinoForObjectDetection":
|
||||
labels = []
|
||||
for i in range(self.model_tester.batch_size):
|
||||
target = {}
|
||||
target["class_labels"] = torch.ones(
|
||||
size=(self.model_tester.n_targets,), device=torch_device, dtype=torch.long
|
||||
)
|
||||
target["boxes"] = torch.ones(
|
||||
self.model_tester.n_targets, 4, device=torch_device, dtype=torch.float
|
||||
)
|
||||
target["masks"] = torch.ones(
|
||||
self.model_tester.n_targets,
|
||||
self.model_tester.image_size,
|
||||
self.model_tester.image_size,
|
||||
device=torch_device,
|
||||
dtype=torch.float,
|
||||
)
|
||||
labels.append(target)
|
||||
inputs_dict["labels"] = labels
|
||||
|
||||
return inputs_dict
|
||||
|
||||
def setUp(self):
|
||||
self.model_tester = GroundingDinoModelTester(self)
|
||||
self.config_tester = ConfigTester(self, config_class=GroundingDinoConfig, has_text_modality=False)
|
||||
|
||||
def test_config(self):
|
||||
# we don't test common_properties and arguments_init as these don't apply for Grounding DINO
|
||||
self.config_tester.create_and_test_config_to_json_string()
|
||||
self.config_tester.create_and_test_config_to_json_file()
|
||||
self.config_tester.create_and_test_config_from_and_save_pretrained()
|
||||
self.config_tester.create_and_test_config_with_num_labels()
|
||||
self.config_tester.check_config_can_be_init_without_params()
|
||||
|
||||
def test_model(self):
|
||||
config_and_inputs = self.model_tester.prepare_config_and_inputs()
|
||||
self.model_tester.create_and_check_model(*config_and_inputs)
|
||||
|
||||
def test_object_detection_head_model(self):
|
||||
config_and_inputs = self.model_tester.prepare_config_and_inputs()
|
||||
self.model_tester.create_and_check_object_detection_head_model(*config_and_inputs)
|
||||
|
||||
@unittest.skip(reason="Grounding DINO does not use inputs_embeds")
|
||||
def test_inputs_embeds(self):
|
||||
pass
|
||||
|
||||
@unittest.skip(reason="Grounding DINO does not have a get_input_embeddings method")
|
||||
def test_model_common_attributes(self):
|
||||
pass
|
||||
|
||||
@unittest.skip(reason="Grounding DINO does not use token embeddings")
|
||||
def test_resize_tokens_embeddings(self):
|
||||
pass
|
||||
|
||||
@unittest.skip(reason="Feed forward chunking is not implemented")
|
||||
def test_feed_forward_chunking(self):
|
||||
pass
|
||||
|
||||
def test_attention_outputs(self):
|
||||
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
|
||||
config.return_dict = True
|
||||
|
||||
for model_class in self.all_model_classes:
|
||||
inputs_dict["output_attentions"] = True
|
||||
inputs_dict["output_hidden_states"] = False
|
||||
config.return_dict = True
|
||||
model = model_class(config)
|
||||
model.to(torch_device)
|
||||
model.eval()
|
||||
with torch.no_grad():
|
||||
outputs = model(**self._prepare_for_class(inputs_dict, model_class))
|
||||
attentions = outputs.encoder_attentions[-1]
|
||||
self.assertEqual(len(attentions), self.model_tester.num_hidden_layers)
|
||||
|
||||
# check that output_attentions also work using config
|
||||
del inputs_dict["output_attentions"]
|
||||
config.output_attentions = True
|
||||
model = model_class(config)
|
||||
model.to(torch_device)
|
||||
model.eval()
|
||||
with torch.no_grad():
|
||||
outputs = model(**self._prepare_for_class(inputs_dict, model_class))
|
||||
attentions = outputs.encoder_attentions[-1]
|
||||
self.assertEqual(len(attentions), self.model_tester.num_hidden_layers)
|
||||
|
||||
self.assertListEqual(
|
||||
list(attentions[0].shape[-3:]),
|
||||
[
|
||||
self.model_tester.num_attention_heads,
|
||||
self.model_tester.num_feature_levels,
|
||||
self.model_tester.encoder_n_points,
|
||||
],
|
||||
)
|
||||
out_len = len(outputs)
|
||||
|
||||
correct_outlen = 10
|
||||
|
||||
# loss is at first position
|
||||
if "labels" in inputs_dict:
|
||||
correct_outlen += 1 # loss is added to beginning
|
||||
# Object Detection model returns pred_logits and pred_boxes
|
||||
if model_class.__name__ == "GroundingDinoForObjectDetection":
|
||||
correct_outlen += 2
|
||||
|
||||
self.assertEqual(out_len, correct_outlen)
|
||||
|
||||
# decoder attentions
|
||||
decoder_attentions = outputs.decoder_attentions[0]
|
||||
self.assertIsInstance(decoder_attentions, (list, tuple))
|
||||
self.assertEqual(len(decoder_attentions), self.model_tester.num_hidden_layers)
|
||||
self.assertListEqual(
|
||||
list(decoder_attentions[0].shape[-3:]),
|
||||
[self.model_tester.num_attention_heads, self.model_tester.num_queries, self.model_tester.num_queries],
|
||||
)
|
||||
|
||||
# cross attentions
|
||||
cross_attentions = outputs.decoder_attentions[-1]
|
||||
self.assertIsInstance(cross_attentions, (list, tuple))
|
||||
self.assertEqual(len(cross_attentions), self.model_tester.num_hidden_layers)
|
||||
self.assertListEqual(
|
||||
list(cross_attentions[0].shape[-3:]),
|
||||
[
|
||||
self.model_tester.num_attention_heads,
|
||||
self.model_tester.num_feature_levels,
|
||||
self.model_tester.decoder_n_points,
|
||||
],
|
||||
)
|
||||
|
||||
# Check attention is always last and order is fine
|
||||
inputs_dict["output_attentions"] = True
|
||||
inputs_dict["output_hidden_states"] = True
|
||||
model = model_class(config)
|
||||
model.to(torch_device)
|
||||
model.eval()
|
||||
with torch.no_grad():
|
||||
outputs = model(**self._prepare_for_class(inputs_dict, model_class))
|
||||
|
||||
self.assertEqual(out_len + 3, len(outputs))
|
||||
|
||||
self_attentions = outputs.encoder_attentions[-1]
|
||||
|
||||
self.assertEqual(len(self_attentions), self.model_tester.num_hidden_layers)
|
||||
self.assertListEqual(
|
||||
list(self_attentions[0].shape[-3:]),
|
||||
[
|
||||
self.model_tester.num_attention_heads,
|
||||
self.model_tester.num_feature_levels,
|
||||
self.model_tester.encoder_n_points,
|
||||
],
|
||||
)
|
||||
|
||||
# overwrite since hidden_states are called encoder_text_hidden_states
|
||||
def test_hidden_states_output(self):
|
||||
def check_hidden_states_output(inputs_dict, config, model_class):
|
||||
model = model_class(config)
|
||||
model.to(torch_device)
|
||||
model.eval()
|
||||
|
||||
with torch.no_grad():
|
||||
outputs = model(**self._prepare_for_class(inputs_dict, model_class))
|
||||
|
||||
hidden_states = outputs.encoder_vision_hidden_states
|
||||
|
||||
expected_num_layers = getattr(
|
||||
self.model_tester, "expected_num_hidden_layers", self.model_tester.num_hidden_layers + 1
|
||||
)
|
||||
self.assertEqual(len(hidden_states), expected_num_layers)
|
||||
|
||||
seq_len = self.model_tester.encoder_seq_length_vision
|
||||
|
||||
self.assertListEqual(
|
||||
list(hidden_states[0].shape[-2:]),
|
||||
[seq_len, self.model_tester.hidden_size],
|
||||
)
|
||||
|
||||
hidden_states = outputs.encoder_text_hidden_states
|
||||
|
||||
self.assertEqual(len(hidden_states), expected_num_layers)
|
||||
|
||||
seq_len = self.model_tester.encoder_seq_length_text
|
||||
|
||||
self.assertListEqual(
|
||||
list(hidden_states[0].shape[-2:]),
|
||||
[seq_len, self.model_tester.hidden_size],
|
||||
)
|
||||
|
||||
hidden_states = outputs.decoder_hidden_states
|
||||
|
||||
self.assertIsInstance(hidden_states, (list, tuple))
|
||||
self.assertEqual(len(hidden_states), expected_num_layers)
|
||||
seq_len = getattr(self.model_tester, "seq_length", None)
|
||||
decoder_seq_length = getattr(self.model_tester, "decoder_seq_length", seq_len)
|
||||
|
||||
self.assertListEqual(
|
||||
list(hidden_states[0].shape[-2:]),
|
||||
[decoder_seq_length, self.model_tester.hidden_size],
|
||||
)
|
||||
|
||||
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
|
||||
|
||||
for model_class in self.all_model_classes:
|
||||
inputs_dict["output_hidden_states"] = True
|
||||
check_hidden_states_output(inputs_dict, config, model_class)
|
||||
|
||||
# check that output_hidden_states also work using config
|
||||
del inputs_dict["output_hidden_states"]
|
||||
config.output_hidden_states = True
|
||||
|
||||
check_hidden_states_output(inputs_dict, config, model_class)
|
||||
|
||||
# removed retain_grad and grad on decoder_hidden_states, as queries don't require grad
|
||||
def test_retain_grad_hidden_states_attentions(self):
|
||||
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
|
||||
config.output_hidden_states = True
|
||||
config.output_attentions = True
|
||||
|
||||
# no need to test all models as different heads yield the same functionality
|
||||
model_class = self.all_model_classes[0]
|
||||
model = model_class(config)
|
||||
model.to(torch_device)
|
||||
|
||||
inputs = self._prepare_for_class(inputs_dict, model_class)
|
||||
|
||||
outputs = model(**inputs)
|
||||
|
||||
output = outputs[0]
|
||||
|
||||
encoder_hidden_states = outputs.encoder_vision_hidden_states[0]
|
||||
encoder_attentions = outputs.encoder_attentions[0][0]
|
||||
encoder_hidden_states.retain_grad()
|
||||
encoder_attentions.retain_grad()
|
||||
|
||||
cross_attentions = outputs.decoder_attentions[-1][0]
|
||||
cross_attentions.retain_grad()
|
||||
|
||||
output.flatten()[0].backward(retain_graph=True)
|
||||
|
||||
self.assertIsNotNone(encoder_hidden_states.grad)
|
||||
self.assertIsNotNone(encoder_attentions.grad)
|
||||
self.assertIsNotNone(cross_attentions.grad)
|
||||
|
||||
def test_forward_signature(self):
|
||||
config, _ = self.model_tester.prepare_config_and_inputs_for_common()
|
||||
|
||||
for model_class in self.all_model_classes:
|
||||
model = model_class(config)
|
||||
signature = inspect.signature(model.forward)
|
||||
# signature.parameters is an OrderedDict => so arg_names order is deterministic
|
||||
arg_names = [*signature.parameters.keys()]
|
||||
|
||||
expected_arg_names = ["pixel_values", "input_ids"]
|
||||
self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names)
|
||||
|
||||
def test_different_timm_backbone(self):
|
||||
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
|
||||
|
||||
# let's pick a random timm backbone
|
||||
config.backbone = "tf_mobilenetv3_small_075"
|
||||
config.use_timm_backbone = True
|
||||
config.backbone_config = None
|
||||
config.backbone_kwargs = {"in_chans": 3, "out_indices": (2, 3, 4)}
|
||||
|
||||
for model_class in self.all_model_classes:
|
||||
model = model_class(config)
|
||||
model.to(torch_device)
|
||||
model.eval()
|
||||
with torch.no_grad():
|
||||
outputs = model(**self._prepare_for_class(inputs_dict, model_class))
|
||||
|
||||
if model_class.__name__ == "GroundingDinoForObjectDetection":
|
||||
expected_shape = (
|
||||
self.model_tester.batch_size,
|
||||
self.model_tester.num_queries,
|
||||
config.max_text_len,
|
||||
)
|
||||
self.assertEqual(outputs.logits.shape, expected_shape)
|
||||
|
||||
self.assertTrue(outputs)
|
||||
|
||||
def test_initialization(self):
|
||||
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
|
||||
|
||||
configs_no_init = _config_zero_init(config)
|
||||
for model_class in self.all_model_classes:
|
||||
model = model_class(config=configs_no_init)
|
||||
for name, param in model.named_parameters():
|
||||
if param.requires_grad:
|
||||
if (
|
||||
"level_embed" in name
|
||||
or "sampling_offsets.bias" in name
|
||||
or "text_param" in name
|
||||
or "vision_param" in name
|
||||
or "value_proj" in name
|
||||
or "output_proj" in name
|
||||
or "reference_points" in name
|
||||
):
|
||||
continue
|
||||
self.assertIn(
|
||||
((param.data.mean() * 1e9).round() / 1e9).item(),
|
||||
[0.0, 1.0],
|
||||
msg=f"Parameter {name} of model {model_class} seems not properly initialized",
|
||||
)
|
||||
|
||||
# Copied from tests.models.deformable_detr.test_modeling_deformable_detr.DeformableDetrModelTest.test_two_stage_training with DeformableDetr->GroundingDino
|
||||
def test_two_stage_training(self):
|
||||
model_class = GroundingDinoForObjectDetection
|
||||
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
|
||||
config.return_dict = True
|
||||
config.two_stage = True
|
||||
config.auxiliary_loss = True
|
||||
config.with_box_refine = True
|
||||
|
||||
model = model_class(config)
|
||||
model.to(torch_device)
|
||||
model.train()
|
||||
inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True)
|
||||
loss = model(**inputs).loss
|
||||
loss.backward()
|
||||
|
||||
def test_tied_weights_keys(self):
|
||||
config, _ = self.model_tester.prepare_config_and_inputs_for_common()
|
||||
config.tie_word_embeddings = True
|
||||
for model_class in self.all_model_classes:
|
||||
model_tied = model_class(config)
|
||||
|
||||
ptrs = collections.defaultdict(list)
|
||||
for name, tensor in model_tied.state_dict().items():
|
||||
ptrs[id_tensor_storage(tensor)].append(name)
|
||||
|
||||
# These are all the pointers of shared tensors.
|
||||
tied_params = [names for _, names in ptrs.items() if len(names) > 1]
|
||||
|
||||
tied_weight_keys = model_tied._tied_weights_keys if model_tied._tied_weights_keys is not None else []
|
||||
# Detect we get a hit for each key
|
||||
for key in tied_weight_keys:
|
||||
if not any(re.search(key, p) for group in tied_params for p in group):
|
||||
raise ValueError(f"{key} is not a tied weight key for {model_class}.")
|
||||
|
||||
# Remove the tied weights found from tied_params -> there should only be one left after
|
||||
for key in tied_weight_keys:
|
||||
for i in range(len(tied_params)):
|
||||
tied_params[i] = [p for p in tied_params[i] if re.search(key, p) is None]
|
||||
|
||||
# When sharing weights, GroundingDino also uses the shared ones in GroundingDinoDecoder.
# Therefore, unlike DeformableDetr, we expect each group length to be 2:
# one for self.bbox_embed in GroundingDinoForObjectDetection and another one
# in the decoder
tied_params = [group for group in tied_params if len(group) > 2]
|
||||
self.assertListEqual(
|
||||
tied_params,
|
||||
[],
|
||||
f"Missing `_tied_weights_keys` for {model_class}: add all of {tied_params} except one.",
|
||||
)
|
||||
|
||||
|
||||
# We will verify our results on an image of cute cats
|
||||
def prepare_img():
|
||||
image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png")
|
||||
return image
|
||||
|
||||
|
||||
def prepare_text():
|
||||
text = "a cat."
|
||||
return text
|
||||
|
||||
|
||||
@require_timm
@require_vision
@slow
class GroundingDinoModelIntegrationTests(unittest.TestCase):
    @cached_property
    def default_processor(self):
        return AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny") if is_vision_available() else None

    def test_inference_object_detection_head(self):
        model = GroundingDinoForObjectDetection.from_pretrained("IDEA-Research/grounding-dino-tiny").to(torch_device)

        processor = self.default_processor
        image = prepare_img()
        text = prepare_text()
        encoding = processor(images=image, text=text, return_tensors="pt").to(torch_device)

        with torch.no_grad():
            outputs = model(**encoding)

        expected_shape_logits = torch.Size((1, model.config.num_queries, model.config.d_model))
        self.assertEqual(outputs.logits.shape, expected_shape_logits)

        expected_boxes = torch.tensor(
            [[0.7674, 0.4136, 0.4572], [0.2566, 0.5463, 0.4760], [0.2585, 0.5442, 0.4641]]
        ).to(torch_device)
        expected_logits = torch.tensor(
            [[-4.8913, -0.1900, -0.2161], [-4.9653, -0.3719, -0.3950], [-5.9599, -3.3765, -3.3104]]
        ).to(torch_device)

        self.assertTrue(torch.allclose(outputs.logits[0, :3, :3], expected_logits, atol=1e-3))

        expected_shape_boxes = torch.Size((1, model.config.num_queries, 4))
        self.assertEqual(outputs.pred_boxes.shape, expected_shape_boxes)
        self.assertTrue(torch.allclose(outputs.pred_boxes[0, :3, :3], expected_boxes, atol=1e-4))

        # verify postprocessing
        results = processor.image_processor.post_process_object_detection(
            outputs, threshold=0.35, target_sizes=[image.size[::-1]]
        )[0]
        expected_scores = torch.tensor([0.4526, 0.4082]).to(torch_device)
        expected_slice_boxes = torch.tensor([344.8143, 23.1796, 637.4004, 373.8295]).to(torch_device)

        self.assertEqual(len(results["scores"]), 2)
        self.assertTrue(torch.allclose(results["scores"], expected_scores, atol=1e-3))
        self.assertTrue(torch.allclose(results["boxes"][0, :], expected_slice_boxes, atol=1e-2))

        # verify grounded postprocessing
        expected_labels = ["a cat", "a cat"]
        results = processor.post_process_grounded_object_detection(
            outputs=outputs,
            input_ids=encoding.input_ids,
            box_threshold=0.35,
            text_threshold=0.3,
            target_sizes=[image.size[::-1]],
        )[0]

        self.assertTrue(torch.allclose(results["scores"], expected_scores, atol=1e-3))
        self.assertTrue(torch.allclose(results["boxes"][0, :], expected_slice_boxes, atol=1e-2))
        self.assertListEqual(results["labels"], expected_labels)

    @require_torch_gpu
    def test_inference_object_detection_head_equivalence_cpu_gpu(self):
        processor = self.default_processor
        image = prepare_img()
        text = prepare_text()
        encoding = processor(images=image, text=text, return_tensors="pt")

        # 1. run model on CPU
        model = GroundingDinoForObjectDetection.from_pretrained("IDEA-Research/grounding-dino-tiny")

        with torch.no_grad():
            cpu_outputs = model(**encoding)

        # 2. run model on GPU
        model.to("cuda")
        encoding = encoding.to("cuda")
        with torch.no_grad():
            gpu_outputs = model(**encoding)

        # 3. assert equivalence
        for key in cpu_outputs.keys():
            self.assertTrue(torch.allclose(cpu_outputs[key], gpu_outputs[key].cpu(), atol=1e-3))

        expected_logits = torch.tensor(
            [[-4.8915, -0.1900, -0.2161], [-4.9658, -0.3716, -0.3948], [-5.9596, -3.3763, -3.3103]]
        )
        self.assertTrue(torch.allclose(cpu_outputs.logits[0, :3, :3], expected_logits, atol=1e-3))

        # assert postprocessing
        results_cpu = processor.image_processor.post_process_object_detection(
            cpu_outputs, threshold=0.35, target_sizes=[image.size[::-1]]
        )[0]

        result_gpu = processor.image_processor.post_process_object_detection(
            gpu_outputs, threshold=0.35, target_sizes=[image.size[::-1]]
        )[0]

        self.assertTrue(torch.allclose(results_cpu["scores"], result_gpu["scores"].cpu(), atol=1e-3))
        self.assertTrue(torch.allclose(results_cpu["boxes"], result_gpu["boxes"].cpu(), atol=1e-3))
tests/models/grounding_dino/test_processor_grounding_dino.py (new file, 253 lines)
@@ -0,0 +1,253 @@
# Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import os
import shutil
import tempfile
import unittest

import numpy as np
import pytest

from transformers import BertTokenizer, BertTokenizerFast, GroundingDinoProcessor
from transformers.models.bert.tokenization_bert import VOCAB_FILES_NAMES
from transformers.testing_utils import require_torch, require_vision
from transformers.utils import IMAGE_PROCESSOR_NAME, is_torch_available, is_vision_available


if is_torch_available():
    import torch

    from transformers.models.grounding_dino.modeling_grounding_dino import GroundingDinoObjectDetectionOutput

if is_vision_available():
    from PIL import Image

    from transformers import GroundingDinoImageProcessor


@require_torch
@require_vision
class GroundingDinoProcessorTest(unittest.TestCase):
    def setUp(self):
        self.tmpdirname = tempfile.mkdtemp()

        vocab_tokens = ["[UNK]","[CLS]","[SEP]","[PAD]","[MASK]","want","##want","##ed","wa","un","runn","##ing",",","low","lowest"]  # fmt: skip
        self.vocab_file = os.path.join(self.tmpdirname, VOCAB_FILES_NAMES["vocab_file"])
        with open(self.vocab_file, "w", encoding="utf-8") as vocab_writer:
            vocab_writer.write("".join([x + "\n" for x in vocab_tokens]))

        image_processor_map = {
            "do_resize": True,
            "size": None,
            "do_normalize": True,
            "image_mean": [0.5, 0.5, 0.5],
            "image_std": [0.5, 0.5, 0.5],
            "do_rescale": True,
            "rescale_factor": 1 / 255,
            "do_pad": True,
        }
        self.image_processor_file = os.path.join(self.tmpdirname, IMAGE_PROCESSOR_NAME)
        with open(self.image_processor_file, "w", encoding="utf-8") as fp:
            json.dump(image_processor_map, fp)

        self.batch_size = 7
        self.num_queries = 5
        self.embed_dim = 5
        self.seq_length = 5

    # Copied from tests.models.clip.test_processor_clip.CLIPProcessorTest.get_tokenizer with CLIP->Bert
    def get_tokenizer(self, **kwargs):
        return BertTokenizer.from_pretrained(self.tmpdirname, **kwargs)

    # Copied from tests.models.clip.test_processor_clip.CLIPProcessorTest.get_rust_tokenizer with CLIP->Bert
    def get_rust_tokenizer(self, **kwargs):
        return BertTokenizerFast.from_pretrained(self.tmpdirname, **kwargs)

    # Copied from tests.models.clip.test_processor_clip.CLIPProcessorTest.get_image_processor with CLIP->GroundingDino
    def get_image_processor(self, **kwargs):
        return GroundingDinoImageProcessor.from_pretrained(self.tmpdirname, **kwargs)

    # Copied from tests.models.clip.test_processor_clip.CLIPProcessorTest.tearDown
    def tearDown(self):
        shutil.rmtree(self.tmpdirname)

    # Copied from tests.models.clip.test_processor_clip.CLIPProcessorTest.prepare_image_inputs
    def prepare_image_inputs(self):
        """This function prepares a list of PIL images, or a list of numpy arrays if one specifies numpify=True,
        or a list of PyTorch tensors if one specifies torchify=True.
        """

        image_inputs = [np.random.randint(255, size=(3, 30, 400), dtype=np.uint8)]

        image_inputs = [Image.fromarray(np.moveaxis(x, 0, -1)) for x in image_inputs]

        return image_inputs

    def get_fake_grounding_dino_output(self):
        torch.manual_seed(42)
        return GroundingDinoObjectDetectionOutput(
            pred_boxes=torch.rand(self.batch_size, self.num_queries, 4),
            logits=torch.rand(self.batch_size, self.num_queries, self.embed_dim),
        )

    def get_fake_grounding_dino_input_ids(self):
        input_ids = torch.tensor([101, 1037, 4937, 1012, 102])
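        # note: under the standard bert-base-uncased vocabulary, the ids above appear to decode to
        # "[CLS] a cat . [SEP]", i.e. the same prompt used in the integration tests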
        return torch.stack([input_ids] * self.batch_size, dim=0)

    def test_post_process_grounded_object_detection(self):
        image_processor = self.get_image_processor()
        tokenizer = self.get_tokenizer()

        processor = GroundingDinoProcessor(tokenizer=tokenizer, image_processor=image_processor)

        grounding_dino_output = self.get_fake_grounding_dino_output()
        grounding_dino_input_ids = self.get_fake_grounding_dino_input_ids()

        post_processed = processor.post_process_grounded_object_detection(
            grounding_dino_output, grounding_dino_input_ids
        )

        self.assertEqual(len(post_processed), self.batch_size)
        self.assertEqual(list(post_processed[0].keys()), ["scores", "labels", "boxes"])
        self.assertEqual(post_processed[0]["boxes"].shape, (self.num_queries, 4))
        self.assertEqual(post_processed[0]["scores"].shape, (self.num_queries,))

        expected_scores = torch.tensor([0.7050, 0.7222, 0.7222, 0.6829, 0.7220])
        self.assertTrue(torch.allclose(post_processed[0]["scores"], expected_scores, atol=1e-4))

        expected_box_slice = torch.tensor([0.6908, 0.4354, 1.0737, 1.3947])
        self.assertTrue(torch.allclose(post_processed[0]["boxes"][0], expected_box_slice, atol=1e-4))

    # Copied from tests.models.clip.test_processor_clip.CLIPProcessorTest.test_save_load_pretrained_default with CLIP->GroundingDino,GroundingDinoTokenizer->BertTokenizer
    def test_save_load_pretrained_default(self):
        tokenizer_slow = self.get_tokenizer()
        tokenizer_fast = self.get_rust_tokenizer()
        image_processor = self.get_image_processor()

        processor_slow = GroundingDinoProcessor(tokenizer=tokenizer_slow, image_processor=image_processor)
        processor_slow.save_pretrained(self.tmpdirname)
        processor_slow = GroundingDinoProcessor.from_pretrained(self.tmpdirname, use_fast=False)

        processor_fast = GroundingDinoProcessor(tokenizer=tokenizer_fast, image_processor=image_processor)
        processor_fast.save_pretrained(self.tmpdirname)
        processor_fast = GroundingDinoProcessor.from_pretrained(self.tmpdirname)

        self.assertEqual(processor_slow.tokenizer.get_vocab(), tokenizer_slow.get_vocab())
        self.assertEqual(processor_fast.tokenizer.get_vocab(), tokenizer_fast.get_vocab())
        self.assertEqual(tokenizer_slow.get_vocab(), tokenizer_fast.get_vocab())
        self.assertIsInstance(processor_slow.tokenizer, BertTokenizer)
        self.assertIsInstance(processor_fast.tokenizer, BertTokenizerFast)

        self.assertEqual(processor_slow.image_processor.to_json_string(), image_processor.to_json_string())
        self.assertEqual(processor_fast.image_processor.to_json_string(), image_processor.to_json_string())
        self.assertIsInstance(processor_slow.image_processor, GroundingDinoImageProcessor)
        self.assertIsInstance(processor_fast.image_processor, GroundingDinoImageProcessor)

    # Copied from tests.models.clip.test_processor_clip.CLIPProcessorTest.test_save_load_pretrained_additional_features with CLIP->GroundingDino,GroundingDinoTokenizer->BertTokenizer
    def test_save_load_pretrained_additional_features(self):
        processor = GroundingDinoProcessor(tokenizer=self.get_tokenizer(), image_processor=self.get_image_processor())
        processor.save_pretrained(self.tmpdirname)

        tokenizer_add_kwargs = self.get_tokenizer(bos_token="(BOS)", eos_token="(EOS)")
        image_processor_add_kwargs = self.get_image_processor(do_normalize=False, padding_value=1.0)

        processor = GroundingDinoProcessor.from_pretrained(
            self.tmpdirname, bos_token="(BOS)", eos_token="(EOS)", do_normalize=False, padding_value=1.0
        )

        self.assertEqual(processor.tokenizer.get_vocab(), tokenizer_add_kwargs.get_vocab())
        self.assertIsInstance(processor.tokenizer, BertTokenizerFast)

        self.assertEqual(processor.image_processor.to_json_string(), image_processor_add_kwargs.to_json_string())
        self.assertIsInstance(processor.image_processor, GroundingDinoImageProcessor)

    # Copied from tests.models.clip.test_processor_clip.CLIPProcessorTest.test_image_processor with CLIP->GroundingDino
    def test_image_processor(self):
        image_processor = self.get_image_processor()
        tokenizer = self.get_tokenizer()

        processor = GroundingDinoProcessor(tokenizer=tokenizer, image_processor=image_processor)

        image_input = self.prepare_image_inputs()

        input_image_proc = image_processor(image_input, return_tensors="np")
        input_processor = processor(images=image_input, return_tensors="np")

        for key in input_image_proc.keys():
            self.assertAlmostEqual(input_image_proc[key].sum(), input_processor[key].sum(), delta=1e-2)

    # Copied from tests.models.clip.test_processor_clip.CLIPProcessorTest.test_tokenizer with CLIP->GroundingDino
    def test_tokenizer(self):
        image_processor = self.get_image_processor()
        tokenizer = self.get_tokenizer()

        processor = GroundingDinoProcessor(tokenizer=tokenizer, image_processor=image_processor)

        input_str = "lower newer"

        encoded_processor = processor(text=input_str)

        encoded_tok = tokenizer(input_str)

        for key in encoded_tok.keys():
            self.assertListEqual(encoded_tok[key], encoded_processor[key])

    def test_processor(self):
        image_processor = self.get_image_processor()
        tokenizer = self.get_tokenizer()

        processor = GroundingDinoProcessor(tokenizer=tokenizer, image_processor=image_processor)

        input_str = "lower newer"
        image_input = self.prepare_image_inputs()

        inputs = processor(text=input_str, images=image_input)

        self.assertListEqual(
            list(inputs.keys()), ["input_ids", "token_type_ids", "attention_mask", "pixel_values", "pixel_mask"]
        )

        # test if it raises when no input is passed
        with pytest.raises(ValueError):
            processor()

    # Copied from tests.models.clip.test_processor_clip.CLIPProcessorTest.test_tokenizer_decode with CLIP->GroundingDino
    def test_tokenizer_decode(self):
        image_processor = self.get_image_processor()
        tokenizer = self.get_tokenizer()

        processor = GroundingDinoProcessor(tokenizer=tokenizer, image_processor=image_processor)

        predicted_ids = [[1, 4, 5, 8, 1, 0, 8], [3, 4, 3, 1, 1, 8, 9]]

        decoded_processor = processor.batch_decode(predicted_ids)
        decoded_tok = tokenizer.batch_decode(predicted_ids)

        self.assertListEqual(decoded_tok, decoded_processor)

    # Copied from tests.models.clip.test_processor_clip.CLIPProcessorTest.test_model_input_names with CLIP->GroundingDino
    def test_model_input_names(self):
        image_processor = self.get_image_processor()
        tokenizer = self.get_tokenizer()

        processor = GroundingDinoProcessor(tokenizer=tokenizer, image_processor=image_processor)

        input_str = "lower newer"
        image_input = self.prepare_image_inputs()

        inputs = processor(text=input_str, images=image_input)

        self.assertListEqual(list(inputs.keys()), processor.model_input_names)
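
# Illustrative sketch (not part of the diff): the save/load tests above correspond to the usual
# user-facing round-trip; the local directory name below is a hypothetical example.
from transformers import GroundingDinoProcessor

processor = GroundingDinoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
processor.save_pretrained("./grounding-dino-tiny-processor")  # writes the tokenizer and image processor files
reloaded = GroundingDinoProcessor.from_pretrained("./grounding-dino-tiny-processor")
assert reloaded.tokenizer.get_vocab() == processor.tokenizer.get_vocab()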