
# Blenderbot Small

Supported frameworks: PyTorch, TensorFlow, Flax. Supported attention backends: FlashAttention, SDPA.

Note that [`BlenderbotSmallModel`] and [`BlenderbotSmallForConditionalGeneration`] are only used in combination with the checkpoint [facebook/blenderbot-90M](https://huggingface.co/facebook/blenderbot-90M). Larger Blenderbot checkpoints should instead be used with [`BlenderbotModel`] and [`BlenderbotForConditionalGeneration`].
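Since the `BlenderbotSmall*` classes are tied to the 90M checkpoint, a minimal conversational sketch might look like the following (the utterance is purely illustrative; where the installed backend supports it, SDPA or FlashAttention can also be requested via the `attn_implementation` argument of `from_pretrained`):

```python
from transformers import BlenderbotSmallForConditionalGeneration, BlenderbotSmallTokenizer

# Load the 90M checkpoint that pairs with the BlenderbotSmall* classes.
checkpoint = "facebook/blenderbot-90M"
model = BlenderbotSmallForConditionalGeneration.from_pretrained(checkpoint)
tokenizer = BlenderbotSmallTokenizer.from_pretrained(checkpoint)

# Encode a single user utterance.
utterance = "My friends are cool but they eat too many carbs."
inputs = tokenizer([utterance], return_tensors="pt")

# Generate a reply and decode it back to text.
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```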

## Overview

The Blender chatbot model was proposed in [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, and Jason Weston on 30 Apr 2020.

The abstract of the paper is the following:

Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.

This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The authors' code can be found [here](https://github.com/facebookresearch/ParlAI).

## Usage tips

Blenderbot Small is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than the left.
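Because the position embeddings are absolute, right padding keeps the real tokens aligned with the positions seen during training. A minimal sketch of batching with right padding (the sentences are illustrative, and the padding side is set explicitly even though right padding is typically the default):

```python
from transformers import BlenderbotSmallTokenizer

tokenizer = BlenderbotSmallTokenizer.from_pretrained("facebook/blenderbot-90M")
# Make the recommended padding side explicit.
tokenizer.padding_side = "right"

batch = ["hello, how are you today?", "what is your favorite book?"]
inputs = tokenizer(batch, padding=True, return_tensors="pt")

# Padding positions (attention_mask == 0) appear at the end of each sequence, not the start.
print(inputs["attention_mask"])
```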

## Resources

## BlenderbotSmallConfig

[[autodoc]] BlenderbotSmallConfig

## BlenderbotSmallTokenizer

[[autodoc]] BlenderbotSmallTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## BlenderbotSmallTokenizerFast

[[autodoc]] BlenderbotSmallTokenizerFast

## BlenderbotSmallModel

[[autodoc]] BlenderbotSmallModel
    - forward

## BlenderbotSmallForConditionalGeneration

[[autodoc]] BlenderbotSmallForConditionalGeneration
    - forward

## BlenderbotSmallForCausalLM

[[autodoc]] BlenderbotSmallForCausalLM
    - forward

## TFBlenderbotSmallModel

[[autodoc]] TFBlenderbotSmallModel
    - call

## TFBlenderbotSmallForConditionalGeneration

[[autodoc]] TFBlenderbotSmallForConditionalGeneration
    - call

## FlaxBlenderbotSmallModel

[[autodoc]] FlaxBlenderbotSmallModel
    - __call__
    - encode
    - decode

## FlaxBlenderbotSmallForConditionalGeneration

[[autodoc]] FlaxBlenderbotSmallForConditionalGeneration
    - __call__
    - encode
    - decode