Fix data2vec-audio note about attention mask (#27116)

This commit is contained in:
Thien Tran 2023-10-30 18:52:24 +08:00 committed by GitHub
parent 160432110c
commit e830495c1c

@@ -786,12 +786,11 @@ DATA2VEC_AUDIO_INPUTS_DOCSTRING = r"""
<Tip warning={true}>
-        `attention_mask` should only be passed if the corresponding processor has `config.return_attention_mask ==
-        True`. For all models whose processor has `config.return_attention_mask == False`, such as
-        [data2vec-audio-base](https://huggingface.co/facebook/data2vec-audio-base-960h), `attention_mask` should
-        **not** be passed to avoid degraded performance when doing batched inference. For such models
-        `input_values` should simply be padded with 0 and passed without `attention_mask`. Be aware that these
-        models also yield slightly different results depending on whether `input_values` is padded or not.
+        `attention_mask` should be passed if the corresponding processor has `config.return_attention_mask ==
+        True`, which is the case for all pre-trained Data2Vec Audio models. Be aware that even with
+        `attention_mask`, zero-padded inputs will have slightly different outputs compared to non-padded inputs
+        because there is more than one convolutional layer in the positional encodings. For a more detailed
+        explanation, see [here](https://github.com/huggingface/transformers/issues/25621#issuecomment-1713759349).
</Tip>
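The edge effect described in the note can be reproduced with plain PyTorch. The sketch below uses two toy `Conv1d` layers (the shapes are illustrative, not the real Data2Vec Audio positional-encoding config): after the first convolution the zero-padded region picks up nonzero bias values, and the second convolution leaks those back into the tail of the real sequence, so the padded and unpadded runs no longer match there.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two stacked convolutions standing in for the positional-encoding
# stack; kernel sizes and channels are toy values, not the model's.
convs = nn.Sequential(
    nn.Conv1d(1, 1, kernel_size=5, padding=2),
    nn.Conv1d(1, 1, kernel_size=5, padding=2),
)

x = torch.randn(1, 1, 10)                         # unpadded sequence
x_pad = torch.cat([x, torch.zeros(1, 1, 6)], -1)  # zero-padded to length 16

y = convs(x)                     # outputs for the unpadded run
y_pad = convs(x_pad)[..., :10]   # same 10 positions of the padded run

# A single conv would give identical values here (its implicit padding
# is also zeros), but the first conv's bias makes the padded region
# nonzero, and the second conv mixes that back into the sequence edge.
print(torch.allclose(y, y_pad))  # False: the last positions diverge
```

With a single convolution the two runs would agree exactly, which is why the note singles out there being more than one convolutional layer.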