Mirror of https://github.com/huggingface/transformers.git
Fix quality with ruff==0.0.253 (#21828)

fix quality with ruff 0.0.253

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
parent 92dfceb124
commit f95f60c829
@@ -347,7 +347,7 @@ BLIP_2_INPUTS_DOCSTRING = r"""
        pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
            Pixel values. Pixel values can be obtained using [`Blip2Processor`]. See [`Blip2Processor.__call__`] for
            details.

        input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
            Indices of input sequence tokens in the vocabulary of the language model. Input tokens can optionally be
            provided to serve as text prompt, which the language model can continue.
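For context on the two arguments this hunk touches, here is a minimal usage sketch. The checkpoint name (`Salesforce/blip2-opt-2.7b`) and the image URL are illustrative assumptions, not part of the commit:

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Checkpoint chosen for illustration only; any BLIP-2 checkpoint works the same way.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

# Arbitrary example image (URL is an assumption for the sketch).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The processor produces `pixel_values` from the image and, because a text
# prompt is supplied, `input_ids` that the language model continues.
inputs = processor(
    images=image,
    text="Question: how many cats are there? Answer:",
    return_tensors="pt",
)

generated_ids = model.generate(**inputs)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```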
@@ -366,10 +366,10 @@ BLIP_2_INPUTS_DOCSTRING = r"""
        decoder_input_ids (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
            Indices of decoder input sequence tokens in the vocabulary of the language model. Only relevant in case an
            encoder-decoder language model (like T5) is used.

            Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
            [`PreTrainedTokenizer.__call__`] for details. [What are decoder input IDs?](../glossary#decoder-input-ids)

        decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
            Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
            be used by default.
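And a sketch of the encoder-decoder case these last two arguments describe, assuming a Flan-T5 based checkpoint (`Salesforce/blip2-flan-t5-xl`, chosen for illustration) so the language model actually has a decoder:

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Flan-T5 checkpoint: the underlying language model is encoder-decoder,
# so `decoder_input_ids` / `decoder_attention_mask` apply.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text="Describe the image.", return_tensors="pt")

# Tokens fed to the T5 decoder (target text is an arbitrary example). Per the
# docstring above, when `decoder_attention_mask` is omitted, a mask ignoring
# pad tokens in `decoder_input_ids` is generated and a causal mask is applied.
decoder_input_ids = processor.tokenizer(
    "two cats lying on a couch", return_tensors="pt"
).input_ids

outputs = model(**inputs, decoder_input_ids=decoder_input_ids)
print(outputs.logits.shape)  # (batch_size, target_sequence_length, vocab_size)
```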