update docs of length_penalty

Author: Manan Dey 2022-04-30 11:43:51 +05:30
parent 7152ed2bae
commit 466bf4800b


@@ -950,9 +950,9 @@ class GenerationMixin:
eos_token_id (`int`, *optional*):
The id of the *end-of-sequence* token.
length_penalty (`float`, *optional*, defaults to 1.0):
-            Exponential penalty to the length. 1.0 means no penalty. Set to values < 1.0 in order to encourage the
-            model to generate shorter sequences, to a value > 1.0 in order to encourage the model to produce longer
-            sequences.
+            Exponential penalty to the length. 1.0 means that the beam score is penalized by the sequence length. 0.0 means no penalty. Set to values < 0.0 in order to encourage the
+            model to generate longer sequences, to a value > 0.0 in order to encourage the model to produce shorter
+            sequences.
no_repeat_ngram_size (`int`, *optional*, defaults to 0):
If set to int > 0, all ngrams of that size can only occur once.
encoder_no_repeat_ngram_size (`int`, *optional*, defaults to 0):
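For context on what the `length_penalty` docstring describes, the length normalization applied to finished beam hypotheses in `transformers` divides the cumulative log-probability of a sequence by its length raised to `length_penalty` (as in `BeamHypotheses.add`). A minimal sketch of that scoring rule, with illustrative numbers chosen here (not taken from the commit):

```python
def beam_score(sum_logprobs: float, length: int, length_penalty: float) -> float:
    """Length-normalized beam score: cumulative log-probability divided by
    the sequence length raised to ``length_penalty``."""
    return sum_logprobs / (length ** length_penalty)

# Cumulative log-probabilities are negative, so the normalization's effect
# depends on length_penalty. Hypothetical short vs. long hypotheses:
short_hyp = beam_score(-4.0, length=4, length_penalty=1.0)   # -4.0 / 4**1 = -1.0
long_hyp = beam_score(-7.0, length=10, length_penalty=1.0)   # -7.0 / 10**1 = -0.7

# With length_penalty=0.0 the divisor is 1, so raw cumulative
# log-probabilities are compared unchanged:
raw_short = beam_score(-4.0, length=4, length_penalty=0.0)   # -4.0
raw_long = beam_score(-7.0, length=10, length_penalty=0.0)   # -7.0
```

The example values are arbitrary; the point is only the shape of the formula that the docstring in this diff is characterizing.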