mirror of https://github.com/huggingface/transformers.git
update docs of length_penalty

parent 7152ed2bae
commit 466bf4800b
@@ -950,9 +950,9 @@ class GenerationMixin:
             eos_token_id (`int`, *optional*):
                 The id of the *end-of-sequence* token.
             length_penalty (`float`, *optional*, defaults to 1.0):
-                Exponential penalty to the length. 1.0 means no penalty. Set to values < 1.0 in order to encourage the
-                model to generate shorter sequences, to a value > 1.0 in order to encourage the model to produce longer
-                sequences.
+                Exponential penalty to the length. 1.0 means that the beam score is penalized by the sequence length. 0.0 means no penalty. Set to values < 0.0 in order to encourage the
+                model to generate longer sequences, to a value > 0.0 in order to encourage the model to produce shorter
+                sequences.
             no_repeat_ngram_size (`int`, *optional*, defaults to 0):
                 If set to int > 0, all ngrams of that size can only occur once.
             encoder_no_repeat_ngram_size (`int`, *optional*, defaults to 0):
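For context on the parameter this hunk documents: with beam-based decoding, `transformers` ranks finished hypotheses by their cumulative log-probability divided by `length ** length_penalty`. The snippet below is a minimal sketch of that scoring rule plus a usage example; it is illustrative only, not copied from the library source, and the `t5-small` checkpoint and example sentence are placeholders.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Sketch of the beam-hypothesis scoring rule that `length_penalty` controls
# (illustrative only): the cumulative log-probability of a finished hypothesis
# is divided by its length raised to the power length_penalty.
def beam_score(sum_logprobs: float, length: int, length_penalty: float = 1.0) -> float:
    return sum_logprobs / (length ** length_penalty)

# Passing the parameter through `generate`; it only has an effect with
# beam-based decoding (num_beams > 1). "t5-small" is just a placeholder checkpoint.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, length_penalty=2.0)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))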