diff --git a/docs/source/en/generation_strategies.md b/docs/source/en/generation_strategies.md
index 44fd6623040..51f92e06103 100644
--- a/docs/source/en/generation_strategies.md
+++ b/docs/source/en/generation_strategies.md
@@ -82,7 +82,8 @@ Even if the default decoding strategy mostly works for your task, you can still
 commonly adjusted parameters include:
 
 - `max_new_tokens`: the maximum number of tokens to generate. In other words, the size of the output sequence, not
-including the tokens in the prompt.
+including the tokens in the prompt. As an alternative to using the output's length as a stopping criterion, you can
+choose to stop generation whenever the full generation exceeds some amount of time. To learn more, check [`StoppingCriteria`].
 - `num_beams`: by specifying a number of beams higher than 1, you are effectively switching from greedy search to beam
 search. This strategy evaluates several hypotheses at each time step and eventually chooses the hypothesis that has the
 overall highest probability for the entire sequence. This has the advantage of identifying high-probability
diff --git a/src/transformers/pipelines/text2text_generation.py b/src/transformers/pipelines/text2text_generation.py
index 39d4c3a5910..5b9ce06832d 100644
--- a/src/transformers/pipelines/text2text_generation.py
+++ b/src/transformers/pipelines/text2text_generation.py
@@ -39,8 +39,10 @@ class Text2TextGenerationPipeline(Pipeline):
     [{'generated_text': 'question: Who created the RuPERTa-base?'}]
     ```
 
-    Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
-
+    Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial). You can pass text
+    generation parameters to this pipeline to control stopping criteria, decoding strategy, and more. Learn more about
+    text generation parameters in [Text generation strategies](../generation_strategies) and [Text
+    generation](text_generation).
 
     This Text2TextGenerationPipeline pipeline can currently be loaded from [`pipeline`] using the following task
     identifier: `"text2text-generation"`.
diff --git a/src/transformers/pipelines/text_generation.py b/src/transformers/pipelines/text_generation.py
index 79da7ce3105..109971d8ac8 100644
--- a/src/transformers/pipelines/text_generation.py
+++ b/src/transformers/pipelines/text_generation.py
@@ -39,7 +39,10 @@ class TextGenerationPipeline(Pipeline):
     >>> outputs = generator("My tart needs some", num_return_sequences=4, return_full_text=False)
     ```
 
-    Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
+    Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial). You can pass text
+    generation parameters to this pipeline to control stopping criteria, decoding strategy, and more. Learn more about
+    text generation parameters in [Text generation strategies](../generation_strategies) and [Text
+    generation](text_generation).
 
     This language generation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
     `"text-generation"`.
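The two stopping conditions this diff documents, a token budget (`max_new_tokens`) and a wall-clock limit expressed as a [`StoppingCriteria`], can be sketched with a toy, library-free generation loop. Everything below (`fake_next_token`, the standalone `generate` helper) is a hypothetical illustration of the concept, not the transformers API:

```python
import time

def fake_next_token(step):
    """Stand-in for one decoding step; a real model would sample from logits."""
    return step  # hypothetical placeholder token id

def generate(max_new_tokens=None, max_time=None):
    """Generate until either the token budget or the time budget is exhausted."""
    start = time.monotonic()
    tokens = []
    while True:
        # Length-based stop, analogous to `max_new_tokens`.
        if max_new_tokens is not None and len(tokens) >= max_new_tokens:
            break
        # Time-based stop, analogous to a time-limit StoppingCriteria.
        if max_time is not None and time.monotonic() - start > max_time:
            break
        tokens.append(fake_next_token(len(tokens)))
    return tokens

print(len(generate(max_new_tokens=5)))  # 5
```

In the real library both conditions are checked the same way: after each decoding step, generation asks every configured criterion whether to halt, so length and time limits compose freely.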