transformers/docs/source
Lorenzo Ampil f16540fcba
Pipeline for Text Generation: GenerationPipeline (#3758)
* Add GenerationPipeline

* Fix parameter names

* Correct __call__ parameters

* Add model type attribute and correct function calls for prepare_input

* Take out trailing commas from init attributes

* Remove unnecessary tokenization line

* Implement support for multiple text inputs

* Apply generation support for multiple input text prompts

* Take out tensor coercion

* Take out batch index

* Add text prompt to return sequence

* Squeeze token tensor before decoding

* Return only a single list of sequences if only one prompt was used

* Correct results variable name

* Add GenerationPipeline to SUPPORTED_TASKS with the alias , initialized with GPT2

* Registered AutoModelWithLMHead for both pt and tf

* Update docstring for GenerationPipeline

* Add kwargs parameter to model.generate

* Take out kwargs parameter after all

* Add generation pipeline example in pipeline docstring

* Fix max length by squeezing tokens tensor

* Apply ensure_tensor_on_device to pytorch tensor

* Include generation step in torch.no_grad

* Take out input from prepare_xlm_input and set 'en' as default xlm_language

* Apply framework specific encoding during prepare_input

* Format w make style

* Move GenerationPipeline import to follow proper import sorting

* Take out trailing comma from generation dict

* Apply requested changes

* Change name to TextGenerationPipeline

* Apply TextGenerationPipeline rename to __init__

* Changing alias to

* Set input mapping as input to ensure_tensor_on_device

* Fix assertion placement

* Add test_text_generation

* Add TextGenerationPipeline to PipelineCommonTests

* Take out whitespace

* Format __init__ w black

* Fix __init__ style

* Format __init__

* Add line to end of __init__

* Correct model tokenizer set for test_text_generation

* Ensure a list of lists is returned, not a list of strings (to pass test)

* Limit test models to 3 to reduce runtime and address the CircleCI timeout error

* Update src/transformers/pipelines.py

Co-Authored-By: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/transformers/pipelines.py

Co-Authored-By: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/transformers/pipelines.py

Co-Authored-By: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/transformers/pipelines.py

Co-Authored-By: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/transformers/pipelines.py

Co-Authored-By: Patrick von Platen <patrick.v.platen@gmail.com>

* Update tests/test_pipelines.py

Co-Authored-By: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/transformers/pipelines.py

Co-Authored-By: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/transformers/pipelines.py

Co-Authored-By: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/transformers/pipelines.py

Co-Authored-By: Patrick von Platen <patrick.v.platen@gmail.com>

* Remove argument docstring and __init__, add additional __call__ arguments, and reformat results to a list of dicts

* Fix blank result list

* Add TextGenerationPipeline to pipelines.rst

* Update src/transformers/pipelines.py

Co-Authored-By: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/transformers/pipelines.py

Co-Authored-By: Patrick von Platen <patrick.v.platen@gmail.com>

* Fix typos from adding PADDING_TEXT_TOKEN_LENGTH

* Fix incorrectly moved result list

* Update src/transformers/pipelines.py

Co-Authored-By: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/transformers/pipelines.py

* Update src/transformers/pipelines.py

* Update src/transformers/pipelines.py

* Update src/transformers/pipelines.py

* Update src/transformers/pipelines.py

* Update src/transformers/pipelines.py

* Update src/transformers/pipelines.py

* Update src/transformers/pipelines.py

* Update src/transformers/pipelines.py

* Update src/transformers/pipelines.py

* Update src/transformers/pipelines.py

* Update src/transformers/pipelines.py

Co-Authored-By: Patrick von Platen <patrick.v.platen@gmail.com>

* Add back generation line and make style

* Take out blank whitespace

* Apply new alias, text-generation, to test_pipelines (see the usage sketch after this commit log)

* Fix text generation alias in test

* Update src/transformers/pipelines.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-04-22 09:37:03 -04:00
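
For context, a minimal usage sketch of the pipeline introduced in this PR under its final text-generation alias. GPT-2 as the default model and the "list of list" return shape come from the commits above; the generated_text result key and the max_length argument are assumptions, not taken from this page.

```python
# Minimal sketch (illustrative, not part of the original commit message):
# using the TextGenerationPipeline via its "text-generation" alias.
from transformers import pipeline

generator = pipeline("text-generation")  # defaults to GPT-2 per this PR

# A single prompt returns a single list of result dicts.
single = generator("The quick brown fox", max_length=30)
print(single[0]["generated_text"])  # result key assumed here

# Multiple prompts return one list of results per prompt (a list of lists).
batch = generator(["Hello, I am a language model", "Deep learning is"], max_length=30)
for results in batch:
    print(results[0]["generated_text"])
```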
_static Adding usage examples for common tasks (#2850) 2020-02-25 13:48:24 -05:00
imgs GPU text generation: Moved the encoded_prompt to correct device 2020-01-06 15:11:12 +01:00
main_classes Pipeline for Text Generation: GenerationPipeline (#3758) 2020-04-22 09:37:03 -04:00
model_doc Cleanup fast tokenizers integration (#3706) 2020-04-18 13:43:57 +02:00
benchmarks.md GPU text generation: Moved the encoded_prompt to correct device 2020-01-06 15:11:12 +01:00
bertology.rst Fixes #3877 2020-04-22 01:15:10 +00:00
conf.py Release: v2.8.0 2020-04-06 10:03:53 -04:00
converting_tensorflow_models.rst GPU text generation: Moved the encoded_prompt to correct device 2020-01-06 15:11:12 +01:00
examples.md [docs] Doc tweaks 2019-09-26 18:19:51 -04:00
favicon.ico Adding usage examples for common tasks (#2850) 2020-02-25 13:48:24 -05:00
glossary.rst Can test examples spread over multiple blocks 2020-01-23 09:38:45 -05:00
index.rst [Docs] Add DialoGPT (#3755) 2020-04-16 09:04:32 +02:00
installation.md CPU/GPU memory benchmarking utilities - Remove support for python 3.5 (now only 3.6+) (#3186) 2020-03-17 10:17:11 -04:00
migration.md weigths → weights 2020-04-04 15:03:26 -04:00
model_sharing.md [doc] --organization tweak 2020-03-10 16:52:44 -04:00
multilingual.rst Fix failing doc samples 2020-03-04 19:11:31 -05:00
notebooks.md Update notebooks (#3620) 2020-04-06 14:32:39 -04:00
pretrained_models.rst [Docs] Add DialoGPT (#3755) 2020-04-16 09:04:32 +02:00
quickstart.md Delete all mentions of Model2Model (#3019) 2020-02-26 11:36:27 -05:00
serialization.rst [docs] The use of do_lower_case in scripts is on its way to deprecation (#3738) 2020-04-10 12:34:04 -04:00
torchscript.rst GPU text generation: Moved the encoded_prompt to correct device 2020-01-06 15:11:12 +01:00
usage.rst [Docs] Add usage examples for translation and summarization (#3538) 2020-03-31 09:36:03 -04:00