
# Generation

Each framework has a generate method for text generation implemented in their respective `GenerationMixin` class:

- PyTorch [`~generation.GenerationMixin.generate`] is implemented in [`~generation.GenerationMixin`].
- TensorFlow [`~generation.TFGenerationMixin.generate`] is implemented in [`~generation.TFGenerationMixin`].
- Flax/JAX [`~generation.FlaxGenerationMixin.generate`] is implemented in [`~generation.FlaxGenerationMixin`].
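For example, in PyTorch the method is called directly on any model with a generation head. A minimal sketch, assuming the `gpt2` checkpoint purely for illustration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal LM checkpoint works the same way; `gpt2` is just an example
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
# `generate` returns the prompt tokens followed by the generated continuation
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```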
Regardless of your framework of choice, you can parameterize the generate method with a [`~generation.GenerationConfig`] class instance. Please refer to this class for the complete list of generation parameters, which control the behavior of the generation method.
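As a minimal sketch of that parameterization (the sampling values below are arbitrary examples, not recommended defaults):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Bundle generation parameters in a config object instead of loose kwargs
generation_config = GenerationConfig(
    max_new_tokens=32,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)

inputs = tokenizer("Once upon a time", return_tensors="pt")
output_ids = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```

Passing a config object keeps the decoding setup reusable and serializable, rather than scattering keyword arguments across call sites.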
To learn how to inspect a model's generation configuration, what the defaults are, how to change the parameters ad hoc, and how to create and save a customized generation configuration, refer to the text generation strategies guide. The guide also explains how to use related features, like token streaming.
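As a small illustration of that workflow (the checkpoint name and output directory are placeholders):

```python
from transformers import GenerationConfig

# Inspect the generation defaults shipped with a checkpoint
generation_config = GenerationConfig.from_pretrained("gpt2")
print(generation_config)

# Change a parameter ad hoc and save the customized config
generation_config.max_new_tokens = 64
generation_config.save_pretrained("my_generation_config")

# Reload the saved config later from the directory
reloaded = GenerationConfig.from_pretrained("my_generation_config")
```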
## GenerationConfig

[[autodoc]] generation.GenerationConfig
    - from_pretrained
    - from_model_config
    - save_pretrained
    - update
    - validate
    - get_generation_mode

[[autodoc]] generation.WatermarkingConfig

## GenerationMixin

[[autodoc]] generation.GenerationMixin
    - generate
    - compute_transition_scores

## TFGenerationMixin

[[autodoc]] generation.TFGenerationMixin
    - generate
    - compute_transition_scores

## FlaxGenerationMixin

[[autodoc]] generation.FlaxGenerationMixin
    - generate