Language generation
Based on the script run_generation.py.
Conditional text generation using the auto-regressive models of the library: GPT, GPT-2, GPT-J, Transformer-XL, XLNet, CTRL, BLOOM, LLAMA, OPT. A similar script is used for our official demo, Write With Transformer, where you can try out the different models available in the library.
Example usage:
python run_generation.py \
--model_type=gpt2 \
--model_name_or_path=gpt2
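Under the hood the script loads a causal language model and calls the library's generate() API. The snippet below is a minimal, stand-alone sketch of roughly what the command above does; the prompt string and sampling parameters are illustrative and are not the script's defaults.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the same checkpoint the command above points to.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Encode a prompt and continue it auto-regressively with sampling.
prompt = "Once upon a time"  # illustrative prompt, not from the script
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,  # length of the generated continuation
    do_sample=True,     # sample instead of greedy decoding
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))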