Mirror of https://github.com/huggingface/transformers.git, synced 2025-07-31 02:02:21 +06:00
This fix addresses issue #382. GPT2 can now be trained in mixed precision, which I've confirmed with testing. I also tested unconditional generation on multiple seeds before and after changing 1e10 to 1e4, and there was no difference. Please let me know if there is anything else I can do to make this pull request better. Thanks for all your work!
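For context, the constant in question is the large negative value GPT-2's attention uses to blank out future positions before the softmax. The sketch below is illustrative (the helper name and tensor shapes are made up, not the library's exact code) and shows why 1e10 breaks under fp16 while 1e4 works: float16 tops out around 65504, so 1e10 overflows to inf and the softmax produces NaNs.

```python
import torch

# fp16 cannot represent 1e10: it overflows to inf, which poisons the softmax
# with NaNs; 1e4 fits comfortably below the fp16 maximum (~65504).
print(torch.finfo(torch.float16).max)           # 65504.0
print(torch.tensor(1e10, dtype=torch.float16))  # tensor(inf, dtype=torch.float16)
print(torch.tensor(1e4, dtype=torch.float16))   # tensor(10000., dtype=torch.float16)

def masked_attn_weights(w, b, mask_value=1e4):
    # b is a 0/1 causal mask; masked positions receive a large negative score
    # so that the softmax drives their probability to ~0.
    return w * b - mask_value * (1 - b)

scores = torch.randn(1, 1, 4, 4)                              # toy attention scores
causal = torch.tril(torch.ones(4, 4)).view(1, 1, 4, 4)        # lower-triangular mask
probs = torch.softmax(masked_attn_weights(scores, causal), dim=-1)
print(probs[0, 0])  # masked (upper-triangular) entries are ~0, rows sum to 1
```

Because the softmax only needs the masked scores to be far smaller than the real ones, -1e4 suppresses them just as effectively as -1e10, which is why generation output is unchanged.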
__init__.py
__main__.py
convert_gpt2_checkpoint_to_pytorch.py
convert_openai_checkpoint_to_pytorch.py
convert_tf_checkpoint_to_pytorch.py
convert_transfo_xl_checkpoint_to_pytorch.py
file_utils.py
modeling_gpt2.py
modeling_openai.py
modeling_transfo_xl_utilities.py
modeling_transfo_xl.py
modeling.py
optimization_openai.py
optimization.py
tokenization_gpt2.py
tokenization_openai.py
tokenization_transfo_xl.py
tokenization.py