Mirror of https://github.com/huggingface/transformers.git
Synced 2025-07-13 09:40:06 +06:00
* Add early stopping patience and a minimum threshold the metric must improve by (to prevent early stopping) to the PyTorch Trainer
* Add early stopping test
* Set patience counter to 0 if best metric not defined yet
* Make early stopping a callback. Add callback event for updating the best metric for the early stopping callback to trigger on.
* Run make style
* Make function name sensible
* Improve new argument docstring wording and hope that flaky CI test passes.
* Use on_evaluation callback instead of custom. Remove some debug printing
* Move early stopping arguments and state into early stopping callback
* Run make style
* Remove old code
* Fix docs formatting. make style went rogue on me.
* Remove copied attributes and fix variable
* Add assertions on training arguments instead of mutating them. Move comment out of public docs.
* Make separate test for early stopping callback. Add test of invalid arguments.
* Run make style... I remembered before CI this time!
* Appease flake8
* Add EarlyStoppingCallback to callback docs
* Make EarlyStoppingCallback docstring match other callbacks.
* Fix typo in docs
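The callback described in this commit is used roughly as in the sketch below. This is a minimal, illustrative setup, not code from the commit itself: `model`, `train_dataset`, and `eval_dataset` are assumed to be defined elsewhere, the output directory and threshold values are arbitrary, and newer releases spell `evaluation_strategy` as `eval_strategy`.

```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

# Early stopping needs periodic evaluation plus a tracked "best" metric.
training_args = TrainingArguments(
    output_dir="out",                  # placeholder path
    evaluation_strategy="epoch",       # evaluate every epoch so the callback sees the metric
    load_best_model_at_end=True,       # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",
    greater_is_better=False,           # lower eval_loss is better
)

trainer = Trainer(
    model=model,                       # model and datasets are placeholders
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    callbacks=[
        # Stop when eval_loss has not improved by at least 0.01 for 3 evaluations in a row.
        EarlyStoppingCallback(early_stopping_patience=3, early_stopping_threshold=0.01),
    ],
)
trainer.train()
```

The patience counter and best-metric state live inside the callback rather than the Trainer, which is the design change the commit log describes.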
_static
imgs
internal
main_classes
model_doc
benchmarks.rst
bertology.rst
conf.py
contributing.md
converting_tensorflow_models.rst
custom_datasets.rst
examples.md
favicon.ico
glossary.rst
index.rst
installation.md
migration.md
model_sharing.rst
model_summary.rst
multilingual.rst
notebooks.md
perplexity.rst
philosophy.rst
preprocessing.rst
pretrained_models.rst
quicktour.rst
serialization.rst
task_summary.rst
testing.rst
tokenizer_summary.rst
training.rst