# Optimization

The `.optimization` module provides:

- an optimizer with weight decay fixed that can be used to fine-tune models, and
- several schedules in the form of schedule objects that inherit from `_LRSchedule`, and
- a gradient accumulation class to accumulate the gradients of multiple batches.
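For example, a typical PyTorch fine-tuning loop pairs a weight-decay optimizer with one of the warmup schedules documented below. This is only a sketch: `model`, `train_dataloader`, and the hyperparameter values are assumed placeholders.

```python
import torch
from transformers import get_linear_schedule_with_warmup

# `model` and `train_dataloader` are assumed to exist already.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)

num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=num_training_steps
)

for epoch in range(num_epochs):
    for batch in train_dataloader:
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        scheduler.step()  # advance the learning-rate schedule once per optimizer step
        optimizer.zero_grad()
```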
## AdaFactor (PyTorch)

[[autodoc]] Adafactor
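A minimal sketch of using `Adafactor` with a fixed external learning rate instead of its built-in relative-step heuristics; the hyperparameter values are illustrative assumptions, and `model` and `batch` are assumed to exist.

```python
from transformers import Adafactor

# Disable the relative-step / scale-parameter heuristics and use a fixed lr instead
# (an assumed, commonly used configuration for fine-tuning).
optimizer = Adafactor(
    model.parameters(),
    lr=1e-3,
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)

loss = model(**batch).loss  # `batch` is an assumed dict of input tensors
loss.backward()
optimizer.step()
optimizer.zero_grad()
```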
## AdamWeightDecay (TensorFlow)

[[autodoc]] AdamWeightDecay

[[autodoc]] create_optimizer
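A minimal sketch for a Keras workflow: `create_optimizer` returns both an `AdamWeightDecay` instance and the learning-rate schedule it was built with. The step counts below are placeholder assumptions, and `tf_model` is an assumed `TFPreTrainedModel`.

```python
from transformers import create_optimizer

# Returns (optimizer, lr_schedule); the optimizer's learning rate warms up
# and then decays over `num_train_steps`.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=10_000,   # assumed total number of optimization steps
    num_warmup_steps=500,     # assumed warmup length
    weight_decay_rate=0.01,
)

# Compile the (assumed) TF model with the optimizer as usual.
tf_model.compile(optimizer=optimizer)
```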
## Schedules

### Learning Rate Schedules (PyTorch)

[[autodoc]] SchedulerType

[[autodoc]] get_scheduler
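`get_scheduler` is a convenience dispatcher that maps a `SchedulerType` (or its string value) to the matching `get_*_schedule*` function. A minimal sketch, assuming an already-created torch optimizer and placeholder step counts:

```python
from transformers import SchedulerType, get_scheduler

scheduler = get_scheduler(
    SchedulerType.COSINE,     # equivalent to passing the string "cosine"
    optimizer=optimizer,      # an assumed, existing torch optimizer
    num_warmup_steps=100,
    num_training_steps=1_000,
)
```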
[[autodoc]] get_constant_schedule

[[autodoc]] get_constant_schedule_with_warmup

[[autodoc]] get_cosine_schedule_with_warmup

[[autodoc]] get_cosine_with_hard_restarts_schedule_with_warmup

[[autodoc]] get_linear_schedule_with_warmup
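To see the warmup-then-linear-decay shape concretely, one can step a throwaway optimizer and record the learning rate. A minimal sketch with assumed step counts:

```python
import torch
from transformers import get_linear_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # dummy parameter, for illustration only
optimizer = torch.optim.SGD(params, lr=1e-3)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10, num_training_steps=100
)

lrs = []
for _ in range(100):
    optimizer.step()
    scheduler.step()
    lrs.append(scheduler.get_last_lr()[0])

# lrs rises linearly to 1e-3 over the first 10 steps,
# then decays linearly back toward 0 by step 100.
```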
[[autodoc]] get_polynomial_decay_schedule_with_warmup

[[autodoc]] get_inverse_sqrt_schedule

[[autodoc]] get_wsd_schedule
### Warmup (TensorFlow)

[[autodoc]] WarmUp
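`WarmUp` wraps another Keras learning-rate schedule and applies a warmup phase before handing control to it. A minimal sketch, where the step counts and decay schedule are placeholder assumptions:

```python
import tensorflow as tf
from transformers import WarmUp

# Linearly decay from 2e-5 to 0 over the post-warmup steps (assumed choice of decay)...
decay = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=9_500,
    end_learning_rate=0.0,
)

# ...but first ramp the learning rate up to 2e-5 over 500 steps.
lr_schedule = WarmUp(
    initial_learning_rate=2e-5,
    decay_schedule_fn=decay,
    warmup_steps=500,
)

optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
```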
## Gradient Strategies

### GradientAccumulator (TensorFlow)

[[autodoc]] GradientAccumulator
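A minimal sketch of accumulating gradients across several batches before applying them once. `tf_model`, `optimizer`, `accumulation_batches`, and the `compute_gradients` helper are assumptions, not part of the library.

```python
from transformers import GradientAccumulator

accumulator = GradientAccumulator()

for batch in accumulation_batches:              # an assumed iterable of batches
    grads = compute_gradients(tf_model, batch)  # assumed helper returning per-variable gradients
    accumulator(grads)                          # add this batch's gradients to the running sum

# Apply the accumulated gradients once, then reset for the next accumulation cycle.
optimizer.apply_gradients(zip(accumulator.gradients, tf_model.trainable_variables))
accumulator.reset()
```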