Mirror of https://github.com/huggingface/transformers.git (synced 2025-07-31 02:02:21 +06:00)
link for optimizer names (#32400)

* link for optimizer names

  Add a note and a link to where the user can easily find more optimizer names, because there are many more optimizers than the ones mentioned in the docstring.

* make fixup
This commit is contained in:
parent
078d5a88cd
commit
1dde50c7d2
@@ -611,8 +611,9 @@ class TrainingArguments:
             The options should be separated by whitespaces.
         optim (`str` or [`training_args.OptimizerNames`], *optional*, defaults to `"adamw_torch"`):
-            The optimizer to use: adamw_hf, adamw_torch, adamw_torch_fused, adamw_apex_fused, adamw_anyprecision or
-            adafactor.
+            The optimizer to use, such as "adamw_hf", "adamw_torch", "adamw_torch_fused", "adamw_apex_fused", "adamw_anyprecision",
+            "adafactor". See `OptimizerNames` in [training_args.py](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py)
+            for a full list of optimizers.
         optim_args (`str`, *optional*):
             Optional arguments that are supplied to AnyPrecisionAdamW.
         group_by_length (`bool`, *optional*, defaults to `False`):
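For reference, the full set of values accepted by `optim` can also be listed programmatically from the `OptimizerNames` enum that the new docstring links to. A minimal sketch, assuming a transformers version where `OptimizerNames` is importable from `transformers.training_args`; the output directory name and the `"adafactor"` choice are purely illustrative:

    from transformers import TrainingArguments
    from transformers.training_args import OptimizerNames

    # Enumerate every optimizer name accepted by the `optim` argument.
    for name in OptimizerNames:
        print(name.value)

    # Pick one of the listed names when constructing the training arguments.
    args = TrainingArguments(output_dir="tmp_out", optim="adafactor")
    print(args.optim)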