thomwolf 2019-08-21 22:22:17 +02:00
parent 2f9397139d
commit e00b4ff1de

@@ -393,8 +393,8 @@ for batch in train_data:
     loss = model(batch)
     loss.backward()
     torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm) # Gradient clipping is not in AdamW anymore (so you can use amp without issue)
-    scheduler.step()
     optimizer.step()
+    scheduler.step()
     optimizer.zero_grad()
 ```
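
For reference, here is a minimal, self-contained sketch of the loop as it reads after this change, using a toy model and `torch.optim.AdamW`/`StepLR` as stand-ins for the repository's optimizer and warmup schedule (these names are placeholders, not taken from the diff). Since PyTorch 1.1, `scheduler.step()` is expected to be called after `optimizer.step()`, which is the order this commit switches to.

```python
import torch
from torch import nn

# Toy stand-ins for the real model and data loader (placeholders, not part of the diff).
model = nn.Linear(10, 1)
train_data = [torch.randn(8, 10) for _ in range(5)]

max_grad_norm = 1.0
# torch.optim.AdamW stands in for the library's AdamW; any optimizer/scheduler pair works here.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9)

for batch in train_data:
    loss = model(batch).pow(2).mean()  # dummy loss in place of a model that returns its loss
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)  # clip before the update
    optimizer.step()    # update the parameters first
    scheduler.step()    # then advance the learning-rate schedule
    optimizer.zero_grad()
```

Calling the two in the old order under PyTorch >= 1.1 applies the schedule one step too early and triggers a UserWarning about `lr_scheduler.step()` being called before `optimizer.step()`.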