Mirror of https://github.com/huggingface/transformers.git
Synced 2025-08-03 03:31:05 +06:00
* more torch.hpu patches
* increase top_k, because combining Temperature, TopP, and TopK results in flaky behavior that ends up killing beams early
* remove temporary fix
* fix scatter operation when input and src are the same
* trigger
* fix and reduce
* skip finding batch size, as it makes the HPU go loco
* fix FSDP (yay, all tests are passing)
* fix checking equal NaN values
* style
* remove models list
* order
* rename to cuda_extensions
* Update src/transformers/trainer.py
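The top_k item above notes that temperature, top-p, and top-k interact: a low temperature sharpens the distribution, so a nucleus (top-p) cutoff applied afterwards can collapse the candidate set to very few tokens and kill beams early. A minimal stdlib-only sketch of that interaction (the `sample_filter` helper and its parameters are hypothetical, not the transformers implementation):

```python
import math

def sample_filter(logits, temperature=1.0, top_k=0, top_p=1.0):
    # Sketch of how generation pipelines typically combine the three knobs:
    # scale logits by temperature, softmax, then apply top-k and top-p cuts.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Consider tokens in order of descending probability.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    if top_k > 0:
        order = order[:top_k]          # top-k: keep only the k most likely
    kept, cum = [], 0.0
    for i in order:                     # top-p: keep the smallest prefix
        kept.append(i)                  # whose cumulative mass reaches top_p
        cum += probs[i]
        if cum >= top_p:
            break
    return kept

logits = [5.0, 4.0, 3.0, 2.0, 1.0]
# Low temperature concentrates probability mass, so top_p keeps fewer tokens;
# a higher temperature flattens the distribution and more candidates survive.
few = sample_filter(logits, temperature=0.5, top_k=3, top_p=0.9)
more = sample_filter(logits, temperature=2.0, top_k=3, top_p=0.9)
```

With a sharp distribution the nucleus cut leaves only two candidates, while the flatter one keeps all three that top-k allowed through, which is why bumping top_k alone was not enough to stabilize the beams.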
Directory contents:

* conda/
* ISSUE_TEMPLATE/
* scripts/
* workflows/
* PULL_REQUEST_TEMPLATE.md