* more torch.hpu patches
* increase top_k, because combining Temperature, TopP and TopK results in flaky behavior that ends up killing beams early (see the sampling sketch after this list)
* remove temporary fix
* fix the scatter operation when input and src are the same tensor (see the scatter sketch after this list)
* trigger
* fix and reduce
* skip automatic batch size finding, as it makes the HPU unstable
* fix FSDP (yay, all tests are passing)
* fix checking for equal NaN values (see the NaN comparison sketch after this list)
* style
* remove models list
* order
* rename to cuda_extensions
* Update src/transformers/trainer.py
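
A minimal, illustrative sketch (not the library's logits-warper code) of why combining Temperature, TopP and a small TopK leaves very few candidate tokens, so sampled beams duplicate and get killed early; the vocabulary size, filter order and parameter values below are made-up assumptions:

```python
import torch

torch.manual_seed(0)
logits = torch.randn(50)  # fake next-token logits over a 50-token vocabulary

temperature, top_p, top_k = 0.7, 0.9, 2  # a small top_k is the problematic setting

# Temperature < 1 sharpens the distribution.
probs = torch.softmax(logits / temperature, dim=-1)

# top_k keeps only the k most likely tokens.
topk_probs, topk_idx = torch.topk(probs, top_k)
topk_probs = topk_probs / topk_probs.sum()

# top_p then drops the tail of that already tiny set: tokens whose preceding
# cumulative mass reaches top_p are removed.
cum_before = torch.cumsum(topk_probs, dim=-1) - topk_probs
kept = topk_idx[cum_before < top_p]
print(f"candidates left after all three filters: {kept.numel()}")
# With so few candidates, sampled beams quickly collapse onto each other;
# raising top_k leaves more room for distinct beams.
```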
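A sketch of the scatter issue, assuming the problem is that an in-place `scatter_` reads from a `src` that aliases the destination; `safe_scatter_` is a hypothetical helper, not the actual patch:

```python
import torch

def safe_scatter_(dest: torch.Tensor, dim: int, index: torch.Tensor, src: torch.Tensor) -> torch.Tensor:
    # If dest and src share storage, an in-place scatter may read values it has
    # already overwritten; scatter from a copy of src instead.
    if dest.data_ptr() == src.data_ptr():
        src = src.clone()
    return dest.scatter_(dim, index, src)

x = torch.arange(6, dtype=torch.float32).reshape(2, 3)
index = torch.tensor([[2, 1, 0], [0, 2, 1]])
safe_scatter_(x, 1, index, x)  # scattering x into itself
print(x)
```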
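And a small sketch of the NaN comparison fix: `NaN != NaN` by definition, so a plain equality check reports a mismatch even when both tensors hold NaN in the same positions. `nan_safe_equal` below is a hypothetical helper shown only to illustrate the idea:

```python
import torch

a = torch.tensor([1.0, float("nan"), 3.0])
b = torch.tensor([1.0, float("nan"), 3.0])

print(torch.equal(a, b))  # False: NaN never compares equal to NaN

def nan_safe_equal(x: torch.Tensor, y: torch.Tensor) -> bool:
    # Treat NaNs at matching positions as equal.
    both_nan = torch.isnan(x) & torch.isnan(y)
    return bool(torch.all((x == y) | both_nan))

print(nan_safe_equal(a, b))  # True
```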