Mirror of https://github.com/huggingface/transformers.git, synced 2025-07-03 12:50:06 +06:00.
Latest commit (squashed):

* switch to device-agnostic autocast in nemotron to align xpu behavior w/ cuda
* fix issue
* fix style
* use torch.autocast as other modeling code for decision_transformer & gpt2 & imagegpt
* refine
* update get_autocast_gpu_dtype to device-agnostic one
* fix style
* fix comments
* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>
Signed-off-by: Matrix YAO <matrix.yao@intel.com>
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
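The commit replaces CUDA-only autocast usage with PyTorch's device-agnostic `torch.autocast(device_type=...)` API so the same code path behaves consistently on CUDA and XPU. Below is a minimal sketch of that pattern; the function name and the softmax workload are illustrative assumptions, not the actual nemotron modeling code.

```python
import torch

def attention_probs_fp32(scores: torch.Tensor) -> torch.Tensor:
    # Illustrative example only, not the transformers change itself.
    # Derive the autocast device type from the tensor ("cuda", "xpu", "cpu", ...).
    device_type = scores.device.type

    # Old, CUDA-specific pattern:
    #     with torch.cuda.amp.autocast(enabled=False): ...
    # Device-agnostic replacement that also covers XPU:
    with torch.autocast(device_type=device_type, enabled=False):
        # Run the numerically sensitive op in full precision on any backend.
        return torch.nn.functional.softmax(scores.float(), dim=-1)
```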
Contents of the `tests/` directory:

Directories:

- bettertransformer
- deepspeed
- extended
- fixtures
- fsdp
- generation
- models
- optimization
- peft_integration
- pipelines
- quantization
- repo_utils
- sagemaker
- tensor_parallel
- tokenization
- trainer
- utils

Files:

- __init__.py
- causal_lm_tester.py
- test_backbone_common.py
- test_configuration_common.py
- test_feature_extraction_common.py
- test_image_processing_common.py
- test_image_transforms.py
- test_modeling_common.py
- test_pipeline_mixin.py
- test_processing_common.py
- test_sequence_feature_extraction_common.py
- test_tokenization_common.py
- test_training_args.py
- test_video_processing_common.py