Mirror of https://github.com/huggingface/transformers.git (synced 2025-07-05 22:00:09 +06:00)
Commit message:

* draft, run model as compressed/uncompressed mode
* draft
* run run_compressed=False
* run_compressed as attr
* set run_compressed=False using quantization_config
* remove redundant line
* make is_qat_trainable dependent on run_compressed status
* add tests
* lint
* full in docstring
* add decompress
* comments
* decompress if model is compressed and not run_compressed
* apply_quant_config logic fix -- populate state dict properly
* comments
* remove non compressed model
* make is_compressed a property
* cosmetic
* run apply_quant_config for non-compressed models -- populate scales and zero points
* add pathway for decompressing sparse models
* fix typo on is_quantization_compressed
* lint
* fix typo
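The commit above gates QAT trainability and decompression on the `run_compressed` status. A minimal sketch of that decision logic follows; the class and method names here (`QuantizationConfig`, `CompressedTensorsQuantizer`, `maybe_decompress`) are illustrative stand-ins, not the actual transformers implementation:

```python
# Illustrative sketch only: mimics the run_compressed / is_qat_trainable
# relationship described in the commit message. Names are hypothetical.
from dataclasses import dataclass


@dataclass
class QuantizationConfig:
    # When True, the model executes directly on compressed weights;
    # when False, weights are decompressed at load time.
    run_compressed: bool = True


class CompressedTensorsQuantizer:
    def __init__(self, config: QuantizationConfig):
        self.config = config
        self._is_compressed = True  # assume the checkpoint ships compressed

    @property
    def is_compressed(self) -> bool:
        return self._is_compressed

    @property
    def is_qat_trainable(self) -> bool:
        # QAT needs dense (decompressed) weights, so training is only
        # possible when NOT running in compressed mode.
        return not self.config.run_compressed

    def maybe_decompress(self) -> bool:
        # Decompress only if the checkpoint is compressed and we were
        # asked to run uncompressed.
        if self.is_compressed and not self.config.run_compressed:
            self._is_compressed = False
            return True
        return False
```

Under this sketch, passing `run_compressed=False` via the quantization config both enables QAT-style training and triggers a one-time decompression, matching the behavior the commit bullets describe.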
Files:

* __init__.py
* test_compressed_tensors.py
* test_load_sparse_model.py
* test_run_compressed_model.py