transformers/tests/quantization
Alazar 96429e74a8
Add support for GGUF Phi-3 (#31844)
* Update docs for GGUF supported models

* Add tensor mappings and define class GGUFPhi3Converter

* Fix tokenizer

* Working version

* Attempt to fix some CI failures

* Run ruff format

* Add vocab, merges, decoder methods like LlamaConverter

* Resolve conflicts since Qwen2Moe was added to gguf

- I missed one place when resolving the conflict
- I also made a mistake in tests_ggml.py; it has now been fixed to match the
master version.
2024-09-10 13:32:38 +02:00
aqlm_integration Cache: use batch_size instead of max_batch_size (#32657) 2024-08-16 11:48:45 +01:00
autoawq Skip tests properly (#31308) 2024-06-26 21:59:08 +01:00
bnb remove to restriction for 4-bit model (#33122) 2024-09-02 16:28:50 +02:00
eetq_integration [FEAT]: EETQ quantizer support (#30262) 2024-04-22 20:38:58 +01:00
fbgemm_fp8 Add new quant method (#32047) 2024-07-22 20:21:59 +02:00
ggml Add support for GGUF Phi-3 (#31844) 2024-09-10 13:32:38 +02:00
gptq 🚨 Remove dataset with restrictive license (#31452) 2024-06-17 17:56:51 +01:00
hqq Quantization / HQQ: Fix HQQ tests on our runner (#30668) 2024-05-06 11:33:52 +02:00
quanto_integration Skip tests properly (#31308) 2024-06-26 21:59:08 +01:00
torchao_integration Add TorchAOHfQuantizer (#32306) 2024-08-14 16:14:24 +02:00