Llama.cpp hybrid layer quantization of Qwen2.5-Coder-7B-Instruct by Qwen

Original model: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct

The hybrid quant employs different quantization levels on a per-layer basis to enable both high performance and a small file size at the same time. The quants employed are all K quants, avoiding IQ quants which process slowly on CPUs and older GPUs.

Q6_K_H layer quants are as follows:

Q5_K_L : attn_v = q8_0, attn_o = q6_k, ffn_d = q6_k
Q6_K_S : Q6_K
Q6_K_M : attn_v = q8_0, ffn_d = q8_0
Q6_K_L : attn_v = q8_0, attn_o = q8_0, ffn_d = q8_0

```
LAYER_TYPES='[
   [0 ,"Q6_K_M"],[1 ,"Q5_K_L"],[2 ,"Q5_K_M"],[3 ,"Q5_K_M"],[4 ,"Q5_K_M"],[5 ,"Q5_K_M"],[6 ,"Q5_K_M"],
   [7 ,"Q5_K_L"],[8 ,"Q5_K_M"],[9 ,"Q5_K_L"],[10,"Q5_K_M"],[11,"Q5_K_L"],[12,"Q5_K_M"],[13,"Q5_K_L"],
   [14,"Q6_K_S"],[15,"Q5_K_L"],[16,"Q6_K_S"],[17,"Q5_K_L"],[18,"Q6_K_S"],[19,"Q6_K_M"],[20,"Q6_K_S"],
   [21,"Q6_K_M"],[22,"Q6_K_L"],[23,"Q6_K_L"],[24,"Q6_K_L"],[25,"Q6_K_L"],[26,"Q6_K_L"],[27,"Q8_0"]
   ]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
```

A second smaller Q4_K_H quant is also available:

Q4_K_L : Q4_K_M + attn_o = q6_k

```
LAYER_TYPES='[
   [0 ,"Q4_K_L"],[1 ,"Q4_K_M"],[2 ,"Q4_K_S"],[3 ,"Q4_K_M"],[4 ,"Q4_K_S"],[5 ,"Q4_K_M"],[6 ,"Q4_K_S"],
   [7 ,"Q4_K_S"],[8 ,"Q4_K_M"],[9 ,"Q4_K_S"],[10,"Q4_K_M"],[11,"Q4_K_S"],[12,"Q4_K_M"],[13,"Q4_K_S"],
   [14,"Q4_K_M"],[15,"Q4_K_S"],[16,"Q4_K_M"],[17,"Q4_K_S"],[18,"Q4_K_M"],[19,"Q4_K_M"],[20,"Q4_K_M"],
   [21,"Q4_K_L"],[22,"Q4_K_M"],[23,"Q4_K_L"],[24,"Q4_K_M"],[25,"Q4_K_L"],[26,"Q4_K_L"],[27,"Q5_K_M"]
   ]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"
```

These quants were optimized over a small set of curated test prompts for code generation ability and then sanity checked for good performance on HumanEval.

Comparison:

| Quant | Size | PPL | Comment |
| ------ | ------ | --- | ------- |
| IQ4_XS | 4.25e9 | 9.4 | - |
| Q4_K_H | 4.8e9 | 9.4 | Hybrid quant with Q4_K embedding, Q6_K output |
| Q6_K | 6.3e9 | 9.3 | - |
| Q6_K_H | 6.2e9 | 9.3 | Hybrid quant with Q6_K embedding, Q6_K output |
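
Perplexity numbers like these are normally measured with llama.cpp's llama-perplexity tool; a minimal sketch follows, where the test corpus and context length are assumptions rather than the exact setup behind the table above:

```
# Sketch: measure perplexity of a quant with llama-perplexity.
# wiki.test.raw and -c 2048 are assumptions, not the corpus/settings used above.
./llama-perplexity -m Qwen2.5-Coder-7B-Instruct.Q4_K_H.gguf \
    -f wiki.test.raw -c 2048 -ngl 99
```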

Usage:

The model can be used for speculative decoding with Qwen2.5-Coder-0.5B-Instruct as the draft model, with no vocab translation needed. It is trained at 32k context, which can be extended to 128k using YaRN:

```
--rope-scaling yarn --yarn-orig-ctx 32768 --rope-scale 4
```

For context sizes other than 128k, set --rope-scale to the ratio of the configured context size to 32768.
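
As a worked example, a 64k context run would use --rope-scale 2 (65536 / 32768). A minimal llama-server sketch, with the model path, -ngl, and port as placeholders:

```
# Sketch: extend context to 64k with YaRN (rope scale = 65536 / 32768 = 2).
# Model path, -ngl, and --port are placeholders.
./llama-server -m Qwen2.5-Coder-7B-Instruct.Q6_K_H.gguf \
    -c 65536 --rope-scaling yarn --yarn-orig-ctx 32768 --rope-scale 2 \
    -ngl 99 --port 8080
```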

Approximate performance on a 12 GB VRAM RTX 4070 with weights and context in VRAM:

| Quant | KV quant | KV size | gen tps | spec gen tps | Comment |
| ------ | -------- | ------- | ------- | ------------ | ------- |
| Q4_K_H | F16 | 32k | 90 | 175 | VRAM left over |
| Q4_K_H | F16 | 83k | 90 | 134 | VRAM full |
| Q4_K_H | Q8_0 | 32k | 90 | 138 | - |
| Q4_K_H | Q8_0 | 128k | 90 | 176 | VRAM full |
| Q6_K_H | F16 | 32k | 75 | 130 | - |
| Q6_K_H | F16 | 66k | 75 | 124 | - |
| Q6_K_H | Q8_0 | 32k | 75 | 158 | - |
| Q6_K_H | Q8_0 | 102k | 75 | 124 | - |

For speculation, a fixed-length 10-token draft was used with a custom downstream speculator.
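
The numbers above use that custom speculator; for illustration only, a minimal sketch of stock llama.cpp speculative decoding with the 0.5B draft model (file names and draft settings are assumptions and will not reproduce the table exactly):

```
# Sketch: stock llama.cpp speculative decoding with a 0.5B draft model.
# File names and draft settings are assumptions; the table above used a
# custom downstream speculator instead.
./llama-server -m Qwen2.5-Coder-7B-Instruct.Q4_K_H.gguf \
    -md Qwen2.5-Coder-0.5B-Instruct.Q8_0.gguf --draft-max 10 --draft-min 1 \
    -c 32768 -ngl 99 -ngld 99
```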

Benchmarks:

A full set of code benchmarks for the two quants is available here: https://huggingface.co/spaces/steampunque/benchlm

Download the files below:

| Link | Type | Size/e9 B | Notes |
| ---- | ---- | --------- | ----- |
| Qwen2.5-Coder-7B-Instruct.Q4_K_H.gguf | Q4_K_H | 4.8e9 B | 1.4e9 B smaller than Q6_K_H |
| Qwen2.5-Coder-7B-Instruct.Q6_K_H.gguf | Q6_K_H | 6.2e9 B | ~Q6_K size |
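
A minimal sketch of fetching a quant with huggingface-cli (repo id taken from this card; pick whichever file you want):

```
# Sketch: download one of the quants with huggingface-cli.
huggingface-cli download steampunque/Qwen2.5-Coder-7B-Instruct-Hybrid-GGUF \
    Qwen2.5-Coder-7B-Instruct.Q4_K_H.gguf --local-dir .
```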

A discussion thread about the hybrid layer quant approach can be found in the llama.cpp GitHub repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
