MagicQuant GGUF Hybrids - Qwen3 30B A3B Thinking 2507
MagicQuant is an automated quantization, benchmarking, and evolutionary hybrid-GGUF search system for LLMs.
Each release includes models optimized to outperform standard baseline quants (Q8, Q6, Q5, Q4). If a baseline GGUF exists in this repo, the evolutionary engine couldn’t beat it. If a baseline is missing, it’s because a hybrid configuration outperformed it so completely that including the baseline would've been pointless.
These hybrid GGUFs are built to be as small, fast, and low-drift as possible while preserving model capability.
To dive deeper into how MagicQuant works, see the main repo: MagicQuant on GitHub (by MagicCodingMan)
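MagicQuant's exact pipeline lives in the repo above, but the core idea, an evolutionary search over per-tensor-group quant assignments, can be pictured with a toy sketch. Everything here is illustrative: the group letters echo the naming in the tables below, and the scoring function is a made-up stand-in for the real quantize-and-benchmark step, not MagicQuant's actual code.

```python
import random

# Toy sketch of an evolutionary hybrid-quant search (illustrative only).
# Group letters echo the hybrid names below; bit widths are rough averages.
TENSOR_GROUPS = ["Q", "K", "O", "R", "U", "D", "E", "H"]
QUANT_BITS = {"BF16": 16.0, "Q8_0": 8.5, "Q6_K": 6.6, "Q5_K": 5.5, "IQ4_NL": 4.5}

def fitness(cfg):
    """Stand-in for 'build the GGUF, benchmark PPL, measure size/TPS'.

    The real system scores each candidate by actual benchmarks; this fake
    cost just trades file size against a drift penalty so the loop runs.
    """
    bits = [QUANT_BITS[q] for q in cfg.values()]
    size = sum(bits)                               # proxy for file size
    drift = sum((16.0 - b) ** 2 for b in bits)     # proxy for precision loss
    return size + 0.05 * drift                     # lower = better hybrid

def evolve(pop_size=16, generations=50):
    pop = [{g: random.choice(list(QUANT_BITS)) for g in TENSOR_GROUPS}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]           # keep the fittest half
        children = []
        for parent in survivors:                   # mutate one group per child
            child = dict(parent)
            child[random.choice(TENSOR_GROUPS)] = random.choice(list(QUANT_BITS))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

print(evolve())
```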
Notes:
- The Hugging Face hardware-compatibility widget that reports bit width is usually wrong here: it doesn't understand hybrid mixes, so don't trust it.
- The naming scheme is documented on the MagicQuant Wiki.
- Tips: less precision loss means less brain damage, more TPS means faster generation, and smaller is always better... right?
Precision Loss Guide
- 0–0.1% → God-tier, scientifically exact
- 0.1–1% → True near-lossless, agent-ready
- 1–3% → Minimal loss, great for personal use
- 3–5% → Borderline, but still functional
- 5%+ → Toys, not tools, outside MagicQuant’s scope
Learn more about precision loss here.
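For the curious: the tables below are consistent with each domain's loss being the relative perplexity drift from the BF16 baseline, in percent, and avg_prec_loss being the mean across the three domains. A quick check in Python against the Q8_0 and BF16 rows from this card:

```python
# Precision loss = |PPL_quant - PPL_bf16| / PPL_bf16 * 100 (percent drift).
# The rows below are copied from the PPL tables in this card.
def prec_loss(ppl_quant: float, ppl_bf16: float) -> float:
    return abs(ppl_quant - ppl_bf16) / ppl_bf16 * 100.0

bf16 = {"gen": 6.2878, "code": 1.2903, "math": 5.6808}
q8_0 = {"gen": 6.2952, "code": 1.2894, "math": 5.6903}

losses = {k: prec_loss(q8_0[k], bf16[k]) for k in bf16}
print(losses)                              # ~{'gen': 0.1177, 'code': 0.0698, 'math': 0.1672}
print(sum(losses.values()) / len(losses))  # ~0.1182 -> matches avg_prec_loss for Q8_0
```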
Hybrid Models
Table - File Size + TPS + Avg Precision Loss
| Model | File Size (GB) | Bench TPS | Avg Precision Loss |
|---|---|---|---|
| mxfp4_moe-HQKOR-B16-U-Q5K-E-Q6K-D-Q8_0 | 36.31 | 85.41 | 0.0223% |
| Q8_0 | 30.25 | 99.66 | 0.1182% |
| Q5_K | 20.23 | 123.94 | 0.2558% |
| mxfp4_moe-H-B16-EUD-IQ4NL-R-Q6K-QKO-Q8_0 | 19.20 | 115.33 | 0.4621% |
| iq4_nl-QKOUD-IQ4NL-EH-Q8_0 | 16.33 | 145.90 | 0.8683% |
| iq4_nl-QKOUD-IQ4NL-E-MXFP4-H-Q5K | 16.07 | 153.05 | 1.1878% |
Table - PPL Columns
| Model | Gen PPL | Gen Err | Code PPL | Code Err | Math PPL | Math Err |
|---|---|---|---|---|---|---|
| mxfp4_moe-HQKOR-B16-U-Q5K-E-Q6K-D-Q8_0 | 6.2842 | 0.1284 | 1.2904 | 0.0068 | 5.6809 | 0.1047 |
| Q8_0 | 6.2952 | 0.1287 | 1.2894 | 0.0069 | 5.6903 | 0.1050 |
| Q5_K | 6.3057 | 0.1289 | 1.2963 | 0.0069 | 5.6818 | 0.1045 |
| mxfp4_moe-H-B16-EUD-IQ4NL-R-Q6K-QKO-Q8_0 | 6.3141 | 0.1294 | 1.2965 | 0.0070 | 5.7085 | 0.1055 |
| iq4_nl-QKOUD-IQ4NL-EH-Q8_0 | 6.3539 | 0.1294 | 1.3056 | 0.0071 | 5.7017 | 0.1040 |
| iq4_nl-QKOUD-IQ4NL-E-MXFP4-H-Q5K | 6.3772 | 0.1301 | 1.3056 | 0.0071 | 5.7351 | 0.1051 |
Table - Precision Loss Columns
| Model | General Loss (%) | Code Loss (%) | Math Loss (%) |
|---|---|---|---|
| mxfp4_moe-HQKOR-B16-U-Q5K-E-Q6K-D-Q8_0 | 0.0573 | 0.0078 | 0.0018 |
| Q8_0 | 0.1177 | 0.0698 | 0.1672 |
| Q5_K | 0.2847 | 0.4650 | 0.0176 |
| mxfp4_moe-H-B16-EUD-IQ4NL-R-Q6K-QKO-Q8_0 | 0.4183 | 0.4805 | 0.4876 |
| iq4_nl-QKOUD-IQ4NL-EH-Q8_0 | 1.0512 | 1.1858 | 0.3679 |
| iq4_nl-QKOUD-IQ4NL-E-MXFP4-H-Q5K | 1.4218 | 1.1858 | 0.9559 |
Baseline Models (Reference)
Table - File Size + TPS + Avg Precision Loss
| Model | File Size (GB) | Bench TPS | Avg Precision Loss |
|---|---|---|---|
| BF16 | 56.90 | 51.02 | 0.0000% |
| Q8_0 | 30.25 | 99.66 | 0.1182% |
| Q5_K | 20.23 | 123.94 | 0.2558% |
| Q6_K | 23.37 | 114.97 | 0.2965% |
| IQ4_NL | 16.26 | 138.47 | 1.0534% |
| Q4_K_M | 17.28 | 130.97 | 1.3851% |
| MXFP4_MOE | 15.15 | 141.87 | 10.2733% |
Table - PPL Columns
| Model | Gen PPL | Gen Err | Code PPL | Code Err | Math PPL | Math Err |
|---|---|---|---|---|---|---|
| BF16 | 6.2878 | 0.1285 | 1.2903 | 0.0069 | 5.6808 | 0.1047 |
| Q8_0 | 6.2952 | 0.1287 | 1.2894 | 0.0069 | 5.6903 | 0.1050 |
| Q5_K | 6.3057 | 0.1289 | 1.2963 | 0.0069 | 5.6818 | 0.1045 |
| Q6_K | 6.3172 | 0.1294 | 1.2927 | 0.0069 | 5.6942 | 0.1051 |
| IQ4_NL | 6.3497 | 0.1293 | 1.3042 | 0.0070 | 5.7432 | 0.1057 |
| Q4_K_M | 6.4310 | 0.1316 | 1.3029 | 0.0070 | 5.7320 | 0.1055 |
| MXFP4_MOE | 7.1681 | 0.1508 | 1.3566 | 0.0080 | 6.3444 | 0.1214 |
Table - Precision Loss Columns
| Model | General Loss (%) | Code Loss (%) | Math Loss (%) |
|---|---|---|---|
| BF16 | 0.0000 | 0.0000 | 0.0000 |
| Q8_0 | 0.1177 | 0.0698 | 0.1672 |
| Q5_K | 0.2847 | 0.4650 | 0.0176 |
| Q6_K | 0.4676 | 0.1860 | 0.2359 |
| IQ4_NL | 0.9844 | 1.0773 | 1.0984 |
| Q4_K_M | 2.2774 | 0.9765 | 0.9013 |
| MXFP4_MOE | 14.0001 | 5.1383 | 11.6815 |
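These hybrids load like any other GGUF. A minimal sketch using the llama-cpp-python bindings; the file name, context size, and GPU-offload setting are placeholders to adapt to your download and hardware:

```python
# Minimal sketch: running one of the hybrids above via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="iq4_nl-QKOUD-IQ4NL-EH-Q8_0.gguf",  # placeholder: use your local file name
    n_ctx=8192,        # placeholder context size
    n_gpu_layers=-1,   # offload all layers that fit on the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a hybrid GGUF quant is."}]
)
print(out["choices"][0]["message"]["content"])
```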
Support
I’m a solo developer working full-time for myself to achieve my dream, pouring nights and weekends into open protocols and tools that I hope make the world a little better. If you chip in, you're helping me keep the lights on while I keep shipping.
Click here to see ways to support - BTC, PayPal, GitHub Sponsors.
Or, just drop a like on the repo :)
Base model: Qwen/Qwen3-30B-A3B-Thinking-2507