See DeepSeek-V3.2 5.5bit MLX in action in the demonstration video.

The q5.5 quant achieved a perplexity of 1.141 in our testing (full results by bit width below).

Quantization   Perplexity
q2.5           41.293
q3.5            1.900
q4.5            1.168
q5.5            1.141
q6.5            1.128
q8.5            1.128
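
For reference, perplexity scores like those above are the exponential of the mean negative log-likelihood the model assigns to an evaluation text, so lower is better and 1.0 is the floor. A minimal sketch of the arithmetic (the per-token log-probabilities are made-up illustrative values, not measurements from this model):

    import math

    def perplexity(token_logprobs):
        # Perplexity = exp(mean negative log-likelihood over all tokens)
        nll = -sum(token_logprobs) / len(token_logprobs)
        return math.exp(nll)

    # Hypothetical per-token log-probabilities from an evaluation run
    print(perplexity([-0.05, -0.20, -0.10, -0.15]))  # ~1.13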

Usage Notes

M3 Ultra (512 GB RAM) using the Inferencer app v1.7.3

  • Expect ~16.5 tokens/s @ 1000 tokens
  • Memory usage: ~450 GB
    • For a larger context window (>11k tokens), you can raise the GPU wired-memory limit (the value is in MB and resets on reboot):
      sudo sysctl iogpu.wired_limit_mb=507000
      

M3 Ultra (512 GB RAM) connected to a MacBook Pro (128 GB RAM) using the Inferencer app v1.7.3 with LAN-distributed compute

  • Expect ~13.7 tokens/s @ 1000 tokens
  • Example memory usage: MacBook Pro ~20 GB + Mac Studio ~430 GB
    • This setup leaves more RAM available for a larger context window
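
Outside the Inferencer app, the weights should also load with standard MLX tooling. A minimal sketch using the mlx-lm Python package, assuming it supports this DeepSeek-V3.2 quant (the repo path below is illustrative, not the actual identifier):

    from mlx_lm import load, generate

    # Load the quantized weights and tokenizer (path is illustrative)
    model, tokenizer = load("path/to/DeepSeek-V3.2-5.5bit-MLX")

    prompt = "Summarize the attention mechanism in two sentences."
    # Cap max_tokens to keep memory headroom on a 512 GB machine
    text = generate(model, tokenizer, prompt=prompt, max_tokens=500)
    print(text)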
Quantized with a modified version of MLX 0.28. For more details, see the demonstration video or visit DeepSeek-V3.2.
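
The fractional bit widths in the table above most likely come from group-wise quantization overhead: MLX stores a scale and bias per group of weights, which adds a fixed cost on top of the nominal bit width. A quick sanity check of that arithmetic (group size 64 and 16-bit scale/bias are assumptions about this quant, not confirmed settings):

    def effective_bits(q_bits, group_size=64, scale_bits=16, bias_bits=16):
        # Effective bits per weight = nominal bits plus the per-group
        # scale/bias overhead amortized across the group
        return q_bits + (scale_bits + bias_bits) / group_size

    for b in (2, 3, 4, 5, 6, 8):
        print(f"q{b} -> {effective_bits(b):.1f} bits/weight")
    # q5 -> 5.5 bits/weight, matching the q5.5 label above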

Disclaimer

We are not the creator, originator, or owner of any model listed. Each model is created and provided by third parties. Models may not always be accurate or contextually appropriate. You are responsible for verifying any output before making important decisions. We are not liable for any damages, losses, or issues arising from the use of these models, including data loss or inaccuracies in AI-generated content.

Model size: 672B params (Safetensors; tensor types BF16 · U32 · F32)