Moxin x llama.cpp Customized Quant for GLM-4.6

We sincerely thank the open-source community developers and contributors, in particular unsloth, for providing the BF16 version and the imatrix file.

We really appreciate the attention, and we're happy to share additional quantization variants for everyone to try out and experiment with. We hope you enjoy them!

For llama.cpp, please use --jinja so that the model's embedded chat template is applied.

- Q4_K_XL : 204.34 GiB (4.92 BPW)
- Other Quant Versions (Coming soon)
👈 Download Guide
huggingface-cli download moxin-org/GLM-4.6-GGUF --include "*Q4_K_XL*" --local-dir ./GLM-4.6-GGUF
# !pip install huggingface_hub hf_transfer
import os
# os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"  # optional: enable hf_transfer for faster downloads
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id = "moxin-org/GLM-4.6-GGUF",
    local_dir = "GLM-4.6-GGUF",
    allow_patterns = ["*Q4_K_XL*"],  # only fetch the Q4_K_XL shards
)

Downloads are available via huggingface_hub, huggingface-cli, snapshot_download, and xet.

Usage

Example of running the GGUF with a local build of llama.cpp (llama-cli / llama-server).

👈 Build llama.cpp locally
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp

# add -DLLAMA_CURL=OFF if the build fails with a CURL-related error
cmake -B build -DGGML_CUDA=ON -DBUILD_SHARED_LIBS=OFF
cmake --build build --config Release -j --clean-first
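
If CUDA is not available, a CPU-only build also works; a minimal sketch, assuming only the configure step changes (drop -DGGML_CUDA=ON; GPU-offload flags such as -ngl are then ignored):

# CPU-only configure; run the same cmake --build command afterwards
cmake -B build -DBUILD_SHARED_LIBS=OFF
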
build/bin/llama-cli -m GLM-4.6-GGUF/Moxin-Q4_K_XL/GLM-4.6-Q4_K_XL-00001-of-00009.gguf \
  -ngl 99 \
  --jinja \
  --temp 1.0 \
  --top-k 40 \
  --top-p 0.95 \
  --min-p 0.01 \
  --ctx-size 16384      # or 4096 / 8192

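You can also serve the same GGUF over HTTP with llama-server; a minimal sketch reusing the sampling settings above (the port number is an arbitrary choice):

build/bin/llama-server -m GLM-4.6-GGUF/Moxin-Q4_K_XL/GLM-4.6-Q4_K_XL-00001-of-00009.gguf \
  -ngl 99 \
  --jinja \
  --temp 1.0 \
  --top-k 40 \
  --top-p 0.95 \
  --min-p 0.01 \
  --ctx-size 16384 \
  --port 8080

llama-server then exposes an OpenAI-compatible endpoint at http://localhost:8080/v1/chat/completions.
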
Citation

If this work is helpful, please cite it as:

@article{chen2025collaborative,
  title={Collaborative Compression for Large-Scale MoE Deployment on Edge},
  author={Chen, Yixiao and Xie, Yanyue and Yang, Ruining and Jiang, Wei and Wang, Wei and He, Yong and Chen, Yue and Zhao, Pu and Wang, Yanzhi},
  journal={arXiv preprint arXiv:2509.25689},
  year={2025}
}

Acknowledgements

This repository builds upon the outstanding work of open-source authors and projects, including llama.cpp (ggml-org), unsloth, and GLM-4.6 (zai-org).

We sincerely thank them for their excellent contributions to the open-source community.
