Tags: Text Generation · Transformers · GGUF · Safetensors · PyTorch · mistral · quantized · 2-bit · 3-bit · 4-bit precision · 5-bit · 6-bit · 8-bit precision · text-generation-inference · Merge · 7b · mistralai/Mistral-7B-Instruct-v0.1 · teknium/Mistral-Trismegistus-7B · mistral-7b · instruct · finetune · gpt4 · synthetic data · distillation · en · conversational
Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF / Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1.Q4_K_M.gguf
- SHA256: 331e30e4794d9c2adca1eb4e5fc479f4d58e73c1b9501c4a6866bb1cc5f4c711
- Pointer size: 135 Bytes
- Size of remote file: 4.37 GB
- Xet hash: 8b3c7c118676e862c06f9d1c4b5ae5fc503af2851d30b2f34f6b11cc441a73ef
Xet efficiently stores large files inside Git, intelligently splitting files into unique chunks and accelerating uploads and downloads.
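
A minimal usage sketch for the Q4_K_M file listed above, assuming the `huggingface_hub` and `llama-cpp-python` packages: it downloads the file, checks it against the SHA256 shown on this page, and runs a short completion. The repository owner is not shown here, so the namespace below is a placeholder, and llama-cpp-python is only one of several runtimes that can load GGUF files.

```python
import hashlib

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder namespace: substitute the actual repository owner.
REPO_ID = "<owner>/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF"
FILENAME = "Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1.Q4_K_M.gguf"
EXPECTED_SHA256 = "331e30e4794d9c2adca1eb4e5fc479f4d58e73c1b9501c4a6866bb1cc5f4c711"

# Download (or reuse a cached copy of) the ~4.37 GB quantized file.
model_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)

# Verify the download against the SHA256 listed on the file page.
sha = hashlib.sha256()
with open(model_path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)
assert sha.hexdigest() == EXPECTED_SHA256, "checksum mismatch"

# Run a short completion; the prompt template of the merged model is not
# documented on this page, so a plain-text prompt is used for illustration.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Who was Hermes Trismegistus?", max_tokens=128)
print(out["choices"][0]["text"])
```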