# trlm-135m-GGUF
The Tiny Reasoning Language Model (trlm-135m) is a 135-million-parameter research prototype that explores how small language models can acquire step-by-step reasoning abilities. Built on SmolLM2-135M-Instruct (a Llama 3-based decoder-only transformer), it goes through a three-stage fine-tuning pipeline: Stage 1 performs general instruction tuning without reasoning, Stage 2 incorporates reasoning traces delimited by dedicated tags, and Stage 3 applies Direct Preference Optimization (DPO) for preference alignment to refine the reasoning style.
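As a quick sanity check (this arithmetic is an illustration, not from the model card), the unquantized file sizes in the Model Files table follow directly from the parameter count, since BF16/F16 store 2 bytes per weight and F32 stores 4:

```python
# Back-of-the-envelope file-size estimate: parameters x bytes per weight.
# Small additional overhead (metadata, tokenizer) explains minor differences
# from the listed sizes.
PARAMS = 135_000_000  # 135 M parameters


def approx_size_mb(bits_per_weight: float) -> float:
    """Approximate GGUF file size in decimal megabytes."""
    return PARAMS * bits_per_weight / 8 / 1_000_000


print(round(approx_size_mb(16)))  # BF16/F16 -> 270 (table lists 271 MB)
print(round(approx_size_mb(32)))  # F32      -> 540 (table lists 540 MB)
```

The same estimate works for quantized files, e.g. Q8_0 at roughly 8.5 bits per weight lands near the listed 145 MB.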
If you are running the model in LM Studio, start with a context length of 1024 and adjust it based on the responses you see. For better output quality, it's recommended to use the higher-precision quants.
## Execute using Ollama

```shell
ollama run hf.co/prithivMLmods/trlm-135m-GGUF:BF16
```
## Model Files
| File Name | Quant Type | File Size |
|---|---|---|
| trlm-135m.BF16.gguf | BF16 | 271 MB |
| trlm-135m.F16.gguf | F16 | 271 MB |
| trlm-135m.F32.gguf | F32 | 540 MB |
| trlm-135m.Q2_K.gguf | Q2_K | 88.2 MB |
| trlm-135m.Q3_K_L.gguf | Q3_K_L | 97.5 MB |
| trlm-135m.Q3_K_M.gguf | Q3_K_M | 93.5 MB |
| trlm-135m.Q3_K_S.gguf | Q3_K_S | 88.2 MB |
| trlm-135m.Q4_0.gguf | Q4_0 | 91.7 MB |
| trlm-135m.Q4_1.gguf | Q4_1 | 98.4 MB |
| trlm-135m.Q4_K.gguf | Q4_K | 105 MB |
| trlm-135m.Q4_K_M.gguf | Q4_K_M | 105 MB |
| trlm-135m.Q4_K_S.gguf | Q4_K_S | 102 MB |
| trlm-135m.Q5_0.gguf | Q5_0 | 105 MB |
| trlm-135m.Q5_1.gguf | Q5_1 | 112 MB |
| trlm-135m.Q5_K.gguf | Q5_K | 112 MB |
| trlm-135m.Q5_K_M.gguf | Q5_K_M | 112 MB |
| trlm-135m.Q5_K_S.gguf | Q5_K_S | 110 MB |
| trlm-135m.Q6_K.gguf | Q6_K | 138 MB |
| trlm-135m.Q8_0.gguf | Q8_0 | 145 MB |
## Quants Usage

Quant files are sorted by size, which does not necessarily track quality; IQ-quants are often preferable to similarly sized non-IQ quants. ikawrakow has published a handy graph comparing some of the lower-quality quant types (lower is better).
## Model Tree

Base model: HuggingFaceTB/SmolLM2-135M