Uploaded finetuned model

  • Developed by: Jackrong

  • License: apache-2.0

  • Finetuned from model: Qwen/Qwen3-1.7B

  • gpt-oss-120b-Distill-Qwen3-1.7B-Thinking is a model distilled from gpt-oss-120b. It achieves efficient resource utilization by pruning parameters and optimizing the inference path while preserving the gpt-oss style and natural language capabilities. In conversational scenarios it handles context smoothly, avoids over-long, inflated responses, and produces concise yet logically rigorous output. It introduces table-based problem categorization to improve structured task representation, and it adapts to multi-domain knowledge integration and resource-constrained environments. Its compact inference path keeps dialogue responses efficient while retaining the natural language output style of gpt-oss, making it suitable for customer service, Q&A APIs, and educational chatbots (a minimal usage sketch appears below).

  • Format: Safetensors

  • Model size: 2B params

  • Tensor type: BF16
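A minimal inference sketch, not part of the original card: it assumes the model loads through Hugging Face transformers with the standard Qwen3 chat template and that the published BF16 weights fit on the target device.

```python
# Minimal sketch, assuming standard transformers loading and the Qwen3 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jackrong/gpt-oss-120b-Distill-Qwen3-1.7B-Thinking"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",           # requires the accelerate package
)

# Build a single-turn chat prompt and generate a response.
messages = [{"role": "user", "content": "Summarize what model distillation is."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```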

