---
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-3B-Instruct
tags:
- merge-conflict-resolution
- code
- qwen
- qwen2.5
- coding-assistant
- git
- version-control
- developer-tools
- code-generation
- conflict-resolution
pipeline_tag: text-generation
library_name: transformers
language:
- en
---
# ๐ฌ๏ธ Breeze-3B: AI-Powered Git Merge Conflict Resolution
**Breeze-3B** is a specialized coding model fine-tuned on [Qwen/Qwen2.5-Coder-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) to automatically resolve Git merge conflicts with reasoning and context awareness.
## Key Features
- **Intelligent Resolution**: Analyzes merge conflicts and provides reasoned solutions
- **Multi-Language Support**: Works across Python, JavaScript, Java, C++, and more
- **Preserves Code Quality**: Maintains general coding capabilities while specializing in conflict resolution
- **Multiple Deployment Options**: Cloud inference, local GGUF, and Ollama support
- **Lightweight**: Only 3B parameters - runs efficiently on consumer hardware
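The conflicts the model consumes are standard Git conflict regions delimited by `<<<<<<<`, `=======`, and `>>>>>>>` markers. As a minimal sketch (the helper name is illustrative, not part of the model's API), a conflict region can be split into its two sides, e.g. to inspect or pre-process input before prompting:

```python
def split_conflict(block: str):
    """Split a Git conflict region into (ours, theirs).

    Assumes standard <<<<<<< / ======= / >>>>>>> markers
    without nested conflicts.
    """
    ours, theirs, target = [], [], None
    for line in block.splitlines():
        if line.startswith("<<<<<<<"):
            target = ours          # lines from HEAD (our side)
        elif line.startswith("======="):
            target = theirs        # lines from the incoming branch
        elif line.startswith(">>>>>>>"):
            target = None          # end of the conflict region
        elif target is not None:
            target.append(line)
    return "\n".join(ours), "\n".join(theirs)

conflict = "<<<<<<< HEAD\nx = 1\n=======\nx = 2\n>>>>>>> feature\n"
ours, theirs = split_conflict(conflict)
print(ours)    # x = 1
print(theirs)  # x = 2
```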
## Model Details
| Property | Value |
|----------|-------|
| **Base Model** | Qwen/Qwen2.5-Coder-3B-Instruct |
| **Training Data** | 7,165 curated merge conflicts from ConGra dataset |
| **Fine-tuning Method** | LoRA (rank-8 adapters) |
| **Parameters** | 3B |
| **Quantization** | Q4_K_M GGUF available |
| **License** | Apache 2.0 |
### Local Inference
```python
from llama_cpp import Llama

# Load the Q4_K_M quantized model (use the path to your downloaded GGUF file)
llm = Llama(model_path="breeze-3b.Q4_K_M.gguf")

# Paste the conflicted region verbatim, including the Git conflict markers
conflict = "<<<<<<< HEAD\nx = 1\n=======\nx = 2\n>>>>>>> feature\n"

response = llm(f"Resolve this merge conflict:\n\n{conflict}", max_tokens=256)
print(response["choices"][0]["text"])
```
### Ollama Inference
```bash
ollama run hf.co/SoarAILabs/breeze-3b
```
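When calling the GGUF model as a raw completion (rather than through a chat API), the prompt should follow the chat template of the base model. Assuming Breeze-3B inherits the ChatML template of Qwen2.5-Coder-Instruct (an assumption; check the repo's `tokenizer_config.json` to confirm), a prompt can be built like this sketch:

```python
def build_prompt(conflict: str) -> str:
    # ChatML format used by Qwen2.5-Instruct models (assumed inherited here)
    return (
        "<|im_start|>user\n"
        f"Resolve this merge conflict:\n\n{conflict}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_prompt("<<<<<<< HEAD\nx = 1\n=======\nx = 2\n>>>>>>> feature\n")
```

The resulting string can be passed directly as the prompt to `llama_cpp.Llama`, which will generate the assistant turn as a completion.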
## Features
- Resolves merge conflicts with reasoning
- Supports multiple programming languages
- No catastrophic forgetting of general coding skills
- Works with both cloud and local inference