---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-VL-8B-Thinking
pipeline_tag: image-text-to-text
library_name: transformers
---
# Jan-v2-VL: Multimodal Agent for Long-Horizon Tasks

[![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?logo=github)](https://github.com/janhq/jan)
[![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://opensource.org/licenses/Apache-2.0)
[![Jan App](https://img.shields.io/badge/Powered%20by-Jan%20App-purple?style=flat&logo=android)](https://jan.ai/)

![image/gif](demo.gif)

## Overview

**Jan-v2-VL** is an 8B-parameter vision–language model for long-horizon, multi-step tasks in real software environments such as browsers and desktop apps. It combines language reasoning with visual perception to follow complex instructions, maintain intermediate state, and recover from minor execution errors.

Long-horizon execution matters for real-world tasks: small per-step reliability gains compound into much longer successful chains, so **Jan-v2-VL** is built for stable, many-step execution. For evaluation we use **[The Illusion of Diminishing Returns: Measuring Long-Horizon Execution in LLMs](https://arxiv.org/pdf/2509.09677)**, which measures how many steps a model can execute correctly. Steady, low-drift step execution is also widely regarded as the mark of a strong coding model, suggesting that robust long-horizon ability closely tracks a better user experience.
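
The compounding claim can be made concrete with a toy calculation (an illustrative sketch, not part of the benchmark): if each step succeeds independently with probability `p`, an H-step chain succeeds with probability `p**H`, so a small per-step gain roughly doubles the usable horizon.

```python
import math

def horizon_at(p: float, target: float = 0.5) -> int:
    """Longest horizon H (in steps) with whole-chain success p**H >= target,
    assuming independent per-step success probability p."""
    return int(math.log(target) / math.log(p))

# Raising per-step accuracy from 99% to 99.5% roughly doubles the horizon.
h99 = horizon_at(0.99)    # 68 steps
h995 = horizon_at(0.995)  # 138 steps
```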

**Variants**

* **Jan-v2-VL-low**: efficiency-oriented, lower latency
* **Jan-v2-VL-med**: balanced latency and quality
* **Jan-v2-VL-high**: deeper reasoning, higher think time

### Intended Use

Tasks where the plan and/or relevant knowledge can be provided up front, and success hinges on stable, many-step execution with minimal drift:

* **Agentic automation & UI control:** stepwise operation of browsers and desktop apps with screenshot grounding and tool calls (e.g., BrowserMCP).

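As a concrete sketch of the screenshot-grounded pattern, the snippet below builds one OpenAI-style multimodal message pairing an instruction with a base64-encoded screenshot. The message shape follows the common chat-completions convention; the helper name and placeholder bytes are illustrative assumptions, not part of this repository.

```python
import base64

def screenshot_step(instruction: str, png_bytes: bytes) -> dict:
    """Build one OpenAI-style multimodal user message: the task text
    plus the current screenshot as a base64 data URL (assumed shape)."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": instruction},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }

# Placeholder bytes stand in for a real screen capture.
msg = screenshot_step("Click the Submit button.", b"\x89PNG...fake")
```

An agent loop would append the model's tool call and a fresh screenshot message at each step.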
## Model Performance

![image](https://cdn-uploads.huggingface.co/production/uploads/655e3b59d5c0d3db5359ca3c/q4DzuOjmcZOik2c8ZQSCN.png)

![image](https://cdn-uploads.huggingface.co/production/uploads/655e3b59d5c0d3db5359ca3c/JdA1kFh2IEJesQsOAOTrh.png)

![image](https://cdn-uploads.huggingface.co/production/uploads/655e3b59d5c0d3db5359ca3c/fuuZ5pMOGsbbEpKCM5xy8.png)

## Local Deployment

### Integration with Jan App

Jan-v2-VL is optimized for direct integration with the [Jan App](https://jan.ai/). Select the model from the Jan App interface for immediate access to its full capabilities.

### Local Server

**Using vLLM:**

```bash
vllm serve Menlo/Jan-v2-VL-high \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes \
  --reasoning-parser qwen3
```
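
The server exposes an OpenAI-compatible endpoint on port 1234. A minimal client sketch, assuming the command above (host, port, and model id are taken from it; adjust to your setup):

```python
import json
import urllib.request

# Endpoint served by the command above (assumed local setup).
URL = "http://localhost:1234/v1/chat/completions"

body = json.dumps({
    "model": "Menlo/Jan-v2-VL-high",
    "messages": [{"role": "user", "content": "Describe this screenshot."}],
}).encode("utf-8")

req = urllib.request.Request(
    URL, data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With the server running, send it with:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```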

**Using llama.cpp:**

```bash
llama-server --model Jan-v2-VL-high-Q8_0.gguf \
  --mmproj mmproj-Jan-v2-VL-high.gguf \
  --host 0.0.0.0 \
  --port 1234 \
  --jinja \
  --no-context-shift
```

### Recommended Parameters

For optimal performance on agentic and general tasks, we recommend the following inference parameters:

```yaml
temperature: 1.0
top_p: 0.95
top_k: 20
repetition_penalty: 1.0
presence_penalty: 1.5
```
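
In an OpenAI-compatible request these become top-level fields of the JSON body. A sketch (`top_k` and `repetition_penalty` are server extensions supported by vLLM and llama.cpp, not part of the stock OpenAI schema):

```python
# Recommended sampling parameters from the YAML above, merged into a
# chat-completions request body (assumed OpenAI-compatible server).
SAMPLING = {
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 20,                # vLLM / llama.cpp extension
    "repetition_penalty": 1.0,  # vLLM / llama.cpp extension
    "presence_penalty": 1.5,
}

def build_body(model: str, prompt: str) -> dict:
    """Compose a request body carrying the recommended sampling settings."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        **SAMPLING,
    }

body = build_body("Menlo/Jan-v2-VL-high", "Plan the next UI action.")
```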

## 🤝 Community & Support

- **Discussions**: [Hugging Face Community](https://huggingface.co/janhq/Jan-v2-VL-8B/discussions)
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)

## 📄 Citation

A BibTeX entry will be added soon.