Update README.md
README.md CHANGED
@@ -14,8 +14,10 @@ pipeline_tag: image-text-to-text
 
 This repository contains the LLaVA-OneVision-1.5 models, as presented in the paper [LLaVA-OneVision-1.5: Fully Open Framework for Democratized Multimodal Training](https://huggingface.co/papers/2509.23661).
 
-Project Page: [https://huggingface.co/spaces/lmms-lab/LLaVA-OneVision-1.5](https://huggingface.co/spaces/lmms-lab/LLaVA-OneVision-1.5)
-
+Project Page: [https://huggingface.co/spaces/lmms-lab/LLaVA-OneVision-1.5](https://huggingface.co/spaces/lmms-lab/LLaVA-OneVision-1.5)
+
+Code: [https://github.com/EvolvingLMMs-Lab/LLaVA-OneVision-1.5](https://github.com/EvolvingLMMs-Lab/LLaVA-OneVision-1.5)
+
 
 **LLaVA-OneVision1.5** introduces a novel family of **fully open-source** Large Multimodal Models (LMMs) that achieves **state-of-the-art performance** with substantially **lower cost** through training on **native resolution** images.
 
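For readers who want to try the models this README describes, below is a minimal, untested sketch of loading a checkpoint with the generic image-text-to-text auto classes from Hugging Face `transformers`, consistent with the `pipeline_tag: image-text-to-text` shown in the hunk header. The repo id `lmms-lab/LLaVA-OneVision-1.5-8B-Instruct`, the example image URL, and the use of `trust_remote_code=True` are assumptions for illustration only and are not taken from this commit; check the model card for the officially supported loading path.

```python
# Minimal sketch (untested): loading an LLaVA-OneVision-1.5 checkpoint via the
# generic image-text-to-text auto classes in transformers. The repo id below is
# an assumption for illustration; substitute the actual checkpoint name.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "lmms-lab/LLaVA-OneVision-1.5-8B-Instruct"  # assumed checkpoint name

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Chat-style input with one image and one question (image URL is a placeholder).
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/sample.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Requires a recent transformers version where processors accept chat templates
# with image entries and return model-ready tensors.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```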