Tags: Image-Text-to-Text · Transformers · TensorBoard · Safetensors · feature-extraction · conversational · custom_code
xiangan committed · verified
Commit ae3f2e6 · 1 Parent(s): 746e144

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -14,8 +14,10 @@ pipeline_tag: image-text-to-text
 
 This repository contains the LLaVA-OneVision-1.5 models, as presented in the paper [LLaVA-OneVision-1.5: Fully Open Framework for Democratized Multimodal Training](https://huggingface.co/papers/2509.23661).
 
-Project Page: [https://huggingface.co/spaces/lmms-lab/LLaVA-OneVision-1.5](https://huggingface.co/spaces/lmms-lab/LLaVA-OneVision-1.5)
-Code: [https://github.com/EvolvingLMMs-Lab/LLaVA-OneVision-1.5](https://github.com/EvolvingLMMs-Lab/LLaVA-OneVision-1.5)
+Project Page: [https://huggingface.co/spaces/lmms-lab/LLaVA-OneVision-1.5](https://huggingface.co/spaces/lmms-lab/LLaVA-OneVision-1.5)
+
+Code: [https://github.com/EvolvingLMMs-Lab/LLaVA-OneVision-1.5](https://github.com/EvolvingLMMs-Lab/LLaVA-OneVision-1.5)
+
 
 **LLaVA-OneVision1.5** introduces a novel family of **fully open-source** Large Multimodal Models (LMMs) that achieves **state-of-the-art performance** with substantially **lower cost** through training on **native resolution** images.
 
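Since the model card is tagged with Transformers, the image-text-to-text pipeline, and custom_code, the links added in this commit pair naturally with a quick usage sketch. The snippet below is a minimal, hypothetical example, not part of this commit: the repository ID (`lmms-lab/LLaVA-OneVision-1.5-8B-Instruct`) and the sample image URL are assumptions, and the custom_code tag suggests `trust_remote_code=True` would be required.

```python
# Minimal sketch (not from this commit): querying an image-text-to-text model
# via the Transformers pipeline API. The repo ID and image URL below are
# placeholders; the custom_code tag suggests trust_remote_code=True is needed.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="lmms-lab/LLaVA-OneVision-1.5-8B-Instruct",  # hypothetical repo ID
    trust_remote_code=True,  # required for repos tagged custom_code
)

# Chat-style input: one user turn containing an image plus a text prompt.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/cat.jpg"},  # placeholder image
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

outputs = pipe(text=messages, max_new_tokens=64)
print(outputs[0]["generated_text"])
```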