dataset_info:
features:
- name: question_id
dtype: int64
- name: image
dtype: image
- name: text
dtype: string
- name: category
dtype: string
- name: label
dtype: string
- name: image_source
dtype: string
splits:
- name: assamese
num_bytes: 455367007.7
num_examples: 8910
- name: bengali
num_bytes: 455101633.7
num_examples: 8910
- name: english
num_bytes: 454020487.7
num_examples: 8910
- name: gujarati
num_bytes: 455105448.7
num_examples: 8910
- name: hindi
num_bytes: 455210630.7
num_examples: 8910
- name: kannada
num_bytes: 455153061.7
num_examples: 8910
- name: malayalam
num_bytes: 455401526.7
num_examples: 8910
- name: marathi
num_bytes: 455379587.7
num_examples: 8910
- name: odia
num_bytes: 455463255.7
num_examples: 8910
- name: sanskrit
num_bytes: 455470746.7
num_examples: 8910
- name: tamil
num_bytes: 455693348.7
num_examples: 8910
- name: telugu
num_bytes: 455307739.7
num_examples: 8910
download_size: 956887209
dataset_size: 5462674475.399999
configs:
- config_name: default
data_files:
- split: assamese
path: data/assamese-*
- split: bengali
path: data/bengali-*
- split: english
path: data/english-*
- split: gujarati
path: data/gujarati-*
- split: hindi
path: data/hindi-*
- split: kannada
path: data/kannada-*
- split: malayalam
path: data/malayalam-*
- split: marathi
path: data/marathi-*
- split: odia
path: data/odia-*
- split: sanskrit
path: data/sanskrit-*
- split: tamil
path: data/tamil-*
- split: telugu
path: data/telugu-*
license: other
license_name: krutrim-community-license-agreement-version-1.0
license_link: LICENSE.md
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_button_content: Acknowledge license
language:
- as
- hi
- gu
- ml
- te
- ta
- kn
- or
- bn
- en
- mr
- sa
IndicPope: Indian Multilingual Translation Dataset for Evaluating Large Vision-Language Models
- You can find the performance of Chitrarth on IndicPope here: Paper | GitHub | Hugging Face
- Evaluation scripts for BharatBench are available here: GitHub
1. Introduction
IndicPope is a new dataset designed for evaluating Large Vision-Language Models (LVLMs) on Visual Question Answering (VQA) tasks. It focuses on simple yes-or-no questions that probe for objects in images (e.g., "Is there a car in the image?").
This dataset builds on POPE: Polling-based Object Probing Evaluation for Object Hallucination (GitHub), which uses negative sampling to test object hallucination in vision-language models under Random, Popular, and Adversarial settings.
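Because every question has a binary gold label, POPE-style evaluation reduces to scoring a model's yes/no predictions against those labels, broken down by the Random/Popular/Adversarial setting (the category and label fields described in the next section). The snippet below is a minimal, illustrative sketch of that scoring, assuming you already have predictions as "yes"/"no" strings; it is not the official POPE evaluation script.

```python
from collections import defaultdict

def pope_scores(records):
    """Score (prediction, label, category) triples per sampling setting.

    `records` is an iterable of dicts with keys "prediction", "label",
    and "category"; this helper is illustrative, not the official POPE code.
    """
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
    for r in records:
        pred = r["prediction"].strip().lower().startswith("yes")
        gold = r["label"].strip().lower().startswith("yes")
        c = counts[r["category"]]
        if pred and gold:
            c["tp"] += 1
        elif pred and not gold:
            c["fp"] += 1
        elif not pred and gold:
            c["fn"] += 1
        else:
            c["tn"] += 1

    results = {}
    for cat, c in counts.items():
        total = sum(c.values())
        precision = c["tp"] / (c["tp"] + c["fp"]) if c["tp"] + c["fp"] else 0.0
        recall = c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        results[cat] = {
            "accuracy": (c["tp"] + c["tn"]) / total,
            "precision": precision,
            "recall": recall,
            "f1": f1,
        }
    return results
```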
2. Dataset Details
IndicPope consists of 8,910 samples per language, spanning 11 Indic languages along with English (12 splits in total). Each sample includes:
- Image: the image being asked about.
- Text: the question about the image.
- Category: the sampling setting used (Random/Popular/Adversarial).
- Label: the ground-truth answer (Yes/No).
Each record also carries a question_id and an image_source field; the full schema is sketched below.
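For orientation, the feature block in the card metadata corresponds to the following datasets feature specification. This is only a sketch of the schema; load_dataset reconstructs it automatically when the dataset is downloaded.

```python
from datasets import Features, Image, Value

# Mirrors the dataset_info features in the card metadata above.
features = Features({
    "question_id": Value("int64"),
    "image": Image(),
    "text": Value("string"),
    "category": Value("string"),
    "label": Value("string"),
    "image_source": Value("string"),
})
```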
Supported Languages
- Assamese
- Bengali
- English
- Gujarati
- Hindi
- Kannada
- Malayalam
- Marathi
- Odia
- Sanskrit
- Tamil
- Telugu
3. How to Use and Run
You can load the dataset using the datasets library:
from datasets import load_dataset
dataset = load_dataset("krutrim-ai-labs/IndicPope")
print(dataset)
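You can also load a single language split directly. The split and field names below come from the dataset metadata; the indexing is purely illustrative.

```python
from datasets import load_dataset

# Split names match the card metadata, e.g. "english", "hindi", "tamil".
hindi = load_dataset("krutrim-ai-labs/IndicPope", split="hindi")

# Each example carries the question text, the sampling category, and the yes/no label.
sample = hindi[0]
print(sample["text"], sample["category"], sample["label"])
```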
4. License
This repository and the dataset are licensed under the Krutrim Community License.
5. Citation
@article{khan2025chitrarth,
title={Chitrarth: Bridging Vision and Language for a Billion People},
author={Shaharukh Khan and Ayush Tarun and Abhinav Ravi and Ali Faraz and Akshat Patidar and Praveen Kumar Pokala and Anagha Bhangare and Raja Kolla and Chandra Khatri and Shubham Agarwal},
journal={arXiv preprint arXiv:2502.15392},
year={2025}
}
@misc{liu2023improvedllava,
title={Improved Baselines with Visual Instruction Tuning},
author={Liu, Haotian and Li, Chunyuan and Li, Yuheng and Lee, Yong Jae},
publisher={arXiv:2310.03744},
year={2023},
}
@misc{liu2023llava,
title={Visual Instruction Tuning},
author={Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae},
publisher={NeurIPS},
year={2023},
}
@article{li2023evaluating,
title={Evaluating object hallucination in large vision-language models},
author={Li, Yifan and Du, Yifan and Zhou, Kun and Wang, Jinpeng and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2305.10355},
year={2023}
}
@article{gala2023indictrans2,
title={IndicTrans2: Towards high-quality and accessible machine translation models for all 22 scheduled Indian languages},
author={Gala, Jay and Chitale, Pranjal A and AK, Raghavan and Gumma, Varun and Doddapaneni, Sumanth and Kumar, Aswanth and Nawale, Janki and Sujatha, Anupama and Puduppully, Ratish and Raghavan, Vivek and others},
journal={arXiv preprint arXiv:2305.16307},
year={2023}
}
6. Contact
Contributions are welcome! If you have any improvements or suggestions, feel free to submit a pull request on GitHub.
7. Acknowledgement
IndicPope is built with reference to the code of the following projects: POPE and LLaVA-1.5. Thanks for their awesome work!