MusiXQA is a multimodal dataset for evaluating and training music sheet understanding systems. Each data sample is composed of:
- A scanned music sheet image (.png)
- Its corresponding MIDI file (.mid)
- A structured annotation (from metadata.json)
- Question–Answer (QA) pairs targeting musical structure, semantics, and optical music recognition (OMR)

## Dataset Structure
```
MusiXQA/
├── images/              # PNG files of music sheets (e.g., 0000000.png)
├── midi.tar             # MIDI files (e.g., 0000000.mid), compressed
├── train_qa_omr.json    # OMR-task QA pairs (train split)
├── train_qa_simple.json # Simple musical info QA pairs (train split)
├── test_qa_omr.json     # OMR-task QA pairs (test split)
├── test_qa_simple.json  # Simple musical info QA pairs (test split)
└── metadata.json        # Annotations for each document (e.g., key, time, instruments)
```
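The snippet below is a minimal sketch for checking this layout on a local copy: it extracts midi.tar into a midi/ directory and verifies that images and MIDI files share doc IDs. The MusiXQA/ root path and the extraction target are assumptions about your local setup, not part of the dataset itself.

```python
# Minimal sketch: extract the MIDI archive and sanity-check the layout.
# The root path below is a hypothetical local location of the dataset.
import tarfile
from pathlib import Path

root = Path("MusiXQA")  # assumption: local copy of the dataset

# One-time step: unpack the compressed MIDI files next to the images.
with tarfile.open(root / "midi.tar") as tar:
    tar.extractall(root / "midi")

# Images and MIDI files should line up via their zero-padded doc IDs.
image_ids = {p.stem for p in (root / "images").glob("*.png")}
midi_ids = {p.stem for p in (root / "midi").rglob("*.mid")}
print(f"{len(image_ids)} images, {len(midi_ids)} MIDI files, "
      f"{len(image_ids & midi_ids)} matched doc IDs")
```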
## Metadata
The metadata.json file provides comprehensive annotations of the full music sheet content, facilitating research in symbolic music reasoning, score reconstruction, and multimodal alignment with audio or MIDI.
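Since the per-document schema is best discovered from the file itself, the following sketch just loads metadata.json and prints the keys of one record. Whether the top-level container is a list of records or a dict keyed by doc_id is an assumption, so both cases are handled.

```python
# Minimal sketch: inspect the available annotation fields in metadata.json.
import json
from pathlib import Path

with open(Path("MusiXQA") / "metadata.json", encoding="utf-8") as f:
    metadata = json.load(f)

# Assumption: metadata is either a list of records or a dict keyed by doc_id.
first = metadata[0] if isinstance(metadata, list) else next(iter(metadata.values()))
print(sorted(first.keys()))
```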

## QA Data Format
Each QA file (e.g., train_qa_omr.json) is a list of QA entries like this:
```json
{
  "doc_id": "0000001",
  "question": "What is the title of the music sheet?",
  "answer": "Minuet in G Major",
  "encode_format": "beat"
}
```
- doc_id: corresponds to a sample in images/, the extracted MIDI files, and metadata.json
- question: natural language query
- answer: ground-truth answer
- encode_format: how the input is encoded (e.g., "beat", "note", etc.)
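As a usage sketch, the snippet below loads one QA split and resolves each entry to its music sheet image via doc_id. The MusiXQA/ root path is an assumption about where the files were downloaded.

```python
# Minimal sketch: iterate over a QA split and locate each entry's sheet image.
import json
from pathlib import Path

root = Path("MusiXQA")  # assumption: local copy of the dataset
with open(root / "train_qa_omr.json", encoding="utf-8") as f:
    qa_pairs = json.load(f)

for qa in qa_pairs[:3]:
    image_path = root / "images" / f"{qa['doc_id']}.png"
    print(qa["encode_format"], image_path.name, "|", qa["question"], "->", qa["answer"])
```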
## Reference
If you use this dataset in your work, please cite it using the following reference:
```bibtex
@article{chen2025musixqa,
  title={MusiXQA: Advancing Visual Music Understanding in Multimodal Large Language Models},
  author={Chen, Jian and Ma, Wenye and Liu, Penghang and Wang, Wei and Song, Tengwei and Li, Ming and Wang, Chenguang and Zhang, Ruiyi and Chen, Changyou},
  journal={arXiv preprint arXiv:2506.23009},
  year={2025}
}
```