MusiXQA

MusiXQA is a multimodal dataset for evaluating and training music sheet understanding systems. Each data sample is composed of:

  • A scanned music sheet image (.png)
  • Its corresponding MIDI file (.mid)
  • A structured annotation (from metadata.json)
  • Question–Answer (QA) pairs targeting musical structure, semantics, and optical music recognition (OMR)

📂 Dataset Structure

MusiXQA/
├── images/                # PNG files of music sheets (e.g., 0000000.png)
├── midi.tar               # MIDI files (e.g., 0000000.mid), compressed
├── train_qa_omr.json      # OMR-task QA pairs (train split)
├── train_qa_simple.json   # Simple musical info QAs (train split)
├── test_qa_omr.json       # OMR-task QA pairs (test split)
├── test_qa_simple.json    # Simple musical info QAs (test split)
└── metadata.json          # Annotations for each document (e.g., key, time, instruments)
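
As a rough usage sketch (not official dataset tooling), the MIDI archive can be unpacked with the standard library and paired with the sheet images by document id; the extraction target directory and the assumption that .mid files sit at the top level of midi.tar are assumptions:

# Minimal sketch: unpack midi.tar and pair MIDI files with sheet images by doc_id.
# The "midi/" extraction directory and the flat archive layout are assumptions.
import tarfile
from pathlib import Path

root = Path("MusiXQA")

# Unpack the MIDI archive once so each .mid file can be located by document id.
with tarfile.open(root / "midi.tar") as tar:
    tar.extractall(root / "midi")

# Pair each sheet image with its MIDI file via the shared document id (e.g., "0000000").
for png in sorted((root / "images").glob("*.png"))[:3]:
    doc_id = png.stem
    mid = root / "midi" / f"{doc_id}.mid"  # assumed location after extraction
    print(doc_id, png.name, mid.exists())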

🧾 Metadata

The metadata.json file provides comprehensive annotations of the full music sheet content, facilitating research in symbolic music reasoning, score reconstruction, and multimodal alignment with audio or MIDI.
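
For illustration only, the annotations can be read with the standard json module; the field names below (key, time, instruments) mirror the examples given in the structure comment above, and the assumption that the file maps doc_id to an annotation dict should be checked against the file itself:

# Sketch only: field names and the doc_id -> dict mapping are assumptions;
# inspect metadata.json for the actual schema.
import json

with open("MusiXQA/metadata.json") as f:
    metadata = json.load(f)

# Assumption: metadata is keyed by doc_id; adjust if it is a list of records instead.
sample = metadata["0000001"] if isinstance(metadata, dict) else metadata[0]
print(sample.get("key"), sample.get("time"), sample.get("instruments"))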

❓ QA Data Format

Each QA file (e.g., train_qa_omr.json) is a list of QA entries like this:

{
  "doc_id": "0000001",
  "question": "What is the title of the music sheet?",
  "answer": "Minuet in G Major",
  "encode_format": "beat"
}

  • doc_id: corresponds to a sample in images/, midi.tar, and metadata.json
  • question: natural language query
  • answer: ground-truth answer
  • encode_format: how the input is encoded (e.g., "beat", "note", etc.)
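
A minimal loading sketch, assuming the directory layout shown above and only the QA fields documented here:

# Load a QA file and resolve each entry's sheet image by doc_id.
import json
from pathlib import Path

root = Path("MusiXQA")

with open(root / "train_qa_omr.json") as f:
    qa_entries = json.load(f)  # a list of QA dicts, as shown above

for qa in qa_entries[:3]:
    image_path = root / "images" / f"{qa['doc_id']}.png"  # sheet image for this sample
    print(f"Q: {qa['question']}")
    print(f"A: {qa['answer']} (encode_format={qa['encode_format']})")
    print(f"Image: {image_path}")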

🎓 Reference

If you use this dataset in your work, please cite it using the following reference:

@article{chen2025musixqa,
  title={MusiXQA: Advancing Visual Music Understanding in Multimodal Large Language Models},
  author={Chen, Jian and Ma, Wenye and Liu, Penghang and Wang, Wei and Song, Tengwei and Li, Ming and Wang, Chenguang and Zhang, Ruiyi and Chen, Changyou},
  journal={arXiv preprint arXiv:2506.23009},
  year={2025}
}