---
license: cc
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: instruction
    dtype: string
  - name: stem
    dtype: string
  - name: options
    struct:
    - name: A
      dtype: string
    - name: B
      dtype: string
    - name: C
      dtype: string
    - name: D
      dtype: string
  - name: subject
    dtype: string
  - name: answer
    dtype: string
  - name: split
    dtype: string
  - name: abc_score
    dtype: string
  - name: analysis
    dtype: string
  splits:
  - name: dev
    num_bytes: 2599.489247311828
    num_examples: 5
  - name: test
    num_bytes: 190802.51075268816
    num_examples: 367
  download_size: 0
  dataset_size: 193402.0
configs:
- config_name: default
  data_files:
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
---

[**🌐 DemoPage**](https://ezmonyi.github.io/ChatMusician/) | [**πŸ€— Dataset**](https://huggingface.co/datasets/m-a-p/MusicPile) | [**πŸ€— Benchmark**](https://huggingface.co/datasets/m-a-p/MusicTheoryBench) | [**πŸ“– arXiv**](http://arxiv.org/abs/2402.16153) | [πŸ’» **Code**](https://github.com/hf-lin/ChatMusician) | [**πŸ€– Model**](https://huggingface.co/m-a-p/ChatMusician)

# Dataset Card for MusicTheoryBench

MusicTheoryBench is a benchmark designed to **assess the advanced music understanding capabilities** of current LLMs.

You can easily load it:
```python
from datasets import load_dataset

dataset = load_dataset("m-a-p/MusicTheoryBench")
```
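
The benchmark ships as two splits: a small `dev` split (5 questions, intended for few-shot prompting) and a `test` split (367 questions). A quick way to inspect what was loaded, using the field names from the schema above:

```python
from datasets import load_dataset

dataset = load_dataset("m-a-p/MusicTheoryBench")
print(dataset)  # DatasetDict with "dev" (5 examples) and "test" (367 examples)

example = dataset["test"][0]
print(example["stem"])     # question text
print(example["options"])  # dict with keys "A", "B", "C", "D"
print(example["answer"])   # label of the correct option
```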

The evaluation code will be available in the coming weeks.


## Dataset Structure

MusicTheoryBench consists of 372 multiple-choice questions, each with 4 options, of which only one is correct. There are 269 questions on music knowledge and 98 questions on music reasoning, along with 5 questions held out to enable few-shot evaluation.
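
As an illustration of how the fields fit together, the sketch below assembles one test question into a plain four-option prompt. This is only an example format, not the prompt template used in the paper or in the forthcoming evaluation code.

```python
from datasets import load_dataset

q = load_dataset("m-a-p/MusicTheoryBench", split="test")[0]

# Build an illustrative zero-shot prompt from the documented fields.
prompt = (
    f"{q['instruction']}\n\n"
    f"{q['stem']}\n"
    + "".join(f"{label}. {q['options'][label]}\n" for label in ("A", "B", "C", "D"))
    + "Answer:"
)
print(prompt)
print("Gold answer:", q["answer"])
```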

## Dataset Details

Despite the significant advancements in music information retrieval, the definition of advanced music understanding capabilities remains unclear in current research. 
To measure the advanced music understanding abilities of existing LLMs, [MAP](https://m-a-p.ai/) first defines two critical elements of music understanding: **music knowledge** and **music reasoning**. The definitions of music knowledge and music reasoning are discussed in the [ChatMusician paper](http://arxiv.org/abs/2402.16153).

### Music Knowledge Subset
In the music knowledge subset, the questions span Eastern and Western musical aspects. 
It includes 30 topics such as notes, rhythm, beats, chords, counterpoint, orchestration and instrumentation, music-related culture, history, etc. 
Each major area undergoes targeted examination under the guidance of experts and is divided into various subcategories.
For example, in the triads section, the test set specifically examines the definition, types, and related technical details of triads. 
This test also features different levels of difficulty, corresponding to the high school and college levels of music majors.
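
To see how the test questions are spread across topics, you can tally the `subject` field; this assumes `subject` records the topic label of each question (inspect the printed values to confirm):

```python
from collections import Counter

from datasets import load_dataset

test = load_dataset("m-a-p/MusicTheoryBench", split="test")
# Assumption: `subject` carries the topic (or subset) label of each question.
print(Counter(test["subject"]))
```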

### Music Reasoning Subset
Most of the questions in the reasoning subset require both music knowledge and reasoning capabilities. Correctly answering them requires detailed analysis of the given information and multi-step logical reasoning, such as calculating chords, melodies, scales, and rhythms.

## Curation Process

To ensure consistency with human testing standards, MusicTheoryBench was crafted by a professional college music teacher according to college-level textbooks and exam papers. The content underwent multiple rounds of discussion and review by a team of musicians. The team carefully selected the questions, manually compiled them into JSON and ABC notation, and then labeled them into the music knowledge and music reasoning subsets. Since the teacher is from China, half of the questions were originally written in Chinese; they were later translated into English with the GPT-4 Azure API and proofread by the team.

### Languages

MusicTheoryBench is primarily in English.


## Limitations

- The MusicTheoryBench results reported in the [ChatMusician paper](http://arxiv.org/abs/2402.16153) are obtained in perplexity mode. Direct generation may result in worse performance. See the [OpenCompass documentation](https://opencompass.readthedocs.io/en/latest/get_started/faq.html#what-are-the-differences-and-connections-between-ppl-and-gen) for more details; a minimal sketch of perplexity-style scoring is shown below.
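
The following is a minimal sketch of what perplexity-style multiple-choice scoring looks like: each option is scored as a continuation of the prompt, and the option with the lowest average negative log-likelihood wins. The model name is a placeholder, the question is an invented illustration (not drawn from the benchmark), and this is not the OpenCompass implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; swap in whatever causal LM you want to evaluate.
model_name = "m-a-p/ChatMusician"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def option_nll(prompt: str, option_text: str) -> float:
    """Average negative log-likelihood of `option_text` conditioned on `prompt`.

    Assumes the prompt tokenizes identically with and without the continuation,
    which is good enough for a sketch but not guaranteed for every tokenizer.
    """
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + option_text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    target = full_ids[:, prompt_len:]          # continuation tokens only
    pred = logits[:, prompt_len - 1 : -1, :]   # logits predicting those tokens
    return torch.nn.functional.cross_entropy(
        pred.reshape(-1, pred.size(-1)), target.reshape(-1)
    ).item()

# Invented example question, only to show the mechanics
# (a perfect fifth above C is G, i.e. option B).
prompt = (
    "Which note is a perfect fifth above C?\n"
    "A. F\nB. G\nC. A\nD. E\n"
    "Answer: "
)
options = {"A": "F", "B": "G", "C": "A", "D": "E"}
scores = {label: option_nll(prompt, text) for label, text in options.items()}
print("Predicted:", min(scores, key=scores.get))
```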

## Citation

If you find our work helpful, please consider citing it:

```bibtex
@misc{yuan2024chatmusician,
      title={ChatMusician: Understanding and Generating Music Intrinsically with LLM}, 
      author={Ruibin Yuan and Hanfeng Lin and Yi Wang and Zeyue Tian and Shangda Wu and Tianhao Shen and Ge Zhang and Yuhang Wu and Cong Liu and Ziya Zhou and Ziyang Ma and Liumeng Xue and Ziyu Wang and Qin Liu and Tianyu Zheng and Yizhi Li and Yinghao Ma and Yiming Liang and Xiaowei Chi and Ruibo Liu and Zili Wang and Pengfei Li and Jingcheng Wu and Chenghua Lin and Qifeng Liu and Tao Jiang and Wenhao Huang and Wenhu Chen and Emmanouil Benetos and Jie Fu and Gus Xia and Roger Dannenberg and Wei Xue and Shiyin Kang and Yike Guo},
      year={2024},
      eprint={2402.16153},
      archivePrefix={arXiv},
      primaryClass={cs.SD}
}
```

## Dataset Card Contact

Authors of ChatMusician.