The TMMLU+ dataset is six times larger than its predecessor, [TMMLU](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval/data/TMMLU), and covers a more balanced set of subjects. We include benchmark results on TMMLU+ for closed-source models and 20 open-weight Chinese large language models ranging from 1.8B to 72B parameters. The results show that models trained on Traditional Chinese still lag behind major models trained on Simplified Chinese.
```python
from datasets import load_dataset

# All 66 subject configurations available in ikala/tmmluplus
task_list = [
    'engineering_math', 'dentistry', 'traditional_chinese_medicine_clinical_medicine', 'clinical_psychology', 'technical', 'culinary_skills', 'mechanical', 'logic_reasoning', 'real_estate',
    'general_principles_of_law', 'finance_banking', 'anti_money_laundering', 'ttqav2', 'marketing_management', 'business_management', 'organic_chemistry', 'advance_chemistry',
    'physics', 'secondary_physics', 'human_behavior', 'national_protection', 'jce_humanities', 'politic_science', 'agriculture', 'official_document_management',
    'financial_analysis', 'pharmacy', 'educational_psychology', 'statistics_and_machine_learning', 'management_accounting', 'introduction_to_law', 'computer_science', 'veterinary_pathology',
    'accounting', 'fire_science', 'optometry', 'insurance_studies', 'pharmacology', 'taxation', 'trust_practice', 'geography_of_taiwan', 'physical_education', 'auditing', 'administrative_law',
    'education_(profession_level)', 'economics', 'veterinary_pharmacology', 'nautical_science', 'occupational_therapy_for_psychological_disorders',
    'basic_medical_science', 'macroeconomics', 'trade', 'chinese_language_and_literature', 'tve_design', 'junior_science_exam', 'junior_math_exam', 'junior_chinese_exam',
    'junior_social_studies', 'tve_mathematics', 'tve_chinese_language', 'tve_natural_sciences', 'junior_chemistry', 'music', 'education', 'three_principles_of_people',
    'taiwanese_hokkien'
]
for task in task_list:
    # Load each subject once and index its splits, rather than
    # calling load_dataset three times per task.
    data = load_dataset('ikala/tmmluplus', task)
    dev = data['train']        # few-shot development examples
    val = data['validation']
    test = data['test']
```
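
Each row is a four-option multiple-choice question. As a minimal sketch of turning a row into a zero-shot prompt (assuming the column names `question`, `A`, `B`, `C`, `D`, and `answer` shown in the dataset viewer):

```python
# Minimal prompt-formatting sketch. The column names question, A, B, C,
# D, and answer are assumed from the dataset viewer, not guaranteed here.
def format_prompt(row):
    choices = "\n".join(f"{opt}. {row[opt]}" for opt in "ABCD")
    return f"{row['question']}\n{choices}\n答案："

row = test[0]  # `test` holds the last task from the loop above
print(format_prompt(row))
print('gold answer:', row['answer'])
```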
Dataset statistics per split:
| Category                         | Test  | Dev  | Validation |
|----------------------------------|-------|------|------------|
| STEM                             | 3458  | 70   | 385        |
| Other (Business, Health, Misc.)  | 8939  | 135  | 995        |
| Social Sciences                  | 5958  | 90   | 665        |
| Humanities                       | 1763  | 35   | 197        |
| **Total**                        | 20118 | 330  | 2242       |
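
These totals can be checked directly against the hub; a quick sanity-check sketch, reusing `task_list` from the loading example above:

```python
from collections import Counter

from datasets import load_dataset

# Sum split sizes over all 66 tasks. Expected from the table above:
# train (dev) = 330, validation = 2242, test = 20118.
totals = Counter()
for task in task_list:
    data = load_dataset('ikala/tmmluplus', task)
    for split in ('train', 'validation', 'test'):
        totals[split] += len(data[split])
print(totals)
```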
## Benchmark on direct prompting
| model | STEM | Social Science | Humanities | Other | Average |
|-------|------|----------------|------------|-------|---------|
| [yentinglin/Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat) | 14.99 | 16.23 | 15.00 | 16.22 | 15.61 |
| [FlagAlpha/Atom-7B](https://huggingface.co/FlagAlpha/Atom-7B) | 5.60 | 13.57 | 7.71 | 11.84 | 9.68 |
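
For reference, a minimal sketch of how a direct-prompting accuracy could be computed; `generate` is a hypothetical prompt-to-completion callable standing in for any model API, and this is not the exact harness used to produce the table above:

```python
# Hedged scoring sketch for direct prompting. `generate` is a
# hypothetical callable (prompt -> completion string); format_prompt
# is the helper from the earlier sketch.
def score(split, generate):
    correct = 0
    for row in split:
        # Take the first character of the completion as the chosen option.
        pred = generate(format_prompt(row)).strip()[:1].upper()
        correct += int(pred == row['answer'])
    return 100.0 * correct / len(split)
```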