Yet, as the development of larger and more capable language models accelerates, evaluation methods aren't keeping pace. Recognizing this gap, we at HAE-RAE are dedicated to creating tailored benchmarks for the rigorous evaluation of these models.

CSAT-QA is a comprehensive collection of 936 multiple-choice question answering (MCQA) questions, manually collected from the College Scholastic Ability Test (CSAT), a rigorous Korean university entrance exam. The CSAT-QA is divided into two subsets: a complete version encompassing all 936 questions, and a smaller, specialized version used for targeted evaluations.

The smaller subset is further divided into six categories: Writing (WR), Grammar (GR), Reading Comprehension: Science (RCS), Reading Comprehension: Social Science (RCSS), Reading Comprehension: Humanities (RCH), and Literature (LI). It also includes the recorded accuracy of South Korean students, providing a valuable real-world performance benchmark.
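
As a quick orientation, the sketch below loads each of the six category configs and prints how many questions it contains. The config names come from the list above; the split name is discovered at runtime rather than assumed, since it is not stated in this README.

```
from datasets import load_dataset

# The six category configs of the smaller subset (names from the list above).
CATEGORIES = ["WR", "GR", "RCS", "RCSS", "RCH", "LI"]

for cat in CATEGORIES:
    ds = load_dataset("EleutherAI/CSAT-QA", cat)
    split = list(ds.keys())[0]  # discover the split name rather than assuming one
    print(f"{cat}: {len(ds[split])} questions")
```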

For a detailed explanation of how CSAT-QA was created, please check out the [accompanying blog post](https://github.com/guijinSON/hae-rae/blob/main/blog/CSAT-QA.md); for evaluation, check out [LM-Eval-Harness](https://github.com/EleutherAI/lm-evaluation-harness) on GitHub.

## Evaluation Results

| Category | Polyglot-Ko-12.8B | GPT-3.5-16k | GPT-4 | Human_Performance |
| --- | --- | --- | --- | --- |
| … | … | … | … | … |

The CSAT-QA includes two subsets. The full version, with 936 questions, can be downloaded using the following code:

```
from datasets import load_dataset

# Load the full 936-question version of CSAT-QA.
dataset = load_dataset("EleutherAI/CSAT-QA", "full")
```
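
Continuing from the snippet above, you can peek at the first record to see the fields each question carries. The split name is discovered at runtime, and the field names are whatever the dataset provides, since this README does not list them.

```
# Peek at the first record of the full set.
split = list(dataset.keys())[0]
print(dataset[split][0])        # one record, as a dict of its fields
print(dataset[split].features)  # the schema, with the actual field names
```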

A more condensed version, which includes human accuracy data, can be downloaded using the following code:

```
from datasets import load_dataset
import pandas as pd

# Load one category of the smaller subset.
dataset = load_dataset("EleutherAI/CSAT-QA", "GR")  # choose from WR, GR, LI, RCH, RCS, or RCSS
```
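
Since pandas is imported above but not otherwise used in the snippet, a natural next step is to inspect the questions and the human accuracy figures as a DataFrame. A minimal sketch, continuing from the snippet above; the name of the human-accuracy column is not stated in this README, so the last line lists the actual columns instead of guessing one.

```
# Convert the loaded category to a DataFrame for inspection.
split = list(dataset.keys())[0]
df = dataset[split].to_pandas()
print(df.head())            # first few questions
print(df.columns.tolist())  # actual column names, including the human-accuracy field
```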

## Evaluate using LM-Eval-Harness
To evaluate your model with the LM-Eval-Harness by EleutherAI, simply follow the steps below.

1. To install lm-eval from the main branch of the GitHub repository, run:
```
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```

2. To install the additional multilingual tokenization and text segmentation packages, install the package with the `multilingual` extra:
```
pip install -e ".[multilingual]"
```

3. Run the evaluation:
```
python main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/polyglot-ko-1.3b \
    --tasks csatqa_wr,csatqa_gr,csatqa_rcs,csatqa_rcss,csatqa_rch,csatqa_li \
    --device cuda:0
```
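
The harness's `main.py` also accepts the usual optional flags for shot count, batching, and saving results; the variant below is illustrative rather than canonical, so verify the exact flag set with `python main.py --help` in your checkout.

```
python main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/polyglot-ko-1.3b \
    --tasks csatqa_gr \
    --num_fewshot 0 \
    --batch_size 8 \
    --output_path results/csatqa_gr.json \
    --device cuda:0
```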

## License