devngho/ko_edu_classifier_v2_nlpai-lab_KoE5
The base model was trained with the `query: ` and `passage: ` prefixes, and this model was likewise trained with the `passage: ` prefix. Be sure to prepend `passage: ` to your input text.

This model is nlpai-lab/KoE5 with a classifier head added. It is intended as a Korean counterpart to HuggingFaceFW/fineweb-edu-classifier and scores the educational value of Korean web pages. Training used the devngho/ko_llm_annotations dataset: 500k samples extracted from blueapple8259/c4-ko-cleaned-2 and annotated with Qwen/Qwen2.5-32B-Instruct.

This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC). ⚡
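A minimal usage sketch, assuming the model loads through the standard `transformers` sequence-classification API; the helper names and the head interpretation below are ours, not from the model card. The essential point is the mandatory `passage: ` prefix:

```python
def add_passage_prefix(texts):
    """Prepend the required 'passage: ' prefix, skipping texts that already have it."""
    return [t if t.startswith("passage: ") else "passage: " + t for t in texts]


def score_texts(texts, model_name="devngho/ko_edu_classifier_v2_nlpai-lab_KoE5"):
    """Return raw classifier logits for a batch of Korean texts.

    Requires `torch` and `transformers`; imported lazily so the prefix
    helper above stays dependency-free.
    """
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    inputs = tokenizer(add_passage_prefix(texts), padding=True,
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # How to read the logits depends on the head: argmax for a 5-class
    # head, or the raw value if it is a single regression logit (assumption).
    return logits
```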
Details
- Developed by: devngho
- Language(s): ko
- License: mit
- Base model: nlpai-lab/KoE5
Training details
- learning_rate: 3e-4 (cosine)
- warmup_ratio: 0.1
- batch_size: 2048(512*4)
- optimizer: adamw(b1=0.9, b2=0.98, eps=1e-8, weight_decay=0.01)
- duration: 8h 12m
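The hyperparameters above imply a schedule like the following (a hedged sketch: linear warmup over the first 10% of steps per `warmup_ratio`, then cosine decay of the 3e-4 peak to zero; the total step count is illustrative, not from the card):

```python
import math

def lr_at(step, total_steps, peak_lr=3e-4, warmup_ratio=0.1):
    """Learning rate at a given step: linear warmup, then cosine decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))
```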
Training hardware
TPU v4-8
Performance
Validation report:

| class | precision | recall | f1-score | support |
|---|---|---|---|---|
| 0 | 0.66 | 0.33 | 0.44 | 198 |
| 1 | 0.75 | 0.63 | 0.68 | 1553 |
| 2 | 0.46 | 0.68 | 0.55 | 1159 |
| 3 | 0.63 | 0.56 | 0.59 | 967 |
| 4 | 0.62 | 0.26 | 0.36 | 219 |
| accuracy | | | 0.59 | 4096 |
| macro avg | 0.62 | 0.49 | 0.52 | 4096 |
| weighted avg | 0.62 | 0.59 | 0.59 | 4096 |

Confusion matrix (rows = true label, columns = predicted label):

| | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| 0 | 66 | 116 | 16 | 0 | 0 |
| 1 | 34 | 977 | 520 | 22 | 0 |
| 2 | 0 | 207 | 791 | 159 | 2 |
| 3 | 0 | 11 | 382 | 541 | 33 |
| 4 | 0 | 0 | 20 | 143 | 56 |
It performs better than other small models, but the limitations of Korean embeddings and of Qwen2.5 32B as an annotator likely cap its accuracy. When the labels are binarized into scores below 3 versus 3 and above, the F1 score is about 0.72.
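The binarized F1 quoted above can be reproduced directly from the confusion matrix, assuming the sklearn convention of rows as true labels and columns as predicted labels, with labels 3-4 counted as positive ("educational") and 0-2 as negative:

```python
# Confusion matrix from the validation report (rows = true, cols = predicted).
cm = [
    [66, 116,  16,   0,  0],
    [34, 977, 520,  22,  0],
    [ 0, 207, 791, 159,  2],
    [ 0,  11, 382, 541, 33],
    [ 0,   0,  20, 143, 56],
]

tp = sum(cm[r][c] for r in (3, 4) for c in (3, 4))        # true >= 3, predicted >= 3
fp = sum(cm[r][c] for r in (0, 1, 2) for c in (3, 4))     # true < 3, predicted >= 3
fn = sum(cm[r][c] for r in (3, 4) for c in (0, 1, 2))     # true >= 3, predicted < 3

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 0.72
```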