We used a set of publicly available text corpora, including:

- English: [The Pile](https://github.com/EleutherAI/the-pile), [RedPajama](https://github.com/togethercomputer/RedPajama-Data), [C4](https://huggingface.co/datasets/c4), and others (see the loading sketch below).
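Of the corpora above, C4 is also hosted on the Hugging Face Hub. As a rough illustration only (not the ingestion pipeline actually used for 42dot-PLM), a few records can be streamed with the `datasets` library:

```python
# Minimal sketch: stream a few C4 records from the Hugging Face Hub.
# The dataset id and field names follow the public C4 dataset card; this
# is not the pipeline actually used for 42dot-PLM's pre-training data.
from datasets import load_dataset

c4 = load_dataset("c4", "en", split="train", streaming=True)
for i, example in enumerate(c4):
    print(example["text"][:80])  # each C4 record carries a "text" field
    if i == 2:
        break
```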
### Tokenizer
The tokenizer is based on the byte-level BPE algorithm. We trained its vocabulary from scratch on a subset of the pre-training corpus: to construct the subset, 10M documents were sampled from the Korean corpus and another 10M from the English corpus. The resulting vocabulary size is about 50K.
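The sketch below shows how such a vocabulary could be trained with the Hugging Face `tokenizers` library; the file paths, special tokens, and frequency cutoff are illustrative assumptions, not the exact recipe used for 42dot-PLM.

```python
# Sketch: train a byte-level BPE vocabulary from scratch on sampled text.
# File paths and special tokens are hypothetical placeholders.
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["korean_sample.txt", "english_sample.txt"],  # pre-sampled subsets
    vocab_size=50_000,   # "about 50K", as described above
    min_frequency=2,     # assumed merge cutoff
    special_tokens=["<s>", "</s>", "<pad>", "<unk>"],  # assumed token set
)
tokenizer.save_model("tokenizer_out")  # writes vocab.json and merges.txt
```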
### Zero-shot evaluations
We evaluate 42dot-PLM on a variety of academic benchmarks in both Korean and English. All results are obtained using [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) and models released on the Hugging Face Hub.
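For reference, a zero-shot run through the harness's Python API could look like the sketch below; the backend name, task names, and model id are illustrative and depend on the installed branch.

```python
# Sketch: zero-shot KOBEST evaluation via lm-eval-harness's Python API.
# Task names and the model id are illustrative; check the polyglot
# branch's task registry for the exact names in your installed version.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",                        # backend name varies by version
    model_args="pretrained=42dot/42dot-PLM",  # hypothetical Hub id
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag"],
    num_fewshot=0,                            # zero-shot
)
print(results["results"])
```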
#### Korean (KOBEST)
<figure align="center">
| **average** | 0.479 | 0.482 | 0.452 | 0.429 | **0.489** |
## Limitations and Ethical Considerations
42dot-PLM shares a number of well-known limitations with other large language models (LLMs). For example, it may generate false or misleading content, since 42dot-PLM is also subject to [hallucination](https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)). In addition, it may generate toxic, harmful, or biased content because it was trained on web-available data. We strongly suggest that users of 42dot-PLM be aware of these limitations and take the necessary steps to mitigate them.
## Disclaimer
The contents generated by the 42dot LLM series ("42dot LLMs") do not necessarily reflect the views or opinions of 42dot Inc. ("42dot"). 42dot disclaims any and all liability to any party for any direct, indirect, implied, punitive, special, incidental, or other consequential damages arising from any use of the 42dot LLMs and their generated contents.