Update README.md
README.md CHANGED
@@ -4,4 +4,4 @@ language:
 ---
 T‑LLaMA: a Tibetan large language model based on LLaMA2
 
-In this study, we trained Tibetan LLaMA based on LLaMA2
+In this study, we built a corpus containing 2.2 billion Tibetan characters and trained Tibetan LLaMA based on LLaMA2 7B. We achieved state-of-the-art performance in the text classification task on the open-source TNCC dataset, with an accuracy of 79.8%. Additionally, we obtained promising results in text generation and text summarization tasks.