---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**The license is `cc-by-nc-4.0`.**

# **GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15**

## Model Details

**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15 is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Base Model** [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B)

**Training Dataset**

- We combined open Korean datasets using a mixed strategy.
- Training was performed on 8 × A100 80GB GPUs.

# **Model Benchmark**

## KO-LLM leaderboard

- Results are tracked on the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

# Implementation Code

```python
### GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
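
Once `model` and `tokenizer` are loaded as above, text can be generated with the standard `transformers` `generate` API. A minimal sketch follows; the prompt and the generation parameters (`max_new_tokens`, `temperature`) are illustrative assumptions, not recommended settings from the model authors:

```python
# Assumes `model` and `tokenizer` were loaded with the snippet above.
# Prompt and sampling parameters are illustrative only.
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"

# Tokenize and move inputs to the same device as the model.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample up to 128 new tokens from the model.
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)

# Decode the full sequence, dropping special tokens.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```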