DopeorNope committed fa95c37 (parent: c798ad4): Create README.md

Files changed (1): README.md (+64 -0)
---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**The license is `cc-by-nc-sa-4.0`.**

# **🐻‍❄️SOLARC-M-10.7B🐻‍❄️**
![img](https://drive.google.com/uc?export=view&id=1_Qa2TfLMw3WeJ23dHkrP1Xln_RNt1jqG)


## Model Details

**Model Developers** Seungyoo Lee (DopeorNope)

I am in charge of Large Language Models (LLMs) on the Markr AI team in South Korea.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
SOLARC-M-10.7B is an auto-regressive language model based on the SOLAR architecture.

---
28
+
29
+ ## **Base Model**
30
+
31
+ [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct)
32
+
33
+
34
+ [jeonsworld/CarbonVillain-en-10.7B-v1](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v1)
35
+
36
+

## **Implemented Method**

I built this model by merging the two base models listed above.

---
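The card does not state which merge algorithm or mixing ratio was used. As an illustration only, one common family of merge methods interpolates the two models' weights parameter-by-parameter; the sketch below shows linear interpolation on toy state dicts (`merge_state_dicts` and `alpha` are my own names, not from this repo):

```python
import torch

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Linearly interpolate two state dicts: alpha * A + (1 - alpha) * B.

    Assumes both models share the same architecture, so every parameter
    name and shape matches between the two state dicts.
    """
    merged = {}
    for name, tensor_a in sd_a.items():
        tensor_b = sd_b[name]
        merged[name] = alpha * tensor_a + (1.0 - alpha) * tensor_b
    return merged

# Toy example with two tiny "models" (dicts of tensors).
sd_a = {"w": torch.tensor([1.0, 2.0])}
sd_b = {"w": torch.tensor([3.0, 4.0])}
merged = merge_state_dicts(sd_a, sd_b, alpha=0.5)
print(merged["w"])  # tensor([2., 3.])
```

In practice, merges of SOLAR-scale models are usually done with a dedicated tool rather than by hand, and may use spherical (slerp) rather than linear interpolation; this sketch only conveys the general idea.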

# Implementation Code

## Load model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "DopeorNope/SOLARC-M-10.7B"
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
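To try the loaded model, a generation call could look like the following. This continues from the loading snippet above; the prompt text and decoding settings are illustrative assumptions, since the card does not document a prompt format:

```python
# Continues from the loading snippet above (OpenOrca, OpenOrca_tokenizer).
# The prompt and generation settings below are illustrative assumptions,
# not a documented prompt format for this model.
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of South Korea?"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)

with torch.no_grad():
    output_ids = OpenOrca.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(OpenOrca_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```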

---