Wonseop Kim committed
Commit 042a4e7 · 1 Parent(s): 2bdd0a2

Fix README.md

Files changed (1):
  1. README.md +18 -33
README.md CHANGED
@@ -1,40 +1,25 @@
  ---
  tags:
- - autotrain
- - text-generation
- widget:
- - text: "I love AutoTrain because "
- license: other
  ---

- # Model Trained Using AutoTrain

- This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

- # Usage

- ```python
-
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_path = "PATH_TO_THIS_REPO"
-
- tokenizer = AutoTokenizer.from_pretrained(model_path)
- model = AutoModelForCausalLM.from_pretrained(
-     model_path,
-     device_map="auto",
-     torch_dtype='auto'
- ).eval()
-
- # Prompt content: "hi"
- messages = [
-     {"role": "user", "content": "hi"}
- ]
-
- input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
- output_ids = model.generate(input_ids.to('cuda'))
- response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
-
- # Model response: "Hello! How can I assist you today?"
- print(response)
- ```
 
  ---
+ base_model: beomi/OPEN-SOLAR-KO-10.7B
+ license: apache-2.0
+ pipeline_tag: text-generation
+ language:
+ - en
+ - ko
  tags:
+ - finetuned
+ - text-generation
+ datasets:
+ - royboy0416/ko-alpaca
+ inference: false
+ model_type: mixtral
  ---
+ # Model Card for OPEN-SOLAR-KO-10.7B-S-Core

+ ## Model Details

+ * **Base Model**: [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B)

+ ## Dataset Details

+ ### Used Datasets
+ - royboy0416/ko-alpaca
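
The commit removes the old AutoTrain usage snippet without adding a replacement to the new card. A minimal loading sketch for the updated model, assuming the standard transformers causal-LM API and a placeholder repository path (not shown in this commit), could look like this:

```python
# Sketch only, not part of the commit: assumes the transformers causal-LM API
# and a placeholder repo path for the OPEN-SOLAR-KO-10.7B-S-Core checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"  # placeholder: local path or Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",   # place layers on available devices (requires accelerate)
    torch_dtype="auto",  # reuse the dtype stored in the checkpoint
).eval()

# The card lists royboy0416/ko-alpaca as training data, so a plain Korean
# instruction prompt is used here as an illustrative input.
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```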