Commit 1f690c1 (parent: 3ede9bd) · Update README.md

README.md (frontmatter: license: apache-2.0) — changed:
# Model info

This is EleutherAI/pythia-410m finetuned on OpenAssistant/oasst_top1_2023-08-25

# Why

Plain and simple: I'm experimenting with making instruction-tuned LLMs under 1B parameters. I think we can still squeeze better performance out of these models.

# Usage

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="SummerSigh/Pythia410m-V0-Instruct")

out = pipe(
    "<|im_start|>user\nWhat's the meaning of life?<|im_end|>\n<|im_start|>assistant\n",
    max_length=500,
    repetition_penalty=1.2,
    temperature=0.5,
    do_sample=True,
)

print(out[0]["generated_text"])
```
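The snippet above hand-writes the ChatML-style turn markers the model was finetuned on, and `print` emits the prompt along with the reply. As a convenience, here is a small sketch of two helpers (`build_prompt` and `extract_reply` are hypothetical names, not part of this repo) that wrap a question in the template and keep only the assistant's turn:

```python
def build_prompt(question: str) -> str:
    # Wrap a user question in the ChatML-style turn markers used for finetuning.
    return f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant\n"

def extract_reply(generated_text: str, prompt: str) -> str:
    # The pipeline returns the prompt plus the continuation; strip the prompt
    # and cut at the first end-of-turn marker.
    return generated_text[len(prompt):].split("<|im_end|>")[0].strip()

# Usage with the pipeline from the snippet above:
# prompt = build_prompt("What's the meaning of life?")
# out = pipe(prompt, max_length=500, repetition_penalty=1.2,
#            temperature=0.5, do_sample=True)
# print(extract_reply(out[0]["generated_text"], prompt))
```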

# Contact

If you want to contact me and work together on making good sub-1B-parameter models, you can reach me on Discord at summer_ai.