Commit 80a1cba · Parent: b4eaaca · Create README.md

README.md (added)
---
license: apache-2.0
language:
- da
---
# DanskGPT-tiny

DanskGPT-tiny is a 1.1-billion-parameter LLaMA-based LLM.

The model was trained on 8 billion tokens of synthetic Danish text.

This is a so-called "foundation/completion" model, and it is therefore not intended for chat.

## Inference

Using vLLM:

`pip install vllm`

```python
from vllm import LLM, SamplingParams

sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=512)
llm = LLM(model="mhenrichsen/danskgpt-tiny")

while True:
    prompt = input("Skriv: ")
    outputs = llm.generate(prompt, sampling_params)
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
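As a side note on the `SamplingParams` values above: `temperature` rescales the model's logits before sampling (lower sharpens the distribution, higher flattens it), and `top_p` keeps only the smallest set of tokens whose cumulative probability reaches the threshold (nucleus sampling). The sketch below is not part of the model card or of vLLM; it is a plain-Python toy illustration of those two mechanisms on a made-up four-token vocabulary.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn logits into probabilities; temperature scales the logits first."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

def top_p_filter(probs, top_p):
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches top_p, then renormalize (nucleus sampling)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    total = sum(p for _, p in kept)
    return {tok: p / total for tok, p in kept}

# Toy logits for a hypothetical next-token distribution.
logits = {"hund": 2.0, "kat": 1.0, "bil": 0.2, "hus": -1.0}
probs = softmax_with_temperature(logits, temperature=0.8)
filtered = top_p_filter(probs, top_p=0.95)
print(filtered)  # the lowest-probability token "hus" is cut off
```

With `temperature=0.8` and `top_p=0.95`, the least likely token falls outside the nucleus and is never sampled, which is why these settings tend to produce more focused completions than pure sampling.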