Bazsalanszky committed on
Commit 0df10f1
1 Parent(s): bad1341

Update README.md

Files changed (1): README.md (+24 -4)
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  language:
- - en
+ - hu
  license: apache-2.0
  tags:
  - text-generation-inference
@@ -9,14 +9,34 @@ tags:
  - mistral
  - trl
  base_model: unsloth/mistral-7b-bnb-4bit
+ datasets:
+ - SZTAKI-HLT/HunSum-1
  ---

- # Uploaded model
+ # Mistral-7b-0.1-hu

  - **Developed by:** Bazsalanszky
  - **License:** apache-2.0
  - **Finetuned from model:** unsloth/mistral-7b-bnb-4bit

- This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+ This Mistral 7B model was trained on Hungarian text from 10,000 randomly selected articles, so it writes somewhat more polished Hungarian than the base model.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ ## Important
+
+ This model has NOT been trained on instructions and will most likely not follow them.
+
+ ## Example usage
+
+ ```python
+ # Load the base tokenizer and the fine-tuned causal LM
+ from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
+
+ tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
+ model = AutoModelForCausalLM.from_pretrained("Bazsalanszky/Mistral-7b-0.1-hu")
+
+ # Hungarian prompt: "Hungary\nIts capital:"
+ inputs = tokenizer("Magyarország\nFővárosa:", return_tensors="pt").to("cpu")
+
+ # Stream generated tokens to the console as they are produced
+ text_streamer = TextStreamer(tokenizer)
+ _ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=250)
+ ```
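
Because the base checkpoint is a bitsandbytes 4-bit variant, the fine-tuned weights can also be loaded quantized to fit on smaller GPUs. The snippet below is a minimal sketch and is not part of the original card; it assumes a CUDA GPU with the `bitsandbytes` and `accelerate` packages installed, and the `BitsAndBytesConfig` settings are illustrative defaults.

```python
# Optional 4-bit loading sketch (assumption: CUDA GPU, bitsandbytes and accelerate installed;
# not part of the original model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize weights to 4-bit at load time
    bnb_4bit_compute_dtype=torch.float16,  # run matrix multiplications in fp16
)

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained(
    "Bazsalanszky/Mistral-7b-0.1-hu",
    quantization_config=bnb_config,
    device_map="auto",                     # let accelerate place layers on the GPU
)
```

When generating with the quantized model, move the tokenized inputs to `model.device` instead of `"cpu"`.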
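
The card lists SZTAKI-HLT/HunSum-1 as the training dataset and says 10,000 randomly selected articles were used. The snippet below is only a sketch of how such a sample could be drawn with the `datasets` library; the split name and seed are assumptions, not the author's actual preprocessing.

```python
# Sketch: draw a random 10,000-article sample from HunSum-1.
# Assumptions: the dataset exposes a "train" split; the seed is arbitrary.
from datasets import load_dataset

hunsum = load_dataset("SZTAKI-HLT/HunSum-1", split="train")
sample = hunsum.shuffle(seed=42).select(range(10_000))
print(sample)  # shows the column names and the number of rows
```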