---
language:
- ms
---

# Full Parameter Finetuning MaLLaM 🌙 3B 20480 context length on Malaysian instructions dataset

README at https://github.com/mesolitica/malaya/tree/5.1/session/mistral#mallam-3b

We use the exact Mistral Instruct chat template.
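
As a quick reference, here is a minimal self-contained sketch of that template, mirroring the `parse_mistral_chat` helper in the how-to section below: after a leading `<s>`, each completed (user, assistant) turn is wrapped as `[INST] … [/INST] …</s>`, and the final user query is left open for the model to answer. The example messages are illustrative only.

```python
def render_mistral_chat(messages):
    # Mistral Instruct format: leading <s>, each completed (user, assistant)
    # turn as "[INST] u [/INST] a</s>", then the final user query awaiting a reply.
    user_query = messages[-1]['content']
    users = [m['content'] for m in messages[:-1] if m['role'] == 'user']
    assistants = [m['content'] for m in messages[:-1] if m['role'] == 'assistant']
    texts = ['<s>']
    for u, a in zip(users, assistants):
        texts.append(f'[INST] {u.strip()} [/INST] {a.strip()}</s>')
    texts.append(f'[INST] {user_query.strip()} [/INST]')
    return ''.join(texts).strip()

print(render_mistral_chat([
    {'role': 'user', 'content': 'hello'},
    {'role': 'assistant', 'content': 'hi there'},
    {'role': 'user', 'content': 'kwsp tu apa'},
]))
# <s>[INST] hello [/INST] hi there</s>[INST] kwsp tu apa [/INST]
```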
11
+
12
+ WandB, https://wandb.ai/mesolitica/fpf-mallam-3b-instructions-16k?workspace=user-husein-mesolitica
13
+
14
+ WandB report, https://wandb.ai/mesolitica/fpf-mallam-5b-instructions-16k/reports/Instruction-finetuning--Vmlldzo2MjE5Njg2
15
+
16
+ ## Limitations
17
+
18
+ This model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
19
+ It does not have any moderation mechanisms.
20
+
21
+ ## how-to
22
+

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch
import json

def parse_mistral_chat(messages, function_call=None):
    # Render a list of chat messages into the Mistral Instruct prompt format.
    user_query = messages[-1]['content']

    users, assistants = [], []
    for q in messages[:-1]:
        if q['role'] == 'user':
            users.append(q['content'])
        elif q['role'] == 'assistant':
            assistants.append(q['content'])

    texts = ['<s>']

    if function_call:
        # Prepend pretty-printed JSON function schemas in a [FUNCTIONCALL] block.
        fs = []
        for f in function_call:
            fs.append(json.dumps(f, indent=4))
        fs = '\n\n'.join(fs)
        texts.append(f'\n[FUNCTIONCALL]\n{fs}\n')

    for u, a in zip(users, assistants):
        texts.append(f'[INST] {u.strip()} [/INST] {a.strip()}</s>')

    texts.append(f'[INST] {user_query.strip()} [/INST]')
    prompt = ''.join(texts).strip()
    return prompt

# 4-bit NF4 quantization so the 3B model fits on a single consumer GPU.
TORCH_DTYPE = 'bfloat16'
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=getattr(torch, TORCH_DTYPE),
)

tokenizer = AutoTokenizer.from_pretrained('mesolitica/mallam-3b-20k-instructions')
model = AutoModelForCausalLM.from_pretrained(
    'mesolitica/mallam-3b-20k-instructions',
    use_flash_attention_2=True,
    quantization_config=nf4_config,
)

messages = [
    {'role': 'user', 'content': 'kwsp tu apa'}
]
prompt = parse_mistral_chat(messages)
# The template already includes <s>, so skip the tokenizer's special tokens.
inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda')
generate_kwargs = dict(
    inputs,
    max_new_tokens=1024,
    top_p=0.95,
    top_k=50,
    temperature=0.9,
    do_sample=True,
    num_beams=1,
)
r = model.generate(**generate_kwargs)
print(tokenizer.decode(r[0]))
```
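
The `function_call` argument of `parse_mistral_chat` prepends a `[FUNCTIONCALL]` block of pretty-printed JSON schemas to the prompt. A self-contained sketch of just that branch (the `get_weather` schema here is a hypothetical example, not part of this model's training data):

```python
import json

def render_function_call_header(functions):
    # Mirrors the function_call branch of parse_mistral_chat: each schema is
    # pretty-printed JSON, joined by blank lines, wrapped in a [FUNCTIONCALL] block.
    fs = '\n\n'.join(json.dumps(f, indent=4) for f in functions)
    return f'\n[FUNCTIONCALL]\n{fs}\n'

header = render_function_call_header([
    {'name': 'get_weather', 'parameters': {'city': 'string'}},  # hypothetical schema
])
print(header)
```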

```text
<s> [INST] kwsp tu apa [/INST]KWSP bermaksud Kumpulan Wang Simpanan Pekerja. Ia adalah sebuah institusi simpanan persaraan yang ditubuhkan oleh Kementerian Kewangan Malaysia untuk tujuan mengumpul simpanan ahli untuk dibayar pada umur persaraan, penuh atau penuh persaraan penuh. KWSP ditubuhkan pada tahun 1951 dan mula beroperasi pada tahun 1952. KWSP adalah salah satu institusi simpanan persaraan terbesar di dunia, dengan pangkalan ahli sekitar 14 juta ahli.</s>
```
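
Note that `tokenizer.decode(r[0])` returns the full sequence, prompt echo included, as in the sample above. A hypothetical helper (not part of the repository) to pull out only the model's answer from such a decoded string:

```python
def extract_answer(decoded):
    # Take everything after the last [/INST] tag and drop the trailing </s>.
    answer = decoded.rsplit('[/INST]', 1)[-1]
    return answer.removesuffix('</s>').strip()

print(extract_answer('<s> [INST] kwsp tu apa [/INST]KWSP bermaksud Kumpulan Wang Simpanan Pekerja.</s>'))
# KWSP bermaksud Kumpulan Wang Simpanan Pekerja.
```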