huseinzol05 committed · Commit 56c9ee1 · verified · 1 Parent(s): 9c07e27

Update README.md

Files changed (1):
  1. README.md +8 -2
README.md CHANGED
@@ -1,6 +1,12 @@
 ---
+base_model: unsloth/Llama-3.2-3B-Instruct
 library_name: transformers
-tags: []
+language:
+- en
+- ms
+- id
+- ta
+- zh
 ---

 # Llama-3.2-3B-Malaysian-Reasoning LoRA
@@ -11,6 +17,6 @@ Full README at [mesolitica/Llama-3.2-3B-Malaysian-Reasoning](https://huggingface

 ## Merging

-Because Llama 3.2 3B uses tied word embeddings, merging requires cloning the embedding into the lm head; the script is at https://github.com/mesolitica/malaya/blob/master/session/small-malaysian-reasoning/merge-3b.ipynb
+Because Llama 3.2 3B uses tied word embeddings, merging requires cloning the embedding into the lm head, after which the output projection (`addmm`) runs as usual; the script is at https://github.com/mesolitica/malaya/blob/master/session/small-malaysian-reasoning/merge-3b.ipynb

 If the model does not use tied weights, you can use the default `merge_and_unload` function from PEFT or unsloth.
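For readers merging locally, below is a minimal sketch of the procedure the updated paragraph describes. It assumes the merged checkpoint is a standard `LlamaForCausalLM` and uses a placeholder adapter repo id; the authoritative steps are in the linked merge-3b.ipynb notebook.

```python
# Minimal sketch: merge the LoRA, then untie and clone the input embedding
# into lm_head, since Llama 3.2 3B ties lm_head to the input embedding.
# The adapter repo id below is a placeholder, not the actual repo.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Llama-3.2-3B-Instruct",
    torch_dtype=torch.bfloat16,
)
model = PeftModel.from_pretrained(
    base, "your-username/llama-3.2-3b-malaysian-reasoning-lora"
)

# Fold the LoRA deltas into the base weights.
merged = model.merge_and_unload()

# Untie, then copy the (LoRA-updated) embedding into lm_head so the output
# projection uses the merged weights as usual.
merged.config.tie_word_embeddings = False
merged.lm_head.weight = torch.nn.Parameter(
    merged.model.embed_tokens.weight.detach().clone()
)

merged.save_pretrained("llama-3.2-3b-malaysian-reasoning-merged")
```

For a base model without tied weights, the `merge_and_unload()` call alone is enough, as the README notes.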