---
base_model: unsloth/Llama-3.2-3B-Instruct
library_name: transformers
language:
- en
- ms
- id
- ta
- zh
---

# Llama-3.2-3B-Malaysian-Reasoning LoRA

These are Low-Rank Adaptation (LoRA) adapters for [mesolitica/Llama-3.2-3B-Malaysian-Reasoning](https://huggingface.co/mesolitica/Llama-3.2-3B-Malaysian-Reasoning).

Full README at [mesolitica/Llama-3.2-3B-Malaysian-Reasoning](https://huggingface.co/mesolitica/Llama-3.2-3B-Malaysian-Reasoning).
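For quick use without merging, the adapters can be loaded on top of the base model with PEFT. A minimal sketch, assuming `transformers` and `peft` are installed; the adapter repository id below is a placeholder for this repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model the adapters were trained on.
base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-3B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-3B-Instruct")

# Attach the LoRA adapters (replace with this repository's id).
model = PeftModel.from_pretrained(base, "path/to/this-lora-repo")
```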

## Merging

Because Llama 3.2 3B uses tied word embeddings (the LM head shares its weight matrix with the token embedding), merging requires first cloning the embedding weights into a separate LM head; after that, the usual `addmm` merge applies. The full script is at https://github.com/mesolitica/malaya/blob/master/session/small-malaysian-reasoning/merge-3b.ipynb
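A hedged sketch of the untie-then-merge idea; the authoritative steps are in the linked notebook, and the adapter repository id is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Llama-3.2-3B-Instruct", torch_dtype=torch.bfloat16
)

# Untie the weights: give the LM head its own copy of the embedding
# matrix so a LoRA delta applied to one does not silently change the other.
base.lm_head.weight = torch.nn.Parameter(base.model.embed_tokens.weight.clone())
base.config.tie_word_embeddings = False

# With the weights untied, the usual merge (W += scale * B @ A, the
# `addmm` mentioned above) is safe to apply.
model = PeftModel.from_pretrained(base, "path/to/this-lora-repo")
model = model.merge_and_unload()
```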

If the model does not use tied weights, you can use the default `merge_and_unload` function from PEFT or Unsloth.
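For an untied model the standard flow is simply (sketch; model ids are placeholders):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("some/untied-base-model")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Fold the LoRA deltas into the base weights and drop the adapter wrappers.
model = model.merge_and_unload()
model.save_pretrained("merged-model")
```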