---
base_model: unsloth/Llama-3.2-3B-Instruct
library_name: transformers
language:
- en
- ms
- id
- ta
- zh
---
|
|
|
# Llama-3.2-3B-Malaysian-Reasoning LoRA
|
|
|
These are the low-rank (LoRA) adapters for [mesolitica/Llama-3.2-3B-Malaysian-Reasoning](https://huggingface.co/mesolitica/Llama-3.2-3B-Malaysian-Reasoning).
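
A minimal loading sketch with PEFT, assuming the adapters are attached on top of the base model; `"this-adapter-repo-id"` is a placeholder for this repository's Hugging Face ID:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-3B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-3B-Instruct")

# Wrap the frozen base model with the low-rank adapter weights.
model = PeftModel.from_pretrained(base, "this-adapter-repo-id")  # placeholder repo ID
```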
|
|
|
Full README at [mesolitica/Llama-3.2-3B-Malaysian-Reasoning](https://huggingface.co/mesolitica/Llama-3.2-3B-Malaysian-Reasoning).
|
|
|
## Merging
|
|
|
Because Llama 3.2 3B uses tied word embeddings, merging requires cloning the embedding weights into the LM head first, and only then folding in the LoRA deltas with `addmm` as usual. The full merge script is at https://github.com/mesolitica/malaya/blob/master/session/small-malaysian-reasoning/merge-3b.ipynb
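
With tied weights, `lm_head` and `embed_tokens` point at the same tensor, so folding a LoRA delta into one would silently change the other; untying first avoids that. Below is a simplified sketch of the two steps, not the linked notebook: it assumes a plain `LlamaForCausalLM` layout (`model.model.embed_tokens`, `model.lm_head`), and the helper names are illustrative only.

```python
import torch

def untie_lm_head(model) -> None:
    """Clone the tied input embedding into a standalone lm_head so the two
    weights can be merged independently of each other."""
    model.lm_head.weight = torch.nn.Parameter(
        model.model.embed_tokens.weight.detach().clone()
    )
    model.config.tie_word_embeddings = False

def merge_lora_weight(weight: torch.Tensor,
                      lora_A: torch.Tensor,
                      lora_B: torch.Tensor,
                      scaling: float) -> torch.Tensor:
    """Fold one LoRA pair into its frozen weight with addmm:
    W_merged = W + scaling * (B @ A)."""
    return torch.addmm(weight, lora_B, lora_A, alpha=scaling)
```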
|
|
|
If the base model does not use tied weights, you can use the default `merge_and_unload` function from PEFT or Unsloth.
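
For reference, a sketch of that standard path; the base model ID, adapter ID, and output directory below are placeholders:

```python
# Standard merge path for a base model without tied embeddings: fold the LoRA
# deltas into the base weights and drop the adapter wrappers.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("some-untied-base-model")  # placeholder
model = PeftModel.from_pretrained(base, "this-adapter-repo-id")        # placeholder
merged = model.merge_and_unload()
merged.save_pretrained("merged-model")                                 # placeholder output dir
```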