---
library_name: transformers
tags: []
---

# Llama-3.2-3B-Malaysian-Reasoning LoRA

These are Low-Rank Adaptation (LoRA) adapters for mesolitica/Llama-3.2-3B-Malaysian-Reasoning.

The full README is at mesolitica/Llama-3.2-3B-Malaysian-Reasoning.

## Merging

Because Llama 3.2 3B uses tied word embeddings, merging requires cloning the embedding weights into the LM head. The full script is at https://github.com/mesolitica/malaya/blob/master/session/small-malaysian-reasoning/merge-3b.ipynb, and a sketch of the idea follows.

If the model does not use tied weights, you can use the default `merge_and_unload` function from PEFT or Unsloth.