Llama-3.2-3B-Malaysian-Reasoning LoRA
These are Low-Rank Adaptation (LoRA) adapters for mesolitica/Llama-3.2-3B-Malaysian-Reasoning.
Full README at mesolitica/Llama-3.2-3B-Malaysian-Reasoning.
Merging
Because Llama 3.2 3B uses tied word embeddings, merging requires first cloning the embedding weights into the LM head, then applying the LoRA delta (addmm) as usual; a sketch of the idea follows below. The full script is at https://github.com/mesolitica/malaya/blob/master/session/small-malaysian-reasoning/merge-3b.ipynb
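A minimal sketch of the tied-weight merge, assuming a standard PEFT setup: it unties the weights by cloning embed_tokens into lm_head, then folds the adapter in with merge_and_unload. The linked notebook performs the merge manually (clone plus addmm), so treat this as an equivalent approach rather than the exact script; the output path is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model; bf16 keeps memory reasonable for a 3B model.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B-Instruct",
    torch_dtype=torch.bfloat16,
)

# Untie the weights: give lm_head its own copy of the embedding matrix,
# so merging the LoRA delta into lm_head does not silently modify
# embed_tokens as well.
base.lm_head.weight = torch.nn.Parameter(base.model.embed_tokens.weight.clone())
base.config.tie_word_embeddings = False

# Attach the adapter and fold it into the (now untied) base weights.
model = PeftModel.from_pretrained(
    base, "malayloraenjoyer/Llama-3.2-3B-Malaysian-Reasoning-LoRA"
)
merged = model.merge_and_unload()
merged.save_pretrained("./Llama-3.2-3B-Malaysian-Reasoning-merged")
```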
If the model does not use tied weights, you can use the default merge_and_unload
function from PEFT or Unsloth.
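For reference, the untied case is just the standard PEFT flow; the model IDs below are placeholders, not models from this repo.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Standard merge for models without tied embeddings.
base = AutoModelForCausalLM.from_pretrained("your/base-model")
model = PeftModel.from_pretrained(base, "your/lora-adapter")
merged = model.merge_and_unload()
```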
Model tree for malayloraenjoyer/Llama-3.2-3B-Malaysian-Reasoning-LoRA
Base model: meta-llama/Llama-3.2-3B-Instruct
Finetuned: unsloth/Llama-3.2-3B-Instruct