Can we use LoRA for finetuning the MPT models?

#10
by hk11 - opened

Hi,
As per the release notes https://github.com/mosaicml/llm-foundry/releases/tag/v0.2.0, LoRA finetuning is officially supported for MPT models. Can we use `get_peft_model` on MPT models in order to use LoRA for finetuning?

Yes. Make sure to target the `Wqkv` modules (MPT fuses the query, key, and value projections into a single layer). You could do something like this:

```yaml
lora:
  args:
    r: 16
    lora_alpha: 32
    lora_dropout: 0.05
    target_modules: ['Wqkv']
```

And then build the config from those args:

```python
LoraConfig(**lora_cfg.args)
```
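For reference, here is a minimal end-to-end sketch using the `transformers` and `peft` libraries; the model name and hyperparameters are illustrative and simply mirror the YAML above:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# MPT ships custom modeling code, so trust_remote_code is required.
model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b",
    trust_remote_code=True,
)

# Mirror the YAML config above: target the fused attention projection.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["Wqkv"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity-check that only the LoRA weights are trainable
```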
