Fine-tuning parameters

by lvkaokao

Hi, can you share the fine-tuning hyper-parameters?

I have fine-tuned https://huggingface.co/mistralai/Mistral-7B-v0.1 with your dataset, but the ARC and HellaSwag metrics decrease significantly during training.

Here is some information about my hyper-parameters (a rough sketch of the setup follows the list):

  1. full-parameter fine-tuning
  2. learning rate = 5e-6
  3. batch_size=64
  4. epoch=3
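
Roughly, the run looks like the following with the Hugging Face `Trainer` API (a simplified sketch of the setup above, not the actual script; `tokenized_openorca` is a placeholder for the tokenized dataset, and the bf16/gradient-accumulation choices are assumptions, not something stated in this thread):

```python
# Sketch of the full-parameter fine-tuning run described above.
# Placeholder: tokenized_openorca stands in for the actual tokenized dataset.
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

args = TrainingArguments(
    output_dir="mistral-7b-openorca",
    num_train_epochs=3,             # epoch = 3
    learning_rate=5e-6,             # learning rate = 5e-6
    per_device_train_batch_size=8,  # 8 per device * 8 accumulation steps
    gradient_accumulation_steps=8,  #   = effective batch_size of 64
    bf16=True,                      # assumption: bf16-capable GPUs
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized_openorca)
trainer.train()
```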
OpenOrca org

The batch_size is too large; setting it to 4 or 6 is better.
I would prefer to set the learning rate to 2e-5.

Thanks for your response~
I will try!

@xDAN2099
hi, what about the other parameters?

I set
full-parameter fine-tuning
learning rate = 2e-5
batch_size=8
epoch=3
warmup_ratio = 0.03

but the training loss doesn't converge
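
For completeness, here are the revised settings as the same kind of `Trainer` sketch (again assuming the Hugging Face `Trainer` is used; the logging interval and bf16 flag are my additions, included only to make the loss curve easier to watch):

```python
from transformers import TrainingArguments

# Revised hyper-parameters from this thread; other fields as in the earlier sketch.
args = TrainingArguments(
    output_dir="mistral-7b-openorca-v2",
    num_train_epochs=3,             # epoch = 3
    learning_rate=2e-5,             # learning rate = 2e-5
    per_device_train_batch_size=8,  # batch_size = 8
    warmup_ratio=0.03,              # warmup_ratio = 0.03
    logging_steps=10,               # assumption: log often to watch convergence
    bf16=True,                      # assumption: bf16-capable GPUs
)
```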

@lvkaokao how did you fine-tune? Was it with Axolotl too?

@xDAN2099 and others, I'm trying to fine-tune Mistral 7B with SlimOrca, and my MT-Bench score is consistently well below the 6.84 benchmarked for OpenOrca. Could you please share the exact training script, along with any other details on the hardware used, to get an on-par MT-Bench score? Any help in that regard is much appreciated, thank you!
