finetuning
Hi,
I have a short question.
How did you fine-tune it? Could you please share the scripts or a link where I can find more information?
Here is the notebook he used: https://github.com/Vasanthengineer4949/NLP-Projects-NHV/tree/main/LLMs%20Related/Finetune%20Phi_1_5
and here is the video where he explains it: https://www.youtube.com/watch?v=R8CKx5yNEDo
thank you
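In case it helps anyone landing here, below is a rough sketch of what a QLoRA setup for Phi-1.5 with the transformers/peft/bitsandbytes stack generally looks like. It is not the exact configuration from the notebook; the hyperparameters and target module names are assumptions, so check them against your transformers version.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "microsoft/phi-1_5"

# 4-bit NF4 quantization so the base model fits in a small GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Phi-1.5 ships without a pad token

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections; module names are assumed for the
# HF Phi implementation -- inspect model.named_modules() for your version
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

From here the model can be passed to a regular `Trainer` or `SFTTrainer` loop as in most QLoRA tutorials.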
Hi,
@ahmed000000000
Just curious
My QLoRA fine-tuning of Phi-1.5 shows a message that attention_mask is not supported during training:
`attention_mask` is not supported during training. Using it might lead to unexpected results.
Why is there no message like this in your ipynb?
Secondly, if attention_mask is not supported, when we set
`tokenizer.pad_token = tokenizer.eos_token`
do we need to change the padding_side?
I mean, if padding_side = 'left' as some other tutorials suggest,
the input_ids will become: EOS EOS EOS EOS EOS Below is the instruction, bla bla bla...
so the model will be trained to predict a lot of EOS tokens at the beginning of the sentence. Wouldn't that be somewhat weird?
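To make the question concrete, here is a small sketch of what the two padding sides produce once pad_token is set to eos_token. The model id comes from the Phi-1.5 model card and the example texts are made up:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/phi-1_5")
tok.pad_token = tok.eos_token  # reuse EOS as the padding token

texts = ["Below is the instruction, bla bla bla", "short example"]

tok.padding_side = "left"
left = tok(texts, padding=True)
# the shorter sequence gets EOS padding in front of the real tokens
print(tok.convert_ids_to_tokens(left["input_ids"][1]))

tok.padding_side = "right"
right = tok(texts, padding=True)
# the shorter sequence gets EOS padding after the real tokens
print(tok.convert_ids_to_tokens(right["input_ids"][1]))
```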