How to fine-tune the Guanaco (7B, 13B) model?

#5
by mvermand - opened

I have read this post https://huggingface.co/blog/4bit-transformers-bitsandbytes, which ends with a demo of the Guanaco Playground: https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi. It works really nicely, but I would like to fine-tune the model to my needs. The same article links to a notebook on fine-tuning with QLoRA (resources section -> https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing), but that seems to be about fine-tuning an EleutherAI/gpt-neox-20b completion model, not the Guanaco chat-instruction model, right? Is there a Colab available that shows how to fine-tune the model used in the Guanaco Playground (https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi)?

University of Washington NLP org

We just uploaded scripts to replicate the Guanaco fine-tuning. Take a look at: https://github.com/artidoro/qlora/tree/main/scripts

Let me know if you have questions.

How can I fine-tune Guanaco on a new dataset? For example, is there training code for fine-tuning Guanaco 33B with QLoRA on my own dataset, which contains a new language?
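Before pointing the QLoRA training scripts at a custom dataset, the examples generally need to be rendered in the conversational prompt format Guanaco was trained on ("### Human: ... ### Assistant: ..."). A minimal sketch of that preprocessing step (the `question`/`answer` field names and the dataset shape are assumptions; adapt them to your own data):

```python
# Sketch: convert custom Q/A pairs into the "### Human / ### Assistant"
# prompt format used for Guanaco fine-tuning. Field names below are
# placeholders for whatever your dataset actually uses.

def format_guanaco_example(question: str, answer: str) -> str:
    """Render one Q/A pair in Guanaco's conversational format."""
    return f"### Human: {question}\n### Assistant: {answer}"

def build_training_texts(pairs):
    """Turn a list of {'question': ..., 'answer': ...} dicts into training strings."""
    return [format_guanaco_example(p["question"], p["answer"]) for p in pairs]

if __name__ == "__main__":
    # Example with a non-English pair, since the dataset contains a new language.
    pairs = [
        {"question": "Wat is de hoofdstad van België?",
         "answer": "De hoofdstad van België is Brussel."},
    ]
    for text in build_training_texts(pairs):
        print(text)
```

The resulting strings can then be fed to the training script as the text field of your dataset; check the dataset-loading code in the qlora repository for the exact field names it expects.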