## Alpaca-Lora-Swe 7B
Alpaca-Lora-Swe-7b is a LLaMA-7B model fine-tuned on a Swedish translation of the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset, so that it follows instructions in 🇸🇪 Swedish.
For more information, please visit the GitHub repo: https://github.com/jeremycochoy/alpaca-lora-swe
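Since the model was trained on Alpaca-format data, prompts are usually assembled with the Stanford Alpaca template before being passed to the model. Below is a minimal sketch of that template; the function name is illustrative, the preamble text is the one from the Stanford Alpaca repo, and it is an assumption that this Swedish variant keeps the English preamble rather than a translated one (check the linked repo for the exact format used in training).

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Assemble an Alpaca-style prompt; the instruction itself can be in Swedish."""
    if input_text:
        # Variant with an extra context/input field.
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Variant without an input field.
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

# Example: a Swedish instruction ("Explain what a LoRA adapter is.")
prompt = build_prompt("Förklara vad en LoRA-adapter är.")
```

The resulting string is then tokenized and fed to the model; the model's answer is whatever it generates after the `### Response:` marker.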