# Alpaca LoRa 7B
This repository contains a LLaMA-7B model fine-tuned on the cleaned version of the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset.

I used [LLaMA-7B-hf](https://huggingface.co/decapoda-research/llama-7b-hf) as the base model.
# Usage
## Using the model
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("chainyo/alpaca-lora-7b")
model = LlamaForCausalLM.from_pretrained(
    "chainyo/alpaca-lora-7b",
    load_in_8bit=True,  # 8-bit loading requires the bitsandbytes package
    torch_dtype=torch.float16,
    device_map="auto",
)
```
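Once the model is loaded, you can generate a response. Here is a minimal sketch, assuming the standard Stanford Alpaca prompt template (the instruction text and generation settings are illustrative, not part of this card):

```python
# Example instruction (hypothetical); the prompt follows the standard
# Stanford Alpaca template for instruction-only inputs.
instruction = "Tell me about alpacas."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:\n"
)

# Tokenize the prompt and move it to the same device as the model.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a completion without tracking gradients.
with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=256)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```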