---
license: llama2
language:
- en
library_name: transformers
datasets:
- togethercomputer/llama-instruct
---

# LLaMA-2-7B-32K-Instruct

## Model Description

LLaMA-2-7B-32K-Instruct is an open-source, long-context chat model finetuned from [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) on high-quality instruction and chat data.

We built LLaMA-2-7B-32K-Instruct in fewer than 200 lines of Python using the [Together API](https://together.ai/blog/api-announcement), and we make the [recipe fully available](https://github.com/togethercomputer/LLaMA-2-32K-Instruct).

We hope this enables everyone to finetune their own version of [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K). Play with the [Together API](https://together.ai/blog/api-announcement) and give us feedback!
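
As an illustration, the hypothetical sketch below shows how such a finetuning job could be launched with the `together` Python SDK; the file name, model ID, and hyperparameters are placeholders, and the original recipe linked above may use a different interface.

```python
# Hypothetical sketch: launching a finetuning job from LLaMA-2-7B-32K with the
# `together` Python SDK (pip install together). The file name, model ID, and
# hyperparameters are illustrative, not the exact settings from the recipe.
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

# Upload the instruction/chat dataset (one JSON example per line).
training_file = client.files.upload(file="finetune_data.jsonl")

# Start the finetuning job from the long-context base model.
job = client.fine_tuning.create(
    model="togethercomputer/LLaMA-2-7B-32K",
    training_file=training_file.id,
    n_epochs=3,
)
print(job.id)
```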

## Data Collection Details

LLaMA-2-7B-32K-Instruct is fine-tuned on a combination of two parts:

1. **19K single- and multi-round conversations generated from human instructions and [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) outputs**.

We collected the dataset following the distillation paradigm used by Alpaca, Vicuna, WizardLM, and Orca: producing responses to human-written instructions by querying a powerful LLM (in this case, [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)); a minimal sketch of this step appears after this list.

The complete dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct).

We also share the complete recipe for the data collection process [here](https://github.com/togethercomputer/LLaMA-2-32K-Instruct).

2. **Long-context Summarization and Long-context QA**.

We follow the recipe of [LLaMA-2-7B-32K](https://together.ai/blog/llama-2-7b-32k) and train the model on the [BookSum dataset](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) and [multi-document question answering](https://arxiv.org/abs/2307.03172).
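
The querying step referenced in part 1 can be sketched as follows; this is a minimal illustration using the `together` Python SDK, with an illustrative teacher-model ID and a toy instruction list rather than the exact script from the recipe.

```python
# Minimal sketch of the distillation step: send human-written instructions to a
# strong chat model and store its responses as training conversations.
# Assumes the `together` SDK and a TOGETHER_API_KEY in the environment; the
# model ID and the instruction list are illustrative.
import json
from together import Together

client = Together()

instructions = [
    "Explain the difference between a list and a tuple in Python.",
    "Write a short poem about cats.",
]

with open("distilled_conversations.jsonl", "w") as f:
    for instruction in instructions:
        response = client.chat.completions.create(
            model="meta-llama/Llama-2-70b-chat-hf",  # teacher model (illustrative ID)
            messages=[{"role": "user", "content": instruction}],
            max_tokens=512,
        )
        f.write(json.dumps({
            "instruction": instruction,
            "response": response.choices[0].message.content,
        }) + "\n")
```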

The final data mixture used for model finetuning is: 19K instruction (50%) + BookSum (25%) + MQA (25%).
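
As a concrete illustration of these proportions, the sketch below assembles such a mixture from hypothetical stand-in lists; in practice each source would be loaded from the datasets linked above.

```python
import random

# Stand-ins for the three sources; sizes follow the 50/25/25 split around the
# 19K instruction set. In practice these would be real training examples.
instruct_19k = [{"source": "instruct", "id": i} for i in range(19_000)]
booksum = [{"source": "booksum", "id": i} for i in range(9_500)]
mqa = [{"source": "mqa", "id": i} for i in range(9_500)]

mixture = instruct_19k + booksum + mqa  # 50% / 25% / 25% by example count
random.seed(0)
random.shuffle(mixture)
print(f"{len(mixture)} training examples")
```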

## Model Usage

We encourage you to try out this model using the [Together API](https://together.ai/blog/api-announcement); the updated inference stack allows for efficient inference.

Alternatively, you can load the model directly from the Hugging Face model hub using:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/LLaMA-2-7B-32K-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/LLaMA-2-7B-32K-Instruct", trust_remote_code=True, torch_dtype=torch.float16)

# Replace the placeholder instruction; generation settings are illustrative.
input_ids = tokenizer.encode("<your instruction>", return_tensors="pt")
output = model.generate(input_ids, max_length=512, do_sample=True, temperature=0.7)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```

The model is also hosted on the [Together Playground](https://api.together.xyz/playground). You can play with the model there using prompts formatted as:

```
[INST] <your instruction here> [/INST]
```
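
Continuing the Python snippet above, the format can be applied like this (generation settings are again illustrative):

```python
# Wrap the instruction in [INST] ... [/INST] before tokenizing.
prompt = "[INST] Write a poem about cats [/INST]"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```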

For example, if we query the model with

```
[INST] Write a poem about cats [/INST]
```

the model will return

```
Sure, here's a short poem about cats:

Cats, cats, oh how they enthrall,
With their soft fur and gentle grace,
They purr with joy, their eyes aglow,
And steal our hearts with their charm and grace.

In their own time, they move about,
With graceful strides, they tiptoe out,
Their whiskers twitching, as they go,
Their curiosity piqued, they want to know.

They climb, they jump, they play, they chase,
Their agility, a sight to behold,
Their naps, a treat, a break from the chase,
A time to rest, a time to be told.

Cats, cats, they're unique and special,
A part of our lives, a treasure to hold,
Their personalities, so diverse,
Their charm, a gift, that's forever told.
```

## Model Evaluation

We evaluate the model on three aspects: 1) [normalized perplexity](https://together.ai/blog/llama-2-7b-32k) on the [PG19 dataset](https://huggingface.co/datasets/pg19); 2) [ROUGE scores on BookSum](https://together.ai/blog/llama-2-7b-32k); and 3) [accuracy on multi-document question answering (MQA)](https://together.ai/blog/llama-2-7b-32k). We summarize the results below:

* Normalized perplexity over PG19

| Model | 2K Seq | 4K Seq | 8K Seq | 16K Seq | 32K Seq |
| -------- | ------- | ------- | ------- | ------- | ------- |
| LLaMA-2-7B-Chat (Meta) | 1.844 | 1.833 | N/A | N/A | N/A |
| LLaMA-2-7B-32K-Instruct (ours) | 1.813 | 1.798 | 1.781 | 1.778 | 1.772 |

* ROUGE score over BookSum

| Model | R1 | R2 | RL |
| -------- | ------- | ------- | ------- |
| LLaMA-2-7B-Chat (Meta) | 0.055 | 0.008 | 0.046 |
| LLaMA-2-7B-32K-Instruct (ours) | 0.365 | 0.086 | 0.192 |

* Accuracy over MQA

| Model | 20 docs (Avg 2.9K tokens) | 30 docs (Avg 4.4K tokens) | 50 docs (Avg 7.4K tokens) |
| -------- | ------- | ------- | ------- |
| LLaMA-2-7B-Chat (Meta) | 0.384 | 0.375 | 0.313 |
| LLaMA-2-7B-32K-Instruct (ours) | 0.451 | 0.434 | 0.373 |

We observe that LLaMA-2-7B-32K-Instruct achieves comparable or better perplexity, ROUGE scores, and accuracy than the original LLaMA-2-7B-Chat model.
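
For reference, a ROUGE evaluation of this kind can be sketched with the Hugging Face `evaluate` library; the two lists below are dummy placeholders standing in for generated summaries and BookSum reference summaries.

```python
# Minimal ROUGE sketch using the Hugging Face `evaluate` library
# (pip install evaluate rouge_score). The lists are dummy placeholders.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["The cat sat on the mat and purred."]
references = ["A cat rested on the mat, purring contentedly."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores["rouge1"], scores["rouge2"], scores["rougeL"])  # R1 / R2 / RL
```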

## Limitations and Bias

As with all language models, LLaMA-2-7B-32K-Instruct may generate incorrect or biased content. It's important to keep this in mind when using the model.

## Community

Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4).