---
license: cc-by-sa-4.0
---
## emozilla/landmark-llama-7b
This model is a ready-to-use version of the LLaMA-7B variant of [Landmark Attention](https://arxiv.org/abs/2305.16300).
The code is adapted from the [Landmark GitHub repository](https://github.com/epfml/landmark-attention), and the weights are derived from [epfml/landmark-attention-llama7b-wdiff](https://huggingface.co/epfml/landmark-attention-llama7b-wdiff).
As a LLaMA derivative, this model may be subject to the terms of the LLaMA license.
### To use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

tokenizer = AutoTokenizer.from_pretrained("emozilla/landmark-llama-7b", use_fast=False)
# trust_remote_code is required to load the custom Landmark Attention modeling code
model = AutoModelForCausalLM.from_pretrained(
    "emozilla/landmark-llama-7b",
    torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto",
)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("Somebody once told me the world is gonna roll me",
           max_new_tokens=256, temperature=0.8, do_sample=True))
```
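Since Landmark Attention targets inputs longer than the base 2,048-token LLaMA context, you can also call `generate` directly on the loaded model. A minimal sketch using only the standard Transformers API (the prompt is an arbitrary placeholder):
```python
# Assumes `model` and `tokenizer` from the snippet above.
long_prompt = "..."  # e.g. a document longer than the base LLaMA context window
inputs = tokenizer(long_prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```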
You can configure the Landmark parameters `mem_freq`, `mem_top_k`, `mem_max_seq_len`, and `mem_max_cache_size` by setting them on the model config before loading:
```python
from transformers import AutoConfig, AutoModelForCausalLM
import torch

config = AutoConfig.from_pretrained("emozilla/landmark-llama-7b", trust_remote_code=True)
config.mem_top_k = 6  # top-k retrieved memory blocks (see the Landmark Attention paper)
model = AutoModelForCausalLM.from_pretrained(
    "emozilla/landmark-llama-7b",
    torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", config=config,
)
```
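The reconfigured model can then be used exactly as before, e.g. with the same `pipeline` call from the first snippet:
```python
from transformers import pipeline

# `tokenizer` as loaded in the first snippet
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("Somebody once told me the world is gonna roll me", max_new_tokens=256))
```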