---
datasets:
- WizardLM/WizardLM_evol_instruct_V2_196k
- Open-Orca/OpenOrca
language:
- en
tags:
- chat
- palmyra
license: apache-2.0
---


**DEPRECATED MODEL NOTICE**
==========================

Please note that this model is no longer maintained or supported by our team. We strongly advise against using it in production or for any critical applications.

Instead, we recommend using our latest and greatest models, which can be found at:

https://huggingface.co/collections/Writer/palmyra-writer-license-66476fa8156169f8720a2c89

==========================




# Writer/palmyra-20b-chat
---

# Usage 
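
The snippet below loads the model in float16 with `device_map="auto"`, wraps a single question in the model's USER/ASSISTANT chat format, and streams the generated reply to stdout.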

```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

model_name = "Writer/palmyra-20b-chat"

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load in float16 and let Accelerate place the weights automatically.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "What is the meaning of life?"

# Vicuna-style prompt: a system preamble followed by USER/ASSISTANT turns.
input_text = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: {prompt} "
    "ASSISTANT:"
)

# Send the inputs to the model's device instead of hard-coding "cuda",
# so the snippet also works wherever device_map places the model.
model_inputs = tokenizer(input_text.format(prompt=prompt), return_tensors="pt").to(
    model.device
)

gen_conf = {
    "top_k": 20,
    "max_new_tokens": 2048,
    "temperature": 0.6,
    "do_sample": True,
    "eos_token_id": tokenizer.eos_token_id,
}

# The tokenizer may return token_type_ids, which generate() does not accept.
model_inputs.pop("token_type_ids", None)

# Stream tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer)

output = model.generate(**model_inputs, **gen_conf, streamer=streamer)

print("-" * 20)
# Decode the generated ids rather than printing the raw tensor.
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
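
For multi-turn use, one option is to keep appending turns to the same template. Below is a minimal sketch that reuses `model` and `tokenizer` from the snippet above; the `chat` helper and the turn separator (a single space, with no explicit end-of-turn token) are our assumptions, not something documented on this card.

```py
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
)

def chat(model, tokenizer, history, user_message, max_new_tokens=512):
    """Hypothetical helper. `history` is a list of (user, assistant) string pairs."""
    prompt = SYSTEM
    for user, assistant in history:
        # Assumption: turns are simply concatenated with spaces; adjust if the
        # model was trained with a different separator.
        prompt += f"USER: {user} ASSISTANT: {assistant} "
    prompt += f"USER: {user_message} ASSISTANT:"

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    inputs.pop("token_type_ids", None)
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.6,
        top_k=20,
        eos_token_id=tokenizer.eos_token_id,
    )
    # Slice off the prompt tokens and decode only the new reply.
    reply = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    ).strip()
    history.append((user_message, reply))
    return reply

history = []
print(chat(model, tokenizer, history, "What is the meaning of life?"))
print(chat(model, tokenizer, history, "Can you say that more briefly?"))
```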
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Writer__palmyra-20b-chat).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 38.97 |
| ARC (25-shot)       | 43.52 |
| HellaSwag (10-shot) | 72.83 |
| MMLU (5-shot)       | 35.18 |
| TruthfulQA (0-shot) | 43.17 |
| Winogrande (5-shot) | 66.46 |
| GSM8K (5-shot)      | 3.94  |
| DROP (3-shot)       | 7.7   |
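
These scores come from EleutherAI's lm-evaluation-harness as run by the Open LLM Leaderboard. As a rough sketch, one row can be approximated locally with the harness's `simple_evaluate` entry point (assuming lm-eval >= 0.4; exact numbers also depend on harness version, prompts, and dtype, so small deviations are expected):

```py
import lm_eval

# Sketch only: reproduces the ARC (25-shot) row from the table above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Writer/palmyra-20b-chat,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=1,
)

print(results["results"]["arc_challenge"])
```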