Update README.md
Included model card based on the original by Teknium, with slight modifications for the GPT4All community.
README.md

> This is a featured model that is assumed to perform well, but it may require more testing and user feedback. Be aware that only models which are also featured within the GPT4All GUI are curated and officially supported by Nomic. Use at your own risk.

---
base_model: NousResearch/Meta-Llama-3-8B
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
model-index:
- name: Hermes-2-Pro-Llama-3-8B
  results: []
license: apache-2.0
language:
- en
datasets:
- teknium/OpenHermes-2.5
---

# Hermes 2 Pro - Llama-3 8B Quantized to Q4_0 for the GPT4All community by 3Simplex

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ggO2sBDJ8Bhc6w-zwTx5j.png)

## Model Description

### This is the llama.cpp GGUF quantized version of Hermes 2 Pro Llama-3 8B. For the full version, see [here](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B).
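
If you use the GPT4All Python bindings, the quantized file can be loaded directly. A minimal sketch, assuming the GGUF file is published as `Hermes-2-Pro-Llama-3-8B.Q4_0.gguf` (check the repo for the exact filename):

```python
# Minimal sketch using the gpt4all Python bindings.
# Assumption: the quantized file is named
# "Hermes-2-Pro-Llama-3-8B.Q4_0.gguf"; adjust if it differs.
from gpt4all import GPT4All

model = GPT4All("Hermes-2-Pro-Llama-3-8B.Q4_0.gguf")

# chat_session() applies the model's chat template for you.
with model.chat_session():
    reply = model.generate("Explain GGUF quantization in one paragraph.", max_tokens=256)
    print(reply)
```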

Hermes 2 Pro is an upgraded version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.

This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.

Hermes Pro takes advantage of a special system prompt and a multi-turn function calling structure with a new ChatML role, in order to make function calling reliable and easy to parse. Learn more about prompting below.

This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities and with parsing while streaming tokens: `<tools>`, `<tool_call>`, `<tool_response>`, and their closing tags are now single tokens.

This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI.

Learn more about the function calling system for this model in our GitHub repo: https://github.com/NousResearch/Hermes-Function-Calling
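
As a rough illustration of that structure (the authoritative prompt templates and schemas live in the Hermes-Function-Calling repo), a tool-calling exchange looks approximately like the following; the `get_stock_price` signature is a made-up example:

```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. For each function call, return a json object with the function name and arguments within <tool_call></tool_call> XML tags:
<tools>
[{"name": "get_stock_price", "parameters": {"symbol": {"type": "string"}}}]
</tools><|im_end|>
<|im_start|>user
What is the current price of NVDA?<|im_end|>
<|im_start|>assistant
<tool_call>
{"name": "get_stock_price", "arguments": {"symbol": "NVDA"}}
</tool_call><|im_end|>
```

Because the tags are single tokens, a client can scan the output stream for them. A minimal, assumed extraction sketch (regex-based, error handling elided):

```python
# Pull tool-call payloads out of generated text. Assumes each call
# is wrapped in <tool_call>...</tool_call> as described above.
import json
import re

TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(text: str) -> list[dict]:
    """Return every JSON payload found inside <tool_call> tags."""
    return [json.loads(payload) for payload in TOOL_CALL_RE.findall(text)]
```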

# Prompt Format

Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.

System prompts allow steerability and interesting new ways to interact with an LLM, guiding the rules, roles, and stylistic choices of the model.

This is a more complex format than Alpaca or ShareGPT: special tokens denote the beginning and end of each turn, along with the role for that turn.

This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will recognize it, as it is the same format used by OpenAI.

Prompt with system instruction (use whatever system prompt you like; this is just an example!):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
```

```
<|im_start|>user
{user input}<|im_end|>
<|im_start|>assistant
{assistant response}<|im_end|>
```
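
To make this concrete, here is a minimal sketch that assembles the ChatML turns above into a single prompt and runs it against the GGUF file with llama-cpp-python; the model path and sampling settings are assumptions:

```python
# Minimal sketch, assuming llama-cpp-python is installed and the
# quantized file sits at ./Hermes-2-Pro-Llama-3-8B.Q4_0.gguf.
from llama_cpp import Llama

llm = Llama(model_path="./Hermes-2-Pro-Llama-3-8B.Q4_0.gguf", n_ctx=4096)

# Assemble the ChatML turns exactly as shown above.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a haiku about quantization.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Stop on the end-of-turn token so generation ends with the reply.
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```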

## Prompt Format for JSON Mode / Structured Outputs

Our model was also trained on a specific system prompt for Structured Outputs, which makes it respond with **only** a JSON object that conforms to a given JSON schema.

Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main

```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```

Given the {schema} that you provide, the model will follow that schema's format when creating its response; all you have to do is give a typical user prompt, and it will respond in JSON.
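
A minimal sketch of producing such a schema with pydantic and splicing it into the system prompt; the `Character` model is a hypothetical example, and the repo's `jsonmode.py` remains the authoritative tooling:

```python
# Minimal sketch, assuming pydantic v2 is installed.
# "Character" is a made-up schema for illustration only.
import json
from pydantic import BaseModel

class Character(BaseModel):
    name: str
    age: int
    occupation: str

schema = json.dumps(Character.model_json_schema(), indent=2)

system_prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant that answers in JSON. "
    f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>\n"
)
```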

# Benchmarks

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vOYv9wJUMn1Xrf4BvmO_x.png)

## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5520|± |0.0145|
| | |acc_norm|0.5887|± |0.0144|
|arc_easy | 0|acc |0.8350|± |0.0076|
| | |acc_norm|0.8123|± |0.0080|
|boolq | 1|acc |0.8584|± |0.0061|
|hellaswag | 0|acc |0.6265|± |0.0048|
| | |acc_norm|0.8053|± |0.0040|
|openbookqa | 0|acc |0.3800|± |0.0217|
| | |acc_norm|0.4580|± |0.0223|
|piqa | 0|acc |0.8003|± |0.0093|
| | |acc_norm|0.8118|± |0.0091|
|winogrande | 0|acc |0.7490|± |0.0122|
```
Average: 72.62

## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2520|± |0.0273|
| | |acc_norm|0.2559|± |0.0274|
|agieval_logiqa_en | 0|acc |0.3548|± |0.0188|
| | |acc_norm|0.3625|± |0.0189|
|agieval_lsat_ar | 0|acc |0.1826|± |0.0255|
| | |acc_norm|0.1913|± |0.0260|
|agieval_lsat_lr | 0|acc |0.5510|± |0.0220|
| | |acc_norm|0.5255|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6431|± |0.0293|
| | |acc_norm|0.6097|± |0.0298|
|agieval_sat_en | 0|acc |0.7330|± |0.0309|
| | |acc_norm|0.7039|± |0.0319|
|agieval_sat_en_without_passage| 0|acc |0.4029|± |0.0343|
| | |acc_norm|0.3689|± |0.0337|
|agieval_sat_math | 0|acc |0.3909|± |0.0330|
| | |acc_norm|0.3773|± |0.0328|
```
Average: 42.44

## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5737|± |0.0360|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6667|± |0.0246|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.1755|± |0.0201|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2014|± |0.0152|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5500|± |0.0288|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4300|± |0.0222|
|bigbench_navigate | 0|multiple_choice_grade|0.4980|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7010|± |0.0102|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4688|± |0.0236|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1974|± |0.0126|
|bigbench_snarks | 0|multiple_choice_grade|0.7403|± |0.0327|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5426|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.5320|± |0.0158|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2280|± |0.0119|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1531|± |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5500|± |0.0288|
```
Average: 43.55

## TruthfulQA:
```
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.410|± |0.0172|
| | |mc2 |0.578|± |0.0157|
```

# How to cite:

```bibtex
@misc{Hermes-2-Pro-Llama-3-8B,
  url={https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B},
  title={Hermes-2-Pro-Llama-3-8B},
  author={"Teknium", "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
```