---
base_model: h2oai/h2o-danube-1.8b-chat
datasets:
- HuggingFaceH4/ultrafeedback_binarized
- Intel/orca_dpo_pairs
- argilla/distilabel-math-preference-dpo
- Open-Orca/OpenOrca
- OpenAssistant/oasst2
- HuggingFaceH4/ultrachat_200k
- meta-math/MetaMathQA
license: apache-2.0
language:
- en
model_creator: h2oai
model_name: h2o-danube-1.8b-chat
model_type: mistral
inference: false
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
pipeline_tag: text-generation
prompt_template: |
  <|system|>{{system_message}}</s>
  <|prompt|>{{prompt}}</s>
  <|answer|>
quantized_by: brittlewis12
---

# h2o-danube-1.8b-chat GGUF

Original model: [h2o-danube-1.8b-chat](https://huggingface.co/h2oai/h2o-danube-1.8b-chat)
Model creator: [h2oai](https://huggingface.co/h2oai)

This repo contains GGUF format model files for h2oai’s h2o-danube-1.8b-chat.

> h2o-danube-1.8b-chat is a chat fine-tuned model by H2O.ai with 1.8 billion parameters. For details, please refer to our [Technical Report](https://arxiv.org/abs/2401.16818). We release three versions of this model:
>
> - h2oai/h2o-danube-1.8b-base: Base model
> - h2oai/h2o-danube-1.8b-sft: SFT tuned
> - h2oai/h2o-danube-1.8b-chat: SFT + DPO tuned
>
> We adjust the Llama 2 architecture for a total of around 1.8b parameters. We use the original Llama 2 tokenizer with a vocabulary size of 32,000 and train our model up to a context length of 16,384. We incorporate the sliding window attention from Mistral with a size of 4,096.
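
To make those quoted architecture choices concrete, here is a minimal sketch expressing them as a `transformers` `MistralConfig`. Only the three values named in the quote come from the model card; every other field is left at the library default rather than matching the real checkpoint.

```python
# Sketch only: expresses the quoted architecture choices as a MistralConfig.
# The three values below come from the model card; all other fields (hidden
# size, layer count, etc.) are library defaults, NOT the real danube values.
from transformers import MistralConfig

config = MistralConfig(
    vocab_size=32_000,               # original Llama 2 tokenizer
    max_position_embeddings=16_384,  # trained context length
    sliding_window=4_096,            # Mistral-style sliding window attention
)
print(config)
```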

Refer to h2o.ai’s model [disclaimer](https://huggingface.co/h2oai/h2o-danube-1.8b-chat/blob/main/README.md#disclaimer) for terms of use.

### What is GGUF?

GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Converted using llama.cpp b2037 ([1cfb537](https://github.com/ggerganov/llama.cpp/commits/1cfb5372cf5707c8ec6dde7c874f4a44a6c4c915))
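
To fetch one of these files programmatically, a minimal sketch with `huggingface_hub` follows; the `repo_id` and `filename` below are illustrative assumptions, so substitute the actual quant file you want from this repo's file listing.

```python
# Minimal download sketch using huggingface_hub; repo_id and filename are
# assumptions for illustration, not a guaranteed listing of this repo.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="brittlewis12/h2o-danube-1.8b-chat-GGUF",  # assumed repo id
    filename="h2o-danube-1.8b-chat.Q4_K_M.gguf",       # assumed quant filename
)
print(local_path)  # local cache path to the downloaded GGUF file
```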

### Prompt template:

```
<|system|>{{system_message}}</s>
<|prompt|>{{prompt}}</s>
<|answer|>
```
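
As a sketch of how this template is applied at inference time, the example below formats a prompt and runs it with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the model filename, system message, and sampling settings are assumptions for illustration.

```python
# Sketch: apply the chat template above with llama-cpp-python.
# The filename, system message, and generation settings are illustrative.
from llama_cpp import Llama

llm = Llama(model_path="h2o-danube-1.8b-chat.Q4_K_M.gguf", n_ctx=4096)

def format_prompt(prompt: str, system_message: str = "") -> str:
    # Mirrors the template: system turn, user turn, then the answer tag
    # the model completes after.
    return (
        f"<|system|>{system_message}</s>"
        f"<|prompt|>{prompt}</s>"
        "<|answer|>"
    )

output = llm(
    format_prompt("Why is drinking water so healthy?"),
    max_tokens=256,
    stop=["</s>"],  # end-of-turn token from the template
)
print(output["choices"][0]["text"])
```

The segments are concatenated exactly as the template prints them; if generations look malformed, check the upstream tokenizer config for the authoritative turn separators.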

---

## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

![cnvrs.ai](https://pbs.twimg.com/profile_images/1744049151241797632/0mIP-P9e_400x400.jpg)

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [TestFlight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date

---

## Original Model Evaluations:

> Commonsense, world-knowledge and reading comprehension tested in 0-shot:

| Benchmark     | acc_n |
|:--------------|:-----:|
| ARC-easy      | 67.51 |
| ARC-challenge | 39.25 |
| BoolQ         | 77.89 |
| Hellaswag     | 67.60 |
| OpenBookQA    | 39.20 |
| PiQA          | 76.71 |
| TriviaQA      | 36.29 |
| Winogrande    | 65.35 |