Create README.md #2
by MaziyarPanahi - opened

README.md ADDED
@@ -0,0 +1,53 @@
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- arxiv:2304.12244
- arxiv:2306.08568
- arxiv:2308.09583
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: WizardLM-2-8x22B-GGUF
base_model: microsoft/WizardLM-2-7B
inference: false
model_creator: microsoft
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/WizardLM-2-8x22B-GGUF](https://huggingface.co/MaziyarPanahi/WizardLM-2-8x22B-GGUF)

- Model creator: [microsoft](https://huggingface.co/microsoft)
- Original model: [microsoft/WizardLM-2-7B](https://huggingface.co/microsoft/WizardLM-2-7B)

## Description

[MaziyarPanahi/WizardLM-2-8x22B-GGUF](https://huggingface.co/MaziyarPanahi/WizardLM-2-8x22B-GGUF) contains GGUF format model files for [microsoft/WizardLM-2-7B](https://huggingface.co/microsoft/WizardLM-2-7B).

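The GGUF files can be used with any GGUF-compatible runtime. Below is a minimal sketch using `llama-cpp-python` together with `huggingface_hub`; the filename `WizardLM-2-8x22B.Q4_K_M.gguf` is only a placeholder, so check the repository's file listing for the actual quant names (large quants may be split into several parts).

```python
# Minimal sketch: download one quant file and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/WizardLM-2-8x22B-GGUF",
    filename="WizardLM-2-8x22B.Q4_K_M.gguf",  # placeholder -- see the repo's Files tab
)

llm = Llama(model_path=model_path, n_ctx=4096)

output = llm(
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What is GGUF? ASSISTANT:",
    max_tokens=256,
    stop=["</s>", "USER:"],
)
print(output["choices"][0]["text"])
```
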
## Prompt template

```
{system_prompt}
USER: {prompt}
ASSISTANT: </s>
```

or

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: {prompt} ASSISTANT: </s>......
```
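
For reference, here is one way the Vicuna-style template above might be assembled in Python; `build_prompt` is a hypothetical helper written for this card, not part of the repository.

```python
# Hypothetical helper that assembles the Vicuna-style prompt shown above.
DEFAULT_SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_message: str, history=None, system_prompt: str = DEFAULT_SYSTEM) -> str:
    """Build the prompt string; `history` is a list of (user, assistant) turns."""
    parts = [system_prompt]
    for user_turn, assistant_turn in history or []:
        parts.append(f"USER: {user_turn} ASSISTANT: {assistant_turn}</s>")
    parts.append(f"USER: {user_message} ASSISTANT:")
    return " ".join(parts)

print(build_prompt("Hi"))
```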