---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- ehartford/wizard_vicuna_70k_unfiltered
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPT4-LLM-Cleaned
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- tasksource/mmlu
- openai/summarize_from_feedback
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? Patreon coming soon!</a></p>
    </div>
</div>

# Manticore 13B GGML

These are GGML format quantised 4-bit, 5-bit and 8-bit model files for epoch 3 of [OpenAccess AI Collective's Manticore 13B](https://huggingface.co/openaccess-ai-collective/manticore-13b).

This repo is the result of quantising to 4-bit, 5-bit and 8-bit GGML for CPU (+CUDA) inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Manticore-13B-GPTQ).
* [4-bit, 5-bit and 8-bit GGML models for llama.cpp CPU (+CUDA) inference](https://huggingface.co/TheBloke/Manticore-13B-GGML).
* [OpenAccess AI Collective's original float16 HF format repo for GPU inference and further conversions](https://huggingface.co/openaccess-ai-collective/manticore-13b).

## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508

I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.

For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.

## Epoch

The files in the `main` branch are from Epoch 3 of Manticore 13B, as of May 19th.

The files in the `previous_llama_ggmlv2` branch are from Epoch 1.

## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `manticore-13B.ggmlv3.q4_0.bin` | q4_0 | 4 | 8.14GB | 10.5GB | 4-bit. |
| `manticore-13B.ggmlv3.q4_1.bin` | q4_1 | 4 | 8.14GB | 10.5GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has faster inference than the q5 models. |
| `manticore-13B.ggmlv3.q5_0.bin` | q5_0 | 5 | 8.95GB | 11.0GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| `manticore-13B.ggmlv3.q5_1.bin` | q5_1 | 5 | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, at the cost of higher resource usage and slower inference. |
| `manticore-13B.ggmlv3.q8_0.bin` | q8_0 | 8 | 14.6GB | 17GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
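
Each file can be downloaded individually. As one option (my suggestion, not a requirement of this repo), a minimal sketch using the `huggingface_hub` Python package:

```python
# Minimal sketch: fetch a single GGML file from this repo with huggingface_hub.
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Downloads to the local HF cache and returns the file's local path.
model_path = hf_hub_download(
    repo_id="TheBloke/Manticore-13B-GGML",
    filename="manticore-13B.ggmlv3.q5_0.bin",
)
print(model_path)
```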

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 8 -m manticore-13B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
```

Change `-t 8` to the number of physical CPU cores you have.
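
If you would rather drive the model from Python, the `llama-cpp-python` bindings are one route. This is just a sketch, assuming a version of the bindings from the GGMLv3 era that can still load these files; the path and sampling values mirror the command above:

```python
# Sketch: run the q5_0 file via llama-cpp-python (pip install llama-cpp-python).
# Assumes a release of the bindings that still supports GGMLv3 files.
from llama_cpp import Llama

llm = Llama(
    model_path="manticore-13B.ggmlv3.q5_0.bin",  # adjust to your downloaded file
    n_ctx=2048,    # context size, matching -c 2048 above
    n_threads=8,   # set to your number of physical CPU cores, as with -t 8
)

result = llm(
    "### Instruction: write a story about llamas ### Response:",
    max_tokens=512,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(result["choices"][0]["text"])
```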

## How to run in `text-generation-webui`

GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

## Want to support my work?

I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.

So if you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models and working on various AI projects.

Donors will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.

* Patreon: coming soon! (just awaiting approval)
* Ko-Fi: https://ko-fi.com/TheBlokeAI
* Discord: https://discord.gg/UBgz4VXf

# Original Model Card: Manticore 13B - Preview Release (previously Wizard Mega)

Manticore 13B is a Llama 13B model fine-tuned on the following datasets:
- [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) - based on a cleaned and de-duped subset
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [Wizard-Vicuna](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
- [GPT4-LLM-Cleaned](https://huggingface.co/datasets/teknium/GPT4-LLM-Cleaned)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses
- mmlu - instruct augmented for detailed responses; subset including:
  - abstract_algebra
  - conceptual_physics
  - formal_logic
  - high_school_physics
  - logical_fallacies
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K-row subset, instruct augmented for concise responses
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization


# Demo

Try out the model in HF Spaces. The demo uses a quantized GGML version of the model to quickly return predictions on smaller GPUs (and even CPUs). Quantized GGML may have some minimal loss of model quality.
- https://huggingface.co/spaces/openaccess-ai-collective/manticore-ggml

## Release Notes

- https://wandb.ai/wing-lian/manticore-13b/runs/nq3u3uoh/workspace

## Build

Manticore was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB
 - Preview Release: 1 epoch taking 8 hours.
 - The configuration to duplicate this build is provided in this repo's [/config folder](https://huggingface.co/openaccess-ai-collective/manticore-13b/tree/main/configs).

## Bias, Risks, and Limitations
Manticore has not been aligned to human preferences with techniques like RLHF, nor deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Manticore was fine-tuned from the base model LLaMA 13B; please refer to that model card's Limitations section for relevant information.

## Examples

```
### Instruction: write Python code that returns the first n numbers of the Fibonacci sequence using memoization.

### Assistant:
```

```
### Instruction: Finish the joke, a mechanic and a car salesman walk into a bar...

### Assistant:
```
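
When scripting against the model, it can help to build prompts in this format programmatically. A small sketch of the template shown above (the helper name is mine, not part of the model):

```python
def build_prompt(instruction: str) -> str:
    """Wrap user text in the '### Instruction: / ### Assistant:' format shown above."""
    return f"### Instruction: {instruction}\n\n### Assistant:"

print(build_prompt("Finish the joke, a mechanic and a car salesman walk into a bar..."))
```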