Update README.md
README.md
CHANGED
@@ -1,3 +1,141 @@
---
inference: false
language:
- zh
license: apache-2.0
model_name: Breeze-7B-Instruct-v0.1
pipeline_tag: text-generation
prompt_template: '<s> SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST]'
quantized_by: yuuko-eth
tags:
- nlp
- chinese
- mistral
- traditional_chinese
---

# Breeze-7B-Instruct-v0.1-GGUF

- Model creator: [MTK Research](https://huggingface.co/MediaTek-Research)
- Original model: [Breeze-7B-Instruct-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0.1)

<!-- description start -->
## Description

This repo contains GGUF-format model files for [Breeze-7B-Instruct-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0.1).

<!-- description end -->

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Apple Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Apple Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
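
As a quick illustration of consuming these files, the sketch below loads one of the quantized files with `llama-cpp-python` and runs a single completion using the prompt template from the metadata above. The `.gguf` filename, context size, and example query are placeholders, not the exact names shipped in this repo.

```python
# Minimal sketch (assumes `pip install llama-cpp-python` and a GGUF file from
# this repo downloaded locally; the filename below is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./breeze-7b-instruct-v0.1.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,  # context window to allocate
)

sys_prompt = (
    "You are a helpful AI assistant built by MediaTek Research. "
    "The user you are helping speaks Traditional Chinese and comes from Taiwan."
)
# The leading <s> (BOS) from the prompt template is normally added by the
# tokenizer itself, so only the text portion is passed here.
prompt = f"{sys_prompt} [INST] 請用繁體中文自我介紹。 [/INST]"

out = llm(prompt, max_tokens=256, stop=["[INST]"])
print(out["choices"][0]["text"])
```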

---

> The original `README.md` follows.

---

# Model Card for Breeze-7B-Instruct-v0.1

Breeze-7B is a language model family that builds on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically intended for Traditional Chinese use.

[Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0.1) is the base model for the Breeze series.
It is suitable if you have substantial fine-tuning data to tune it for your specific use case.

[Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0.1) derives from the base model Breeze-7B-Base, making the resulting model suitable for use as-is for commonly seen tasks.

[Breeze-7B-Instruct-64k](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0.1) is a slightly modified version of Breeze-7B-Instruct that enables a 64k-token context length. Roughly speaking, that is equivalent to 88k Traditional Chinese characters.

The current release version of Breeze is v0.1.

Practicality-wise:
- Breeze expands the original vocabulary with an additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, and everything else being equal, Breeze operates at twice the inference speed of Mistral-7B and Llama 7B for Traditional Chinese; see the tokenizer comparison sketch after this list and [Inference Performance](#inference-performance).
- Breeze-Instruct can be used as-is for common tasks such as Q&A, RAG, multi-round chat, and summarization.
- In particular, Breeze-Instruct-64k can perform tasks at a document level rather than a chapter level.
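
A simple way to see the effect of the expanded vocabulary is to tokenize the same Traditional Chinese sentence with both tokenizers and compare the counts. A minimal sketch, assuming the `transformers` library and access to both Hugging Face repositories; the example sentence is arbitrary:

```python
# Sketch: compare token counts on the same Traditional Chinese text
# (assumes `transformers` is installed and both model repos are reachable).
from transformers import AutoTokenizer

breeze = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v0.1")
mistral = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

text = "今天天氣很好，我們一起去公園散步吧。"
print("Breeze tokens :", len(breeze.tokenize(text)))
print("Mistral tokens:", len(mistral.tokenize(text)))
# Fewer tokens for the same text means fewer decoding steps per response,
# which is where the inference-speed gain for Traditional Chinese comes from.
```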

Performance-wise:
- Breeze demonstrates impressive performance in benchmarks for Traditional Chinese when compared to similarly sized open-source contemporaries such as Taiwan-LLM, Qwen, and Yi. [See [Chat Model Performance](#chat-model-performance).]
- Breeze shows results comparable to Mistral-7B-Instruct-v0.1 on the MMLU and MT-Bench benchmarks. [See [Chat Model Performance](#chat-model-performance).]

*A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.*

## Features

- Breeze-7B-Base-v0.1
  - Expanded vocabulary size from 32k to 62k to better support Traditional Chinese
  - 8k-token context length
- Breeze-7B-Instruct-v0.1
  - Expanded vocabulary size from 32k to 62k to better support Traditional Chinese
  - 8k-token context length
  - Multi-turn dialogue (without special handling for harmfulness)
- Breeze-7B-Instruct-64k-v0.1
  - Expanded vocabulary size from 32k to 62k to better support Traditional Chinese
  - 64k-token context length
  - Multi-turn dialogue (without special handling for harmfulness)

## Model Details

- Breeze-7B-Base-v0.1
  - Finetuned from: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
  - Model type: Causal decoder-only transformer language model
  - Language: English and Traditional Chinese (zh-tw)
- Breeze-7B-Instruct-v0.1
  - Finetuned from: [MediaTek-Research/Breeze-7B-Base-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0.1)
  - Model type: Causal decoder-only transformer language model
  - Language: English and Traditional Chinese (zh-tw)
- Breeze-7B-Instruct-64k-v0.1
  - Finetuned from: [MediaTek-Research/Breeze-7B-Instruct-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0.1)
  - Model type: Causal decoder-only transformer language model
  - Language: English and Traditional Chinese (zh-tw)

## Inference

The template for inference instances is as follows:

<style>
.pTemplate code { background-color:white;padding:2px 3px;border-radius:4px }
</style>
<div class="pTemplate" style="background-color: #f6f8fa; padding: 20px; border-radius: 10px; border: 1px solid #e1e4e8; box-shadow: 0 2px 5px rgba(0,0,0,0.1);">
<strong>Prompting template:</strong><br/><br/>

<code>
<s> SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST]
</code>
<br/><br/>
where <code>SYS_PROMPT</code>, <code>QUERY1</code>, <code>RESPONSE1</code>, and <code>QUERY2</code> can be provided by the user.
<br/><br/>
The suggested default <code>SYS_PROMPT</code> is:
<br/><br/>
<code>
You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan.
</code>
<br/><br/>
</div>
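
For concreteness, here is a hedged sketch of assembling a multi-turn prompt in this format. The helper function and the example turn are illustrative only and are not part of the official repository:

```python
# Illustrative helper (hypothetical, not from the upstream repo) that builds
# the Breeze prompt string for a list of (query, response) turns.
DEFAULT_SYS_PROMPT = (
    "You are a helpful AI assistant built by MediaTek Research. "
    "The user you are helping speaks Traditional Chinese and comes from Taiwan."
)

def build_prompt(turns, sys_prompt=DEFAULT_SYS_PROMPT):
    """turns: list of (query, response) pairs; pass None as the last response
    to leave the prompt open for the model to complete."""
    prompt = f"<s> {sys_prompt}"
    for query, response in turns:
        prompt += f" [INST] {query} [/INST]"
        if response is not None:
            prompt += f" {response}"
    return prompt

# Single open turn: '<s> SYS_PROMPT [INST] QUERY1 [/INST]'
print(build_prompt([("請介紹台灣的夜市文化。", None)]))
```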

## License

This model and its associated data are released under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

## Citation

```
@article{breeze7b2024,
  title={},
  author={},
  journal={arXiv},
  year={2024}
}
```