README.md
---
base_model:
- nitky/Oumuamua-7b-base
- stabilityai/japanese-stablelm-base-gamma-7b
- tokyotech-llm/Swallow-MS-7b-v0.1
- DataPilot/ArrowPro-7B-KillerWhale
- nitky/RP-7b-instruct
- mistralai/Mistral-7B-Instruct-v0.3
- mistralai/Mistral-7B-v0.1
library_name: transformers
tags:
- mergekit
- merge
language:
- ja
- en
pipeline_tag: text-generation
license: apache-2.0
---

# Oumuamua-7b-instruct

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Output example

```
[INST] <<SYS>>
あなたは日本語を話す優秀なアシスタントです。回答には必ず日本語で答えてください。また考える過程も出力してください。
<</SYS>>

道路で泣いている子供がいます。その子供には何があったと考えられますか? [/INST] 道路で泣いている子供がいる場合、考えられる理由はいくつかあります。

1. 怪我をした: 転んで怪我をしたり、何かにぶつかって痛みを感じている可能性があります。
2. 迷子になった: 親や保護者とはぐれてしまった、または道に迷って不安になっている可能性があります。
3. お腹が空いた: 長時間食事をしていない、またはお腹が空いているために泣いている可能性があります。
4. 疲れた: 長時間歩いていたり、遊び疲れて疲れている可能性があります。
5. 寂しかった: 一人でいることに寂しさを感じている、または親や友達がいなくて寂しいと感じている可能性があります。
6. 怖い思いをした: 車や不審者に遭遇した、または何かに驚いた可能性があります。
7. 体調が悪い: 発熱や頭痛、吐き気などの体調不良を感じている可能性があります。
8. トイレに行きたい: おむつが濡れていたり、トイレに行きたくて泣いている可能性があります。
9. 何かを求めている: 注意を引きたかったり、助けを求めている可能性があります。

子供の状況をよく観察し、声をかけて話を聞いてみることが大切です。必要に応じて、保護者や警察、救急車を呼ぶなどの対応をする必要があるかもしれません。
```

## Test environment

This model was tested using [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main). For generation, I used the `min_p` preset and the `Null preset` with temperature=0.3.

## Usage

This format must be adhered to strictly, as deviations may result in less optimal outputs from the model.

The template used to construct a prompt for the Instruct model is specified as follows:

```
<s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{USER_MESSAGE_1} [/INST] {BOT_MESSAGE_1}</s>[INST] {USER_MESSAGE_2} [/INST]
```

Please be aware that ``<s>`` and ``</s>`` are special tokens used for the beginning of string (BOS) and end of string (EOS), respectively, while `[INST]` and `[/INST]` are regular strings.

For the `{SYSTEM_PROMPT}` part, we recommend using "あなたは日本語を話す優秀なアシスタントです。回答には必ず日本語で答えてください。また考える過程も出力してください。" (roughly: "You are an excellent assistant who speaks Japanese. Always answer in Japanese, and also show your reasoning process.").

For the `{USER_MESSAGE_1}` part, we recommend using `{instruction}\n{input}`.

In other words, we recommend the following:

```
<s>[INST] <<SYS>>\nあなたは日本語を話す優秀なアシスタントです。回答には必ず日本語で答えてください。また考える過程も出力してください。\n<</SYS>>\n\n{instruction1}\n{input1} [/INST] {BOT_MESSAGE_1}</s>[INST] {instruction2}\n{input2} [/INST]
```
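
If you prefer not to rely on `tokenizer.apply_chat_template`, the prompt can also be assembled by hand. The following is a minimal sketch of the template above, assuming the tokenizer inserts `<s>` (BOS) automatically, as Mistral-family tokenizers normally do; the example strings are taken from this card.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nitky/Oumuamua-7b-instruct")

system_prompt = "あなたは日本語を話す優秀なアシスタントです。回答には必ず日本語で答えてください。また考える過程も出力してください。"
user_message = "道路で泣いている子供がいます。その子供には何があったと考えられますか?"

# [INST] and [/INST] are plain text; <s> is added by the tokenizer as the BOS token.
prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")  # add_special_tokens=True by default
```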
### Use the instruct model

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "nitky/Oumuamua-7b-instruct"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "あなたは日本語を話す優秀なアシスタントです。回答には必ず日本語で答えてください。また考える過程も出力してください。"},
    {"role": "user", "content": "道路で泣いている子供がいます。その子供には何があったと考えられますか?"}
]

# apply_chat_template renders the [INST] prompt format described above.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

# device_map="auto" already places the model weights; only the inputs need moving.
model_inputs = encodeds.to(model.device)

generated_ids = model.generate(model_inputs, max_new_tokens=256, do_sample=True, temperature=0.3)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
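
To reproduce the `min_p` sampling mentioned under "Test environment" with plain `transformers`, you can pass `min_p` directly to `generate()`. This is a sketch under two assumptions: your `transformers` release is recent enough to support the `min_p` argument, and the cutoff of 0.05 (a common default) approximates the webui preset.

```python
# Sketch: min_p sampling; min_p=0.05 is an assumed value, not from this card.
generated_ids = model.generate(
    model_inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.3,
    min_p=0.05,
)
print(tokenizer.batch_decode(generated_ids)[0])
```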
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [nitky/Oumuamua-7b-base](https://huggingface.co/nitky/Oumuamua-7b-base) as a base.
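
Model Stock averages the fine-tuned checkpoints and then interpolates back toward the base model, choosing the interpolation ratio from the geometry of the task vectors. The sketch below paraphrases the formula from the paper (arXiv:2403.19522) for a single weight tensor; it is an illustration under my reading of the paper, not mergekit's actual implementation, and the helper name `model_stock_merge` is hypothetical.

```python
import torch
import torch.nn.functional as F

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Sketch of the Model Stock rule for one tensor (needs >= 2 fine-tuned models)."""
    k = len(finetuned)
    # Task vectors: displacement of each fine-tuned model from the base.
    deltas = [(w - base).flatten() for w in finetuned]
    # Estimate cos(theta) as the mean pairwise cosine similarity of the task vectors.
    pairs = [F.cosine_similarity(deltas[i], deltas[j], dim=0)
             for i in range(k) for j in range(i + 1, k)]
    cos_theta = torch.stack(pairs).mean().clamp(-1.0, 1.0)
    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Merged weight: move from the base toward the average of the fine-tunes.
    avg = torch.stack(finetuned).mean(dim=0)
    return t * avg + (1 - t) * base
```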
### Models Merged

The following models were included in the merge:
* [stabilityai/japanese-stablelm-base-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b)
* [tokyotech-llm/Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1)
* [DataPilot/ArrowPro-7B-KillerWhale](https://huggingface.co/DataPilot/ArrowPro-7B-KillerWhale)
* [nitky/RP-7b-instruct](https://huggingface.co/nitky/RP-7b-instruct)
* [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
  parameters:
    weight:
    - filter: embed_tokens
      value: 1.0
    - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Mistral-7B-v0.1-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: stabilityai/japanese-stablelm-base-gamma-7b
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
  parameters:
    weight:
    - filter: embed_tokens
      value: 1.0
    - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: DataPilot/ArrowPro-7B-KillerWhale
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
  parameters:
    weight:
    - filter: embed_tokens
      value: 1.0
    - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: ArrowPro-7B-KillerWhale-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: nitky/RP-7b-instruct
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
  parameters:
    weight:
    - filter: embed_tokens
      value: 1.0
    - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: RP-7b-instruct-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-Instruct-v0.3
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
  parameters:
    weight:
    - filter: embed_tokens
      value: 1.0
    - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Mistral-7B-Instruct-v0.3-VE-Swallow-MS
---
merge_method: linear
models:
- model: nitky/Oumuamua-7b-base
  parameters:
    weight: 1.0
- model: tokyotech-llm/Swallow-MS-7b-instruct-v0.1
  parameters:
    weight: 0.8
- model: tokyotech-llm/Swallow-MS-7b-v0.1
  parameters:
    weight: -0.8
- model: Mistral-7B-Instruct-v0.3-VE-Swallow-MS
  parameters:
    weight: 0.8
- model: Mistral-7B-v0.1-VE-Swallow-MS
  parameters:
    weight: -0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-alpha
---
merge_method: linear
models:
- model: nitky/Oumuamua-7b-base
  parameters:
    weight: 1.0
- model: ArrowPro-7B-KillerWhale-VE-Swallow-MS # includes Mistral-7B-Instruct-v0.3
  parameters:
    weight: 0.8
- model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
  parameters:
    weight: -0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-beta
---
merge_method: linear
models:
- model: nitky/Oumuamua-7b-base
  parameters:
    weight: 1.0
- model: RP-7b-instruct-VE-Swallow-MS
  parameters:
    weight: 0.8
- model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
  parameters:
    weight: -0.8
- model: Mistral-7B-Instruct-v0.3-VE-Swallow-MS
  parameters:
    weight: 0.8
- model: Mistral-7B-v0.1-VE-Swallow-MS
  parameters:
    weight: -0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-gamma
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: Oumuamua-7b-instruct-alpha
- model: Oumuamua-7b-instruct-beta
- model: Oumuamua-7b-instruct-gamma
dtype: bfloat16
name: Oumuamua-7b-instruct
```
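
Note that this is a multi-document YAML: the stages are separated by `---`, and later stages reference earlier merges by their `name`. A single `mergekit-yaml` invocation handles only one document, so a staged config like this is typically run with mergekit's multi-document entry point. The command below is a sketch and assumes your mergekit version ships `mergekit-mega`:

```
pip install mergekit
mergekit-mega mergekit_config.yml ./Oumuamua-7b-instruct --cuda
```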
mergekit_config.yml

```yaml
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
  parameters:
    weight:
    - filter: embed_tokens
      value: 1.0
    - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Mistral-7B-v0.1-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: stabilityai/japanese-stablelm-base-gamma-7b
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
  parameters:
    weight:
    - filter: embed_tokens
      value: 1.0
    - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: DataPilot/ArrowPro-7B-KillerWhale
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
  parameters:
    weight:
    - filter: embed_tokens
      value: 1.0
    - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: ArrowPro-7B-KillerWhale-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: nitky/RP-7b-instruct
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
  parameters:
    weight:
    - filter: embed_tokens
      value: 1.0
    - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: RP-7b-instruct-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-Instruct-v0.3
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
  parameters:
    weight:
    - filter: embed_tokens
      value: 1.0
    - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Mistral-7B-Instruct-v0.3-VE-Swallow-MS
---
merge_method: linear
models:
- model: nitky/Oumuamua-7b-base
  parameters:
    weight: 1.0
- model: tokyotech-llm/Swallow-MS-7b-instruct-v0.1
  parameters:
    weight: 0.8
- model: tokyotech-llm/Swallow-MS-7b-v0.1
  parameters:
    weight: -0.8
- model: Mistral-7B-Instruct-v0.3-VE-Swallow-MS
  parameters:
    weight: 0.8
- model: Mistral-7B-v0.1-VE-Swallow-MS
  parameters:
    weight: -0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-alpha
---
merge_method: linear
models:
- model: nitky/Oumuamua-7b-base
  parameters:
    weight: 1.0
- model: ArrowPro-7B-KillerWhale-VE-Swallow-MS # includes Mistral-7B-Instruct-v0.3
  parameters:
    weight: 0.8
- model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
  parameters:
    weight: -0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-beta
---
merge_method: linear
models:
- model: nitky/Oumuamua-7b-base
  parameters:
    weight: 1.0
- model: RP-7b-instruct-VE-Swallow-MS
  parameters:
    weight: 0.8
- model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
  parameters:
    weight: -0.8
- model: Mistral-7B-Instruct-v0.3-VE-Swallow-MS
  parameters:
    weight: 0.8
- model: Mistral-7B-v0.1-VE-Swallow-MS
  parameters:
    weight: -0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-gamma
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: Oumuamua-7b-instruct-alpha
- model: Oumuamua-7b-instruct-beta
- model: Oumuamua-7b-instruct-gamma
dtype: bfloat16
name: Oumuamua-7b-instruct
```