model card (webui)
README.md CHANGED
@@ -42,6 +42,15 @@ Example input:
 
 <|begin_of_text|><|start_header_id|>system<|end_header_id|>You are Shining Valiant, a highly capable chat AI.<|eot_id|><|start_header_id|>user<|end_header_id|>Hi, can you write me a cover letter for a data analyst position?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
 
+## WARNING: text-generation-webui
+
+When using Llama 3 Instruct models (including Shining Valiant 2) with [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main) note that a current bug in webui can result in incorrect reading of the model's ending tokens, causing unfinished outputs and incorrect structure.
+
+For a [temporary workaround](https://github.com/oobabooga/text-generation-webui/issues/5885) if you encounter this issue, edit Shining Valiant 2's tokenizer_config file as indicated:
+
+from "eos_token": "<|end_of_text|>",
+to "eos_token": "<|eot_id|>",
+
 ## The Model
 Shining Valiant 2 is built on top of Llama 3 70b Instruct, the highest performance open-source model currently available.
 
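For reference, a minimal sketch of applying the same workaround programmatically, rather than editing the file by hand. The model directory path is an assumption (a hypothetical local download location; adjust it to wherever your copy of the model lives); the script only performs the `eos_token` swap described in the diff above.

```python
import json
from pathlib import Path

# Hypothetical local path to the downloaded model files; adjust to your setup.
config_path = Path("models/Llama3-70B-ShiningValiant2/tokenizer_config.json")

config = json.loads(config_path.read_text())

# Apply the workaround: stop generation on <|eot_id|> instead of <|end_of_text|>.
if config.get("eos_token") == "<|end_of_text|>":
    config["eos_token"] = "<|eot_id|>"
    config_path.write_text(json.dumps(config, indent=2))
    print("eos_token changed to <|eot_id|>")
else:
    print("No change made; eos_token is", config.get("eos_token"))
```

Once the upstream webui bug is fixed, reverting the edit (or re-downloading the original tokenizer_config.json) restores the stock configuration.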