brucethemoose committed on
Commit 16fafb6
1 Parent(s): cc64495

Update README.md

Files changed (1): README.md +5 -0
README.md CHANGED
@@ -8,8 +8,13 @@ library_name: transformers
 pipeline_tag: text-generation
 tags:
 - text-generation-inference
+- merge
 ---

+### Obsolete, see https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5
+
+***
+
 **Dolphin-2.2-yi-34b-200k**, **Nous-Capybara-34B**, **Tess-M-v1.4**, **Airoboros-3_1-yi-34b-200k**, **PlatYi-34B-200K-Q**, and **Una-xaberius-34b-v1beta** merged with a new, experimental implementation of "dare ties" via mergekit.

 Quantized with the git version of exllamav2 with 200 rows (400K tokens) on a long Orca-Vicuna format chat, a selected sci fi story and a fantasy story. This should hopefully yield better chat/storytelling performance than the short, default wikitext quantization.
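The "dare ties" merge described in the diff can be sketched as a mergekit config. This is a minimal illustration only: the repo paths, base model, and all `weight`/`density` values are assumptions, not the settings actually used for this merge.

```yaml
# Hypothetical mergekit config sketching a dare_ties merge of the models
# named above. Repo paths and parameter values are illustrative placeholders.
merge_method: dare_ties
base_model: 01-ai/Yi-34B-200K        # assumed common base for the 200K Yi models
models:
  - model: NousResearch/Nous-Capybara-34B
    parameters:
      weight: 0.2
      density: 0.5                   # fraction of delta weights kept by DARE
  - model: migtissera/Tess-M-v1.4
    parameters:
      weight: 0.2
      density: 0.5
  # ... the remaining models (Dolphin, Airoboros, PlatYi, Una-xaberius)
  # would follow the same pattern with their own weights/densities.
dtype: bfloat16
```

A config like this would be run with mergekit's `mergekit-yaml` entry point to produce the merged fp16 model that was then quantized.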
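The quantization step described in the diff can be sketched with exllamav2's `convert.py`. All paths, the bitrate, and the calibration file name below are placeholders; only the row count (`-r 200`) comes from the README text, and the parquet file stands in for the chat/story text actually used.

```shell
# Sketch of an exllamav2 quantization run with a custom calibration dataset,
# under the assumptions stated above (placeholder paths and bitrate).
# -i  input fp16 model directory
# -o  scratch/working directory for measurement passes
# -cf output directory for the compiled quantized model
# -c  custom calibration dataset (here: long-context chat/story text)
# -r  number of calibration rows (200 rows, per the README)
# -b  target bits per weight (placeholder value)
python convert.py \
    -i /models/Yi-34B-200K-merge \
    -o /tmp/exl2-work \
    -cf /models/Yi-34B-200K-merge-exl2 \
    -c calibration.parquet \
    -r 200 \
    -b 4.0
```

Using longer, in-domain calibration rows (rather than the short default wikitext samples) is the design choice the README credits for better chat/storytelling quality.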