Quantized using 200 samples of 8192 tokens from an RP-oriented PIPPA dataset.
Branches:
main
-- measurement.json
3.5b6h
-- 3.5bpw, 6bit lm_head
3.7b6h
-- 3.7bpw, 6bit lm_head
5b6h
-- 5bpw, 6bit lm_head
6b6h
-- 6bpw, 6bit lm_head
Requires ExLlamaV2 version 0.0.11 or later.
Original model link: Envoid/BondBurger-8x7B
Original model README below.
Warning:
As always, this merge may produce adult content.
BondBurger-8x7B starts with a 50/50 SLERP merge of ycros/BagelMIsteryTour-8x7B onto Envoid/SensualNousInstructDARETIES-CATA-LimaRP-ZlossDT-SLERP-8x7B.
But that simply wouldn't be enough.
So I did an additional 50/50 SLERP merge of tenyx/TenyxChat-8x7B-v1 onto the resulting model from the first step.
After getting tired of typing SensualNousInstructDARETIES-CATA-LimaRP-ZlossDT-SLERP-MagelMIsterySLERP-TenyxChat-SLERP every time I went to test something, I decided to reset the naming stack to something more meaningful and descriptive.
This model has all the same wonderful tokenizer weirdness caused by the Nous DNA and has not been tested in FP-16 HF format.
It responds best to [INST] DO A THING [/INST] instruct format.
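A minimal sketch of assembling a prompt in that instruct format (the helper name and sample instruction are illustrative, not part of the model's tooling):

```python
def build_prompt(instruction: str) -> str:
    """Wrap an instruction in the [INST] ... [/INST] format the model responds to."""
    return f"[INST] {instruction} [/INST]"

# Example usage with a placeholder instruction:
prompt = build_prompt("Write a short greeting.")
print(prompt)  # [INST] Write a short greeting. [/INST]
```

Depending on your frontend, the template may be applied automatically; this only shows the raw string the model expects.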