
*Dicephal logo: a llama with two heads*

I took the base Llama 2 70B model and frankenmerged it with itself using mergekit. Somehow, it is coherent.
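For reference, a self-merge like this is usually done with a mergekit passthrough config that stacks overlapping layer slices of the same model. Below is a minimal sketch of such a recipe; the `layer_range` boundaries are hypothetical illustrations (the actual slices used for Dicephal-123B are not stated here), so treat this as the shape of the config, not the published one:

```yaml
# Hypothetical Goliath-style passthrough self-merge of Llama-2-70B.
# The layer ranges below are illustrative only, not the real Dicephal recipe.
slices:
  - sources:
      - model: meta-llama/Llama-2-70b-hf
        layer_range: [0, 50]    # early layers of the first copy
  - sources:
      - model: meta-llama/Llama-2-70b-hf
        layer_range: [30, 80]   # overlapping later layers of the second copy
merge_method: passthrough
dtype: float16
```

Running `mergekit-yaml config.yaml ./output-model` on a file like this produces the stacked model; more (and more overlapping) slices yield a larger parameter count.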

Thanks for featuring me at https://merge.moe/. I'll try my best to make even more good(!?) merges in the future.

## Observations

- It is more creative than the base model and has a sense of humor.
- Just like Goliath, it sometimes makes up new words without meaning.
- Just like the base model, it is quite disobedient; clever prompting is needed to get it to output answers.
- Should be great for storywriting.
- Significantly better than the base model at stylized writing and poems, but still far from finetuned models.
- The way it comes back to its past mistakes and failed tests is almost human. (After the model failed a test, before I had told it that it failed:) Me: "Why did you pick that?" Dicephal: "Because I am an idiot."

## Benchmarks

### NeoEvalPlusN_benchmark

My meme benchmark.

| Test name | Base Llama | Dicephal |
|-----------|-----------:|---------:|
| B         | 0          | 0        |
| C         | 2          | 0        |
| D         | 0.5        | 1        |
| S         | 1.25       | 2.25     |
| P         | 0          | 2.25     |
| Total     | 3.75       | 5.5      |

+75% in size, +47% in meme benchmark performance!
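Those headline percentages can be checked against the table above and the parameter counts (70B base, 123B merge); a quick sanity check:

```python
# Verify the quoted gains: parameter count 70B -> 123B,
# meme-benchmark total 3.75 -> 5.5 (from the table above).
base_params, merged_params = 70, 123
base_score, merged_score = 3.75, 5.5

size_gain = (merged_params - base_params) / base_params * 100
score_gain = (merged_score - base_score) / base_score * 100

print(f"size: +{size_gain:.1f}%, benchmark: +{score_gain:.1f}%")
# prints "size: +75.7%, benchmark: +46.7%"
```

So the quoted "+75% / +47%" are the same figures, rounded.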

### Politiscales test

*Politiscales chart for llama*

| name                       | whacky      | left/right   |
|----------------------------|------------:|-------------:|
| ChuckMcSneed/Dicephal-123B | 1.742262578 | -0.131433424 |
| meta-llama/Llama-2-70b-hf  | 1.930293804 | 0.178771095  |
