---
base_model:
- 152334H/miqu-1-70b-sf
- lizpreciatior/lzlv_70b_fp16_hf
language:
- en
- de
- fr
- es
- it
library_name: transformers
tags:
- mergekit
- merge
---
# miquliz-120b
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6303ca537373aacccd85d8a7/RFEW_K0ABp3k_N3j02Ki4.jpeg)
⚠️ **This is an older model that has been replaced by its improved successor: [miquliz-120b-v2.0](https://huggingface.co/wolfram/miquliz-120b-v2.0)** ⚠️
- EXL2: [2.4bpw](https://huggingface.co/LoneStriker/miquliz-120b-2.4bpw-h6-exl2) | [2.65bpw](https://huggingface.co/LoneStriker/miquliz-120b-2.65bpw-h6-exl2) | [2.9bpw](https://huggingface.co/LoneStriker/miquliz-120b-2.9bpw-h6-exl2) | [4.0bpw](https://huggingface.co/LoneStriker/miquliz-120b-4.0bpw-h6-exl2)
- GGUF: [IQ3_XXS](https://huggingface.co/wolfram/miquliz-120b-GGUF) | [Q4_K_S+Q4_K_M](https://huggingface.co/NanoByte/miquliz-120b-Q4-GGUF)
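If you want to fetch one of the quantized repositories above programmatically, here is a minimal sketch using `huggingface_hub` (the chosen quant and `local_dir` path are illustrative):
```python
from huggingface_hub import snapshot_download

# Download one of the EXL2 quants linked above; pick the bpw that fits your VRAM.
snapshot_download(
    repo_id="LoneStriker/miquliz-120b-2.4bpw-h6-exl2",
    local_dir="./miquliz-120b-2.4bpw-h6-exl2",
)
```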
This is a 120b frankenmerge created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with [lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf) using [mergekit](https://github.com/cg123/mergekit).
Inspired by [goliath-120b](https://huggingface.co/alpindale/goliath-120b).
Thanks for the support, [CopilotKit](https://github.com/CopilotKit/CopilotKit) - the open-source platform for building in-app AI Copilots into any product, with any LLM. Check out their GitHub.
Thanks for the EXL2 and GGUF quants, [Lone Striker](https://huggingface.co/LoneStriker) and [NanoByte](https://huggingface.co/NanoByte)!
## Review
u/SomeOddCodeGuy wrote on r/LocalLLaMA:
> So I did try out Miquliz last night, and I'm not sure if it was the character prompt or what... but it's a lot less coherent than [miqu-1-120b](https://huggingface.co/wolfram/miqu-1-120b) is.
>
> Quality wise, I feel like Miqu-1-120b has dethroned Goliath-120b as the most coherent model I've ever worked with. Alternatively, Miquliz felt a bit closer to what I've come to see from some of the Yi-34b fine-tunes: some impressive moments, but also some head-scratchers that made me wonder what in the world it was talking about lol.
>
> I'll keep trying it a little more, but I think the difference between the two is night and day, with Miqu-1-120b still being the best model I've ever used for non-coding tasks (haven't tested it on coding yet).
**Note:** I have made a version 2.0 of MiquLiz with an improved "mixture" that better combines the two very different models used: [wolfram/miquliz-120b-v2.0](https://huggingface.co/wolfram/miquliz-120b-v2.0).
## Prompt template: Mistral
```
<s>[INST] {prompt} [/INST]
```
See also: [πŸΊπŸ¦β€β¬› LLM Prompt Format Comparison/Test: Mixtral 8x7B Instruct with **17** different instruct templates : LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/18ljvxb/llm_prompt_format_comparisontest_mixtral_8x7b/)
## Model Details
- Max Context: 32768 tokens
- Layers: 137
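To double-check these numbers without downloading the weights, you can inspect the model config; a sketch assuming the repo id `wolfram/miquliz-120b`:
```python
from transformers import AutoConfig

# Fetches only config.json, not the full weights.
cfg = AutoConfig.from_pretrained("wolfram/miquliz-120b")
print(cfg.num_hidden_layers)        # expected: 137
print(cfg.max_position_embeddings)  # expected: 32768
```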
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
- [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 16]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [8, 24]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [17, 32]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [25, 40]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [33, 48]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [41, 56]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [49, 64]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [57, 72]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [65, 80]
model: 152334H/miqu-1-70b-sf
```
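The slice ranges above, read as half-open intervals, account for the 137 layers listed under Model Details; a small sketch of the arithmetic:
```python
# Alternating slices of miqu-1-70b-sf and lzlv_70b_fp16_hf, as in the YAML above.
slices = [(0, 16), (8, 24), (17, 32), (25, 40), (33, 48),
          (41, 56), (49, 64), (57, 72), (65, 80)]

total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 137
```
To reproduce the merge itself, the YAML can be passed to mergekit (e.g. via its `mergekit-yaml` entry point).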
## Credits & Special Thanks
- 1st model:
- original (unreleased) model: [mistralai (Mistral AI_)](https://huggingface.co/mistralai)
- leaked model: [miqudev/miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b)
- f16 model: [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- 2nd model: [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)
- mergekit: [arcee-ai/mergekit: Tools for merging pretrained large language models.](https://github.com/arcee-ai/mergekit)
- mergekit_config.yml: [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b)
### Support
- [My Ko-fi page](https://ko-fi.com/wolframravenwolf) if you'd like to tip me to say thanks or request specific models to be tested or merged with priority. Also consider supporting your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!
## Disclaimer
*This model contains leaked weights and due to its content it should not be used by anyone.* 😜
But seriously:
### License
**What I *know*:** [Weights produced by a machine are not copyrightable](https://www.reddit.com/r/LocalLLaMA/comments/1amc080/psa_if_you_use_miqu_or_a_derivative_please_keep/kpmamte/) so there is no copyright owner who could grant permission or a license to use, or restrict usage, once you have acquired the files.
### Ethics
**What I *believe*:** All generative AI, including LLMs, only exists because it is trained mostly on human data (both public domain and copyright-protected, most likely acquired without express consent) and possibly synthetic data (which is ultimately derived from human data, too). It is only fair if something that is based on everyone's knowledge and data is also freely accessible to the public, the actual creators of the underlying content. Fair use, fair AI!