---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- aurelian
- WinterGoddess
- frankenmerge
- 120b
- 32k
---
# BigAurelian v0.5 120b 32k
<img src="https://cdn-uploads.huggingface.co/production/uploads/65a6db055c58475cf9e6def1/QqGLOoVkdN-Z4kLD6OWP6.png" width=600>
A Goliath-120b-style frankenmerge of aurelian-v0.5-70b-32K and WinterGoddess-1.4x-70b. The goal is performance similar to Goliath-120b, but with an extended 32k context. **Important:** load this model with a positional-embeddings compression factor (**compress_pos_emb**) of **8**.
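The compression factor is simply the ratio of the target context length to the base model's native context (Llama2: 4096 tokens). A quick sanity check:

```python
# Linear RoPE compression: compress_pos_emb is the ratio of the target
# context length to the base model's native context window.
base_ctx = 4096      # Llama2 native context
target_ctx = 32768   # 32k extended context
factor = target_ctx // base_ctx
print(factor)  # 8 -> set compress_pos_emb (or a linear rope_scaling factor) to 8
```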
# Prompting Format
Llama2 and Alpaca.
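For reference, a minimal sketch of the two named formats (the system prompt and preamble wording below are conventional choices, not specified by this card):

```python
def llama2_prompt(user: str, system: str = "You are a helpful assistant.") -> str:
    # Llama2 chat format: system prompt wrapped in <<SYS>> inside the first [INST] block.
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

def alpaca_prompt(instruction: str) -> str:
    # Alpaca instruction format with the conventional preamble.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(llama2_prompt("Summarize the plot of Hamlet."))
print(alpaca_prompt("Summarize the plot of Hamlet."))
```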
# Merge process
The models used in the merge are [aurelian-v0.5-70b-32K](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16) and [WinterGoddess-1.4x-70b](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2).
The layer mix:
```yaml
- model: aurelian
  layer_range: [0, 16]
- model: WinterGoddess
  layer_range: [8, 24]
- model: aurelian
  layer_range: [17, 32]
- model: WinterGoddess
  layer_range: [25, 40]
- model: aurelian
  layer_range: [33, 48]
- model: WinterGoddess
  layer_range: [41, 56]
- model: aurelian
  layer_range: [49, 64]
- model: WinterGoddess
  layer_range: [57, 72]
- model: aurelian
  layer_range: [65, 80]
```
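As a sanity check on the merge depth, summing the slice widths (assuming mergekit's half-open `[start, end)` layer-range convention) gives the merged model's layer count:

```python
# Slices as listed in the layer mix above: (model, start, end).
slices = [
    ("aurelian", 0, 16),
    ("WinterGoddess", 8, 24),
    ("aurelian", 17, 32),
    ("WinterGoddess", 25, 40),
    ("aurelian", 33, 48),
    ("WinterGoddess", 41, 56),
    ("aurelian", 49, 64),
    ("WinterGoddess", 57, 72),
    ("aurelian", 65, 80),
]
# Assuming half-open ranges, each slice contributes (end - start) layers.
total_layers = sum(end - start for _, start, end in slices)
print(total_layers)  # 137 -- the same depth as Goliath-120b
```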
# Acknowledgements
- [@grimulkan](https://huggingface.co/grimulkan) for creating aurelian-v0.5-70b-32K
- [@Sao10K](https://huggingface.co/Sao10K) for creating WinterGoddess
- [@alpindale](https://huggingface.co/alpindale) for creating the original Goliath
- [@chargoddard](https://huggingface.co/chargoddard) for developing [mergekit](https://github.com/cg123/mergekit)