---
license: llama3
library_name: transformers
tags:
- nsfw
- not-for-all-audiences
- llama-3
- text-generation-inference
- mergekit
- merge
---

# Llama-3-8B-Stroganoff-2.0

## Details
- **License**: [llama3](https://llama.meta.com/llama3/license/)
- **Instruct Format**: [llama-3](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/)
- **Context Size**: 8K
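
A minimal loading and inference sketch using the standard `transformers` API is shown below. The `model_id` is a placeholder for wherever the merged weights live (Hub repo or local path), and the sampling settings are illustrative, not recommended defaults. The tokenizer inherited from the base model carries the llama-3 chat template, so `apply_chat_template` produces the expected instruct format.

```python
# Sketch only: model_id is a placeholder for the actual repo or local path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Llama-3-8B-Stroganoff-2.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype below
    device_map="auto",
)

# apply_chat_template formats the conversation with the llama-3 instruct tokens
messages = [
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Write the opening line of a short story."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```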

## Models Used
- [LLAMA-3_8B_Unaligned_Alpha](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha)
- [badger-writer-llama-3-8b](https://huggingface.co/maldv/badger-writer-llama-3-8b)
- [L3-8B-Niitama-v1](https://huggingface.co/Sao10K/L3-8B-Niitama-v1)
- [Hathor_Tahsin-L3-8B-v0.85](https://huggingface.co/Nitral-AI/Hathor_Tahsin-L3-8B-v0.85)
- [L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
- [Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule)

## Merge Config
```yaml
models:
    - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
    - model: maldv/badger-writer-llama-3-8b
    - model: Sao10K/L3-8B-Niitama-v1
    - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
    - model: Sao10K/L3-8B-Stheno-v3.2
merge_method: model_stock
base_model: failspy/Llama-3-8B-Instruct-MopeyMule
dtype: bfloat16
```
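
The merge can be reproduced from the config above with mergekit. A minimal sketch using mergekit's Python entry point is given below, assuming the YAML is saved as `config.yaml` and the output path is illustrative; the `mergekit-yaml config.yaml ./output-dir` CLI accomplishes the same thing. The `model_stock` method blends the five listed fine-tunes around the MopeyMule base model.

```python
# Sketch assuming a standard mergekit install; paths are placeholders.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML config shown above into mergekit's configuration object
with open("config.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Llama-3-8B-Stroganoff-2.0",  # output directory for merged weights
    options=MergeOptions(
        cuda=False,           # set True to run the merge on GPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```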