---
library_name: transformers
license: apache-2.0
base_model:
- mistralai/Mistral-Nemo-Base-2407
---
![Zinakha-12b Banner](https://cdn-uploads.huggingface.co/production/uploads/652c2a63d78452c4742cd3d3/QmwflzYQaj2ok1KYXMvAD.jpeg)
# Zinakha-12b
Zinakha-12b aims to be a capable companion for multi-role chat. It tracks conversational context well and excels at creativity and storytelling.
It is built on Mistral Nemo 12B and trained on several datasets, with some layer merges applied to enhance its capabilities.
## Model Details
- **Developed by:** Aixon Lab
- **Model type:** Causal Language Model
- **Language(s):** English (primarily), may support other languages
- **License:** Apache 2.0
- **Repository:** https://huggingface.co/aixonlab/Zinakha-12b
## Quantization
- **GGUF:** https://huggingface.co/mradermacher/Zinakha-12b-GGUF
- **iMatrix GGUF:** https://huggingface.co/mradermacher/Zinakha-12b-i1-GGUF
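The GGUF quantizations linked above can be run locally without a GPU. A minimal sketch using the `llama-cpp-python` package (an assumption; any GGUF runtime works) is shown below. The quant filename is also an assumption, so pick an actual file from the GGUF repo:

```python
def load_quantized(filename: str = "Zinakha-12b.Q4_K_M.gguf"):
    """Download and load a GGUF quant of Zinakha-12b via llama-cpp-python.

    The filename above is an assumed quant level (Q4_K_M); check the
    mradermacher/Zinakha-12b-GGUF repo for the files that actually exist.
    """
    # Lazy import: llama-cpp-python is a heavy optional dependency.
    from llama_cpp import Llama

    return Llama.from_pretrained(
        repo_id="mradermacher/Zinakha-12b-GGUF",
        filename=filename,
        n_ctx=4096,  # context window; raise if you need longer chats
    )


if __name__ == "__main__":
    llm = load_quantized()
    out = llm("Tell me a short story about a sky pirate.", max_tokens=200)
    print(out["choices"][0]["text"])
```

Lower quant levels (e.g. Q3) reduce memory use at some quality cost; the iMatrix variants generally preserve quality better at the same size.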
## Model Architecture
- **Base model:** mistralai/Mistral-Nemo-Base-2407
- **Parameter count:** ~12 billion
- **Architecture specifics:** Transformer-based language model
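Since the card declares `library_name: transformers`, the model can be loaded as a standard causal LM. A minimal usage sketch follows; the prompt and the sampling parameters are illustrative assumptions, not recommended settings:

```python
# Usage sketch for aixonlab/Zinakha-12b with the Hugging Face transformers API.
# Assumes `transformers`, `torch`, and `accelerate` are installed and that
# enough memory is available for a ~12B-parameter model.
model_id = "aixonlab/Zinakha-12b"


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Lazy import so the heavy dependencies load only when generation runs.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # use the checkpoint's native precision
        device_map="auto",    # spread layers across available devices
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,       # sampling suits the creative/storytelling use case
        temperature=0.7,      # assumed value; tune to taste
    )
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(generate("Once upon a time in a floating city,"))
```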