---
library_name: transformers
license: apache-2.0
base_model:
- mistralai/Mistral-Nemo-Base-2407
---

![Zinakha-12b Banner](https://cdn-uploads.huggingface.co/production/uploads/652c2a63d78452c4742cd3d3/QmwflzYQaj2ok1KYXMvAD.jpeg)

# Zinakha-12b 🧙‍♂️

Zinakha-12b aims to be the perfect companion for any chat involving multiple roles. It has a strong grasp of context and excels at creativity and storytelling.

It is built on Mistral Nemo 12b and trained on a variety of datasets, with some layer merges to enhance its capabilities.

## Model Details

- **Developed by:** Aixon Lab
- **Model type:** Causal Language Model
- **Language(s):** English (primarily), may support other languages
- **License:** Apache 2.0
- **Repository:** https://huggingface.co/aixonlab/Zinakha-12b
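To try the model locally, the weights can be loaded with the Transformers library. The sketch below is a minimal example, not an official recipe: the prompt format and sampling settings (temperature, top-p) are illustrative assumptions, so adapt them to your use case.

```python
# Minimal sketch: load Zinakha-12b with Hugging Face Transformers.
# The prompt format and sampling settings are illustrative, not official.

MODEL_ID = "aixonlab/Zinakha-12b"

def build_prompt(system: str, user: str) -> str:
    """Assemble a simple role-tagged prompt; adapt to your preferred chat template."""
    return f"{system}\n\nUser: {user}\nAssistant:"

def generate_reply(user_message: str,
                   system: str = "You are a creative storyteller.",
                   max_new_tokens: int = 256) -> str:
    """Load the model and sample a reply (in real code, load once per process)."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto")
    inputs = tokenizer(build_prompt(system, user_message),
                       return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens,
                            do_sample=True, temperature=0.8, top_p=0.9)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:],
                            skip_special_tokens=True)
```

Note that a ~12B-parameter model needs roughly 24 GB of memory in bfloat16; the quantized GGUF builds listed in the Quantization section have a much smaller footprint.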

## Quantization

- **GGUF:** https://huggingface.co/mradermacher/Zinakha-12b-GGUF
- **iMatrix GGUF:** https://huggingface.co/mradermacher/Zinakha-12b-i1-GGUF
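A quantized GGUF build can run on CPU or with partial GPU offload. The sketch below assumes the llama-cpp-python bindings; the model path is a placeholder for whichever quant file you download from the repositories above.

```python
def load_gguf(model_path: str, n_ctx: int = 4096, n_gpu_layers: int = 0):
    """Open a local Zinakha-12b GGUF file with llama-cpp-python.

    model_path points to a quant downloaded from one of the GGUF repos
    above; the exact filename depends on the quant level you choose.
    """
    from llama_cpp import Llama  # pip install llama-cpp-python

    return Llama(model_path=model_path, n_ctx=n_ctx,
                 n_gpu_layers=n_gpu_layers)

def chat_once(llm, prompt: str, max_tokens: int = 256) -> str:
    """Run one completion and return only the generated text."""
    result = llm(prompt, max_tokens=max_tokens, temperature=0.8)
    return result["choices"][0]["text"]
```

Set `n_gpu_layers` above 0 to offload part of the model to a GPU when llama.cpp is built with CUDA or Metal support.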

## Model Architecture 🏗️

- **Base model:** mistralai/Mistral-Nemo-Base-2407
- **Parameter count:** ~12 billion
- **Architecture specifics:** Transformer-based language model
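The architecture details can be inspected without downloading the full weights, since the model config is a small JSON file. A sketch using `AutoConfig` (the summarized field names follow the standard Mistral config; this is an assumption, not part of the card):

```python
def describe_model(model_id: str = "aixonlab/Zinakha-12b") -> dict:
    """Fetch only the model config and summarize the architecture."""
    from transformers import AutoConfig

    cfg = AutoConfig.from_pretrained(model_id)
    return {
        "architecture": cfg.architectures[0] if cfg.architectures else None,
        "hidden_size": cfg.hidden_size,
        "num_layers": cfg.num_hidden_layers,
        "vocab_size": cfg.vocab_size,
    }
```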