---
license: other
language:
- en
base_model:
- ChaoticNeutrals/Sekhmet_Gimmel-L3.1-8B-v0.3
---
[NitralAI's measurement.json](https://huggingface.co/Nitral-AI/Sekhmet_Gimmel-L3.1-8B-v0.3-5bpw-exl2/blob/main/measurement.json) was used for quantization.
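Reusing a pre-computed measurement file lets exllamav2's converter skip its own calibration/measurement pass. Below is a minimal sketch of such a conversion, assuming local copies of the original model and `measurement.json`; the directory names, the bpw target, and the `subprocess` invocation are illustrative assumptions, not the exact command used for this upload.

```python
# Hypothetical sketch: exllamav2's convert.py is a command-line script,
# invoked here via subprocess. Paths and the 5.0 bpw target are assumptions.
import subprocess

subprocess.run(
    [
        "python", "convert.py",
        "-i", "Sekhmet_Gimmel-L3.1-8B-v0.3",        # unquantized HF model (assumed local dir)
        "-o", "work_dir",                           # scratch directory for the converter
        "-cf", "Sekhmet_Gimmel-L3.1-8B-v0.3-exl2",  # compiled output directory
        "-b", "5.0",                                # target bits per weight (example value)
        "-m", "measurement.json",                   # reuse NitralAI's measurement pass
    ],
    check=True,
)
```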
## **Sekhmet_Gimmel-L3.1-8B-v0.3**
[exllamav2](https://github.com/turboderp/exllamav2) quant for [ChaoticNeutrals/Sekhmet_Gimmel-L3.1-8B-v0.3](https://huggingface.co/ChaoticNeutrals/Sekhmet_Gimmel-L3.1-8B-v0.3)
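A minimal loading/inference sketch with the exllamav2 Python API (class and method names follow the upstream examples, but details vary between releases; the local model directory and sampler values are assumptions):

```python
# Minimal sketch, assuming the quant has been downloaded to a local directory.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Sekhmet_Gimmel-L3.1-8B-v0.3-exl2"  # assumed local path to this quant
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # example sampler values, not the recommended presets
settings.top_p = 0.9

print(generator.generate_simple("Hello, my name is", settings, 128))
```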
**Original model information:**
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/BjXS7pqiq7REnP2KnDzJG.jpeg)
# Sekhmet_Gimmel [v0.3] - Designed to provide robust solutions to complex problems while offering support and insightful guidance.
# GGUF Quants available thanks to: Soon <3 [GGUF Here]()
# Additional GGUF Quants available thanks to: Soon <3 [GGUF Here]()
# EXL2 Quant: [5bpw Exl2 Here]()
# Recommended ST Presets: [Sekhmet Presets (Same as Hathor's)](https://huggingface.co/Nitral-AI/Hathor_Presets/tree/main)
---
# Training Note: Sekhmet_Gimmel [v0.3] is trained on 1.5 epochs of private Hathor_0.85 instructions, a small subset of creative writing data, and roleplaying chat pairs, on top of Sekhmet_Aleph-L3.1-8B-v0.2
# Additional Notes: This model was quickly assembled to provide users with a relatively uncensored alternative to L3.1 Instruct, featuring extended context capabilities. I do not expect it to match the performance levels demonstrated by Hathor_Tahsin version 0.9.