---
base_model: NeuralNovel/Mini-Mixtral-v0.2
inference: false
license: apache-2.0
merged_models:
- unsloth/mistral-7b-v0.2
- mistralai/Mistral-7B-Instruct-v0.2
pipeline_tag: text-generation
quantized_by: Suparious
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- unsloth/mistral-7b-v0.2
- mistralai/Mistral-7B-Instruct-v0.2
- quantized
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
---
# NeuralNovel/Mini-Mixtral-v0.2 AWQ

- Model creator: [NeuralNovel](https://huggingface.co/NeuralNovel)
- Original model: [Mini-Mixtral-v0.2](https://huggingface.co/NeuralNovel/Mini-Mixtral-v0.2)

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/645cfe4603fc86c46b3e46d1/DOoAs2yzNOUC465BSM9-s.jpeg)

## Model Summary

Mini-Mixtral-v0.2 is a Mixture of Experts (MoE) model built from the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [unsloth/mistral-7b-v0.2](https://huggingface.co/unsloth/mistral-7b-v0.2)
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
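
## How to use

This repository carries the 4-bit AWQ quantization of the merge above. Below is a minimal loading sketch using [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) (`pip install autoawq`). The repo id `solidrust/Mini-Mixtral-v0.2-AWQ` is a placeholder, so substitute this repository's actual id, and the example assumes the tokenizer ships a chat template (the card is tagged `chatml`); if it does not, format the prompt manually.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Placeholder repo id -- replace with this repository's actual id.
quant_path = "solidrust/Mini-Mixtral-v0.2-AWQ"

# fuse_layers fuses attention/MLP modules where supported for faster inference.
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path)

# Assumes a chat template is bundled with the tokenizer (card is tagged chatml).
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Explain mixture-of-experts routing in one paragraph."}],
    tokenize=False,
    add_generation_prompt=True,
)

# AWQ inference runs on GPU.
tokens = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
output = model.generate(tokens, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

AWQ checkpoints like this one can also be served with vLLM by passing `quantization="awq"`, or loaded through recent `transformers` versions, which detect the AWQ quantization config in `config.json`.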