Update README.md
README.md CHANGED
@@ -3,19 +3,22 @@ license: apache-2.0
language:
- en
library_name: transformers
+ tags:
+ - moe
---

# Aegolius Acadicus 34b v3

+ An MoE model assembled with the Mixtral branch of mergekit. NOT A MERGE: it is tagged as an MoE because it is one, not a merge of model weights.
+
![img](./aegolius-acadicus.png)

- I like to call this model series "The little professor". I am funding this out of my own pocket, on rented hardware and RunPod, to create LoRA adapters and then assemble MoE models from them and others. Ultimately I hope to have them all be LoRAs that I have made.
+ I like to call this model series "The little professor". I am funding this out of my own pocket, on rented hardware and RunPod, to create LoRA adapters and then assemble MoE models from them and others. Ultimately I hope to have them all be LoRAs that I have made. This is no different from Mixtral, and I am literally using their tooling: it is simply an MoE of LoRA-merged models across Llama 2 and Mistral. I am using this run as a test case for moving to larger models and getting the gate discrimination set correctly. This model is best suited to knowledge-related use cases; I did not give it a specific workload target as I did with some of the other models in the "Owl Series".

In this particular run I am expanding the data sets and model count to see whether that helps or hurts. I am also moving to more of my own fine-tuned Mistrals.

- I am paying for the fine-tunes on RunPod myself and then merging them into larger models so that they load as a single model. Soon I hope to be using entirely models that I have fine-tuned myself.

- This model is merged from the following sources:
+ This model is an MoE of the following models:

[Fine Tuned Mistral of Mine](https://huggingface.co/ibivibiv/temp_tuned_mistral2)
[Fine Tuned Mistral of Mine](https://huggingface.co/ibivibiv/temp_tuned_mistral3)
@@ -24,7 +27,6 @@ This model is merged from the following sources:
[senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
[WestSeverus-7B-DPO](https://huggingface.co/PetroGPT/WestSeverus-7B-DPO)

- Unless those models are "contaminated", this one is not. This is a proof-of-concept version of the series; you can find others where I am tuning my own models and using the MoE mode of mergekit to combine them into MoE models that I can run on lower-tier hardware with better results.

The goal here is to create specialized models that can collaborate and run as one model.
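For readers who want to follow the pipeline the updated card describes (fine-tune a LoRA adapter, fold it back into its base model, then use the merged checkpoints as experts), the snippet below is a minimal sketch of the adapter-merge step using the `peft` library. The base model and adapter id are placeholders, not the actual adapters behind this release.

```python
# Sketch: merge a LoRA adapter into its base model so the result can serve as one MoE expert.
# The base model and adapter id below are placeholders, not this project's artifacts.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")     # assumed base model
expert = PeftModel.from_pretrained(base, "your-username/your-lora-adapter")  # hypothetical adapter
expert = expert.merge_and_unload()            # bake the LoRA weights into the dense model
expert.save_pretrained("./expert-knowledge")  # this folder can then be listed as an expert source
```

Several dense checkpoints produced this way are what a Mixtral-style MoE build then stitches together as experts.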
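The card also mentions getting the "gate discrimination" set correctly. In a Mixtral-style MoE, a small linear router scores the experts for each token and mixes the outputs of the top two; the PyTorch sketch below illustrates only that routing step. It is not code from this repository or from mergekit, and the layer sizes are invented for the example.

```python
# Minimal, illustrative sketch of Mixtral-style top-2 routing (not this repo's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    def __init__(self, hidden_size: int, num_experts: int, ffn_size: int):
        super().__init__()
        # The "gate" is just a linear layer that scores every expert for every token.
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_size, ffn_size), nn.SiLU(), nn.Linear(ffn_size, hidden_size))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, hidden_size)
        logits = self.gate(x)                               # score every expert per token
        weights, chosen = torch.topk(logits, k=2, dim=-1)   # keep the two best experts per token
        weights = F.softmax(weights, dim=-1)                # normalize only over the chosen two
        out = torch.zeros_like(x)
        for slot in range(2):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e                 # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Example: 6 tokens routed across 5 experts, mirroring the five source models listed above.
layer = Top2MoELayer(hidden_size=32, num_experts=5, ffn_size=64)
print(layer(torch.randn(6, 32)).shape)  # torch.Size([6, 32])
```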
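Finally, because the experts are packed into a single checkpoint, the result loads like any other `transformers` causal LM. A minimal usage sketch follows; the repository id is an assumption based on the card title, so verify it against the actual model page, and a 34B-class MoE needs substantial GPU memory or offloading.

```python
# Minimal usage sketch; the repo id is assumed from the card title, not confirmed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibivibiv/aegolius-acadicus-34b-v3"  # assumption: check the actual model page

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # spread layers across available devices (requires accelerate)
)

prompt = "Explain what a mixture-of-experts language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```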