|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- fblgit/tree-of-knowledge |
|
- Open-Orca/SlimOrca-Dedup |
|
- HuggingFaceH4/ultrafeedback_binarized |
|
library_name: transformers |
|
tags: |
|
- juanako |
|
- UNA |
|
- cybertron |
|
- fbl |
|
--- |
|
|
|
# Model Card for una-cybertron-7b-v2-bf16 (UNA: Uniform Neural Alignment) |
|
|
|
We strike back, introducing **Cybertron 7B v2**, a 7B MistralAI-based model, the best in its series. Trained with SFT, DPO, and UNA (Uniform Neural Alignment) on multiple datasets.
|
It scores at least **64.60** on the HF Leaderboard; we will update with the final results soon. We also have a few surprises in the oven for Christmas, so stay tuned.
|
|
* v1 scored **#1** on 2 December 2023 with 64.60
|
* v2 Scoring **?** ..? |
|
|
|
|
|
| Model | Average | ARC (25-s) | HellaSwag (10-s) | MMLU (5-s) | TruthfulQA (MC) (0-s) | Winogrande (5-s) | GSM8K (5-s) | |
|
| --- | --- | --- | --- | --- | --- | --- | --- | |
|
| [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 60.97 | 59.98 | 83.31 | 64.16 | 42.15 | 78.37 | 37.83 | |
|
| [perlthoughts/Chupacabra-7B-v2](https://huggingface.co/perlthoughts/Chupacabra-7B-v2) | 63.54 | 66.47 | 85.17 | 64.49 | 57.6 | 79.16 | 28.35 | |
|
| [fblgit/una-cybertron-7b-v1-fp16](https://huggingface.co/fblgit/una-cybertron-7b-v1-fp16) | **64.60** | **68.17** | 85.14 | 62.07 | **63.98** | **80.9** | 27.34 | |
|
| [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16) | **6?.?0** | **68.17** | 85.?4 | 62.07 | **6?.98** | **80.9** | ?0.34 | |
|
|
|
The model excels at mathematics, logic, and reasoning; overall, it is very capable.
|
|
|
## Model Details |
|
|
|
Trained with the UNA (Uniform Neural Alignment) technique (paper coming soon).
|
|
|
### Model Description |
|
|
|
- **Developed by:** [juanako.ai](https://juanako.ai) |
|
- **Author:** [Xavier M.]([email protected]) |
|
- **Model type:** MistralAI 7B |
|
- **Funded by:** Cybertron's H100s
|
|
|
### Prompt |
|
The model works well with almost any prompt, but the ChatML format and the Alpaca system format give the best results; a minimal usage sketch follows the prompt examples below.
|
``` |
|
<|im_start|>system |
|
- You are a helpful assistant chatbot trained by MosaicML. |
|
- You answer questions. |
|
- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user. |
|
- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|> |
|
<|im_start|>user |
|
Explain QKV<|im_end|> |
|
<|im_start|>assistant |
|
``` |
|
``` |
|
### Assistant: I am StableVicuna, a large language model created by CarperAI. I am here to chat! |
|
|
|
### Human: Explain QKV |
|
### Assistant: |
|
``` |
|
``` |
|
[Round <|round|>] |
|
问:Explain QKV |
|
答: |
|
``` |
|
``` |
|
[Round <|round|>] |
|
Question:Explain QKV |
|
Answer: |
|
``` |
|
``` |
|
Question:Explain QKV |
|
Answer: |
|
``` |
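
As a quick orientation, here is a minimal, hedged sketch of prompting the model with the ChatML format above via Transformers. It assumes the repository's tokenizer ships a ChatML chat template; if it does not, build the `<|im_start|>`/`<|im_end|>` prompt string manually as in the first example.

```
# Minimal usage sketch (not an official snippet): load the model in bfloat16 and
# prompt it with ChatML. Assumes the tokenizer provides a ChatML chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fblgit/una-cybertron-7b-v2-bf16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "Explain QKV"},
]

# Render the conversation with the tokenizer's chat template and generate a reply.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```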
|
|
|
## Evaluation (UNA-Cybertron-7B-v1-fp16) |
|
``` |
|
| Tasks |Version|Shots | Metric |Value | |Stderr| |
|
|--------------|-------|------|--------|-----:|---|-----:|
|
|arc_challenge | | 25 |acc_norm|0.6817|± |0.0136| |
|
|truthfulqa_mc2| | 0 |acc |0.6398|± |0.0151| |
|
|hellaswag | | 10 |acc_norm|0.8492|± |0.0036| |
|
|winogrande | | 0 |acc |0.809 |± |0.011 | |
|
|gsm8k | | 5 |acc |0.2733|± |0.0137| |
|
|mmlu | | 5 |acc |0.6207|± |0.1230| |
|
| |average| |acc |**0.6456**| | |
|
|
|
| Groups |Version|Filter|n-shot|Metric|Value | |Stderr| |
|
|------------------|-------|------|-----:|------|-----:|---|-----:| |
|
|mmlu              |N/A    |none  |     0|acc   |0.6207|±  |0.1230|

| - humanities     |N/A    |none  |     5|acc   |0.5675|±  |0.1125|

| - other          |N/A    |none  |     5|acc   |0.6933|±  |0.1108|

| - social_sciences|N/A    |none  |     5|acc   |0.7270|±  |0.0666|

| - stem           |N/A    |none  |     5|acc   |0.5249|±  |0.1311|
|
``` |
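
The table above follows the lm-evaluation-harness output layout. As a hedged sketch (the exact harness version and settings behind these numbers are not stated here), one benchmark can be re-run with EleutherAI's lm-evaluation-harness roughly as follows; shot counts per task are listed in the tables above.

```
# Hedged reproduction sketch using EleutherAI's lm-evaluation-harness (pip install lm-eval).
# Task name, dtype, and batch size here are assumptions; the shot count follows the table above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=fblgit/una-cybertron-7b-v1-fp16,dtype=bfloat16",
    tasks=["arc_challenge"],  # 25-shot ARC, as reported above
    num_fewshot=25,
    batch_size=8,
)

# Per-task metrics (acc_norm for ARC); the exact key layout varies across harness versions.
print(results["results"]["arc_challenge"])
```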
|
|
|
### Framework versions |
|
|
|
- Transformers 4.35.0-UNA |
|
- Pytorch 2.1.0 |
|
- Datasets 2.14.6 |
|
- Tokenizers 0.14.1 |
|
|
|
### Citations |
|
If you find Cybertron, Juanako, or any of our models useful, especially if you use them for your big brand, please cite:
|
``` |
|
@misc{unacybertron7a,
  title = {Cybertron: Uniform Neural Alignment},
  author = {Xavier Murias},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/fblgit/una-cybertron-7b-v1}},
}
|
``` |