---
language:
- en
license: mit
---
# Nape-0
Nape is a series of small models that aim to exhibit strong capabilities. The model is still in training; this is a very early preview.
You can load it as follows:
```python
from transformers import LlamaForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("nnpy/Nape-0")
model = LlamaForCausalLM.from_pretrained("nnpy/Nape-0")
```
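The card does not specify a runtime setup; as a minimal sketch, you can optionally move the model to a GPU and use half precision to reduce memory (the device and dtype choices below are assumptions, not part of the card):
```python
import torch

# Assumption: use a CUDA GPU if one is available, otherwise stay on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device=device, dtype=torch.float16 if device == "cuda" else torch.float32)
```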
## Training
It took 1 day to train for 3 epochs on 4x A6000 GPUs using native DeepSpeed.
The model was trained with the following prompt format:
```
assistant role: You are Semica, a helpful AI assistant.
user: {prompt}
assistant:
```
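As an illustration, a prompt can be assembled in this format and passed to `model.generate`; the example question and decoding parameters below are assumptions, not documented by the card:
```python
# Build a prompt following the training format above.
prompt = (
    "assistant role: You are Semica, a helpful AI assistant.\n"
    "user: What is the capital of France?\n"
    "assistant:"
)

# Tokenize, generate, and decode (greedy decoding chosen for illustration).
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```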
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nnpy__Nape-0).
| Metric               | Value |
|----------------------|-------|
| Avg.                 | 30.93 |
| ARC (25-shot)        | 32.68 |
| HellaSwag (10-shot)  | 58.68 |
| MMLU (5-shot)        | 24.88 |
| TruthfulQA (0-shot)  | 38.99 |
| Winogrande (5-shot)  | 57.30 |
| GSM8K (5-shot)       | 0.08  |
| DROP (3-shot)        | 3.89  |