---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
model-index:
- name: Fox-1-1.6B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 27.66
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tensoropera/Fox-1-1.6B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 7.4
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tensoropera/Fox-1-1.6B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 1.28
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tensoropera/Fox-1-1.6B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 1.79
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tensoropera/Fox-1-1.6B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 3.87
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tensoropera/Fox-1-1.6B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 4.13
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tensoropera/Fox-1-1.6B
      name: Open LLM Leaderboard
---

## Model Card for Fox-1-1.6B

> [!IMPORTANT]
> This is a base pretrained model that requires further fine-tuning for most use cases.
> For a more interactive experience, we recommend
> [tensoropera/Fox-1-1.6B-Instruct-v0.1](https://huggingface.co/tensoropera/Fox-1-1.6B-Instruct-v0.1),
> the instruction-tuned version of Fox-1.

Fox-1 is a decoder-only transformer-based small language model (SLM) with 1.6B total parameters, developed
by [TensorOpera AI](https://tensoropera.ai/). The model was trained with a 3-stage data curriculum on 3 trillion
tokens of text and code data at an 8K sequence length. Fox-1 uses Grouped Query Attention (GQA) with 4 key-value
heads and 16 attention heads for faster inference.
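
As a quick sanity check, the attention layout described above can be read from the hosted model configuration. This is a minimal sketch that assumes the checkpoint exposes the usual Llama-style config fields; the expected values in the comments come from the description above.

```python
# Minimal sketch: inspect the attention layout from the model config.
# Assumes Llama-style field names (num_attention_heads, num_key_value_heads,
# max_position_embeddings); adjust if the actual config differs.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("tensoropera/Fox-1-1.6B")
print("attention heads:", config.num_attention_heads)    # expected: 16
print("key-value heads:", config.num_key_value_heads)    # expected: 4 (GQA)
print("max positions:", config.max_position_embeddings)  # expected: 8192 (8K context)
```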

For full details of this model, please read the [Fox-1 technical report](https://arxiv.org/abs/2411.05281)
and the [release blog post](https://blog.tensoropera.ai/tensoropera-unveils-fox-foundation-model-a-pioneering-open-source-slm-leading-the-way-against-tech-giants).

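A minimal generation sketch with 🤗 Transformers is shown below. The dtype, device placement, and sampling parameters are illustrative assumptions rather than an official recipe; since this is a base model, prompt it with plain text to continue, not with chat-style instructions.

```python
# Minimal sketch: load the base model and sample a continuation.
# dtype/device/sampling settings are illustrative, not an official recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tensoropera/Fox-1-1.6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed; use torch.float32 on CPU
    device_map="auto",
)

# Base model: give it plain text to continue, not chat-style instructions.
prompt = "Grouped Query Attention speeds up inference because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
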
## Benchmarks

We evaluated Fox-1 on ARC Challenge (25-shot), HellaSwag (10-shot), TruthfulQA (0-shot), MMLU (5-shot),
Winogrande (5-shot), and GSM8k (5-shot). We follow the Open LLM Leaderboard's evaluation setup and report the average
score of the 6 benchmarks. The model was evaluated on a machine with 8×H100 GPUs.

|               | Fox-1-1.6B | Qwen-1.5-1.8B | Gemma-2B | StableLM-2-1.6B | OpenELM-1.1B |
|---------------|------------|---------------|----------|-----------------|--------------|
| GSM8k         | 36.39%     | 34.04%        | 17.06%   | 17.74%          | 2.27%        |
| MMLU          | 43.05%     | 47.15%        | 41.71%   | 39.16%          | 27.28%       |
| ARC Challenge | 41.21%     | 37.20%        | 49.23%   | 44.11%          | 36.26%       |
| HellaSwag     | 62.82%     | 61.55%        | 71.60%   | 70.46%          | 65.23%       |
| TruthfulQA    | 38.66%     | 39.37%        | 33.05%   | 38.77%          | 36.98%       |
| Winogrande    | 60.62%     | 65.51%        | 65.51%   | 65.27%          | 61.64%       |
| Average       | 47.13%     | 46.81%        | 46.36%   | 45.92%          | 38.28%       |

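For reference, the setup above can be approximated with EleutherAI's lm-evaluation-harness. The sketch below uses the few-shot counts listed above with recent (v0.4-style) task names; the exact harness version, dtype, and batch size behind the reported numbers are assumptions, so scores may not match to the decimal.

```python
# Hedged sketch: approximate the benchmark runs with lm-evaluation-harness.
# Task names follow lm-eval v0.4; few-shot counts match the setup above.
import lm_eval

FEW_SHOT = {
    "arc_challenge": 25,
    "hellaswag": 10,
    "truthfulqa_mc2": 0,
    "mmlu": 5,
    "winogrande": 5,
    "gsm8k": 5,
}

for task, k in FEW_SHOT.items():
    # Run each task separately so it gets its own few-shot count.
    out = lm_eval.simple_evaluate(
        model="hf",
        model_args="pretrained=tensoropera/Fox-1-1.6B,dtype=bfloat16",
        tasks=[task],
        num_fewshot=k,
        batch_size=8,  # assumed; tune for your GPU memory
    )
    print(task, out["results"])
```
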
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_tensoropera__Fox-1-1.6B).

| Metric              | Value |
|---------------------|------:|
| Avg.                |  7.69 |
| IFEval (0-Shot)     | 27.66 |
| BBH (3-Shot)        |  7.40 |
| MATH Lvl 5 (4-Shot) |  1.28 |
| GPQA (0-shot)       |  1.79 |
| MuSR (0-shot)       |  3.87 |
| MMLU-PRO (5-shot)   |  4.13 |