# GREEN-Phi2
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.0781
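The card itself ships no usage code; below is a minimal loading sketch, assuming the repository holds a standard `transformers`-compatible checkpoint (Phi architectures are natively supported from Transformers 4.37 onward, so `trust_remote_code` should not be needed). The repo id is taken from the model tree at the end of this card; the prompt is a placeholder.

```python
# Minimal loading sketch; repo id taken from the model tree below.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "StanfordAIMI/GREEN-Phi2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Example input text", return_tensors="pt")  # placeholder prompt
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```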
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a minimal `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 32
- total_train_batch_size: 2048
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 12.0
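As referenced above, here is a minimal sketch of how these values map onto Hugging Face `TrainingArguments`. The output directory is a placeholder, and the 8-GPU distributed setup is assumed to come from the launcher (e.g. `torchrun`); neither is stated in the original card.

```python
# Sketch only: restates the listed hyperparameters as TrainingArguments.
# Multi-GPU (8 devices) would come from the launch command, e.g.
# `torchrun --nproc_per_node=8 train.py`.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="green-phi2-finetune",  # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,    # 8 GPUs x 8 x 32 = 2048 effective batch
    num_train_epochs=12.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    seed=42,
)
```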
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2903        | 0.64  | 25   | 0.1770          |
| 0.1566        | 1.28  | 50   | 0.1319          |
| 0.1379        | 1.92  | 75   | 0.1253          |
| 0.1246        | 2.56  | 100  | 0.1165          |
| 0.1159        | 3.2   | 125  | 0.1049          |
| 0.1048        | 3.84  | 150  | 0.0998          |
| 0.0947        | 4.48  | 175  | 0.0949          |
| 0.0872        | 5.12  | 200  | 0.0906          |
| 0.0836        | 5.76  | 225  | 0.0890          |
| 0.0774        | 6.39  | 250  | 0.0850          |
| 0.0717        | 7.03  | 275  | 0.0827          |
| 0.0639        | 7.67  | 300  | 0.0807          |
| 0.0596        | 8.31  | 325  | 0.0789          |
| 0.0555        | 8.95  | 350  | 0.0773          |
| 0.0498        | 9.59  | 375  | 0.0777          |
| 0.0491        | 10.23 | 400  | 0.0781          |
| 0.0467        | 10.87 | 425  | 0.0780          |
| 0.0459        | 11.51 | 450  | 0.0781          |
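Two things can be read off the table: 25 optimizer steps cover about 0.64 epochs, i.e. roughly 39 steps per epoch, which at the effective batch size of 2048 implies a training set on the order of 39 × 2048 ≈ 80,000 examples; and validation loss plateaus near 0.078 from step 325 onward while training loss keeps falling, so the reported final loss of 0.0781 sits on that plateau.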
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
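To reproduce this environment, the versions above can be pinned directly, e.g. `pip install transformers==4.38.1 datasets==2.18.0 tokenizers==0.15.2` plus the matching CUDA 12.1 PyTorch build (`pip install torch==2.2.1 --index-url https://download.pytorch.org/whl/cu121`); the index URL is the standard PyTorch wheel index, not something stated in the card.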
### Model tree for StanfordAIMI/GREEN-Phi2

Base model: [microsoft/phi-2](https://huggingface.co/microsoft/phi-2), from which this model was fine-tuned.