
my_finetuned_pho-gpt4b_model_kc_March15th

This model is a fine-tuned version of vinai/PhoGPT-4B-Chat on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: nan
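
A NaN evaluation loss means the run diverged at some point; no epoch in this run produced a finite validation loss. Because NaN propagates through all subsequent arithmetic and compares unequal to itself, it is worth detecting explicitly. A minimal stdlib-only sketch (illustrative; not taken from this card's training script):

```python
import math

# NaN compares unequal to itself, so naive equality checks miss it.
loss = float("nan")
print(loss == loss)      # → False
print(math.isnan(loss))  # → True

# A hypothetical guard one might add inside a training loop to fail fast
# instead of silently logging "nan" for 100 epochs:
def check_finite(loss_value: float, step: int) -> None:
    if not math.isfinite(loss_value):
        raise FloatingPointError(
            f"non-finite loss {loss_value!r} at step {step}"
        )
```

Common causes worth checking here are a learning rate too high for the quantized/loaded precision, an overflow in fp16, or bad samples in the dataset.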

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 4e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100
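
The hyperparameters above can be expressed as keyword arguments for `transformers.TrainingArguments`. This is a hedged reconstruction using the standard Trainer API argument names, not the original training script:

```python
# Hypothetical sketch: the hyperparameters from this card mapped onto
# transformers.TrainingArguments keyword names. Only the values listed
# on the card are included; everything else would use Trainer defaults.
training_kwargs = dict(
    learning_rate=4e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,          # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=100,
)

# Usage (assuming transformers is installed):
#   args = TrainingArguments(output_dir="out", **training_kwargs)
```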

Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log        | 1.0   | 465   | nan             |
| 0.0           | 2.0   | 930   | nan             |
| 0.0           | 3.0   | 1395  | nan             |
| 0.0           | 4.0   | 1860  | nan             |
| 0.0           | 5.0   | 2325  | nan             |
| 0.0           | 6.0   | 2790  | nan             |
| 0.0           | 7.0   | 3255  | nan             |
| 0.0           | 8.0   | 3720  | nan             |
| 0.0           | 9.0   | 4185  | nan             |
| 0.0           | 10.0  | 4650  | nan             |
| 0.0           | 11.0  | 5115  | nan             |
| 0.0           | 12.0  | 5580  | nan             |
| 0.0           | 13.0  | 6045  | nan             |
| 0.0           | 14.0  | 6510  | nan             |
| 0.0           | 15.0  | 6975  | nan             |
| 0.0           | 16.0  | 7440  | nan             |
| 0.0           | 17.0  | 7905  | nan             |
| 0.0           | 18.0  | 8370  | nan             |
| 0.0           | 19.0  | 8835  | nan             |
| 0.0           | 20.0  | 9300  | nan             |
| 0.0           | 21.0  | 9765  | nan             |
| 0.0           | 22.0  | 10230 | nan             |
| 0.0           | 23.0  | 10695 | nan             |
| 0.0           | 24.0  | 11160 | nan             |
| 0.0           | 25.0  | 11625 | nan             |
| 0.0           | 26.0  | 12090 | nan             |
| 0.0           | 27.0  | 12555 | nan             |
| 0.0           | 28.0  | 13020 | nan             |
| 0.0           | 29.0  | 13485 | nan             |
| 0.0           | 30.0  | 13950 | nan             |
| 0.0           | 31.0  | 14415 | nan             |
| 0.0           | 32.0  | 14880 | nan             |
| 0.0           | 33.0  | 15345 | nan             |
| 0.0           | 34.0  | 15810 | nan             |
| 0.0           | 35.0  | 16275 | nan             |
| 0.0           | 36.0  | 16740 | nan             |
| 0.0           | 37.0  | 17205 | nan             |
| 0.0           | 38.0  | 17670 | nan             |
| 0.0           | 39.0  | 18135 | nan             |
| 0.0           | 40.0  | 18600 | nan             |
| 0.0           | 41.0  | 19065 | nan             |
| 0.0           | 42.0  | 19530 | nan             |
| 0.0           | 43.0  | 19995 | nan             |
| 0.0           | 44.0  | 20460 | nan             |
| 0.0           | 45.0  | 20925 | nan             |
| 0.0           | 46.0  | 21390 | nan             |
| 0.0           | 47.0  | 21855 | nan             |
| 0.0           | 48.0  | 22320 | nan             |
| 0.0           | 49.0  | 22785 | nan             |
| 0.0           | 50.0  | 23250 | nan             |
| 0.0           | 51.0  | 23715 | nan             |
| 0.0           | 52.0  | 24180 | nan             |
| 0.0           | 53.0  | 24645 | nan             |
| 0.0           | 54.0  | 25110 | nan             |
| 0.0           | 55.0  | 25575 | nan             |
| 0.0           | 56.0  | 26040 | nan             |
| 0.0           | 57.0  | 26505 | nan             |
| 0.0           | 58.0  | 26970 | nan             |
| 0.0           | 59.0  | 27435 | nan             |
| 0.0           | 60.0  | 27900 | nan             |
| 0.0           | 61.0  | 28365 | nan             |
| 0.0           | 62.0  | 28830 | nan             |
| 0.0           | 63.0  | 29295 | nan             |
| 0.0           | 64.0  | 29760 | nan             |
| 0.0           | 65.0  | 30225 | nan             |
| 0.0           | 66.0  | 30690 | nan             |
| 0.0           | 67.0  | 31155 | nan             |
| 0.0           | 68.0  | 31620 | nan             |
| 0.0           | 69.0  | 32085 | nan             |
| 0.0           | 70.0  | 32550 | nan             |
| 0.0           | 71.0  | 33015 | nan             |
| 0.0           | 72.0  | 33480 | nan             |
| 0.0           | 73.0  | 33945 | nan             |
| 0.0           | 74.0  | 34410 | nan             |
| 0.0           | 75.0  | 34875 | nan             |
| 0.0           | 76.0  | 35340 | nan             |
| 0.0           | 77.0  | 35805 | nan             |
| 0.0           | 78.0  | 36270 | nan             |
| 0.0           | 79.0  | 36735 | nan             |
| 0.0           | 80.0  | 37200 | nan             |
| 0.0           | 81.0  | 37665 | nan             |
| 0.0           | 82.0  | 38130 | nan             |
| 0.0           | 83.0  | 38595 | nan             |
| 0.0           | 84.0  | 39060 | nan             |
| 0.0           | 85.0  | 39525 | nan             |
| 0.0           | 86.0  | 39990 | nan             |
| 0.0           | 87.0  | 40455 | nan             |
| 0.0           | 88.0  | 40920 | nan             |
| 0.0           | 89.0  | 41385 | nan             |
| 0.0           | 90.0  | 41850 | nan             |
| 0.0           | 91.0  | 42315 | nan             |
| 0.0           | 92.0  | 42780 | nan             |
| 0.0           | 93.0  | 43245 | nan             |
| 0.0           | 94.0  | 43710 | nan             |
| 0.0           | 95.0  | 44175 | nan             |
| 0.0           | 96.0  | 44640 | nan             |
| 0.0           | 97.0  | 45105 | nan             |
| 0.0           | 98.0  | 45570 | nan             |
| 0.0           | 99.0  | 46035 | nan             |
| 0.0           | 100.0 | 46500 | nan             |

Framework versions

  • PEFT 0.8.2
  • Transformers 4.37.2
  • PyTorch 2.0.1+cu117
  • Datasets 2.17.0
  • Tokenizers 0.15.2
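
Since the framework versions list PEFT, this repository appears to contain a LoRA-style adapter rather than a full model, so it is loaded on top of the base checkpoint. A minimal usage sketch, assuming `peft` and `transformers` are installed; the adapter repo id is taken from this card's title, and `trust_remote_code=True` is assumed because PhoGPT ships custom model code:

```python
BASE_MODEL = "vinai/PhoGPT-4B-Chat"
ADAPTER_ID = "Kudod/my_finetuned_pho-gpt4b_model_kc_March15th"

def load_model():
    # Deferred imports so the sketch can be read without the heavyweight
    # dependencies installed; calling this function downloads the base
    # model and the adapter weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, trust_remote_code=True)
    model = PeftModel.from_pretrained(base, ADAPTER_ID)
    return tokenizer, model
```

Given the NaN evaluation loss reported above, outputs from this adapter should be validated carefully before any use.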