
# esm2_t12_35M-lora-binding-sites_2024-04-25_14-35-31

This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.3589
- Accuracy: 0.8457
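
Because this repository ships a PEFT LoRA adapter rather than full model weights, it must be loaded on top of the base checkpoint. Below is a minimal loading sketch, assuming the adapter was trained for per-residue (token) classification of binding sites; the two-label head is an assumption based on the model name, not something the card confirms:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

base_id = "facebook/esm2_t12_35M_UR50D"
adapter_id = "wcvz/esm2_t12_35M-lora-binding-sites_2024-04-25_14-35-31"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels=2 (binding vs. non-binding) is an assumption based on the model name.
base_model = AutoModelForTokenClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # example protein sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# One prediction per position, including the <cls> and <eos> special tokens.
predictions = logits.argmax(dim=-1).squeeze().tolist()
```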

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding training setup follows this list):

- learning_rate: 0.0005701568055793089
- train_batch_size: 64
- eval_batch_size: 64
- seed: 8893
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 30
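
For reproduction, these settings map onto `transformers.TrainingArguments` roughly as below. This is a sketch, not the author's actual script; the LoRA settings (`r`, `lora_alpha`, `target_modules`) are illustrative assumptions, since the card does not record them:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import TrainingArguments

# Values below are copied from this card. Adam betas=(0.9, 0.999) and
# epsilon=1e-08 are the TrainingArguments defaults, so they need no override.
training_args = TrainingArguments(
    output_dir="esm2_t12_35M-lora-binding-sites",
    learning_rate=0.0005701568055793089,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=8893,
    lr_scheduler_type="cosine",
    num_train_epochs=30,
)

# Hypothetical LoRA configuration: rank, alpha, and target modules are NOT
# recorded in this card; the module names match ESM-2's attention layers.
lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=8,
    lora_alpha=16,
    target_modules=["query", "key", "value"],
)
# peft_model = get_peft_model(base_model, lora_config)  # base_model as loaded above
```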

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6703        | 1.0   | 24   | 0.6807          | 0.5820   |
| 0.6449        | 2.0   | 48   | 0.6703          | 0.5820   |
| 0.6659        | 3.0   | 72   | 0.6458          | 0.5977   |
| 0.6432        | 4.0   | 96   | 0.6612          | 0.6328   |
| 0.6322        | 5.0   | 120  | 0.6051          | 0.6523   |
| 0.6176        | 6.0   | 144  | 0.6062          | 0.6504   |
| 0.4904        | 7.0   | 168  | 0.5762          | 0.6777   |
| 0.4426        | 8.0   | 192  | 0.5784          | 0.6953   |
| 0.6014        | 9.0   | 216  | 0.5497          | 0.7148   |
| 0.4484        | 10.0  | 240  | 0.5399          | 0.7227   |
| 0.552         | 11.0  | 264  | 0.5142          | 0.7480   |
| 0.3581        | 12.0  | 288  | 0.4395          | 0.7930   |
| 0.3604        | 13.0  | 312  | 0.4201          | 0.8066   |
| 0.2733        | 14.0  | 336  | 0.4107          | 0.8262   |
| 0.2539        | 15.0  | 360  | 0.4373          | 0.8008   |
| 0.3538        | 16.0  | 384  | 0.3954          | 0.8301   |
| 0.4363        | 17.0  | 408  | 0.3852          | 0.8320   |
| 0.3433        | 18.0  | 432  | 0.3735          | 0.8418   |
| 0.2758        | 19.0  | 456  | 0.3685          | 0.8438   |
| 0.2073        | 20.0  | 480  | 0.3860          | 0.8262   |
| 0.3578        | 21.0  | 504  | 0.3689          | 0.8301   |
| 0.3114        | 22.0  | 528  | 0.3626          | 0.8418   |
| 0.3296        | 23.0  | 552  | 0.3621          | 0.8438   |
| 0.276         | 24.0  | 576  | 0.3602          | 0.8457   |
| 0.2583        | 25.0  | 600  | 0.3622          | 0.8457   |
| 0.1917        | 26.0  | 624  | 0.3597          | 0.8477   |
| 0.3588        | 27.0  | 648  | 0.3603          | 0.8477   |
| 0.219         | 28.0  | 672  | 0.3606          | 0.8438   |
| 0.3091        | 29.0  | 696  | 0.3586          | 0.8457   |
| 0.2235        | 30.0  | 720  | 0.3589          | 0.8457   |
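
The accuracy column is consistent with a standard token-classification accuracy computed via `Trainer`'s `compute_metrics` hook; a minimal sketch of how such a metric is commonly written, assuming padded and special-token positions are labeled -100 (an assumption, not stated in the card):

```python
import numpy as np

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    mask = labels != -100  # skip padding/special-token positions
    accuracy = (predictions[mask] == labels[mask]).mean()
    return {"accuracy": accuracy}
```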

### Framework versions

- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.16.1
- Tokenizers 0.15.2