
flan-t5-base-ARv1-ARv2

This model is a fine-tuned version of G-R-A-V-I-T-Y/flan-t5-base-ARv1 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7981
  • Exact Match: 10.0
  • Gen Len: 4.0
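
The card leaves the intended use unspecified, but the checkpoint is a standard seq2seq model, so it loads with the usual transformers API. Below is a minimal inference sketch; the prompt text and generation settings are assumptions, since the task and data are not described on this card.

```python
# Minimal inference sketch for this checkpoint (assumed usage; the expected
# prompt format is not documented on the card).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "G-R-A-V-I-T-Y/flan-t5-base-ARv1-ARv2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("your task input here", return_tensors="pt")
# Gen Len averaged ~4 tokens on the eval set, so short outputs are expected.
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```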

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
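
These settings map directly onto the Hugging Face Trainer API that auto-generates cards like this one. A hedged sketch of the equivalent Seq2SeqTrainingArguments follows; the actual training script is not published, so output_dir and the surrounding setup are assumptions.

```python
# Approximate reconstruction of the listed hyperparameters as Trainer
# arguments. Anything not listed on the card (e.g. output_dir) is assumed.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-ARv1-ARv2",  # assumed name
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    # Adam settings below match the card and are also the Trainer defaults.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```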

Training results

| Training Loss | Epoch | Step | Validation Loss | Exact Match | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| No log        | 1.0   | 7    | 0.7981          | 10.0        | 4.0     |
| No log        | 2.0   | 14   | 0.7982          | 10.0        | 4.0     |
| No log        | 3.0   | 21   | 0.7982          | 10.0        | 4.0     |
| No log        | 4.0   | 28   | 0.7982          | 10.0        | 4.0     |
| No log        | 5.0   | 35   | 0.7982          | 10.0        | 4.0     |
| No log        | 6.0   | 42   | 0.7982          | 10.0        | 4.0     |
| No log        | 7.0   | 49   | 0.7982          | 10.0        | 4.0     |
| No log        | 8.0   | 56   | 0.7982          | 10.0        | 4.0     |
| No log        | 9.0   | 63   | 0.7982          | 10.0        | 4.0     |
| No log        | 10.0  | 70   | 0.7982          | 10.0        | 4.0     |
| No log        | 11.0  | 77   | 0.7983          | 10.0        | 4.0     |
| No log        | 12.0  | 84   | 0.7983          | 10.0        | 4.0     |
| No log        | 13.0  | 91   | 0.7983          | 10.0        | 4.0     |
| No log        | 14.0  | 98   | 0.7983          | 10.0        | 4.0     |
| No log        | 15.0  | 105  | 0.7984          | 10.0        | 4.0     |
| No log        | 16.0  | 112  | 0.7984          | 10.0        | 4.0     |
| No log        | 17.0  | 119  | 0.7984          | 10.0        | 4.0     |
| No log        | 18.0  | 126  | 0.7984          | 10.0        | 4.0     |
| No log        | 19.0  | 133  | 0.7984          | 10.0        | 4.0     |
| No log        | 20.0  | 140  | 0.7984          | 10.0        | 4.0     |
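
The card does not define how Exact Match is computed. A common implementation for generation tasks, shown here as a hedged sketch, scores the percentage of predictions that match their references verbatim:

```python
# Hedged sketch of the exact-match metric as it is commonly computed for
# text generation; the card's exact implementation is unknown.
def exact_match(predictions, references):
    """Percentage of predictions that equal their reference exactly."""
    assert len(predictions) == len(references)
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return 100.0 * hits / len(references)

# Example: an Exact Match of 10.0 means 1 in 10 outputs matched verbatim.
print(exact_match(["paris", "rome"], ["paris", "london"]))  # 50.0
```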

Framework versions

  • Transformers 4.41.0
  • Pytorch 2.2.1
  • Datasets 2.19.1
  • Tokenizers 0.19.1