# for_test13
This model is a fine-tuned version of [team-lucid/hubert-base-korean](https://huggingface.co/team-lucid/hubert-base-korean) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 9.0800
- PER: 0.8663
- Learning rate (final): 0.0000
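Since the card reports a PER (phoneme error rate) metric, the checkpoint most likely carries a CTC head. Below is a minimal inference sketch under that assumption; the `HubertForCTC`/`Wav2Vec2Processor` pairing and the 16 kHz sampling rate are assumptions, not confirmed by this card.

```python
import numpy as np
import torch
from transformers import HubertForCTC, Wav2Vec2Processor

# Hedged sketch: assumes a CTC head and a Wav2Vec2-style processor,
# the common setup for HuBERT fine-tunes evaluated with PER.
processor = Wav2Vec2Processor.from_pretrained("ppparkker/for_test13")
model = HubertForCTC.from_pretrained("ppparkker/for_test13")

# Placeholder input: one second of silence, assumed 16 kHz mono audio.
speech = np.zeros(16000, dtype=np.float32)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```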
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 10
- mixed_precision_training: Native AMP
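For reproduction, here is a minimal sketch of how the settings above might be expressed as `transformers.TrainingArguments`; the output directory and the use of `fp16=True` to realize Native AMP are assumptions, not taken from the card.

```python
from transformers import TrainingArguments

# Hedged sketch: maps the hyperparameters listed above onto TrainingArguments.
# output_dir is a hypothetical name; fp16=True approximates "Native AMP".
training_args = TrainingArguments(
    output_dir="for_test13",        # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 32 x 2 = 64 total train batch size
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=20,
    num_train_epochs=10,
    fp16=True,                      # Native AMP mixed-precision training
)
```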
### Training results
| Training Loss | Epoch  | Step | Validation Loss | PER    | Learning Rate |
|:-------------:|:------:|:----:|:---------------:|:------:|:-------------:|
| 9.8433        | 1.0417 | 50   | 10.8927         | 1.9730 | 0.0001        |
| 6.2994        | 2.0833 | 100  | 10.1801         | 1.4889 | 0.0001        |
| 5.6979        | 3.125  | 150  | 9.8748          | 1.1627 | 0.0001        |
| 5.5696        | 4.1667 | 200  | 9.6279          | 0.9856 | 0.0001        |
| 5.5354        | 5.2083 | 250  | 9.4447          | 0.9282 | 0.0001        |
| 5.3749        | 6.25   | 300  | 9.3013          | 0.8952 | 4e-05         |
| 5.6517        | 7.2917 | 350  | 9.1784          | 0.8771 | 0.0000        |
| 5.1293        | 8.3333 | 400  | 9.0618          | 0.8661 | 0.0000        |
| 5.5912        | 9.375  | 450  | 9.0800          | 0.8663 | 0.0000        |
### Framework versions

- Transformers 4.46.2
- PyTorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3