---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_keras_callback
model-index:
- name: kr-manish/distilgpt2-finetuned-rawHrPolicy
  results: []
---
# kr-manish/distilgpt2-finetuned-rawHrPolicy

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2). The training dataset is not documented; the repository name suggests raw HR policy text, but this is unconfirmed.
It achieves the following results during training (no separate evaluation metrics were logged):
- Train Loss: 4.0134
- Epoch: 0
## Model description

This checkpoint is DistilGPT2, a distilled version of GPT-2, fine-tuned with Keras/TensorFlow for causal language modeling (next-token prediction).
## Intended uses & limitations

Suitable for experimentation with domain-adapted text generation. The model was trained for a single epoch with a final train loss of 4.01, so outputs are likely to be generic; like all GPT-2 variants, it can produce incorrect or biased text and should not be relied on for actual policy guidance. A minimal inference sketch follows.
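The snippet below is a minimal inference sketch, not part of the original card. It assumes the repository hosts TensorFlow weights (plausible, since the model was trained with Keras) and uses the standard `transformers` generation API; the prompt is purely illustrative.

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

model_id = "kr-manish/distilgpt2-finetuned-rawHrPolicy"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForCausalLM.from_pretrained(model_id)

prompt = "Employees are entitled to"  # illustrative prompt, not from the card
inputs = tokenizer(prompt, return_tensors="tf")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```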
## Training and evaluation data

Not documented. No evaluation split is reported; the only logged metric is the training loss above.
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged reconstruction in code follows the list):
- optimizer: AdamWeightDecay (learning_rate: 2e-05, decay: 0.0, beta_1: 0.9, beta_2: 0.999, epsilon: 1e-07, amsgrad: False, weight_decay_rate: 0.01)
- training_precision: float32
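This sketch rebuilds the optimizer from the values listed above; the original training script is not included in the card, so the `model.compile` call and the base checkpoint are assumptions.

```python
from transformers import TFAutoModelForCausalLM, AdamWeightDecay

model = TFAutoModelForCausalLM.from_pretrained("distilgpt2")

# Exact values from the hyperparameter list; decay=0.0 implies a constant
# learning rate (no schedule was recorded).
optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
# transformers Keras models fall back to their built-in LM loss when
# compiled without an explicit loss.
model.compile(optimizer=optimizer)
```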
### Training results

| Train Loss | Epoch |
|:----------:|:-----:|
| 4.0134     | 0     |
### Framework versions

- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1