---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: Bit-Llama2-jp-123M
  results: []
---

# Bit-Llama2-jp-123M

This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7091

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch reproducing this configuration appears at the end of this card):
- learning_rate: 0.0005
- train_batch_size: 156
- eval_batch_size: 156
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 19.3793       | 0.04  | 1000  | 5.3113          |
| 5.0921        | 0.08  | 2000  | 4.9641          |
| 4.8154        | 0.12  | 3000  | 4.7104          |
| 4.6664        | 0.16  | 4000  | 4.5876          |
| 4.5545        | 0.2   | 5000  | 4.5258          |
| 4.4743        | 0.24  | 6000  | 4.4283          |
| 4.4061        | 0.28  | 7000  | 4.3539          |
| 4.3117        | 0.32  | 8000  | 4.2735          |
| 4.2433        | 0.36  | 9000  | 4.2243          |
| 4.2037        | 0.4   | 10000 | 4.1739          |
| 4.1576        | 0.44  | 11000 | 4.1266          |
| 4.0925        | 0.48  | 12000 | 4.0624          |
| 4.0615        | 0.52  | 13000 | 4.0433          |
| 4.0151        | 0.56  | 14000 | 3.9993          |
| 3.9721        | 0.6   | 15000 | 3.9721          |
| 3.941         | 0.64  | 16000 | 3.9185          |
| 3.9           | 0.68  | 17000 | 3.8841          |
| 3.8719        | 0.72  | 18000 | 3.8539          |
| 3.8376        | 0.76  | 19000 | 3.8189          |
| 3.8131        | 0.8   | 20000 | 3.7946          |
| 3.7801        | 0.84  | 21000 | 3.7739          |
| 3.7604        | 0.88  | 22000 | 3.7515          |
| 3.7413        | 0.92  | 23000 | 3.7365          |
| 3.7265        | 0.96  | 24000 | 3.7231          |
| 3.7152        | 1.0   | 25000 | 3.7091          |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.2.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
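
### How to use (illustrative)

Since the card does not include a usage snippet, the following is a minimal inference sketch with the standard `transformers` API. The repo id, prompt, and generation settings are assumptions, not part of the card; adjust the repo id to the actual namespace under which the checkpoint is published.

```python
# A minimal inference sketch. The repo id below is an assumption based on the
# card's model name; the prompt and sampling settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Bit-Llama2-jp-123M"  # assumed repo id; replace with the real one

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
model.eval()

prompt = "こんにちは、"  # Japanese prompt, since the model name suggests a Japanese model
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```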
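
### Reproducing the training configuration (illustrative)

For readers who want to reproduce the hyperparameters listed above, here is a hedged sketch of the corresponding `transformers.TrainingArguments`. Only the listed hyperparameters are taken from the card; the output path is a placeholder, and the evaluation cadence is an assumption inferred from the 1000-step intervals in the results table.

```python
# A sketch of the reported configuration via transformers.TrainingArguments.
# Dataset, model, and output paths are placeholders (the card does not name them).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./bit-llama2-jp-123m",  # placeholder output path
    learning_rate=5e-4,                 # 0.0005, as reported
    per_device_train_batch_size=156,
    per_device_eval_batch_size=156,
    seed=42,
    adam_beta1=0.9,                     # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                  # epsilon=1e-08
    lr_scheduler_type="cosine",
    warmup_steps=500,
    num_train_epochs=1,
    evaluation_strategy="steps",        # assumption: eval every 1000 steps,
    eval_steps=1000,                    # consistent with the results table
    logging_steps=1000,
)
```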