---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: Mistral-Noromaid-7B
  results: []
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)

# Mistral-Noromaid-7B

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1514

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `transformers` `TrainingArguments` appears after the framework versions below):
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2103        | 0.0   | 1    | 1.5604          |
| 1.3191        | 0.1   | 192  | 1.2539          |
| 1.1727        | 0.2   | 384  | 1.2346          |
| 1.3466        | 0.3   | 576  | 1.2171          |
| 0.9652        | 0.4   | 768  | 1.2073          |
| 0.996         | 0.5   | 960  | 1.1920          |
| 0.7863        | 0.6   | 1152 | 1.1804          |
| 0.8883        | 0.7   | 1344 | 1.1700          |
| 0.9351        | 0.8   | 1536 | 1.1590          |
| 0.8361        | 0.9   | 1728 | 1.1511          |
| 1.2718        | 1.0   | 1920 | 1.1438          |
| 0.9613        | 1.09  | 2112 | 1.1585          |
| 1.4066        | 1.19  | 2304 | 1.1550          |
| 0.7388        | 1.29  | 2496 | 1.1538          |
| 1.0686        | 1.39  | 2688 | 1.1531          |
| 1.3536        | 1.49  | 2880 | 1.1533          |
| 0.4994        | 1.59  | 3072 | 1.1517          |
| 0.7574        | 1.69  | 3264 | 1.1519          |
| 0.7574        | 1.79  | 3456 | 1.1516          |
| 1.1436        | 1.89  | 3648 | 1.1514          |
| 1.4085        | 1.99  | 3840 | 1.1514          |

### Framework versions

- Transformers 4.37.0.dev0
- PyTorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
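
### Hyperparameters as `TrainingArguments` (sketch)

For anyone re-running a comparable fine-tune outside Axolotl, the hyperparameters listed above can be expressed in `transformers` terms. This is a minimal sketch, not the exact configuration used for this run: the `output_dir` is hypothetical, and any option not listed in the card (gradient accumulation, precision, etc.) is left at its default.

```python
from transformers import TrainingArguments

# Sketch: maps the card's listed hyperparameters onto TrainingArguments.
# Only the values listed in the card are set; everything else is a default.
training_args = TrainingArguments(
    output_dir="mistral-noromaid-7b",  # hypothetical path, not from the original run
    learning_rate=5e-06,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,                    # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=2,
)
```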
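
## How to use

A minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub and loadable with `transformers`. The repo id below is hypothetical (taken from the model name in this card); replace it with the model's actual Hub path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mistral-Noromaid-7B"  # hypothetical repo id; replace with the actual Hub path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # load in the checkpoint's native precision
    device_map="auto",   # requires `accelerate`; spreads layers across available devices
)

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```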