First fine-tuned model, trained from https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1. The code used to create it is here: https://colab.research.google.com/drive/1Wsi7q1sBJlXrVZAbxhMRZuKnhSFeU9mu?usp=sharing
Parameters used for fine-tuning:

```python
model_params = {
    "project_name": project_name,
    "model_name": model_name,
    "repo_id": username + "/" + repo_name,
    "block_size": block_size,
    "model_max_length": max_token_length,
    "logging_steps": -1,
    "evaluation_strategy": "epoch",
    "save_total_limit": 1,
    "save_strategy": "epoch",
    "mixed_precision": "fp16",
    "lr": 0.00003,
    "epochs": 3,
    "batch_size": 1,
    "warmup_ratio": 0.1,
    "gradient_accumulation": 1,
    "optimizer": "adamw_torch",
    "scheduler": "linear",
    "weight_decay": 0,
    "max_grad_norm": 1,
    "seed": 42,
    "quantization": "int4",
    "lora_r": 16,
    "lora_alpha": 32,
    "lora_dropout": 0.05,
}
```
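For reference, here is a minimal inference sketch using the `transformers` library. It assumes the repository `ai-aerospace/Mistral-7B-Instruct-v0.1_asm_60e4dc58` can be loaded directly with `from_pretrained` (i.e. it contains merged weights or an adapter that `transformers`/`peft` can resolve); the prompt, dtype, and device placement are illustrative only.

```python
# Minimal usage sketch (assumption: the fine-tuned weights load directly with
# transformers; if only a LoRA adapter was pushed, wrap the base model with
# peft.PeftModel.from_pretrained instead).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai-aerospace/Mistral-7B-Instruct-v0.1_asm_60e4dc58"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the fp16 mixed-precision training setting
    device_map="auto",          # requires the accelerate package
)

# Mistral-Instruct style prompt format
prompt = "[INST] Summarize the purpose of this model. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```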
Model tree for ai-aerospace/Mistral-7B-Instruct-v0.1_asm_60e4dc58:
- Base model: mistralai/Mistral-7B-v0.1