imdatta0 committed
Commit
eca7b2f
1 Parent(s): bbcd982

End of training

Files changed (2)
  1. README.md +100 -0
  2. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,100 @@
+ ---
+ base_model: unsloth/mistral-7b-v0.3-bnb-4bit
+ library_name: peft
+ license: apache-2.0
+ tags:
+ - unsloth
+ - generated_from_trainer
+ model-index:
+ - name: mistralai_mistral_7b_v0.3_imdatta0_Magiccoder_evol_10k_defaule
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # mistralai_mistral_7b_v0.3_imdatta0_Magiccoder_evol_10k_defaule
+
+ This model is a fine-tuned version of [unsloth/mistral-7b-v0.3-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-v0.3-bnb-4bit) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.1508
+
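+ This repository holds a PEFT adapter rather than full model weights, so it is loaded on top of the 4-bit base model. The snippet below is a minimal usage sketch: the adapter repo id is inferred from the run name above, and the prompt and generation settings are placeholders rather than values taken from this card.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ base_id = "unsloth/mistral-7b-v0.3-bnb-4bit"
+ # Assumed adapter repo id, inferred from the run name; adjust if the adapter lives elsewhere.
+ adapter_id = "imdatta0/mistralai_mistral_7b_v0.3_imdatta0_Magiccoder_evol_10k_defaule"
+
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ # The base checkpoint is pre-quantized with bitsandbytes, so bitsandbytes must be installed.
+ base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
+ model = PeftModel.from_pretrained(base, adapter_id)
+
+ prompt = "Write a Python function that checks whether a string is a palindrome."
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+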
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a configuration sketch reproducing them follows the list):
+ - learning_rate: 0.0003
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.02
+ - num_epochs: 1
+
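+ The listed values map onto a standard `TrainingArguments` configuration roughly as sketched below. This is a reconstruction for reference only: `output_dir`, the evaluation/logging cadence, and the exact optimizer class are assumptions rather than values recorded in this card.
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Hedged reconstruction of the hyperparameters listed above.
+ args = TrainingArguments(
+     output_dir="outputs",              # assumed, not recorded in this card
+     learning_rate=3e-4,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=16,
+     gradient_accumulation_steps=4,     # 16 x 4 = 64 total train batch size
+     num_train_epochs=1,
+     seed=42,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.02,
+     optim="adamw_torch",               # Adam-style update, betas=(0.9, 0.999), eps=1e-8
+     eval_strategy="steps",             # the results table below evaluates every 4 steps
+     eval_steps=4,
+     logging_steps=4,
+ )
+ ```
+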
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 1.1667 | 0.0261 | 4 | 1.1657 |
+ | 1.168 | 0.0523 | 8 | 1.1853 |
+ | 1.1834 | 0.0784 | 12 | 1.1752 |
+ | 1.0949 | 0.1046 | 16 | 1.1765 |
+ | 1.1669 | 0.1307 | 20 | 1.1847 |
+ | 1.06 | 0.1569 | 24 | 1.1693 |
+ | 1.1873 | 0.1830 | 28 | 1.1557 |
+ | 1.124 | 0.2092 | 32 | 1.1566 |
+ | 1.0828 | 0.2353 | 36 | 1.1538 |
+ | 1.1584 | 0.2614 | 40 | 1.1528 |
+ | 1.1773 | 0.2876 | 44 | 1.1493 |
+ | 1.1151 | 0.3137 | 48 | 1.1615 |
+ | 1.1327 | 0.3399 | 52 | 1.1592 |
+ | 1.094 | 0.3660 | 56 | 1.1487 |
+ | 1.1477 | 0.3922 | 60 | 1.1672 |
+ | 1.156 | 0.4183 | 64 | 1.1475 |
+ | 1.0724 | 0.4444 | 68 | 1.1658 |
+ | 1.0879 | 0.4706 | 72 | 1.1466 |
+ | 1.0652 | 0.4967 | 76 | 1.1522 |
+ | 1.1747 | 0.5229 | 80 | 1.1557 |
+ | 1.0867 | 0.5490 | 84 | 1.1524 |
+ | 1.1416 | 0.5752 | 88 | 1.1699 |
+ | 1.1987 | 0.6013 | 92 | 1.1498 |
+ | 1.1849 | 0.6275 | 96 | 1.1516 |
+ | 1.1133 | 0.6536 | 100 | 1.1447 |
+ | 1.136 | 0.6797 | 104 | 1.1526 |
+ | 1.1579 | 0.7059 | 108 | 1.1694 |
+ | 1.0263 | 0.7320 | 112 | 1.1502 |
+ | 1.093 | 0.7582 | 116 | 1.1325 |
+ | 1.0904 | 0.7843 | 120 | 1.1447 |
+ | 1.1481 | 0.8105 | 124 | 1.1550 |
+ | 1.1437 | 0.8366 | 128 | 1.1556 |
+ | 1.1645 | 0.8627 | 132 | 1.1541 |
+ | 1.0964 | 0.8889 | 136 | 1.1502 |
+ | 1.1825 | 0.9150 | 140 | 1.1487 |
+ | 1.0579 | 0.9412 | 144 | 1.1495 |
+ | 1.0728 | 0.9673 | 148 | 1.1504 |
+ | 1.2134 | 0.9935 | 152 | 1.1508 |
+
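+ For reference, the final validation loss of 1.1508 corresponds to a token-level perplexity of roughly exp(1.1508) ≈ 3.16:
+
+ ```python
+ import math
+
+ # Convert the final cross-entropy validation loss into perplexity.
+ final_eval_loss = 1.1508
+ print(math.exp(final_eval_loss))  # ~3.16
+ ```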
+
+ ### Framework versions
+
+ - PEFT 0.12.0
+ - Transformers 4.44.0
+ - Pytorch 2.4.0+cu121
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:168447604909b64b67db24600926907c2c6ad46891686e76e78b11dcc46ad061
+ oid sha256:dc9ae6ccc62b5d0ad942ede1c7cd070d86b8bf7a7097f97822454d657a935acf
  size 167832240