xshubhamx committed on
Commit
ef29d7d
1 Parent(s): 9cc1c64

End of training

Files changed (2)
  1. README.md +89 -0
  2. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,89 @@
---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: tiny-llama-lora-no-grad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# tiny-llama-lora-no-grad

This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4401
- Accuracy: 0.8226
- Precision: 0.8178
- Recall: 0.8226
- Precision Macro: 0.7396
- Recall Macro: 0.7117
- Macro Fpr: 0.0159
- Weighted Fpr: 0.0152
- Weighted Specificity: 0.9752
- Macro Specificity: 0.9865
- Weighted Sensitivity: 0.8226
- Macro Sensitivity: 0.7117
- F1 Micro: 0.8226
- F1 Macro: 0.7177
- F1 Weighted: 0.8190

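The mix of plain, macro, and micro/weighted averages above can be reproduced with scikit-learn. This is a hedged sketch using made-up label/prediction arrays (the card's actual evaluation data is unknown); the unqualified "Precision"/"Recall" entries appear to be weighted averages, since the reported Recall equals Accuracy, which holds for weighted recall.

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Placeholder gold labels and predictions -- purely illustrative.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]

accuracy = accuracy_score(y_true, y_pred)
# Weighted averages weight each class by its support in y_true;
# macro averages treat every class equally; micro pools all samples.
precision_weighted = precision_score(y_true, y_pred, average="weighted")
recall_macro = recall_score(y_true, y_pred, average="macro")
f1_micro = f1_score(y_true, y_pred, average="micro")
f1_macro = f1_score(y_true, y_pred, average="macro")
f1_weighted = f1_score(y_true, y_pred, average="weighted")
```

For single-label classification, micro-averaged F1 always equals plain accuracy, which is why the card's F1 Micro matches its Accuracy exactly.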
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Precision Macro | Recall Macro | Macro Fpr | Weighted Fpr | Weighted Specificity | Macro Specificity | Weighted Sensitivity | Macro Sensitivity | F1 Micro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:---------------:|:------------:|:---------:|:------------:|:--------------------:|:-----------------:|:--------------------:|:-----------------:|:--------:|:--------:|:-----------:|
| 1.1276        | 1.0   | 643  | 0.6705          | 0.8087   | 0.8055    | 0.8087 | 0.7053          | 0.6853       | 0.0172    | 0.0166       | 0.9742               | 0.9855            | 0.8087               | 0.6853            | 0.8087   | 0.6806   | 0.8034      |
| 0.503         | 2.0   | 1286 | 0.7206          | 0.8164   | 0.8231    | 0.8164 | 0.7746          | 0.7641       | 0.0163    | 0.0158       | 0.9773               | 0.9862            | 0.8164               | 0.7641            | 0.8164   | 0.7610   | 0.8154      |
| 0.3617        | 3.0   | 1929 | 0.8819          | 0.8164   | 0.8137    | 0.8164 | 0.7499          | 0.7170       | 0.0164    | 0.0158       | 0.9752               | 0.9861            | 0.8164               | 0.7170            | 0.8164   | 0.7242   | 0.8124      |
| 0.0618        | 4.0   | 2572 | 1.1434          | 0.8087   | 0.8107    | 0.8087 | 0.7673          | 0.7293       | 0.0173    | 0.0166       | 0.9727               | 0.9854            | 0.8087               | 0.7293            | 0.8087   | 0.7401   | 0.8074      |
| 0.0243        | 5.0   | 3215 | 1.2966          | 0.8110   | 0.8112    | 0.8110 | 0.7489          | 0.7164       | 0.0171    | 0.0164       | 0.9754               | 0.9858            | 0.8110               | 0.7164            | 0.8110   | 0.7228   | 0.8086      |
| 0.0121        | 6.0   | 3858 | 1.2965          | 0.8195   | 0.8175    | 0.8195 | 0.7312          | 0.7077       | 0.0162    | 0.0155       | 0.9752               | 0.9863            | 0.8195               | 0.7077            | 0.8195   | 0.7143   | 0.8170      |
| 0.0021        | 7.0   | 4501 | 1.3710          | 0.8187   | 0.8168    | 0.8187 | 0.7519          | 0.7112       | 0.0162    | 0.0156       | 0.9756               | 0.9863            | 0.8187               | 0.7112            | 0.8187   | 0.7165   | 0.8152      |
| 0.003         | 8.0   | 5144 | 1.3348          | 0.8203   | 0.8171    | 0.8203 | 0.7417          | 0.7073       | 0.0162    | 0.0154       | 0.9749               | 0.9863            | 0.8203               | 0.7073            | 0.8203   | 0.7159   | 0.8173      |
| 0.0023        | 9.0   | 5787 | 1.4038          | 0.8187   | 0.8149    | 0.8187 | 0.7548          | 0.7030       | 0.0163    | 0.0156       | 0.9742               | 0.9862            | 0.8187               | 0.7030            | 0.8187   | 0.7121   | 0.8141      |
| 0.0033        | 10.0  | 6430 | 1.4021          | 0.8203   | 0.8151    | 0.8203 | 0.7330          | 0.7110       | 0.0162    | 0.0154       | 0.9746               | 0.9863            | 0.8203               | 0.7110            | 0.8203   | 0.7152   | 0.8163      |
| 0.0017        | 11.0  | 7073 | 1.4001          | 0.8211   | 0.8178    | 0.8211 | 0.7361          | 0.7110       | 0.0160    | 0.0153       | 0.9753               | 0.9864            | 0.8211               | 0.7110            | 0.8211   | 0.7155   | 0.8179      |
| 0.0023        | 12.0  | 7716 | 1.4100          | 0.8226   | 0.8189    | 0.8226 | 0.7386          | 0.7127       | 0.0158    | 0.0152       | 0.9754               | 0.9865            | 0.8226               | 0.7127            | 0.8226   | 0.7177   | 0.8195      |
| 0.0034        | 13.0  | 8359 | 1.4273          | 0.8234   | 0.8192    | 0.8234 | 0.7385          | 0.7115       | 0.0158    | 0.0151       | 0.9757               | 0.9866            | 0.8234               | 0.7115            | 0.8234   | 0.7171   | 0.8201      |
| 0.0016        | 14.0  | 9002 | 1.4322          | 0.8226   | 0.8183    | 0.8226 | 0.7382          | 0.7111       | 0.0159    | 0.0152       | 0.9754               | 0.9865            | 0.8226               | 0.7111            | 0.8226   | 0.7168   | 0.8192      |
| 0.0006        | 15.0  | 9645 | 1.4401          | 0.8226   | 0.8178    | 0.8226 | 0.7396          | 0.7117       | 0.0159    | 0.0152       | 0.9752               | 0.9865            | 0.8226               | 0.7117            | 0.8226   | 0.7177   | 0.8190      |


### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c5b63a48d0625ff9e869aa3a15e7236ea55112e0b016fdc1ee20b887b720cbed
+ oid sha256:9aef3dc49ee00082e199967df4cb2048b34b25187826b182247cce118ce0f14d
  size 50626520
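The `adapter_model.safetensors` entry above is a git-lfs pointer file: three lines naming the spec version, the SHA-256 object id, and the byte size of the real file. As a sketch under that assumption, a minimal parser for this format could look like:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file (version / oid / size lines) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        # Each line is "<key> <value>"; split on the first space only.
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])  # size is the byte count of the real file
    return fields

# The new pointer contents from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:9aef3dc49ee00082e199967df4cb2048b34b25187826b182247cce118ce0f14d
size 50626520
"""
info = parse_lfs_pointer(pointer)
```

Note that only the `oid` changed in this commit while `size` stayed at 50626520 bytes: retraining rewrote the adapter weights, but the tensor shapes (and hence the file size) are unchanged.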