sakshirokhade committed on
Commit 71123ce
1 Parent(s): 784e2f6

sakshirokhade/shawgpt-ft

README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 1.7458
+ - Loss: 1.7028
 
 ## Model description
 
@@ -51,16 +51,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
- | 4.5916 | 0.9231 | 3 | 3.9616 |
- | 4.0451 | 1.8462 | 6 | 3.4409 |
- | 3.4718 | 2.7692 | 9 | 2.9871 |
- | 2.2557 | 4.0 | 13 | 2.5523 |
- | 2.6548 | 4.9231 | 16 | 2.2841 |
- | 2.3019 | 5.8462 | 19 | 2.0751 |
- | 2.0414 | 6.7692 | 22 | 1.9058 |
- | 1.4308 | 8.0 | 26 | 1.7913 |
- | 1.8194 | 8.9231 | 29 | 1.7505 |
- | 1.2703 | 9.2308 | 30 | 1.7458 |
+ | 4.5895 | 0.9231 | 3 | 3.9505 |
+ | 4.0243 | 1.8462 | 6 | 3.4089 |
+ | 3.4342 | 2.7692 | 9 | 2.9450 |
+ | 2.2216 | 4.0 | 13 | 2.5108 |
+ | 2.607 | 4.9231 | 16 | 2.2443 |
+ | 2.2645 | 5.8462 | 19 | 2.0377 |
+ | 1.9991 | 6.7692 | 22 | 1.8645 |
+ | 1.3964 | 8.0 | 26 | 1.7485 |
+ | 1.7734 | 8.9231 | 29 | 1.7078 |
+ | 1.2377 | 9.2308 | 30 | 1.7028 |
 
 
 ### Framework versions
@@ -68,5 +68,5 @@ The following hyperparameters were used during training:
 - PEFT 0.12.0
 - Transformers 4.42.4
 - Pytorch 2.3.1+cu121
- - Datasets 2.20.0
+ - Datasets 2.21.0
 - Tokenizers 0.19.1
runs/Aug26_11-39-18_1a19b9af824a/events.out.tfevents.1724672360.1a19b9af824a.252.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:86c0c4c5fc1296638ba208a978a873c7cbb8b7daa9b40dd072f4014281e39359
+ size 10558
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:3bf4fb985e977789f19bc89c7fc1398d98d1e37eea671e8c282d449168c61aa1
+ oid sha256:7fcbeef3ec9d2e36b43c427219a1bc946c892041ed325fdfe4e676662c4e627a
 size 5112
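
The README updated above describes a PEFT adapter trained on top of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ. Below is a minimal loading sketch, assuming the adapter weights in sakshirokhade/shawgpt-ft attach to that GPTQ base via peft's `PeftModel` API and that transformers, peft, accelerate, and a GPTQ backend (e.g. auto-gptq/optimum) are installed; the prompt text and generation settings are illustrative, not taken from this repo.

```python
# Sketch only: load the GPTQ base model named in the README and attach the
# fine-tuned adapter from this repository (sakshirokhade/shawgpt-ft).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"  # base model from the README
ADAPTER_ID = "sakshirokhade/shawgpt-ft"             # this repository

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()

# Illustrative prompt in the Mistral-Instruct format; the README does not
# document the prompt template used during fine-tuning.
prompt = "[INST] Summarize what a LoRA adapter does. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```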