ColleenMacklin committed on
Commit 474b56f
1 Parent(s): 1cc80a1

gpt-neo-125m-finetuned-philosopher_rave_20

README.md ADDED
@@ -0,0 +1,76 @@
+ ---
+ license: mit
+ base_model: EleutherAI/gpt-neo-125m
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: gpt-neo-125m-finetuned-philosopher_rave_20
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # gpt-neo-125m-finetuned-philosopher_rave_20
+
+ This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 2.7097
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
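+
+ As a starting point, a minimal inference sketch is below. The repo id is an assumption inferred from the commit author and model name; substitute the actual path if it differs.
+
+ ```python
+ # Minimal inference sketch. The repo id is an assumption inferred from the
+ # commit author and model name, not confirmed by this card.
+ from transformers import pipeline
+
+ generator = pipeline(
+     "text-generation",
+     model="ColleenMacklin/gpt-neo-125m-finetuned-philosopher_rave_20",
+ )
+ out = generator("What is the good life?", max_new_tokens=50, do_sample=True)
+ print(out[0]["generated_text"])
+ ```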
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training; a sketch of the corresponding `TrainingArguments` follows this list:
+ - learning_rate: 3e-07
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 20.0
+
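+ The hyperparameters above map onto `TrainingArguments` roughly as sketched below; the output directory is a placeholder, and per-epoch evaluation is inferred from the results table rather than stated in the card.
+
+ ```python
+ # Rough reconstruction of the training setup from the hyperparameters listed
+ # above. output_dir is a placeholder; evaluation_strategy="epoch" is inferred
+ # from the per-epoch validation losses in the results table.
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="gpt-neo-125m-finetuned-philosopher_rave_20",
+     learning_rate=3e-07,
+     per_device_train_batch_size=8,
+     per_device_eval_batch_size=8,
+     seed=42,
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-08,
+     lr_scheduler_type="linear",
+     num_train_epochs=20.0,
+     evaluation_strategy="epoch",
+ )
+ ```
+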
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | No log | 1.0 | 155 | 2.8834 |
+ | No log | 2.0 | 310 | 2.8606 |
+ | No log | 3.0 | 465 | 2.8407 |
+ | 2.8695 | 4.0 | 620 | 2.8228 |
+ | 2.8695 | 5.0 | 775 | 2.8063 |
+ | 2.8695 | 6.0 | 930 | 2.7911 |
+ | 2.8122 | 7.0 | 1085 | 2.7772 |
+ | 2.8122 | 8.0 | 1240 | 2.7650 |
+ | 2.8122 | 9.0 | 1395 | 2.7544 |
+ | 2.7613 | 10.0 | 1550 | 2.7454 |
+ | 2.7613 | 11.0 | 1705 | 2.7378 |
+ | 2.7613 | 12.0 | 1860 | 2.7313 |
+ | 2.7397 | 13.0 | 2015 | 2.7258 |
+ | 2.7397 | 14.0 | 2170 | 2.7211 |
+ | 2.7397 | 15.0 | 2325 | 2.7173 |
+ | 2.7397 | 16.0 | 2480 | 2.7143 |
+ | 2.7214 | 17.0 | 2635 | 2.7121 |
+ | 2.7214 | 18.0 | 2790 | 2.7106 |
+ | 2.7214 | 19.0 | 2945 | 2.7098 |
+ | 2.7104 | 20.0 | 3100 | 2.7097 |
+
+ ### Framework versions
+
+ - Transformers 4.39.3
+ - Pytorch 2.2.1+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 50256,
+   "eos_token_id": 50256,
+   "transformers_version": "4.39.3"
+ }
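
This config pins the BOS/EOS token ids to 50256 (the GPT-2 tokenizer's `<|endoftext|>`), which `generate()` picks up by default. A minimal sketch of loading it explicitly, again assuming the repo id inferred above:

```python
# Sketch: load the generation config shipped alongside the model.
# The repo id is an assumption inferred from the commit author and model name.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained(
    "ColleenMacklin/gpt-neo-125m-finetuned-philosopher_rave_20"
)
print(gen_config.bos_token_id, gen_config.eos_token_id)  # 50256 50256
```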
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:82d75935072e711e667c35e3ab883d8e98f43f94ebdb57ac16bd8499133a6bdc
+ oid sha256:581de4257d66e19ade2169c1f2908c45d21dbfb8997837f069e3e70eb50b9cb4
  size 500811336
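
The `oid` in a Git LFS pointer is the SHA-256 of the actual file contents, so this change swaps in new weights of identical size (500811336 bytes). A sketch of verifying a downloaded copy against the new pointer (the local path is a placeholder):

```python
# Sketch: check a downloaded LFS object against the pointer's sha256 oid.
# "model.safetensors" is a placeholder for the local download path.
import hashlib

expected = "581de4257d66e19ade2169c1f2908c45d21dbfb8997837f069e3e70eb50b9cb4"
digest = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)
assert digest.hexdigest() == expected, "checksum mismatch"
```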
runs/Apr04_15-35-51_590c786a3313/events.out.tfevents.1712244984.590c786a3313.4137.2 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a56276e0d57f239f1c0aaa225cb9c8bb04b9099d0d35a314e177e88f7726d2c6
- size 11551
+ oid sha256:9f875ff8bf7fd5d97dc1ee90a3610564280838043f8913c2f07a44622a7ce0b6
+ size 12176
runs/Apr04_15-35-51_590c786a3313/events.out.tfevents.1712245882.590c786a3313.4137.3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da4d602315a63cf90126329bcf4dfb068b4bd8b83413ef8d962484622ecb6362
+ size 359
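
The `events.out.tfevents.*` files are TensorBoard logs written during training. A sketch of reading the logged scalars back, assuming the run directory has been downloaded locally and TensorBoard is installed (the scalar tag names are guesses; check `Tags()` first):

```python
# Sketch: read Trainer-logged scalars from the TensorBoard event files.
# The local run directory is a placeholder; tag names vary by Trainer version.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Apr04_15-35-51_590c786a3313")
acc.Reload()
print(acc.Tags()["scalars"])  # list the available scalar tags
for event in acc.Scalars("eval/loss"):
    print(event.step, event.value)
```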