pretrain
README.md
CHANGED
@@ -44,7 +44,7 @@ tags:
 - reason
 ---
 
-# tangled-alpha-0.
+# tangled-alpha-0.8-core
 
 
 
@@ -63,89 +63,7 @@ CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable
 ```
 
 ```
-Seed set to 23
-Time to instantiate model: 0.31 seconds.
-Total parameters: 201,359,872
-Verifying settings ...
-Measured TFLOPs: 7072.06
-Epoch 1 | iter 256 step 1 | loss train: 11.961, val: n/a | iter time: 406.23 ms (step) remaining time: 3 days, 13:55:33
-Epoch 1 | iter 512 step 2 | loss train: 11.953, val: n/a | iter time: 358.84 ms (step) remaining time: 3 days, 0:49:32
-Epoch 1 | iter 768 step 3 | loss train: 11.943, val: n/a | iter time: 357.16 ms (step) remaining time: 2 days, 20:38:36
-Epoch 1 | iter 1024 step 4 | loss train: 11.907, val: n/a | iter time: 355.69 ms (step) remaining time: 2 days, 18:31:54
-Epoch 1 | iter 1280 step 5 | loss train: 11.854, val: n/a | iter time: 358.32 ms (step) remaining time: 2 days, 17:13:13
-Epoch 1 | iter 1536 step 6 | loss train: 11.789, val: n/a | iter time: 355.59 ms (step) remaining time: 2 days, 16:18:25
-Epoch 1 | iter 1792 step 7 | loss train: 11.703, val: n/a | iter time: 354.88 ms (step) remaining time: 2 days, 15:37:56
-Epoch 1 | iter 2048 step 8 | loss train: 11.586, val: n/a | iter time: 354.07 ms (step) remaining time: 2 days, 15:06:45
-Epoch 1 | iter 2304 step 9 | loss train: 11.451, val: n/a | iter time: 352.89 ms (step) remaining time: 2 days, 14:41:54
-Epoch 1 | iter 2560 step 10 | loss train: 11.347, val: n/a | iter time: 355.58 ms (step) remaining time: 2 days, 14:21:38
-Epoch 1 | iter 2816 step 11 | loss train: 11.271, val: n/a | iter time: 351.01 ms (step) remaining time: 2 days, 14:04:43
-Epoch 1 | iter 3072 step 12 | loss train: 11.194, val: n/a | iter time: 351.91 ms (step) remaining time: 2 days, 13:50:26
-Epoch 1 | iter 3328 step 13 | loss train: 11.151, val: n/a | iter time: 353.02 ms (step) remaining time: 2 days, 13:38:04
-Epoch 1 | iter 3584 step 14 | loss train: 11.097, val: n/a | iter time: 353.75 ms (step) remaining time: 2 days, 13:27:21
-Epoch 1 | iter 3840 step 15 | loss train: 11.064, val: n/a | iter time: 358.31 ms (step) remaining time: 2 days, 13:17:48
-Epoch 1 | iter 4096 step 16 | loss train: 11.008, val: n/a | iter time: 351.95 ms (step) remaining time: 2 days, 13:09:17
-Epoch 1 | iter 4352 step 17 | loss train: 10.997, val: n/a | iter time: 352.26 ms (step) remaining time: 2 days, 13:01:35
-Epoch 1 | iter 4608 step 18 | loss train: 10.951, val: n/a | iter time: 352.57 ms (step) remaining time: 2 days, 12:54:35
-Epoch 1 | iter 4864 step 19 | loss train: 10.902, val: n/a | iter time: 354.73 ms (step) remaining time: 2 days, 12:48:13
-Epoch 1 | iter 5120 step 20 | loss train: 10.877, val: n/a | iter time: 354.47 ms (step) remaining time: 2 days, 12:43:19
-Epoch 1 | iter 5376 step 21 | loss train: 10.830, val: n/a | iter time: 353.78 ms (step) remaining time: 2 days, 12:37:49
-Epoch 1 | iter 5632 step 22 | loss train: 10.809, val: n/a | iter time: 355.03 ms (step) remaining time: 2 days, 12:32:44
-Epoch 1 | iter 5888 step 23 | loss train: 10.727, val: n/a | iter time: 351.49 ms (step) remaining time: 2 days, 12:27:56
-Epoch 1 | iter 6144 step 24 | loss train: 10.707, val: n/a | iter time: 351.58 ms (step) remaining time: 2 days, 12:23:24
-Epoch 1 | iter 6400 step 25 | loss train: 10.643, val: n/a | iter time: 350.84 ms (step) remaining time: 2 days, 12:19:10
-Epoch 1 | iter 6656 step 26 | loss train: 10.649, val: n/a | iter time: 355.14 ms (step) remaining time: 2 days, 12:15:07
-Epoch 1 | iter 6912 step 27 | loss train: 10.580, val: n/a | iter time: 352.60 ms (step) remaining time: 2 days, 12:11:12
-Epoch 1 | iter 7168 step 28 | loss train: 10.554, val: n/a | iter time: 351.57 ms (step) remaining time: 2 days, 12:07:27
-Epoch 1 | iter 7424 step 29 | loss train: 10.526, val: n/a | iter time: 350.36 ms (step) remaining time: 2 days, 12:03:55
-Epoch 1 | iter 7680 step 30 | loss train: 10.496, val: n/a | iter time: 353.19 ms (step) remaining time: 2 days, 12:00:34
-Epoch 1 | iter 7936 step 31 | loss train: 10.496, val: n/a | iter time: 350.95 ms (step) remaining time: 2 days, 11:57:21
-Epoch 1 | iter 8192 step 32 | loss train: 10.421, val: n/a | iter time: 352.71 ms (step) remaining time: 2 days, 11:54:18
-Epoch 1 | iter 8448 step 33 | loss train: 10.379, val: n/a | iter time: 354.15 ms (step) remaining time: 2 days, 11:51:21
-Epoch 1 | iter 8704 step 34 | loss train: 10.343, val: n/a | iter time: 353.95 ms (step) remaining time: 2 days, 11:48:29
-Epoch 1 | iter 8960 step 35 | loss train: 10.353, val: n/a | iter time: 351.04 ms (step) remaining time: 2 days, 11:45:44
-Epoch 1 | iter 9216 step 36 | loss train: 10.323, val: n/a | iter time: 354.76 ms (step) remaining time: 2 days, 11:43:05
-Epoch 1 | iter 9472 step 37 | loss train: 10.258, val: n/a | iter time: 353.18 ms (step) remaining time: 2 days, 11:40:29
-Epoch 1 | iter 9728 step 38 | loss train: 10.260, val: n/a | iter time: 353.86 ms (step) remaining time: 2 days, 11:37:57
-Epoch 1 | iter 9984 step 39 | loss train: 10.257, val: n/a | iter time: 356.14 ms (step) remaining time: 2 days, 11:35:50
-Epoch 1 | iter 10240 step 40 | loss train: 10.179, val: n/a | iter time: 353.73 ms (step) remaining time: 2 days, 11:33:23
-Epoch 1 | iter 10496 step 41 | loss train: 10.163, val: n/a | iter time: 350.49 ms (step) remaining time: 2 days, 11:30:59
-Epoch 1 | iter 10752 step 42 | loss train: 10.156, val: n/a | iter time: 354.15 ms (step) remaining time: 2 days, 11:28:40
-Epoch 1 | iter 11008 step 43 | loss train: 10.150, val: n/a | iter time: 350.99 ms (step) remaining time: 2 days, 11:26:24
-Epoch 1 | iter 11264 step 44 | loss train: 10.089, val: n/a | iter time: 354.28 ms (step) remaining time: 2 days, 11:24:09
-Epoch 1 | iter 11520 step 45 | loss train: 10.096, val: n/a | iter time: 352.46 ms (step) remaining time: 2 days, 11:21:56
-Epoch 1 | iter 11776 step 46 | loss train: 10.021, val: n/a | iter time: 356.80 ms (step) remaining time: 2 days, 11:19:45
-Epoch 1 | iter 12032 step 47 | loss train: 10.002, val: n/a | iter time: 355.30 ms (step) remaining time: 2 days, 11:17:36
-Epoch 1 | iter 12288 step 48 | loss train: 10.021, val: n/a | iter time: 355.12 ms (step) remaining time: 2 days, 11:15:32
-Epoch 1 | iter 12544 step 49 | loss train: 10.017, val: n/a | iter time: 353.81 ms (step) remaining time: 2 days, 11:13:29
-Epoch 1 | iter 12800 step 50 | loss train: 9.966, val: n/a | iter time: 354.70 ms (step) remaining time: 2 days, 11:11:26
 # ...
-Epoch 1 | iter 640256 step 2501 | loss train: 2.875, val: 2.786 | iter time: 348.10 ms (step) remaining time: 0:19:39
-Epoch 1 | iter 640512 step 2502 | loss train: 2.885, val: 2.786 | iter time: 349.50 ms (step) remaining time: 0:18:15
-Epoch 1 | iter 640768 step 2503 | loss train: 2.857, val: 2.786 | iter time: 347.05 ms (step) remaining time: 0:16:52
-Epoch 1 | iter 641024 step 2504 | loss train: 2.925, val: 2.786 | iter time: 347.38 ms (step) remaining time: 0:15:28
-Epoch 1 | iter 641280 step 2505 | loss train: 2.882, val: 2.786 | iter time: 346.76 ms (step) remaining time: 0:14:04
-Epoch 1 | iter 641536 step 2506 | loss train: 2.875, val: 2.786 | iter time: 348.08 ms (step) remaining time: 0:12:40
-Epoch 1 | iter 641792 step 2507 | loss train: 2.979, val: 2.786 | iter time: 349.34 ms (step) remaining time: 0:11:16
-Epoch 1 | iter 642048 step 2508 | loss train: 2.971, val: 2.786 | iter time: 348.34 ms (step) remaining time: 0:09:52
-Epoch 1 | iter 642304 step 2509 | loss train: 2.991, val: 2.786 | iter time: 347.89 ms (step) remaining time: 0:08:28
-Epoch 1 | iter 642560 step 2510 | loss train: 2.999, val: 2.786 | iter time: 349.61 ms (step) remaining time: 0:07:05
-Epoch 1 | iter 642816 step 2511 | loss train: 3.013, val: 2.786 | iter time: 349.54 ms (step) remaining time: 0:05:41
-Epoch 1 | iter 643072 step 2512 | loss train: 2.923, val: 2.786 | iter time: 348.39 ms (step) remaining time: 0:04:17
-Epoch 1 | iter 643328 step 2513 | loss train: 2.986, val: 2.786 | iter time: 347.26 ms (step) remaining time: 0:02:53
-Epoch 1 | iter 643584 step 2514 | loss train: 2.939, val: 2.786 | iter time: 348.31 ms (step) remaining time: 0:01:29
-Epoch 2 | iter 643840 step 2515 | loss train: 2.835, val: 2.786 | iter time: 349.08 ms (step) remaining time: 0:00:05
-Validating ...
-Final evaluation | val loss: 2.786 | val ppl: 16.208
-Saving checkpoint to '../out/pretrain-core/final/lit_model.pth'
-----------------------------------------
-| Performance
-| - Total tokens  : 5,274,484,736
-| - Training Time : 210925.61 s
-| - Tok/sec       : 8533.19 tok/s
-| ----------------------------------------
-| Memory Usage
-| - Memory Used : 20.44 GB
-----------------------------------------
 ```
 
 Backup `wandb`:
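As a sanity check on the log above, the reported validation perplexity follows directly from the validation loss, since perplexity is the exponential of the mean cross-entropy. A minimal sketch (the small mismatch comes from the loss being printed rounded to three decimals; the vocabulary-size estimate is an inference from the initial loss, not a figure stated in the log):

```python
import math

# Final evaluation line: "val loss: 2.786 | val ppl: 16.208"
val_loss = 2.786
val_ppl = math.exp(val_loss)  # perplexity = exp(cross-entropy loss)
print(f"ppl from rounded loss: {val_ppl:.3f}")  # close to the reported 16.208

# The first logged train loss (~11.961) is near ln(vocab_size) for a
# freshly initialized model predicting uniformly over the vocabulary,
# which would imply a vocabulary on the order of exp(11.961) tokens
# (an estimate, not a value from the log).
print(f"implied vocab size: {math.exp(11.961):,.0f}")
```
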