readme
README.md CHANGED
@@ -55,8 +55,10 @@ time python -B prepare_core_datasets.py
```
i=0, min_len=0, max_len=1048576, block_size=2049, chunk_size=16392000, len(dataset)=3134311, len(dataset) * block_size=6422203239
Total number of tokens in the optimized dataset '../core-data-0-0-1048576-2049-8000' is 6422203239
+
i=1, min_len=2049, max_len=8193, block_size=8193, chunk_size=16386000, len(dataset)=179944, len(dataset) * block_size=1474281192
Total number of tokens in the optimized dataset '../core-data-1-2049-8193-8193-2000' is 1474281192
+
i=2, min_len=8193, max_len=1048577, block_size=32769, chunk_size=16384500, len(dataset)=48261, len(dataset) * block_size=1581464709
Total number of tokens in the optimized dataset '../core-data-2-8193-1048577-32769-500' is 1581464709
```
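The totals above are internally consistent: each "Total number of tokens" is just `len(dataset) * block_size`, each `chunk_size` is an exact multiple of its `block_size` (8000, 2000 and 500 blocks per chunk), and the block sizes are 2048 + 1, 8192 + 1 and 32768 + 1 (presumably the context length plus one token for the shifted targets). A minimal sketch that re-derives the reported totals and directory names from the logged values; the loop and variable names here are illustrative and not taken from `prepare_core_datasets.py`:

```python
# Sanity check of the dataset-preparation output above.
# All numbers are copied from the log; this is not part of the actual pipeline.
runs = [
    # (i, min_len, max_len, block_size, chunk_size, len(dataset))
    (0, 0,    1048576, 2049,  16392000, 3134311),
    (1, 2049, 8193,    8193,  16386000, 179944),
    (2, 8193, 1048577, 32769, 16384500, 48261),
]

for i, min_len, max_len, block_size, chunk_size, n_blocks in runs:
    total_tokens = n_blocks * block_size          # e.g. 3134311 * 2049 = 6422203239
    blocks_per_chunk = chunk_size // block_size   # 8000, 2000, 500
    # The output directories above appear to follow this naming pattern:
    name = f"../core-data-{i}-{min_len}-{max_len}-{block_size}-{blocks_per_chunk}"
    print(name, total_tokens)
```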
@@ -66,7 +68,55 @@ CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable
```

```
+Seed set to 23
+Time to instantiate model: 0.30 seconds.
+Total parameters: 185,631,232
+Verifying settings ...
+Measured TFLOPs: 14094.64
+Epoch 1 | iter 128 step 1 | loss train: 11.709, val: n/a | iter time: 341.75 ms (step) remaining time: 3 days, 20:04:36
+Epoch 1 | iter 256 step 2 | loss train: 11.716, val: n/a | iter time: 287.55 ms (step) remaining time: 3 days, 3:29:34
+Epoch 1 | iter 384 step 3 | loss train: 11.711, val: n/a | iter time: 290.88 ms (step) remaining time: 2 days, 22:16:53
+Epoch 1 | iter 512 step 4 | loss train: 11.706, val: n/a | iter time: 291.81 ms (step) remaining time: 2 days, 19:34:34
+Epoch 1 | iter 640 step 5 | loss train: 11.696, val: n/a | iter time: 291.37 ms (step) remaining time: 2 days, 17:59:17
+Epoch 1 | iter 768 step 6 | loss train: 11.687, val: n/a | iter time: 290.50 ms (step) remaining time: 2 days, 16:55:49
+Epoch 1 | iter 896 step 7 | loss train: 11.675, val: n/a | iter time: 291.08 ms (step) remaining time: 2 days, 16:10:38
+Epoch 1 | iter 1024 step 8 | loss train: 11.660, val: n/a | iter time: 294.46 ms (step) remaining time: 2 days, 15:36:26
+Epoch 1 | iter 1152 step 9 | loss train: 11.640, val: n/a | iter time: 292.26 ms (step) remaining time: 2 days, 15:09:28
+Epoch 1 | iter 1280 step 10 | loss train: 11.626, val: n/a | iter time: 289.93 ms (step) remaining time: 2 days, 14:47:34
+Epoch 1 | iter 1408 step 11 | loss train: 11.584, val: n/a | iter time: 292.15 ms (step) remaining time: 2 days, 14:29:19
+Epoch 1 | iter 1536 step 12 | loss train: 11.526, val: n/a | iter time: 291.24 ms (step) remaining time: 2 days, 14:13:54
+Epoch 1 | iter 1664 step 13 | loss train: 11.483, val: n/a | iter time: 291.11 ms (step) remaining time: 2 days, 14:00:48
+Epoch 1 | iter 1792 step 14 | loss train: 11.430, val: n/a | iter time: 290.68 ms (step) remaining time: 2 days, 13:49:24
+Epoch 1 | iter 1920 step 15 | loss train: 11.392, val: n/a | iter time: 290.37 ms (step) remaining time: 2 days, 13:39:22
+Epoch 1 | iter 2048 step 16 | loss train: 11.326, val: n/a | iter time: 290.31 ms (step) remaining time: 2 days, 13:30:34
+Epoch 1 | iter 2176 step 17 | loss train: 11.279, val: n/a | iter time: 290.33 ms (step) remaining time: 2 days, 13:22:34
+Epoch 1 | iter 2304 step 18 | loss train: 11.222, val: n/a | iter time: 290.50 ms (step) remaining time: 2 days, 13:15:27
+Epoch 1 | iter 2432 step 19 | loss train: 11.163, val: n/a | iter time: 290.39 ms (step) remaining time: 2 days, 13:09:11
+Epoch 1 | iter 2560 step 20 | loss train: 11.094, val: n/a | iter time: 290.00 ms (step) remaining time: 2 days, 13:03:21
# ...
+Epoch 1 | iter 782592 step 6114 | loss train: 3.080, val: 3.255 | iter time: 288.91 ms (step) remaining time: 0:06:14
+Epoch 1 | iter 782720 step 6115 | loss train: 3.096, val: 3.255 | iter time: 289.11 ms (step) remaining time: 0:05:39
+Epoch 1 | iter 782848 step 6116 | loss train: 2.977, val: 3.255 | iter time: 289.28 ms (step) remaining time: 0:05:04
+Epoch 1 | iter 782976 step 6117 | loss train: 3.040, val: 3.255 | iter time: 289.24 ms (step) remaining time: 0:04:29
+Epoch 1 | iter 783104 step 6118 | loss train: 3.062, val: 3.255 | iter time: 290.49 ms (step) remaining time: 0:03:54
+Epoch 1 | iter 783232 step 6119 | loss train: 3.037, val: 3.255 | iter time: 289.91 ms (step) remaining time: 0:03:19
+Epoch 1 | iter 783360 step 6120 | loss train: 3.028, val: 3.255 | iter time: 289.49 ms (step) remaining time: 0:02:44
+Epoch 1 | iter 783488 step 6121 | loss train: 3.007, val: 3.255 | iter time: 289.81 ms (step) remaining time: 0:02:09
+Epoch 2 | iter 783616 step 6122 | loss train: 3.007, val: 3.255 | iter time: 289.34 ms (step) remaining time: 0:01:34
+Epoch 2 | iter 783744 step 6123 | loss train: 3.046, val: 3.255 | iter time: 288.52 ms (step) remaining time: 0:00:59
+Epoch 2 | iter 783872 step 6124 | loss train: 3.140, val: 3.255 | iter time: 288.66 ms (step) remaining time: 0:00:24
+Validating ...
+Final evaluation | val loss: 3.254 | val ppl: 25.904
+Saving checkpoint to '../out/pretrain-core-0/final/lit_model.pth'
+----------------------------------------
+| Performance
+| - Total tokens : 6,422,200,320
+| - Training Time : 214857.29 s
+| - Tok/sec : 109674.70 tok/s
+| ----------------------------------------
+| Memory Usage
+| - Memory Used : 17.30 GB
+----------------------------------------
```
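The final summary can be cross-checked against the rest of the section: the reported perplexity is just the exponential of the validation loss, and the "Total tokens" figure is within a few thousand tokens of the 6,422,203,239-token `i=0` split prepared above, which suggests this run trains on the `block_size=2049` dataset only. A small sketch of that arithmetic (values copied from the logs; the variable names are mine, not the training script's):

```python
import math

# Values copied from the training log and dataset-preparation output above.
val_loss = 3.254
trained_tokens = 6_422_200_320       # "Total tokens" in the final summary
i0_split_tokens = 3_134_311 * 2049   # i=0 split: 6,422,203,239 tokens

# Perplexity is exp(loss); this reproduces the reported "val ppl: 25.904"
# up to rounding of the printed loss.
print(f"val ppl approx. {math.exp(val_loss):.2f}")

# Difference between the prepared i=0 split and the tokens actually consumed.
print(i0_split_tokens - trained_tokens)  # 2919
```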
Backup `wandb`: