mtasic85 committed
Commit 6f41c4a · 1 Parent(s): 8ded95f

git config
Files changed (3):
  1. .gitattributes +3 -0
  2. .gitignore +171 -0
  3. README.md +166 -0
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
+results.json filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
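The three new rules route images and the large JSON artifacts through Git LFS. A minimal sketch of producing the same lines from the command line, assuming `git-lfs` is installed and initialized for this repo:

```bash
# Each `git lfs track` call appends a matching filter rule to .gitattributes.
git lfs track "*.png"
git lfs track "results.json"
git lfs track "tokenizer.json"

# Stage the updated attributes so the rules apply to every clone.
git add .gitattributes
```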
.gitignore ADDED
@@ -0,0 +1,171 @@
+# ---> Python
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+cover/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+# For a library or package, you might want to ignore these files since the code is
+# intended to run in multiple environments; otherwise, check them in:
+# .python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# poetry
+# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+# This is especially recommended for binary packages to ensure reproducibility, and is more
+# commonly ignored for libraries.
+# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+#poetry.lock
+
+# pdm
+# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+#pdm.lock
+# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+# in version control.
+# https://pdm.fming.dev/#use-with-ide
+.pdm.toml
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/
+
+# PyCharm
+# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+# and can be added to the global gitignore or merged into this file. For a more nuclear
+# option (not recommended) you can uncomment the following to ignore the entire idea folder.
+.idea/
+
+.DS_Store
+.ruff_cache
+venv*/
+wandb*/
+data/
+pretrain-data/
+contrain-data/
+core-data-*/
+out/pretrain-core/step-*/
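To verify which rule catches a given path, `git check-ignore -v` prints the source file, line number, and pattern that matched; the example paths below are hypothetical stand-ins for the project-specific patterns at the end of the file:

```bash
# -v reports file:line:pattern for each matched path.
git check-ignore -v out/pretrain-core/step-00100/lit_model.pth data/sample.txt venv-cu121/
```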
README.md CHANGED
@@ -1,3 +1,169 @@
 ---
 license: mit
+pipeline_tag: text-generation
+library_name: transformers
+language: [
+'en', 'am', 'ar', 'as', 'az', 'be', 'bg', 'bn', 'br', 'bs', 'ca', 'cs', 'cy', 'da', 'de', 'el',
+'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fr', 'fy', 'ga', 'gd', 'gl', 'gn', 'gu', 'ha', 'he',
+'hi', 'hr', 'ht', 'hu', 'hy', 'id', 'ig', 'is', 'it', 'ja', 'jv', 'ka', 'kk', 'km', 'kn', 'ko',
+'ku', 'ky', 'la', 'lg', 'li', 'ln', 'lo', 'lt', 'lv', 'mg', 'mk', 'ml', 'mn', 'mr', 'ms', 'my',
+'ne', 'nl', 'no', 'ns', 'om', 'or', 'pa', 'pl', 'ps', 'pt', 'qu', 'rm', 'ro', 'ru', 'sa', 'si',
+'sc', 'sd', 'sk', 'sl', 'so', 'sq', 'sr', 'ss', 'su', 'sv', 'sw', 'ta', 'te', 'th', 'tl', 'tn',
+'tr', 'ug', 'uk', 'ur', 'uz', 'vi', 'wo', 'xh', 'yi', 'yo', 'zu',
+]
+datasets:
+# core - base
+- ontocord/fineweb-permissive-multilingual-2m
+- distily/c4_multilingual_1M
+- data-silence/sumnews
+- xu-song/cc100-samples
+- badrex/llm-emoji-dataset
+- fblgit/simple-math
+- Gusarich/math-expressions-1m
+- neuralwork/arxiver
+- christopher/rosetta-code
+- nampdn-ai/tiny-codes
+- JeanKaddour/minipile
+# core - instruct
+- NousResearch/hermes-function-calling-v1
+- simplescaling/s1K-1.1
+# base - instruct
+- mlabonne/open-perfectblend
+- allenai/tulu-3-sft-mixture
+- rombodawg/Everything_Instruct_Multilingual
+# base - reason
+- open-r1/OpenR1-Math-220k
+- open-thoughts/OpenThoughts-114k
+- cognitivecomputations/dolphin-r1
+- simplescaling/s1K-1.1
+tags:
+- chat
+- core
+- base
+- instruct
+- reason
 ---
+
+# tangled-alpha-0.6-core
+
+![logo](./misc/logo.jpg)
+
+```bash
+time python -B prepare_core_datasets.py
+```
+
+```
+i=0, min_len=0, max_len=1073741824, block_size=4097, chunk_size=16388000, len(dataset)=1287403, len(dataset) * block_size=5274490091
+Total number of tokens in the optimized dataset '../core-data-0-0-1073741824-4097-4000' is 5274490091
+```
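The reported total is simply `len(dataset) * block_size` from the log line above; a quick sanity check of the arithmetic:

```bash
# 1287403 samples x 4097 tokens per block = 5274490091, as reported.
python -c 'print(1287403 * 4097)'
```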
+
+```bash
+CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model.yaml
+```
+
+```
+Seed set to 23
+Time to instantiate model: 0.31 seconds.
+Total parameters: 201,359,872
+Verifying settings ...
+Measured TFLOPs: 7072.06
+Epoch 1 | iter 256 step 1 | loss train: 11.961, val: n/a | iter time: 406.23 ms (step) remaining time: 3 days, 13:55:33
+Epoch 1 | iter 512 step 2 | loss train: 11.953, val: n/a | iter time: 358.84 ms (step) remaining time: 3 days, 0:49:32
+Epoch 1 | iter 768 step 3 | loss train: 11.943, val: n/a | iter time: 357.16 ms (step) remaining time: 2 days, 20:38:36
+Epoch 1 | iter 1024 step 4 | loss train: 11.907, val: n/a | iter time: 355.69 ms (step) remaining time: 2 days, 18:31:54
+Epoch 1 | iter 1280 step 5 | loss train: 11.854, val: n/a | iter time: 358.32 ms (step) remaining time: 2 days, 17:13:13
+Epoch 1 | iter 1536 step 6 | loss train: 11.789, val: n/a | iter time: 355.59 ms (step) remaining time: 2 days, 16:18:25
+Epoch 1 | iter 1792 step 7 | loss train: 11.703, val: n/a | iter time: 354.88 ms (step) remaining time: 2 days, 15:37:56
+Epoch 1 | iter 2048 step 8 | loss train: 11.586, val: n/a | iter time: 354.07 ms (step) remaining time: 2 days, 15:06:45
+Epoch 1 | iter 2304 step 9 | loss train: 11.451, val: n/a | iter time: 352.89 ms (step) remaining time: 2 days, 14:41:54
+Epoch 1 | iter 2560 step 10 | loss train: 11.347, val: n/a | iter time: 355.58 ms (step) remaining time: 2 days, 14:21:38
+Epoch 1 | iter 2816 step 11 | loss train: 11.271, val: n/a | iter time: 351.01 ms (step) remaining time: 2 days, 14:04:43
+Epoch 1 | iter 3072 step 12 | loss train: 11.194, val: n/a | iter time: 351.91 ms (step) remaining time: 2 days, 13:50:26
+Epoch 1 | iter 3328 step 13 | loss train: 11.151, val: n/a | iter time: 353.02 ms (step) remaining time: 2 days, 13:38:04
+Epoch 1 | iter 3584 step 14 | loss train: 11.097, val: n/a | iter time: 353.75 ms (step) remaining time: 2 days, 13:27:21
+Epoch 1 | iter 3840 step 15 | loss train: 11.064, val: n/a | iter time: 358.31 ms (step) remaining time: 2 days, 13:17:48
+Epoch 1 | iter 4096 step 16 | loss train: 11.008, val: n/a | iter time: 351.95 ms (step) remaining time: 2 days, 13:09:17
+Epoch 1 | iter 4352 step 17 | loss train: 10.997, val: n/a | iter time: 352.26 ms (step) remaining time: 2 days, 13:01:35
+Epoch 1 | iter 4608 step 18 | loss train: 10.951, val: n/a | iter time: 352.57 ms (step) remaining time: 2 days, 12:54:35
+Epoch 1 | iter 4864 step 19 | loss train: 10.902, val: n/a | iter time: 354.73 ms (step) remaining time: 2 days, 12:48:13
+Epoch 1 | iter 5120 step 20 | loss train: 10.877, val: n/a | iter time: 354.47 ms (step) remaining time: 2 days, 12:43:19
+Epoch 1 | iter 5376 step 21 | loss train: 10.830, val: n/a | iter time: 353.78 ms (step) remaining time: 2 days, 12:37:49
+Epoch 1 | iter 5632 step 22 | loss train: 10.809, val: n/a | iter time: 355.03 ms (step) remaining time: 2 days, 12:32:44
+Epoch 1 | iter 5888 step 23 | loss train: 10.727, val: n/a | iter time: 351.49 ms (step) remaining time: 2 days, 12:27:56
+Epoch 1 | iter 6144 step 24 | loss train: 10.707, val: n/a | iter time: 351.58 ms (step) remaining time: 2 days, 12:23:24
+Epoch 1 | iter 6400 step 25 | loss train: 10.643, val: n/a | iter time: 350.84 ms (step) remaining time: 2 days, 12:19:10
+Epoch 1 | iter 6656 step 26 | loss train: 10.649, val: n/a | iter time: 355.14 ms (step) remaining time: 2 days, 12:15:07
+Epoch 1 | iter 6912 step 27 | loss train: 10.580, val: n/a | iter time: 352.60 ms (step) remaining time: 2 days, 12:11:12
+Epoch 1 | iter 7168 step 28 | loss train: 10.554, val: n/a | iter time: 351.57 ms (step) remaining time: 2 days, 12:07:27
+Epoch 1 | iter 7424 step 29 | loss train: 10.526, val: n/a | iter time: 350.36 ms (step) remaining time: 2 days, 12:03:55
+Epoch 1 | iter 7680 step 30 | loss train: 10.496, val: n/a | iter time: 353.19 ms (step) remaining time: 2 days, 12:00:34
+Epoch 1 | iter 7936 step 31 | loss train: 10.496, val: n/a | iter time: 350.95 ms (step) remaining time: 2 days, 11:57:21
+Epoch 1 | iter 8192 step 32 | loss train: 10.421, val: n/a | iter time: 352.71 ms (step) remaining time: 2 days, 11:54:18
+Epoch 1 | iter 8448 step 33 | loss train: 10.379, val: n/a | iter time: 354.15 ms (step) remaining time: 2 days, 11:51:21
+Epoch 1 | iter 8704 step 34 | loss train: 10.343, val: n/a | iter time: 353.95 ms (step) remaining time: 2 days, 11:48:29
+Epoch 1 | iter 8960 step 35 | loss train: 10.353, val: n/a | iter time: 351.04 ms (step) remaining time: 2 days, 11:45:44
+Epoch 1 | iter 9216 step 36 | loss train: 10.323, val: n/a | iter time: 354.76 ms (step) remaining time: 2 days, 11:43:05
+Epoch 1 | iter 9472 step 37 | loss train: 10.258, val: n/a | iter time: 353.18 ms (step) remaining time: 2 days, 11:40:29
+Epoch 1 | iter 9728 step 38 | loss train: 10.260, val: n/a | iter time: 353.86 ms (step) remaining time: 2 days, 11:37:57
+Epoch 1 | iter 9984 step 39 | loss train: 10.257, val: n/a | iter time: 356.14 ms (step) remaining time: 2 days, 11:35:50
+Epoch 1 | iter 10240 step 40 | loss train: 10.179, val: n/a | iter time: 353.73 ms (step) remaining time: 2 days, 11:33:23
+Epoch 1 | iter 10496 step 41 | loss train: 10.163, val: n/a | iter time: 350.49 ms (step) remaining time: 2 days, 11:30:59
+Epoch 1 | iter 10752 step 42 | loss train: 10.156, val: n/a | iter time: 354.15 ms (step) remaining time: 2 days, 11:28:40
+Epoch 1 | iter 11008 step 43 | loss train: 10.150, val: n/a | iter time: 350.99 ms (step) remaining time: 2 days, 11:26:24
+Epoch 1 | iter 11264 step 44 | loss train: 10.089, val: n/a | iter time: 354.28 ms (step) remaining time: 2 days, 11:24:09
+Epoch 1 | iter 11520 step 45 | loss train: 10.096, val: n/a | iter time: 352.46 ms (step) remaining time: 2 days, 11:21:56
+Epoch 1 | iter 11776 step 46 | loss train: 10.021, val: n/a | iter time: 356.80 ms (step) remaining time: 2 days, 11:19:45
+Epoch 1 | iter 12032 step 47 | loss train: 10.002, val: n/a | iter time: 355.30 ms (step) remaining time: 2 days, 11:17:36
+Epoch 1 | iter 12288 step 48 | loss train: 10.021, val: n/a | iter time: 355.12 ms (step) remaining time: 2 days, 11:15:32
+Epoch 1 | iter 12544 step 49 | loss train: 10.017, val: n/a | iter time: 353.81 ms (step) remaining time: 2 days, 11:13:29
+Epoch 1 | iter 12800 step 50 | loss train: 9.966, val: n/a | iter time: 354.70 ms (step) remaining time: 2 days, 11:11:26
+# ...
+Epoch 1 | iter 640256 step 2501 | loss train: 2.875, val: 2.786 | iter time: 348.10 ms (step) remaining time: 0:19:39
+Epoch 1 | iter 640512 step 2502 | loss train: 2.885, val: 2.786 | iter time: 349.50 ms (step) remaining time: 0:18:15
+Epoch 1 | iter 640768 step 2503 | loss train: 2.857, val: 2.786 | iter time: 347.05 ms (step) remaining time: 0:16:52
+Epoch 1 | iter 641024 step 2504 | loss train: 2.925, val: 2.786 | iter time: 347.38 ms (step) remaining time: 0:15:28
+Epoch 1 | iter 641280 step 2505 | loss train: 2.882, val: 2.786 | iter time: 346.76 ms (step) remaining time: 0:14:04
+Epoch 1 | iter 641536 step 2506 | loss train: 2.875, val: 2.786 | iter time: 348.08 ms (step) remaining time: 0:12:40
+Epoch 1 | iter 641792 step 2507 | loss train: 2.979, val: 2.786 | iter time: 349.34 ms (step) remaining time: 0:11:16
+Epoch 1 | iter 642048 step 2508 | loss train: 2.971, val: 2.786 | iter time: 348.34 ms (step) remaining time: 0:09:52
+Epoch 1 | iter 642304 step 2509 | loss train: 2.991, val: 2.786 | iter time: 347.89 ms (step) remaining time: 0:08:28
+Epoch 1 | iter 642560 step 2510 | loss train: 2.999, val: 2.786 | iter time: 349.61 ms (step) remaining time: 0:07:05
+Epoch 1 | iter 642816 step 2511 | loss train: 3.013, val: 2.786 | iter time: 349.54 ms (step) remaining time: 0:05:41
+Epoch 1 | iter 643072 step 2512 | loss train: 2.923, val: 2.786 | iter time: 348.39 ms (step) remaining time: 0:04:17
+Epoch 1 | iter 643328 step 2513 | loss train: 2.986, val: 2.786 | iter time: 347.26 ms (step) remaining time: 0:02:53
+Epoch 1 | iter 643584 step 2514 | loss train: 2.939, val: 2.786 | iter time: 348.31 ms (step) remaining time: 0:01:29
+Epoch 2 | iter 643840 step 2515 | loss train: 2.835, val: 2.786 | iter time: 349.08 ms (step) remaining time: 0:00:05
+Validating ...
+Final evaluation | val loss: 2.786 | val ppl: 16.208
+Saving checkpoint to '../out/pretrain-core/final/lit_model.pth'
+----------------------------------------
+| Performance
+| - Total tokens : 5,274,484,736
+| - Training Time : 210925.61 s
+| - Tok/sec : 8533.19 tok/s
+| ----------------------------------------
+| Memory Usage
+| - Memory Used : 20.44 GB
+----------------------------------------
+```
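As a cross-check, the reported validation perplexity is just the exponential of the validation loss; computed from the logged (rounded) loss:

```bash
# exp(2.786) ~= 16.22, consistent with the reported ppl 16.208 up to rounding of the logged loss.
python -c 'import math; print(math.exp(2.786))'
```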
+
+Back up `wandb`:
+
+```bash
+mv wandb wandb-pretrain-core
+```
+
+Chat with the model:
+
+```bash
+CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt chat ../out/pretrain-core/final
+```
+
+Evaluate on the `leaderboard` tasks:
+
+```bash
+CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True time litgpt evaluate --tasks 'leaderboard' --out_dir '../evaluate/pretrain-core-0/leaderboard/' --batch_size 1 --dtype 'bfloat16' '../out/pretrain-core/final'
+```
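If the evaluation harness writes its summary to a `results.json` under the `--out_dir` (the exact filename and layout depend on the litgpt / lm-evaluation-harness version, so treat the path below as an assumption), the per-task scores can be skimmed with a one-liner:

```bash
# Hypothetical output path; adjust to wherever results.json actually lands.
python -c "import json; r = json.load(open('../evaluate/pretrain-core-0/leaderboard/results.json')); print(json.dumps(r.get('results', r), indent=2))"
```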
+
+```
+# ...
+```