zhouliang committed on
Commit 8ec225c
1 Parent(s): 7bb0ded

Upload 13 files

README.md CHANGED
@@ -1,3 +1,207 @@
  ---
- license: apache-2.0
+ library_name: peft
+ base_model: /DATA4T/text-generation-webui/models/Yi-34B
  ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Data Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+
+
+ ## Training procedure
+
+
+ ### Framework versions
+
+
+ - PEFT 0.7.0.dev0
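
The "How to Get Started with the Model" section above is still a placeholder. As a minimal sketch (assuming local copies of the base model and of this adapter, bfloat16 weights, and a recent `transformers`/`peft`; none of this is confirmed by the repo), the adapter can be loaded on top of Yi-34B like this:

```python
# Sketch only: paths, dtype and device_map are assumptions, not part of this repo.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_path = "/DATA4T/text-generation-webui/models/Yi-34B"  # base_model_name_or_path from adapter_config.json
adapter_path = "."  # hypothetical: the directory holding adapter_config.json and adapter_model.bin

tokenizer = AutoTokenizer.from_pretrained(adapter_path, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_path, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_path)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```
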
adapter_config.json ADDED
@@ -0,0 +1,28 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "/DATA4T/text-generation-webui/models/Yi-34B",
+ "bias": "none",
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "lora_alpha": 32.0,
+ "lora_dropout": 0.1,
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 8,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "down_proj",
+ "up_proj",
+ "k_proj",
+ "v_proj",
+ "o_proj",
+ "q_proj",
+ "gate_proj"
+ ],
+ "task_type": "CAUSAL_LM"
+ }
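
For reference, the adapter settings above correspond roughly to the following `peft` `LoraConfig` (a sketch against peft 0.7.x; only the fields that map to constructor arguments are shown):

```python
from peft import LoraConfig

# LoRA (r=8, alpha=32, dropout=0.1) on every attention and MLP projection of the base model.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```
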
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:02f40a159b1882104fcba402e99ee114ef7b44964b5b86e2b9ebd5205f42c70e
+ size 123170349
all_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "epoch": 3.0,
+ "train_loss": 1.0705227502962438,
+ "train_runtime": 220133.6189,
+ "train_samples_per_second": 1.019,
+ "train_steps_per_second": 0.011
+ }
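
A few rough totals can be derived from this summary (a quick sketch; the per-second rates are rounded, so the derived step count is approximate, while the training log below reports 2337 total steps):

```python
import json

# Turn the reported rates back into rough wall-clock and volume figures.
with open("all_results.json") as f:
    results = json.load(f)

hours = results["train_runtime"] / 3600                                    # ~61.1 h
samples = results["train_runtime"] * results["train_samples_per_second"]  # ~224k samples
steps = results["train_runtime"] * results["train_steps_per_second"]      # ~2.4k optimizer steps
print(f"~{hours:.1f} h, ~{samples:,.0f} samples, ~{steps:.0f} steps over {results['epoch']:.0f} epochs")
```
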
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+ "bos_token": {
+ "content": "<|startoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenization_yi.py ADDED
@@ -0,0 +1,255 @@
+ import os
+ from shutil import copyfile
+ from typing import Any, Dict, List, Optional, Tuple
+
+ import sentencepiece as spm
+ from transformers.tokenization_utils import AddedToken, PreTrainedTokenizer
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+ VOCAB_FILES_NAMES = {"vocab_file": "tokenizer.model"}
+
+ PRETRAINED_VOCAB_FILES_MAP = {
+     "vocab_file": {},
+     "tokenizer_file": {},
+ }
+ PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {}
+
+
+ class YiTokenizer(PreTrainedTokenizer):
+     """
+     Construct a Yi tokenizer. Based on byte-level Byte-Pair-Encoding.
+
+     Args:
+         vocab_file (`str`):
+             Path to the vocabulary file.
+     """
+
+     vocab_files_names = VOCAB_FILES_NAMES
+     pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
+     max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
+     model_input_names = ["input_ids", "attention_mask"]
+
+     def __init__(
+         self,
+         vocab_file,
+         unk_token="<unk>",
+         bos_token="<|startoftext|>",
+         eos_token="<|endoftext|>",
+         pad_token="<unk>",
+         sp_model_kwargs: Optional[Dict[str, Any]] = None,
+         add_bos_token=True,
+         add_eos_token=False,
+         clean_up_tokenization_spaces=False,
+         **kwargs,
+     ):
+         self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
+         bos_token = (
+             AddedToken(bos_token, lstrip=False, rstrip=False)
+             if isinstance(bos_token, str)
+             else bos_token
+         )
+         eos_token = (
+             AddedToken(eos_token, lstrip=False, rstrip=False)
+             if isinstance(eos_token, str)
+             else eos_token
+         )
+         unk_token = (
+             AddedToken(unk_token, lstrip=False, rstrip=False)
+             if isinstance(unk_token, str)
+             else unk_token
+         )
+         pad_token = (
+             AddedToken(pad_token, lstrip=False, rstrip=False)
+             if isinstance(pad_token, str)
+             else pad_token
+         )
+         self.vocab_file = vocab_file
+         self.add_bos_token = add_bos_token
+         self.add_eos_token = add_eos_token
+         self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
+         self.sp_model.Load(vocab_file)
+         super().__init__(
+             bos_token=bos_token,
+             eos_token=eos_token,
+             unk_token=unk_token,
+             pad_token=pad_token,
+             add_bos_token=add_bos_token,
+             add_eos_token=add_eos_token,
+             sp_model_kwargs=self.sp_model_kwargs,
+             clean_up_tokenization_spaces=clean_up_tokenization_spaces,
+             **kwargs,
+         )
+
+     def __getstate__(self):
+         state = self.__dict__.copy()
+         state["sp_model"] = None
+         return state
+
+     def __setstate__(self, d):
+         self.__dict__ = d
+         self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
+         self.sp_model.Load(self.vocab_file)
+
+     @property
+     def vocab_size(self):
+         """Returns vocab size"""
+         return self.sp_model.get_piece_size()
+
+     def get_vocab(self):
+         """Returns vocab as a dict"""
+         vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
+         vocab.update(self.added_tokens_encoder)
+         return vocab
+
+     def _tokenize(self, text):
+         """Returns a tokenized string."""
+         return self.sp_model.encode(text, out_type=str)
+
+     def _convert_token_to_id(self, token):
+         """Converts a token (str) in an id using the vocab."""
+         return self.sp_model.piece_to_id(token)
+
+     def _convert_id_to_token(self, index):
+         """Converts an index (integer) in a token (str) using the vocab."""
+         token = self.sp_model.IdToPiece(index)
+         return token
+
+     def convert_tokens_to_string(self, tokens):
+         """Converts a sequence of tokens (string) in a single string."""
+         current_sub_tokens = []
+         out_string = ""
+         prev_is_special = False
+         for i, token in enumerate(tokens):
+             # make sure that special tokens are not decoded using sentencepiece model
+             if token in self.all_special_tokens:
+                 if not prev_is_special and i != 0:
+                     out_string += " "
+                 out_string += self.sp_model.decode(current_sub_tokens) + token
+                 prev_is_special = True
+                 current_sub_tokens = []
+             else:
+                 current_sub_tokens.append(token)
+                 prev_is_special = False
+         out_string += self.sp_model.decode(current_sub_tokens)
+         return out_string
+
+     def save_vocabulary(
+         self, save_directory, filename_prefix: Optional[str] = None
+     ) -> Tuple[str]:
+         """
+         Save the vocabulary and special tokens file to a directory.
+
+         Args:
+             save_directory (`str`):
+                 The directory in which to save the vocabulary.
+
+         Returns:
+             `Tuple(str)`: Paths to the files saved.
+         """
+         if not os.path.isdir(save_directory):
+             logger.error(f"Vocabulary path ({save_directory}) should be a directory")
+             return
+         out_vocab_file = os.path.join(
+             save_directory,
+             (filename_prefix + "-" if filename_prefix else "")
+             + VOCAB_FILES_NAMES["vocab_file"],
+         )
+
+         if os.path.abspath(self.vocab_file) != os.path.abspath(
+             out_vocab_file
+         ) and os.path.isfile(self.vocab_file):
+             copyfile(self.vocab_file, out_vocab_file)
+         elif not os.path.isfile(self.vocab_file):
+             with open(out_vocab_file, "wb") as fi:
+                 content_spiece_model = self.sp_model.serialized_model_proto()
+                 fi.write(content_spiece_model)
+
+         return (out_vocab_file,)
+
+     def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
+         bos_token_id = [self.bos_token_id] if self.add_bos_token else []
+         eos_token_id = [self.eos_token_id] if self.add_eos_token else []
+
+         output = bos_token_id + token_ids_0 + eos_token_id
+
+         if token_ids_1 is not None:
+             output = output + bos_token_id + token_ids_1 + eos_token_id
+
+         return output
+
+     def get_special_tokens_mask(
+         self,
+         token_ids_0: List[int],
+         token_ids_1: Optional[List[int]] = None,
+         already_has_special_tokens: bool = False,
+     ) -> List[int]:
+         """
+         Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
+         special tokens using the tokenizer `prepare_for_model` method.
+
+         Args:
+             token_ids_0 (`List[int]`):
+                 List of IDs.
+             token_ids_1 (`List[int]`, *optional*):
+                 Optional second list of IDs for sequence pairs.
+             already_has_special_tokens (`bool`, *optional*, defaults to `False`):
+                 Whether or not the token list is already formatted with special tokens for the model.
+
+         Returns:
+             `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
+         """
+         if already_has_special_tokens:
+             return super().get_special_tokens_mask(
+                 token_ids_0=token_ids_0,
+                 token_ids_1=token_ids_1,
+                 already_has_special_tokens=True,
+             )
+
+         bos_token_id = [1] if self.add_bos_token else []
+         eos_token_id = [1] if self.add_eos_token else []
+
+         if token_ids_1 is None:
+             return bos_token_id + ([0] * len(token_ids_0)) + eos_token_id
+         return (
+             bos_token_id
+             + ([0] * len(token_ids_0))
+             + eos_token_id
+             + bos_token_id
+             + ([0] * len(token_ids_1))
+             + eos_token_id
+         )
+
+     def create_token_type_ids_from_sequences(
+         self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
+     ) -> List[int]:
+         """
+         Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT
+         sequence pair mask has the following format:
+
+         ```
+         0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
+         | first sequence | second sequence |
+         ```
+
+         if token_ids_1 is None, only returns the first portion of the mask (0s).
+
+         Args:
+             token_ids_0 (`List[int]`):
+                 List of ids.
+             token_ids_1 (`List[int]`, *optional*):
+                 Optional second list of IDs for sequence pairs.
+
+         Returns:
+             `List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).
+         """
+         bos_token_id = [self.bos_token_id] if self.add_bos_token else []
+         eos_token_id = [self.eos_token_id] if self.add_eos_token else []
+
+         output = [0] * len(bos_token_id + token_ids_0 + eos_token_id)
+
+         if token_ids_1 is not None:
+             output += [1] * len(bos_token_id + token_ids_1 + eos_token_id)
+
+         return output
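
As a quick sanity check, the class above can be instantiated directly from the repository's `tokenizer.model` (a sketch; it assumes `sentencepiece` is installed and that `tokenization_yi.py` and `tokenizer.model` sit in the working directory):

```python
from tokenization_yi import YiTokenizer

# Load the SentencePiece model shipped in this repo and round-trip a string.
tokenizer = YiTokenizer(vocab_file="tokenizer.model")
ids = tokenizer("Hello, Yi!").input_ids  # add_bos_token defaults to True here, so <|startoftext|> is prepended
print(tokenizer.convert_ids_to_tokens(ids))
print(tokenizer.decode(ids, skip_special_tokens=True))
```
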
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:386c49cf943d71aa110361135338c50e38beeff0a66593480421f37b319e1a39
+ size 1033105
tokenizer_config.json ADDED
@@ -0,0 +1,46 @@
+ {
+ "add_bos_token": false,
+ "add_eos_token": false,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<|startoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "auto_map": {
+ "AutoTokenizer": [
+ "tokenization_yi.YiTokenizer",
+ null
+ ]
+ },
+ "bos_token": "<|startoftext|>",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|endoftext|>",
+ "model_max_length": 4096,
+ "pad_token": "<unk>",
+ "padding_side": "right",
+ "sp_model_kwargs": {},
+ "split_special_tokens": false,
+ "tokenizer_class": "YiTokenizer",
+ "unk_token": "<unk>"
+ }
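
Because of the `auto_map` entry above, `AutoTokenizer` resolves to the custom `YiTokenizer` class in `tokenization_yi.py` when remote code is allowed (a sketch; `repo_dir` is a placeholder for wherever these files are checked out):

```python
from transformers import AutoTokenizer

repo_dir = "."  # hypothetical path to the directory containing tokenizer_config.json and tokenization_yi.py
tokenizer = AutoTokenizer.from_pretrained(repo_dir, trust_remote_code=True)

print(type(tokenizer).__name__)           # YiTokenizer, resolved via auto_map
print(tokenizer("Hello, Yi!").input_ids)
```
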
train_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "epoch": 3.0,
+ "train_loss": 1.0705227502962438,
+ "train_runtime": 220133.6189,
+ "train_samples_per_second": 1.019,
+ "train_steps_per_second": 0.011
+ }
trainer_log.jsonl ADDED
@@ -0,0 +1,234 @@
+ {"current_steps": 10, "total_steps": 2337, "loss": 1.9904, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9999096463326385e-06, "epoch": 0.01, "percentage": 0.43, "elapsed_time": "0:16:01", "remaining_time": "2 days, 14:10:39"}
+ {"current_steps": 20, "total_steps": 2337, "loss": 1.9027, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9996386016581256e-06, "epoch": 0.03, "percentage": 0.86, "elapsed_time": "0:31:44", "remaining_time": "2 days, 13:17:40"}
+ {"current_steps": 30, "total_steps": 2337, "loss": 1.8885, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9991869149562214e-06, "epoch": 0.04, "percentage": 1.28, "elapsed_time": "0:47:26", "remaining_time": "2 days, 12:47:53"}
+ {"current_steps": 40, "total_steps": 2337, "loss": 1.9943, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9985546678500257e-06, "epoch": 0.05, "percentage": 1.71, "elapsed_time": "1:03:06", "remaining_time": "2 days, 12:24:28"}
+ {"current_steps": 50, "total_steps": 2337, "loss": 1.9145, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.997741974591229e-06, "epoch": 0.06, "percentage": 2.14, "elapsed_time": "1:18:47", "remaining_time": "2 days, 12:03:39"}
+ {"current_steps": 60, "total_steps": 2337, "loss": 1.9079, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9967489820394627e-06, "epoch": 0.08, "percentage": 2.57, "elapsed_time": "1:34:27", "remaining_time": "2 days, 11:44:49"}
+ {"current_steps": 70, "total_steps": 2337, "loss": 1.8246, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.995575869635765e-06, "epoch": 0.09, "percentage": 3.0, "elapsed_time": "1:50:08", "remaining_time": "2 days, 11:26:51"}
+ {"current_steps": 80, "total_steps": 2337, "loss": 1.759, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.994222849370152e-06, "epoch": 0.1, "percentage": 3.42, "elapsed_time": "2:05:48", "remaining_time": "2 days, 11:09:13"}
+ {"current_steps": 90, "total_steps": 2337, "loss": 1.6747, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9926901657433085e-06, "epoch": 0.12, "percentage": 3.85, "elapsed_time": "2:21:28", "remaining_time": "2 days, 10:52:19"}
+ {"current_steps": 100, "total_steps": 2337, "loss": 1.5992, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.990978095722409e-06, "epoch": 0.13, "percentage": 4.28, "elapsed_time": "2:37:10", "remaining_time": "2 days, 10:35:57"}
+ {"current_steps": 110, "total_steps": 2337, "loss": 1.5288, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9890869486910627e-06, "epoch": 0.14, "percentage": 4.71, "elapsed_time": "2:52:52", "remaining_time": "2 days, 10:19:47"}
+ {"current_steps": 120, "total_steps": 2337, "loss": 1.5143, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9870170663934104e-06, "epoch": 0.15, "percentage": 5.13, "elapsed_time": "3:08:35", "remaining_time": "2 days, 10:04:08"}
+ {"current_steps": 130, "total_steps": 2337, "loss": 1.4542, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9847688228723645e-06, "epoch": 0.17, "percentage": 5.56, "elapsed_time": "3:24:15", "remaining_time": "2 days, 9:47:45"}
+ {"current_steps": 140, "total_steps": 2337, "loss": 1.4081, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9823426244020197e-06, "epoch": 0.18, "percentage": 5.99, "elapsed_time": "3:39:59", "remaining_time": "2 days, 9:32:17"}
+ {"current_steps": 150, "total_steps": 2337, "loss": 1.377, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.979738909414235e-06, "epoch": 0.19, "percentage": 6.42, "elapsed_time": "3:55:41", "remaining_time": "2 days, 9:16:21"}
+ {"current_steps": 160, "total_steps": 2337, "loss": 1.3373, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9769581484194063e-06, "epoch": 0.21, "percentage": 6.85, "elapsed_time": "4:11:23", "remaining_time": "2 days, 9:00:28"}
+ {"current_steps": 170, "total_steps": 2337, "loss": 1.3031, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9740008439214417e-06, "epoch": 0.22, "percentage": 7.27, "elapsed_time": "4:27:04", "remaining_time": "2 days, 8:44:25"}
+ {"current_steps": 180, "total_steps": 2337, "loss": 1.3004, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9708675303269545e-06, "epoch": 0.23, "percentage": 7.7, "elapsed_time": "4:42:45", "remaining_time": "2 days, 8:28:21"}
+ {"current_steps": 190, "total_steps": 2337, "loss": 1.2555, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9675587738486934e-06, "epoch": 0.24, "percentage": 8.13, "elapsed_time": "4:58:26", "remaining_time": "2 days, 8:12:26"}
+ {"current_steps": 200, "total_steps": 2337, "loss": 1.277, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9640751724032234e-06, "epoch": 0.26, "percentage": 8.56, "elapsed_time": "5:14:08", "remaining_time": "2 days, 7:56:40"}
+ {"current_steps": 210, "total_steps": 2337, "loss": 1.2142, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.960417355502876e-06, "epoch": 0.27, "percentage": 8.99, "elapsed_time": "5:29:52", "remaining_time": "2 days, 7:41:08"}
+ {"current_steps": 220, "total_steps": 2337, "loss": 1.2023, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9565859841419945e-06, "epoch": 0.28, "percentage": 9.41, "elapsed_time": "5:45:32", "remaining_time": "2 days, 7:25:07"}
+ {"current_steps": 230, "total_steps": 2337, "loss": 1.2301, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9525817506774864e-06, "epoch": 0.3, "percentage": 9.84, "elapsed_time": "6:01:16", "remaining_time": "2 days, 7:09:31"}
+ {"current_steps": 240, "total_steps": 2337, "loss": 1.217, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.948405378703708e-06, "epoch": 0.31, "percentage": 10.27, "elapsed_time": "6:16:56", "remaining_time": "2 days, 6:53:32"}
+ {"current_steps": 250, "total_steps": 2337, "loss": 1.1637, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9440576229217078e-06, "epoch": 0.32, "percentage": 10.7, "elapsed_time": "6:32:36", "remaining_time": "2 days, 6:37:32"}
+ {"current_steps": 260, "total_steps": 2337, "loss": 1.214, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.939539269002845e-06, "epoch": 0.33, "percentage": 11.13, "elapsed_time": "6:48:16", "remaining_time": "2 days, 6:21:29"}
+ {"current_steps": 270, "total_steps": 2337, "loss": 1.1571, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9348511334468137e-06, "epoch": 0.35, "percentage": 11.55, "elapsed_time": "7:03:56", "remaining_time": "2 days, 6:05:28"}
+ {"current_steps": 280, "total_steps": 2337, "loss": 1.1329, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9299940634340954e-06, "epoch": 0.36, "percentage": 11.98, "elapsed_time": "7:19:37", "remaining_time": "2 days, 5:49:37"}
+ {"current_steps": 290, "total_steps": 2337, "loss": 1.1524, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9249689366728658e-06, "epoch": 0.37, "percentage": 12.41, "elapsed_time": "7:35:17", "remaining_time": "2 days, 5:33:46"}
+ {"current_steps": 300, "total_steps": 2337, "loss": 1.1853, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.91977666124039e-06, "epoch": 0.39, "percentage": 12.84, "elapsed_time": "7:50:57", "remaining_time": "2 days, 5:17:50"}
+ {"current_steps": 310, "total_steps": 2337, "loss": 1.1614, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9144181754189207e-06, "epoch": 0.4, "percentage": 13.26, "elapsed_time": "8:06:38", "remaining_time": "2 days, 5:02:01"}
+ {"current_steps": 320, "total_steps": 2337, "loss": 1.1166, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.90889444752615e-06, "epoch": 0.41, "percentage": 13.69, "elapsed_time": "8:22:20", "remaining_time": "2 days, 4:46:19"}
+ {"current_steps": 330, "total_steps": 2337, "loss": 1.1298, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.903206475740223e-06, "epoch": 0.42, "percentage": 14.12, "elapsed_time": "8:38:00", "remaining_time": "2 days, 4:30:27"}
+ {"current_steps": 340, "total_steps": 2337, "loss": 1.1577, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.8973552879193612e-06, "epoch": 0.44, "percentage": 14.55, "elapsed_time": "8:53:41", "remaining_time": "2 days, 4:14:36"}
+ {"current_steps": 350, "total_steps": 2337, "loss": 1.1421, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.8913419414161202e-06, "epoch": 0.45, "percentage": 14.98, "elapsed_time": "9:09:23", "remaining_time": "2 days, 3:58:59"}
+ {"current_steps": 360, "total_steps": 2337, "loss": 1.1569, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.88516752288632e-06, "epoch": 0.46, "percentage": 15.4, "elapsed_time": "9:25:04", "remaining_time": "2 days, 3:43:09"}
+ {"current_steps": 370, "total_steps": 2337, "loss": 1.0961, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.8788331480926763e-06, "epoch": 0.47, "percentage": 15.83, "elapsed_time": "9:40:44", "remaining_time": "2 days, 3:27:18"}
+ {"current_steps": 380, "total_steps": 2337, "loss": 1.1173, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.872339961703175e-06, "epoch": 0.49, "percentage": 16.26, "elapsed_time": "9:56:24", "remaining_time": "2 days, 3:11:29"}
+ {"current_steps": 390, "total_steps": 2337, "loss": 1.0712, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.8656891370842223e-06, "epoch": 0.5, "percentage": 16.69, "elapsed_time": "10:12:07", "remaining_time": "2 days, 2:55:54"}
+ {"current_steps": 400, "total_steps": 2337, "loss": 1.1055, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.8588818760886094e-06, "epoch": 0.51, "percentage": 17.12, "elapsed_time": "10:27:47", "remaining_time": "2 days, 2:40:05"}
+ {"current_steps": 410, "total_steps": 2337, "loss": 1.1006, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.851919408838327e-06, "epoch": 0.53, "percentage": 17.54, "elapsed_time": "10:43:30", "remaining_time": "2 days, 2:24:29"}
+ {"current_steps": 420, "total_steps": 2337, "loss": 1.083, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.8448029935022754e-06, "epoch": 0.54, "percentage": 17.97, "elapsed_time": "10:59:11", "remaining_time": "2 days, 2:08:43"}
+ {"current_steps": 430, "total_steps": 2337, "loss": 1.0588, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.8375339160689023e-06, "epoch": 0.55, "percentage": 18.4, "elapsed_time": "11:14:51", "remaining_time": "2 days, 1:52:53"}
+ {"current_steps": 440, "total_steps": 2337, "loss": 1.0791, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.830113490113817e-06, "epoch": 0.56, "percentage": 18.83, "elapsed_time": "11:30:31", "remaining_time": "2 days, 1:37:08"}
+ {"current_steps": 450, "total_steps": 2337, "loss": 1.0623, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.822543056562416e-06, "epoch": 0.58, "percentage": 19.26, "elapsed_time": "11:46:12", "remaining_time": "2 days, 1:21:20"}
+ {"current_steps": 460, "total_steps": 2337, "loss": 1.1126, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.8148239834475695e-06, "epoch": 0.59, "percentage": 19.68, "elapsed_time": "12:01:52", "remaining_time": "2 days, 1:05:34"}
+ {"current_steps": 470, "total_steps": 2337, "loss": 1.0696, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.806957665662406e-06, "epoch": 0.6, "percentage": 20.11, "elapsed_time": "12:17:32", "remaining_time": "2 days, 0:49:44"}
+ {"current_steps": 480, "total_steps": 2337, "loss": 1.0573, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.7989455247082472e-06, "epoch": 0.62, "percentage": 20.54, "elapsed_time": "12:33:11", "remaining_time": "2 days, 0:33:54"}
+ {"current_steps": 490, "total_steps": 2337, "loss": 1.0789, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.7907890084377301e-06, "epoch": 0.63, "percentage": 20.97, "elapsed_time": "12:48:51", "remaining_time": "2 days, 0:18:07"}
+ {"current_steps": 500, "total_steps": 2337, "loss": 1.0964, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.7824895907931706e-06, "epoch": 0.64, "percentage": 21.39, "elapsed_time": "13:04:33", "remaining_time": "2 days, 0:02:26"}
+ {"current_steps": 510, "total_steps": 2337, "loss": 1.0551, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.7740487715402106e-06, "epoch": 0.65, "percentage": 21.82, "elapsed_time": "13:20:12", "remaining_time": "1 day, 23:46:39"}
+ {"current_steps": 520, "total_steps": 2337, "loss": 1.0563, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.7654680759968007e-06, "epoch": 0.67, "percentage": 22.25, "elapsed_time": "13:35:53", "remaining_time": "1 day, 23:30:55"}
+ {"current_steps": 530, "total_steps": 2337, "loss": 1.0522, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.756749054757562e-06, "epoch": 0.68, "percentage": 22.68, "elapsed_time": "13:51:33", "remaining_time": "1 day, 23:15:08"}
+ {"current_steps": 540, "total_steps": 2337, "loss": 1.0446, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.747893283413585e-06, "epoch": 0.69, "percentage": 23.11, "elapsed_time": "14:07:13", "remaining_time": "1 day, 22:59:23"}
+ {"current_steps": 550, "total_steps": 2337, "loss": 1.0667, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.738902362267705e-06, "epoch": 0.71, "percentage": 23.53, "elapsed_time": "14:22:54", "remaining_time": "1 day, 22:43:39"}
+ {"current_steps": 560, "total_steps": 2337, "loss": 1.0395, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.72977791604532e-06, "epoch": 0.72, "percentage": 23.96, "elapsed_time": "14:38:33", "remaining_time": "1 day, 22:27:51"}
+ {"current_steps": 570, "total_steps": 2337, "loss": 1.039, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.7205215936007869e-06, "epoch": 0.73, "percentage": 24.39, "elapsed_time": "14:54:12", "remaining_time": "1 day, 22:12:03"}
+ {"current_steps": 580, "total_steps": 2337, "loss": 1.0987, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.7111350676194647e-06, "epoch": 0.74, "percentage": 24.82, "elapsed_time": "15:09:53", "remaining_time": "1 day, 21:56:19"}
+ {"current_steps": 590, "total_steps": 2337, "loss": 1.0748, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.701620034315445e-06, "epoch": 0.76, "percentage": 25.25, "elapsed_time": "15:25:34", "remaining_time": "1 day, 21:40:38"}
+ {"current_steps": 600, "total_steps": 2337, "loss": 1.0446, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.6919782131250366e-06, "epoch": 0.77, "percentage": 25.67, "elapsed_time": "15:41:16", "remaining_time": "1 day, 21:24:58"}
+ {"current_steps": 610, "total_steps": 2337, "loss": 1.0279, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.6822113463960483e-06, "epoch": 0.78, "percentage": 26.1, "elapsed_time": "15:56:58", "remaining_time": "1 day, 21:09:19"}
+ {"current_steps": 620, "total_steps": 2337, "loss": 1.054, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.6723211990729355e-06, "epoch": 0.8, "percentage": 26.53, "elapsed_time": "16:12:38", "remaining_time": "1 day, 20:53:36"}
+ {"current_steps": 630, "total_steps": 2337, "loss": 1.0391, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.6623095583778613e-06, "epoch": 0.81, "percentage": 26.96, "elapsed_time": "16:28:18", "remaining_time": "1 day, 20:37:51"}
+ {"current_steps": 640, "total_steps": 2337, "loss": 1.0812, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.652178233487731e-06, "epoch": 0.82, "percentage": 27.39, "elapsed_time": "16:43:58", "remaining_time": "1 day, 20:22:07"}
+ {"current_steps": 650, "total_steps": 2337, "loss": 1.0826, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.6419290552072634e-06, "epoch": 0.83, "percentage": 27.81, "elapsed_time": "16:59:39", "remaining_time": "1 day, 20:06:23"}
+ {"current_steps": 660, "total_steps": 2337, "loss": 1.0497, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.6315638756381484e-06, "epoch": 0.85, "percentage": 28.24, "elapsed_time": "17:15:18", "remaining_time": "1 day, 19:50:37"}
+ {"current_steps": 670, "total_steps": 2337, "loss": 1.0329, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.6210845678443602e-06, "epoch": 0.86, "percentage": 28.67, "elapsed_time": "17:30:57", "remaining_time": "1 day, 19:34:51"}
+ {"current_steps": 680, "total_steps": 2337, "loss": 1.0663, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.6104930255136794e-06, "epoch": 0.87, "percentage": 29.1, "elapsed_time": "17:46:37", "remaining_time": "1 day, 19:19:07"}
+ {"current_steps": 690, "total_steps": 2337, "loss": 1.059, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.5997911626154914e-06, "epoch": 0.89, "percentage": 29.53, "elapsed_time": "18:02:18", "remaining_time": "1 day, 19:03:24"}
+ {"current_steps": 700, "total_steps": 2337, "loss": 1.0123, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.5889809130549174e-06, "epoch": 0.9, "percentage": 29.95, "elapsed_time": "18:17:57", "remaining_time": "1 day, 18:47:39"}
+ {"current_steps": 710, "total_steps": 2337, "loss": 1.0264, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.578064230323343e-06, "epoch": 0.91, "percentage": 30.38, "elapsed_time": "18:33:38", "remaining_time": "1 day, 18:31:57"}
+ {"current_steps": 720, "total_steps": 2337, "loss": 1.0411, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.5670430871454081e-06, "epoch": 0.92, "percentage": 30.81, "elapsed_time": "18:49:18", "remaining_time": "1 day, 18:16:13"}
+ {"current_steps": 730, "total_steps": 2337, "loss": 1.0428, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.555919475122523e-06, "epoch": 0.94, "percentage": 31.24, "elapsed_time": "19:04:58", "remaining_time": "1 day, 18:00:30"}
+ {"current_steps": 740, "total_steps": 2337, "loss": 0.9994, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.544695404372968e-06, "epoch": 0.95, "percentage": 31.66, "elapsed_time": "19:20:38", "remaining_time": "1 day, 17:44:46"}
+ {"current_steps": 750, "total_steps": 2337, "loss": 1.0452, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.533372903168653e-06, "epoch": 0.96, "percentage": 32.09, "elapsed_time": "19:36:18", "remaining_time": "1 day, 17:29:03"}
+ {"current_steps": 760, "total_steps": 2337, "loss": 1.02, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.5219540175685937e-06, "epoch": 0.98, "percentage": 32.52, "elapsed_time": "19:51:58", "remaining_time": "1 day, 17:13:20"}
+ {"current_steps": 770, "total_steps": 2337, "loss": 1.0564, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.5104408110491716e-06, "epoch": 0.99, "percentage": 32.95, "elapsed_time": "20:07:39", "remaining_time": "1 day, 16:57:39"}
+ {"current_steps": 780, "total_steps": 2337, "loss": 0.9889, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.4988353641312515e-06, "epoch": 1.0, "percentage": 33.38, "elapsed_time": "20:23:19", "remaining_time": "1 day, 16:41:56"}
+ {"current_steps": 790, "total_steps": 2337, "loss": 1.0452, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.487139774004214e-06, "epoch": 1.01, "percentage": 33.8, "elapsed_time": "20:38:59", "remaining_time": "1 day, 16:26:13"}
+ {"current_steps": 800, "total_steps": 2337, "loss": 1.0399, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.4753561541469787e-06, "epoch": 1.03, "percentage": 34.23, "elapsed_time": "20:54:40", "remaining_time": "1 day, 16:10:31"}
+ {"current_steps": 810, "total_steps": 2337, "loss": 1.0438, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.463486633946084e-06, "epoch": 1.04, "percentage": 34.66, "elapsed_time": "21:10:20", "remaining_time": "1 day, 15:54:50"}
+ {"current_steps": 820, "total_steps": 2337, "loss": 1.0134, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.4515333583108893e-06, "epoch": 1.05, "percentage": 35.09, "elapsed_time": "21:26:01", "remaining_time": "1 day, 15:39:08"}
+ {"current_steps": 830, "total_steps": 2337, "loss": 0.9949, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.439498487285976e-06, "epoch": 1.07, "percentage": 35.52, "elapsed_time": "21:41:41", "remaining_time": "1 day, 15:23:26"}
+ {"current_steps": 840, "total_steps": 2337, "loss": 1.0184, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.427384195660811e-06, "epoch": 1.08, "percentage": 35.94, "elapsed_time": "21:57:22", "remaining_time": "1 day, 15:07:44"}
+ {"current_steps": 850, "total_steps": 2337, "loss": 1.0246, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.4151926725767455e-06, "epoch": 1.09, "percentage": 36.37, "elapsed_time": "22:13:05", "remaining_time": "1 day, 14:52:06"}
+ {"current_steps": 860, "total_steps": 2337, "loss": 1.0428, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.4029261211314222e-06, "epoch": 1.1, "percentage": 36.8, "elapsed_time": "22:28:44", "remaining_time": "1 day, 14:36:22"}
+ {"current_steps": 870, "total_steps": 2337, "loss": 1.0078, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.3905867579806596e-06, "epoch": 1.12, "percentage": 37.23, "elapsed_time": "22:44:24", "remaining_time": "1 day, 14:20:40"}
+ {"current_steps": 880, "total_steps": 2337, "loss": 1.0223, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.3781768129378844e-06, "epoch": 1.13, "percentage": 37.66, "elapsed_time": "23:00:04", "remaining_time": "1 day, 14:04:58"}
+ {"current_steps": 890, "total_steps": 2337, "loss": 0.9788, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.3656985285711895e-06, "epoch": 1.14, "percentage": 38.08, "elapsed_time": "23:15:44", "remaining_time": "1 day, 13:49:16"}
+ {"current_steps": 900, "total_steps": 2337, "loss": 1.0417, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.3531541597980845e-06, "epoch": 1.16, "percentage": 38.51, "elapsed_time": "23:31:25", "remaining_time": "1 day, 13:33:33"}
+ {"current_steps": 910, "total_steps": 2337, "loss": 1.0315, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.340545973478016e-06, "epoch": 1.17, "percentage": 38.94, "elapsed_time": "23:47:05", "remaining_time": "1 day, 13:17:52"}
+ {"current_steps": 920, "total_steps": 2337, "loss": 0.9775, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.32787624800273e-06, "epoch": 1.18, "percentage": 39.37, "elapsed_time": "1 day, 0:02:46", "remaining_time": "1 day, 13:02:10"}
+ {"current_steps": 930, "total_steps": 2337, "loss": 0.9992, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.3151472728845492e-06, "epoch": 1.19, "percentage": 39.79, "elapsed_time": "1 day, 0:18:27", "remaining_time": "1 day, 12:46:30"}
+ {"current_steps": 940, "total_steps": 2337, "loss": 0.957, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.3023613483426399e-06, "epoch": 1.21, "percentage": 40.22, "elapsed_time": "1 day, 0:34:07", "remaining_time": "1 day, 12:30:48"}
+ {"current_steps": 950, "total_steps": 2337, "loss": 1.0227, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.2895207848873487e-06, "epoch": 1.22, "percentage": 40.65, "elapsed_time": "1 day, 0:49:48", "remaining_time": "1 day, 12:15:07"}
+ {"current_steps": 960, "total_steps": 2337, "loss": 1.0311, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.2766279029026735e-06, "epoch": 1.23, "percentage": 41.08, "elapsed_time": "1 day, 1:05:29", "remaining_time": "1 day, 11:59:26"}
+ {"current_steps": 970, "total_steps": 2337, "loss": 1.0237, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.2636850322269553e-06, "epoch": 1.25, "percentage": 41.51, "elapsed_time": "1 day, 1:21:10", "remaining_time": "1 day, 11:43:44"}
+ {"current_steps": 980, "total_steps": 2337, "loss": 1.0338, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.250694511731857e-06, "epoch": 1.26, "percentage": 41.93, "elapsed_time": "1 day, 1:36:50", "remaining_time": "1 day, 11:28:03"}
+ {"current_steps": 990, "total_steps": 2337, "loss": 1.0259, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.2376586888997145e-06, "epoch": 1.27, "percentage": 42.36, "elapsed_time": "1 day, 1:52:31", "remaining_time": "1 day, 11:12:22"}
+ {"current_steps": 1000, "total_steps": 2337, "loss": 1.0058, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.224579919399327e-06, "epoch": 1.28, "percentage": 42.79, "elapsed_time": "1 day, 2:08:12", "remaining_time": "1 day, 10:56:41"}
+ {"current_steps": 1010, "total_steps": 2337, "loss": 0.9968, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.2114605666602728e-06, "epoch": 1.3, "percentage": 43.22, "elapsed_time": "1 day, 2:25:11", "remaining_time": "1 day, 10:42:43"}
+ {"current_steps": 1020, "total_steps": 2337, "loss": 0.9984, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.1983030014458184e-06, "epoch": 1.31, "percentage": 43.65, "elapsed_time": "1 day, 2:40:52", "remaining_time": "1 day, 10:27:00"}
+ {"current_steps": 1030, "total_steps": 2337, "loss": 1.0126, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.1851096014245055e-06, "epoch": 1.32, "percentage": 44.07, "elapsed_time": "1 day, 2:56:31", "remaining_time": "1 day, 10:11:16"}
+ {"current_steps": 1040, "total_steps": 2337, "loss": 1.0064, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.1718827507404873e-06, "epoch": 1.34, "percentage": 44.5, "elapsed_time": "1 day, 3:12:11", "remaining_time": "1 day, 9:55:32"}
+ {"current_steps": 1050, "total_steps": 2337, "loss": 0.9816, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.1586248395826983e-06, "epoch": 1.35, "percentage": 44.93, "elapsed_time": "1 day, 3:27:51", "remaining_time": "1 day, 9:39:48"}
+ {"current_steps": 1060, "total_steps": 2337, "loss": 1.0116, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.1453382637529276e-06, "epoch": 1.36, "percentage": 45.36, "elapsed_time": "1 day, 3:43:31", "remaining_time": "1 day, 9:24:04"}
+ {"current_steps": 1070, "total_steps": 2337, "loss": 0.9933, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.1320254242328805e-06, "epoch": 1.37, "percentage": 45.79, "elapsed_time": "1 day, 3:59:13", "remaining_time": "1 day, 9:08:23"}
+ {"current_steps": 1080, "total_steps": 2337, "loss": 1.0558, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.1186887267503053e-06, "epoch": 1.39, "percentage": 46.21, "elapsed_time": "1 day, 4:14:53", "remaining_time": "1 day, 8:52:40"}
+ {"current_steps": 1090, "total_steps": 2337, "loss": 0.9552, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.1053305813442574e-06, "epoch": 1.4, "percentage": 46.64, "elapsed_time": "1 day, 4:30:33", "remaining_time": "1 day, 8:36:56"}
+ {"current_steps": 1100, "total_steps": 2337, "loss": 0.9877, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.0919534019295898e-06, "epoch": 1.41, "percentage": 47.07, "elapsed_time": "1 day, 4:46:12", "remaining_time": "1 day, 8:21:11"}
+ {"current_steps": 1110, "total_steps": 2337, "loss": 1.0232, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.078559605860741e-06, "epoch": 1.42, "percentage": 47.5, "elapsed_time": "1 day, 5:01:51", "remaining_time": "1 day, 8:05:27"}
+ {"current_steps": 1120, "total_steps": 2337, "loss": 1.0201, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.0651516134949003e-06, "epoch": 1.44, "percentage": 47.92, "elapsed_time": "1 day, 5:17:31", "remaining_time": "1 day, 7:49:44"}
+ {"current_steps": 1130, "total_steps": 2337, "loss": 1.0047, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.0517318477546319e-06, "epoch": 1.45, "percentage": 48.35, "elapsed_time": "1 day, 5:33:11", "remaining_time": "1 day, 7:34:01"}
+ {"current_steps": 1140, "total_steps": 2337, "loss": 0.9861, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.0383027336900353e-06, "epoch": 1.46, "percentage": 48.78, "elapsed_time": "1 day, 5:48:50", "remaining_time": "1 day, 7:18:17"}
+ {"current_steps": 1150, "total_steps": 2337, "loss": 1.0059, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.0248666980405212e-06, "epoch": 1.48, "percentage": 49.21, "elapsed_time": "1 day, 6:04:30", "remaining_time": "1 day, 7:02:34"}
+ {"current_steps": 1160, "total_steps": 2337, "loss": 0.9714, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.011426168796281e-06, "epoch": 1.49, "percentage": 49.64, "elapsed_time": "1 day, 6:20:12", "remaining_time": "1 day, 6:46:53"}
+ {"current_steps": 1170, "total_steps": 2337, "loss": 1.0207, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.979835747595316e-07, "epoch": 1.5, "percentage": 50.06, "elapsed_time": "1 day, 6:35:51", "remaining_time": "1 day, 6:31:09"}
+ {"current_steps": 1180, "total_steps": 2337, "loss": 1.0167, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.845413451056125e-07, "epoch": 1.51, "percentage": 50.49, "elapsed_time": "1 day, 6:51:31", "remaining_time": "1 day, 6:15:26"}
+ {"current_steps": 1190, "total_steps": 2337, "loss": 0.9979, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.71101908944018e-07, "epoch": 1.53, "percentage": 50.92, "elapsed_time": "1 day, 7:07:13", "remaining_time": "1 day, 5:59:45"}
+ {"current_steps": 1200, "total_steps": 2337, "loss": 1.0047, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.576676948794375e-07, "epoch": 1.54, "percentage": 51.35, "elapsed_time": "1 day, 7:22:55", "remaining_time": "1 day, 5:44:03"}
+ {"current_steps": 1210, "total_steps": 2337, "loss": 1.0121, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.442411305728873e-07, "epoch": 1.55, "percentage": 51.78, "elapsed_time": "1 day, 7:38:35", "remaining_time": "1 day, 5:28:21"}
+ {"current_steps": 1220, "total_steps": 2337, "loss": 1.0015, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.308246423030185e-07, "epoch": 1.57, "percentage": 52.2, "elapsed_time": "1 day, 7:54:19", "remaining_time": "1 day, 5:12:42"}
+ {"current_steps": 1230, "total_steps": 2337, "loss": 1.0325, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.174206545276677e-07, "epoch": 1.58, "percentage": 52.63, "elapsed_time": "1 day, 8:10:00", "remaining_time": "1 day, 4:57:00"}
+ {"current_steps": 1240, "total_steps": 2337, "loss": 1.019, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.040315894457404e-07, "epoch": 1.59, "percentage": 53.06, "elapsed_time": "1 day, 8:25:41", "remaining_time": "1 day, 4:41:18"}
+ {"current_steps": 1250, "total_steps": 2337, "loss": 0.993, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.906598665595016e-07, "epoch": 1.6, "percentage": 53.49, "elapsed_time": "1 day, 8:41:22", "remaining_time": "1 day, 4:25:36"}
+ {"current_steps": 1260, "total_steps": 2337, "loss": 0.9917, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.773079022373553e-07, "epoch": 1.62, "percentage": 53.92, "elapsed_time": "1 day, 8:57:03", "remaining_time": "1 day, 4:09:54"}
+ {"current_steps": 1270, "total_steps": 2337, "loss": 0.9697, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.63978109277187e-07, "epoch": 1.63, "percentage": 54.34, "elapsed_time": "1 day, 9:12:44", "remaining_time": "1 day, 3:54:12"}
+ {"current_steps": 1280, "total_steps": 2337, "loss": 0.973, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.506728964703549e-07, "epoch": 1.64, "percentage": 54.77, "elapsed_time": "1 day, 9:28:25", "remaining_time": "1 day, 3:38:31"}
+ {"current_steps": 1290, "total_steps": 2337, "loss": 1.0183, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.37394668166404e-07, "epoch": 1.66, "percentage": 55.2, "elapsed_time": "1 day, 9:44:06", "remaining_time": "1 day, 3:22:49"}
+ {"current_steps": 1300, "total_steps": 2337, "loss": 1.0098, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.241458238385798e-07, "epoch": 1.67, "percentage": 55.63, "elapsed_time": "1 day, 9:59:47", "remaining_time": "1 day, 3:07:07"}
+ {"current_steps": 1310, "total_steps": 2337, "loss": 0.9765, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.109287576502299e-07, "epoch": 1.68, "percentage": 56.05, "elapsed_time": "1 day, 10:15:28", "remaining_time": "1 day, 2:51:25"}
+ {"current_steps": 1320, "total_steps": 2337, "loss": 1.01, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.977458580221578e-07, "epoch": 1.69, "percentage": 56.48, "elapsed_time": "1 day, 10:31:09", "remaining_time": "1 day, 2:35:44"}
+ {"current_steps": 1330, "total_steps": 2337, "loss": 1.0378, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.845995072010187e-07, "epoch": 1.71, "percentage": 56.91, "elapsed_time": "1 day, 10:46:50", "remaining_time": "1 day, 2:20:02"}
+ {"current_steps": 1340, "total_steps": 2337, "loss": 0.9665, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.714920808288313e-07, "epoch": 1.72, "percentage": 57.34, "elapsed_time": "1 day, 11:02:31", "remaining_time": "1 day, 2:04:20"}
+ {"current_steps": 1350, "total_steps": 2337, "loss": 0.9927, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.584259475136804e-07, "epoch": 1.73, "percentage": 57.77, "elapsed_time": "1 day, 11:18:12", "remaining_time": "1 day, 1:48:39"}
+ {"current_steps": 1360, "total_steps": 2337, "loss": 0.9571, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.454034684016923e-07, "epoch": 1.75, "percentage": 58.19, "elapsed_time": "1 day, 11:33:54", "remaining_time": "1 day, 1:32:57"}
+ {"current_steps": 1370, "total_steps": 2337, "loss": 0.9813, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.324269967503587e-07, "epoch": 1.76, "percentage": 58.62, "elapsed_time": "1 day, 11:49:35", "remaining_time": "1 day, 1:17:15"}
+ {"current_steps": 1380, "total_steps": 2337, "loss": 0.9641, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.19498877503286e-07, "epoch": 1.77, "percentage": 59.05, "elapsed_time": "1 day, 12:05:15", "remaining_time": "1 day, 1:01:33"}
+ {"current_steps": 1390, "total_steps": 2337, "loss": 0.9696, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.066214468664467e-07, "epoch": 1.78, "percentage": 59.48, "elapsed_time": "1 day, 12:20:57", "remaining_time": "1 day, 0:45:52"}
+ {"current_steps": 1400, "total_steps": 2337, "loss": 0.9824, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.937970318860085e-07, "epoch": 1.8, "percentage": 59.91, "elapsed_time": "1 day, 12:36:38", "remaining_time": "1 day, 0:30:10"}
+ {"current_steps": 1410, "total_steps": 2337, "loss": 0.9829, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.810279500278223e-07, "epoch": 1.81, "percentage": 60.33, "elapsed_time": "1 day, 12:52:19", "remaining_time": "1 day, 0:14:29"}
142
+ {"current_steps": 1420, "total_steps": 2337, "loss": 1.0099, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.683165087586377e-07, "epoch": 1.82, "percentage": 60.76, "elapsed_time": "1 day, 13:08:00", "remaining_time": "23:58:47"}
143
+ {"current_steps": 1430, "total_steps": 2337, "loss": 0.9737, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.556650051291264e-07, "epoch": 1.84, "percentage": 61.19, "elapsed_time": "1 day, 13:23:41", "remaining_time": "23:43:05"}
144
+ {"current_steps": 1440, "total_steps": 2337, "loss": 0.9962, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.430757253587901e-07, "epoch": 1.85, "percentage": 61.62, "elapsed_time": "1 day, 13:39:22", "remaining_time": "23:27:24"}
145
+ {"current_steps": 1450, "total_steps": 2337, "loss": 0.9736, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.305509444228219e-07, "epoch": 1.86, "percentage": 62.05, "elapsed_time": "1 day, 13:55:03", "remaining_time": "23:11:42"}
146
+ {"current_steps": 1460, "total_steps": 2337, "loss": 0.9813, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.180929256410027e-07, "epoch": 1.87, "percentage": 62.47, "elapsed_time": "1 day, 14:10:44", "remaining_time": "22:56:01"}
147
+ {"current_steps": 1470, "total_steps": 2337, "loss": 1.0075, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.057039202687022e-07, "epoch": 1.89, "percentage": 62.9, "elapsed_time": "1 day, 14:26:25", "remaining_time": "22:40:19"}
148
+ {"current_steps": 1480, "total_steps": 2337, "loss": 0.9858, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.93386167090062e-07, "epoch": 1.9, "percentage": 63.33, "elapsed_time": "1 day, 14:42:08", "remaining_time": "22:24:38"}
149
+ {"current_steps": 1490, "total_steps": 2337, "loss": 0.992, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.811418920134277e-07, "epoch": 1.91, "percentage": 63.76, "elapsed_time": "1 day, 14:57:49", "remaining_time": "22:08:56"}
150
+ {"current_steps": 1500, "total_steps": 2337, "loss": 0.9919, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.689733076691148e-07, "epoch": 1.93, "percentage": 64.18, "elapsed_time": "1 day, 15:13:29", "remaining_time": "21:53:15"}
151
+ {"current_steps": 1510, "total_steps": 2337, "loss": 0.9786, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.56882613009567e-07, "epoch": 1.94, "percentage": 64.61, "elapsed_time": "1 day, 15:29:10", "remaining_time": "21:37:33"}
152
+ {"current_steps": 1520, "total_steps": 2337, "loss": 0.9814, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.448719929119915e-07, "epoch": 1.95, "percentage": 65.04, "elapsed_time": "1 day, 15:44:50", "remaining_time": "21:21:51"}
153
+ {"current_steps": 1530, "total_steps": 2337, "loss": 0.9847, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.329436177835339e-07, "epoch": 1.96, "percentage": 65.47, "elapsed_time": "1 day, 16:00:31", "remaining_time": "21:06:09"}
154
+ {"current_steps": 1540, "total_steps": 2337, "loss": 0.9752, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.210996431690722e-07, "epoch": 1.98, "percentage": 65.9, "elapsed_time": "1 day, 16:16:13", "remaining_time": "20:50:28"}
155
+ {"current_steps": 1550, "total_steps": 2337, "loss": 1.0272, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.093422093616909e-07, "epoch": 1.99, "percentage": 66.32, "elapsed_time": "1 day, 16:31:54", "remaining_time": "20:34:46"}
156
+ {"current_steps": 1560, "total_steps": 2337, "loss": 1.0187, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.976734410159165e-07, "epoch": 2.0, "percentage": 66.75, "elapsed_time": "1 day, 16:47:36", "remaining_time": "20:19:05"}
157
+ {"current_steps": 1570, "total_steps": 2337, "loss": 0.9922, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.860954467637762e-07, "epoch": 2.02, "percentage": 67.18, "elapsed_time": "1 day, 17:03:17", "remaining_time": "20:03:24"}
158
+ {"current_steps": 1580, "total_steps": 2337, "loss": 0.9881, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.7461031883375335e-07, "epoch": 2.03, "percentage": 67.61, "elapsed_time": "1 day, 17:18:58", "remaining_time": "19:47:42"}
159
+ {"current_steps": 1590, "total_steps": 2337, "loss": 0.9972, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.632201326727041e-07, "epoch": 2.04, "percentage": 68.04, "elapsed_time": "1 day, 17:34:40", "remaining_time": "19:32:01"}
160
+ {"current_steps": 1600, "total_steps": 2337, "loss": 0.9841, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.519269465708125e-07, "epoch": 2.05, "percentage": 68.46, "elapsed_time": "1 day, 17:50:21", "remaining_time": "19:16:19"}
161
+ {"current_steps": 1610, "total_steps": 2337, "loss": 1.0147, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.407328012896393e-07, "epoch": 2.07, "percentage": 68.89, "elapsed_time": "1 day, 18:06:01", "remaining_time": "19:00:38"}
162
+ {"current_steps": 1620, "total_steps": 2337, "loss": 0.9832, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.2963971969334254e-07, "epoch": 2.08, "percentage": 69.32, "elapsed_time": "1 day, 18:21:42", "remaining_time": "18:44:56"}
163
+ {"current_steps": 1630, "total_steps": 2337, "loss": 1.0031, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.186497063831316e-07, "epoch": 2.09, "percentage": 69.75, "elapsed_time": "1 day, 18:37:22", "remaining_time": "18:29:14"}
164
+ {"current_steps": 1640, "total_steps": 2337, "loss": 0.9694, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.0776474733502007e-07, "epoch": 2.11, "percentage": 70.18, "elapsed_time": "1 day, 18:53:03", "remaining_time": "18:13:33"}
165
+ {"current_steps": 1650, "total_steps": 2337, "loss": 0.9836, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.9698680954094645e-07, "epoch": 2.12, "percentage": 70.6, "elapsed_time": "1 day, 19:08:45", "remaining_time": "17:57:51"}
166
+ {"current_steps": 1660, "total_steps": 2337, "loss": 0.9895, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.8631784065332253e-07, "epoch": 2.13, "percentage": 71.03, "elapsed_time": "1 day, 19:24:25", "remaining_time": "17:42:10"}
167
+ {"current_steps": 1670, "total_steps": 2337, "loss": 0.9894, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.7575976863308156e-07, "epoch": 2.14, "percentage": 71.46, "elapsed_time": "1 day, 19:40:06", "remaining_time": "17:26:28"}
168
+ {"current_steps": 1680, "total_steps": 2337, "loss": 0.9978, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.653145014012766e-07, "epoch": 2.16, "percentage": 71.89, "elapsed_time": "1 day, 19:55:46", "remaining_time": "17:10:46"}
169
+ {"current_steps": 1690, "total_steps": 2337, "loss": 0.9492, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.5498392649431087e-07, "epoch": 2.17, "percentage": 72.31, "elapsed_time": "1 day, 20:11:27", "remaining_time": "16:55:05"}
170
+ {"current_steps": 1700, "total_steps": 2337, "loss": 0.9666, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.447699107228412e-07, "epoch": 2.18, "percentage": 72.74, "elapsed_time": "1 day, 20:27:08", "remaining_time": "16:39:23"}
171
+ {"current_steps": 1710, "total_steps": 2337, "loss": 0.9963, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.3467429983443476e-07, "epoch": 2.2, "percentage": 73.17, "elapsed_time": "1 day, 20:42:49", "remaining_time": "16:23:42"}
172
+ {"current_steps": 1720, "total_steps": 2337, "loss": 0.9629, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.2469891818002715e-07, "epoch": 2.21, "percentage": 73.6, "elapsed_time": "1 day, 20:58:31", "remaining_time": "16:08:00"}
173
+ {"current_steps": 1730, "total_steps": 2337, "loss": 1.0002, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.148455683842507e-07, "epoch": 2.22, "percentage": 74.03, "elapsed_time": "1 day, 21:14:12", "remaining_time": "15:52:19"}
174
+ {"current_steps": 1740, "total_steps": 2337, "loss": 1.0029, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.0511603101968475e-07, "epoch": 2.23, "percentage": 74.45, "elapsed_time": "1 day, 21:29:53", "remaining_time": "15:36:38"}
175
+ {"current_steps": 1750, "total_steps": 2337, "loss": 0.9456, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.9551206428509446e-07, "epoch": 2.25, "percentage": 74.88, "elapsed_time": "1 day, 21:45:34", "remaining_time": "15:20:56"}
176
+ {"current_steps": 1760, "total_steps": 2337, "loss": 0.9792, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.860354036877113e-07, "epoch": 2.26, "percentage": 75.31, "elapsed_time": "1 day, 22:01:16", "remaining_time": "15:05:15"}
177
+ {"current_steps": 1770, "total_steps": 2337, "loss": 0.9614, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.7668776172961375e-07, "epoch": 2.27, "percentage": 75.74, "elapsed_time": "1 day, 22:16:58", "remaining_time": "14:49:34"}
178
+ {"current_steps": 1780, "total_steps": 2337, "loss": 1.0142, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.6747082759826613e-07, "epoch": 2.28, "percentage": 76.17, "elapsed_time": "1 day, 22:32:42", "remaining_time": "14:33:54"}
179
+ {"current_steps": 1790, "total_steps": 2337, "loss": 0.9993, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.583862668612693e-07, "epoch": 2.3, "percentage": 76.59, "elapsed_time": "1 day, 22:48:24", "remaining_time": "14:18:12"}
180
+ {"current_steps": 1800, "total_steps": 2337, "loss": 1.0057, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.4943572116538205e-07, "epoch": 2.31, "percentage": 77.02, "elapsed_time": "1 day, 23:04:09", "remaining_time": "14:02:32"}
181
+ {"current_steps": 1810, "total_steps": 2337, "loss": 0.9717, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.4062080793986004e-07, "epoch": 2.32, "percentage": 77.45, "elapsed_time": "1 day, 23:19:50", "remaining_time": "13:46:50"}
182
+ {"current_steps": 1820, "total_steps": 2337, "loss": 1.0034, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.3194312010417927e-07, "epoch": 2.34, "percentage": 77.88, "elapsed_time": "1 day, 23:35:34", "remaining_time": "13:31:10"}
183
+ {"current_steps": 1830, "total_steps": 2337, "loss": 0.9612, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.2340422578017958e-07, "epoch": 2.35, "percentage": 78.31, "elapsed_time": "1 day, 23:51:16", "remaining_time": "13:15:28"}
184
+ {"current_steps": 1840, "total_steps": 2337, "loss": 0.9932, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.150056680086958e-07, "epoch": 2.36, "percentage": 78.73, "elapsed_time": "2 days, 0:06:58", "remaining_time": "12:59:47"}
185
+ {"current_steps": 1850, "total_steps": 2337, "loss": 1.0122, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.0674896447071833e-07, "epoch": 2.37, "percentage": 79.16, "elapsed_time": "2 days, 0:22:38", "remaining_time": "12:44:06"}
186
+ {"current_steps": 1860, "total_steps": 2337, "loss": 1.0008, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9863560721313698e-07, "epoch": 2.39, "percentage": 79.59, "elapsed_time": "2 days, 0:38:22", "remaining_time": "12:28:25"}
187
+ {"current_steps": 1870, "total_steps": 2337, "loss": 1.0085, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9066706237911756e-07, "epoch": 2.4, "percentage": 80.02, "elapsed_time": "2 days, 0:54:03", "remaining_time": "12:12:43"}
188
+ {"current_steps": 1880, "total_steps": 2337, "loss": 0.9867, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.8284476994315835e-07, "epoch": 2.41, "percentage": 80.45, "elapsed_time": "2 days, 1:09:45", "remaining_time": "11:57:02"}
189
+ {"current_steps": 1890, "total_steps": 2337, "loss": 0.987, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.7517014345087766e-07, "epoch": 2.43, "percentage": 80.87, "elapsed_time": "2 days, 1:25:27", "remaining_time": "11:41:21"}
190
+ {"current_steps": 1900, "total_steps": 2337, "loss": 0.9703, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.6764456976357277e-07, "epoch": 2.44, "percentage": 81.3, "elapsed_time": "2 days, 1:41:09", "remaining_time": "11:25:39"}
191
+ {"current_steps": 1910, "total_steps": 2337, "loss": 0.9949, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.6026940880760797e-07, "epoch": 2.45, "percentage": 81.73, "elapsed_time": "2 days, 1:56:50", "remaining_time": "11:09:58"}
192
+ {"current_steps": 1920, "total_steps": 2337, "loss": 1.0132, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.5304599332866197e-07, "epoch": 2.46, "percentage": 82.16, "elapsed_time": "2 days, 2:12:30", "remaining_time": "10:54:16"}
193
+ {"current_steps": 1930, "total_steps": 2337, "loss": 1.0076, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.459756286508945e-07, "epoch": 2.48, "percentage": 82.58, "elapsed_time": "2 days, 2:28:10", "remaining_time": "10:38:35"}
194
+ {"current_steps": 1940, "total_steps": 2337, "loss": 0.9801, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.390595924410609e-07, "epoch": 2.49, "percentage": 83.01, "elapsed_time": "2 days, 2:43:52", "remaining_time": "10:22:53"}
195
+ {"current_steps": 1950, "total_steps": 2337, "loss": 0.9947, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.322991344776323e-07, "epoch": 2.5, "percentage": 83.44, "elapsed_time": "2 days, 2:59:36", "remaining_time": "10:07:12"}
196
+ {"current_steps": 1960, "total_steps": 2337, "loss": 0.9898, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.256954764249486e-07, "epoch": 2.52, "percentage": 83.87, "elapsed_time": "2 days, 3:15:17", "remaining_time": "9:51:31"}
197
+ {"current_steps": 1970, "total_steps": 2337, "loss": 1.0, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.1924981161245574e-07, "epoch": 2.53, "percentage": 84.3, "elapsed_time": "2 days, 3:30:57", "remaining_time": "9:35:49"}
198
+ {"current_steps": 1980, "total_steps": 2337, "loss": 0.9637, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.1296330481906247e-07, "epoch": 2.54, "percentage": 84.72, "elapsed_time": "2 days, 3:46:38", "remaining_time": "9:20:08"}
199
+ {"current_steps": 1990, "total_steps": 2337, "loss": 1.0058, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.0683709206265635e-07, "epoch": 2.55, "percentage": 85.15, "elapsed_time": "2 days, 4:02:22", "remaining_time": "9:04:27"}
200
+ {"current_steps": 2000, "total_steps": 2337, "loss": 1.0164, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.0087228039481643e-07, "epoch": 2.57, "percentage": 85.58, "elapsed_time": "2 days, 4:18:03", "remaining_time": "8:48:45"}
201
+ {"current_steps": 2010, "total_steps": 2337, "loss": 0.9956, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.506994770076115e-08, "epoch": 2.58, "percentage": 86.01, "elapsed_time": "2 days, 4:35:00", "remaining_time": "8:33:16"}
202
+ {"current_steps": 2020, "total_steps": 2337, "loss": 0.9945, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.94311425045674e-08, "epoch": 2.59, "percentage": 86.44, "elapsed_time": "2 days, 4:50:42", "remaining_time": "8:17:34"}
203
+ {"current_steps": 2030, "total_steps": 2337, "loss": 0.9916, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.395688377969235e-08, "epoch": 2.61, "percentage": 86.86, "elapsed_time": "2 days, 5:06:24", "remaining_time": "8:01:53"}
204
+ {"current_steps": 2040, "total_steps": 2337, "loss": 0.998, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.864816076484049e-08, "epoch": 2.62, "percentage": 87.29, "elapsed_time": "2 days, 5:22:06", "remaining_time": "7:46:11"}
205
+ {"current_steps": 2050, "total_steps": 2337, "loss": 0.9892, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.350593278519823e-08, "epoch": 2.63, "percentage": 87.72, "elapsed_time": "2 days, 5:37:47", "remaining_time": "7:30:29"}
206
+ {"current_steps": 2060, "total_steps": 2337, "loss": 0.9772, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.853112907907854e-08, "epoch": 2.64, "percentage": 88.15, "elapsed_time": "2 days, 5:53:28", "remaining_time": "7:14:47"}
207
+ {"current_steps": 2070, "total_steps": 2337, "loss": 0.9784, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.372464862999949e-08, "epoch": 2.66, "percentage": 88.58, "elapsed_time": "2 days, 6:09:15", "remaining_time": "6:59:06"}
208
+ {"current_steps": 2080, "total_steps": 2337, "loss": 0.9986, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.908736000423309e-08, "epoch": 2.67, "percentage": 89.0, "elapsed_time": "2 days, 6:24:58", "remaining_time": "6:43:24"}
209
+ {"current_steps": 2090, "total_steps": 2337, "loss": 0.978, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.462010119384664e-08, "epoch": 2.68, "percentage": 89.43, "elapsed_time": "2 days, 6:40:40", "remaining_time": "6:27:43"}
210
+ {"current_steps": 2100, "total_steps": 2337, "loss": 0.9783, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.0323679465273605e-08, "epoch": 2.7, "percentage": 89.86, "elapsed_time": "2 days, 6:56:22", "remaining_time": "6:12:01"}
211
+ {"current_steps": 2110, "total_steps": 2337, "loss": 0.9555, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.619887121343324e-08, "epoch": 2.71, "percentage": 90.29, "elapsed_time": "2 days, 7:12:04", "remaining_time": "5:56:19"}
212
+ {"current_steps": 2120, "total_steps": 2337, "loss": 0.9857, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.2246421821431123e-08, "epoch": 2.72, "percentage": 90.71, "elapsed_time": "2 days, 7:27:46", "remaining_time": "5:40:37"}
213
+ {"current_steps": 2130, "total_steps": 2337, "loss": 0.961, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.846704552586244e-08, "epoch": 2.73, "percentage": 91.14, "elapsed_time": "2 days, 7:43:28", "remaining_time": "5:24:55"}
214
+ {"current_steps": 2140, "total_steps": 2337, "loss": 0.9973, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.4861425287744276e-08, "epoch": 2.75, "percentage": 91.57, "elapsed_time": "2 days, 7:59:13", "remaining_time": "5:09:14"}
215
+ {"current_steps": 2150, "total_steps": 2337, "loss": 0.9497, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.143021266910029e-08, "epoch": 2.76, "percentage": 92.0, "elapsed_time": "2 days, 8:14:54", "remaining_time": "4:53:32"}
216
+ {"current_steps": 2160, "total_steps": 2337, "loss": 0.9707, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.8174027715217263e-08, "epoch": 2.77, "percentage": 92.43, "elapsed_time": "2 days, 8:30:37", "remaining_time": "4:37:50"}
217
+ {"current_steps": 2170, "total_steps": 2337, "loss": 0.9564, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.5093458842599946e-08, "epoch": 2.79, "percentage": 92.85, "elapsed_time": "2 days, 8:46:19", "remaining_time": "4:22:08"}
218
+ {"current_steps": 2180, "total_steps": 2337, "loss": 0.9395, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.218906273263843e-08, "epoch": 2.8, "percentage": 93.28, "elapsed_time": "2 days, 9:02:01", "remaining_time": "4:06:26"}
219
+ {"current_steps": 2190, "total_steps": 2337, "loss": 0.9569, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9461364231012856e-08, "epoch": 2.81, "percentage": 93.71, "elapsed_time": "2 days, 9:17:44", "remaining_time": "3:50:45"}
220
+ {"current_steps": 2200, "total_steps": 2337, "loss": 1.0239, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.6910856252849382e-08, "epoch": 2.82, "percentage": 94.14, "elapsed_time": "2 days, 9:33:26", "remaining_time": "3:35:03"}
221
+ {"current_steps": 2210, "total_steps": 2337, "loss": 0.9657, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.4537999693646885e-08, "epoch": 2.84, "percentage": 94.57, "elapsed_time": "2 days, 9:49:07", "remaining_time": "3:19:21"}
222
+ {"current_steps": 2220, "total_steps": 2337, "loss": 0.9456, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.2343223345989917e-08, "epoch": 2.85, "percentage": 94.99, "elapsed_time": "2 days, 10:04:49", "remaining_time": "3:03:39"}
223
+ {"current_steps": 2230, "total_steps": 2337, "loss": 1.0241, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.0326923822062461e-08, "epoch": 2.86, "percentage": 95.42, "elapsed_time": "2 days, 10:20:32", "remaining_time": "2:47:57"}
224
+ {"current_steps": 2240, "total_steps": 2337, "loss": 1.0016, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.489465481977708e-09, "epoch": 2.88, "percentage": 95.85, "elapsed_time": "2 days, 10:36:17", "remaining_time": "2:32:16"}
225
+ {"current_steps": 2250, "total_steps": 2337, "loss": 0.9713, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.83118036793473e-09, "epoch": 2.89, "percentage": 96.28, "elapsed_time": "2 days, 10:51:59", "remaining_time": "2:16:34"}
226
+ {"current_steps": 2260, "total_steps": 2337, "loss": 0.9896, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.352368144216801e-09, "epoch": 2.9, "percentage": 96.71, "elapsed_time": "2 days, 11:07:43", "remaining_time": "2:00:52"}
227
+ {"current_steps": 2270, "total_steps": 2337, "loss": 0.976, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.053296043039389e-09, "epoch": 2.91, "percentage": 97.13, "elapsed_time": "2 days, 11:23:25", "remaining_time": "1:45:10"}
228
+ {"current_steps": 2280, "total_steps": 2337, "loss": 1.0155, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.934198816259559e-09, "epoch": 2.93, "percentage": 97.56, "elapsed_time": "2 days, 11:39:07", "remaining_time": "1:29:28"}
229
+ {"current_steps": 2290, "total_steps": 2337, "loss": 0.9782, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9952786929543495e-09, "epoch": 2.94, "percentage": 97.99, "elapsed_time": "2 days, 11:54:53", "remaining_time": "1:13:46"}
230
+ {"current_steps": 2300, "total_steps": 2337, "loss": 1.015, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.236705342876898e-09, "epoch": 2.95, "percentage": 98.42, "elapsed_time": "2 days, 12:10:36", "remaining_time": "0:58:05"}
231
+ {"current_steps": 2310, "total_steps": 2337, "loss": 0.9742, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.586158457954072e-10, "epoch": 2.97, "percentage": 98.84, "elapsed_time": "2 days, 12:26:23", "remaining_time": "0:42:23"}
232
+ {"current_steps": 2320, "total_steps": 2337, "loss": 1.0113, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.611146667221842e-10, "epoch": 2.98, "percentage": 99.27, "elapsed_time": "2 days, 12:42:04", "remaining_time": "0:26:41"}
233
+ {"current_steps": 2330, "total_steps": 2337, "loss": 0.9955, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.4273637035852074e-11, "epoch": 2.99, "percentage": 99.7, "elapsed_time": "2 days, 12:57:47", "remaining_time": "0:10:59"}
234
+ {"current_steps": 2337, "total_steps": 2337, "loss": null, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": null, "epoch": 3.0, "percentage": 100.0, "elapsed_time": "2 days, 13:08:47", "remaining_time": "0:00:00"}
trainer_state.json ADDED
@@ -0,0 +1,1426 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 3.0,
5
+ "eval_steps": 500,
6
+ "global_step": 2337,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.01,
13
+ "learning_rate": 1.9999096463326385e-06,
14
+ "loss": 1.9904,
15
+ "step": 10
16
+ },
17
+ {
18
+ "epoch": 0.03,
19
+ "learning_rate": 1.9996386016581256e-06,
20
+ "loss": 1.9027,
21
+ "step": 20
22
+ },
23
+ {
24
+ "epoch": 0.04,
25
+ "learning_rate": 1.9991869149562214e-06,
26
+ "loss": 1.8885,
27
+ "step": 30
28
+ },
29
+ {
30
+ "epoch": 0.05,
31
+ "learning_rate": 1.9985546678500257e-06,
32
+ "loss": 1.9943,
33
+ "step": 40
34
+ },
35
+ {
36
+ "epoch": 0.06,
37
+ "learning_rate": 1.997741974591229e-06,
38
+ "loss": 1.9145,
39
+ "step": 50
40
+ },
41
+ {
42
+ "epoch": 0.08,
43
+ "learning_rate": 1.9967489820394627e-06,
44
+ "loss": 1.9079,
45
+ "step": 60
46
+ },
47
+ {
48
+ "epoch": 0.09,
49
+ "learning_rate": 1.995575869635765e-06,
50
+ "loss": 1.8246,
51
+ "step": 70
52
+ },
53
+ {
54
+ "epoch": 0.1,
55
+ "learning_rate": 1.994222849370152e-06,
56
+ "loss": 1.759,
57
+ "step": 80
58
+ },
59
+ {
60
+ "epoch": 0.12,
61
+ "learning_rate": 1.9926901657433085e-06,
62
+ "loss": 1.6747,
63
+ "step": 90
64
+ },
65
+ {
66
+ "epoch": 0.13,
67
+ "learning_rate": 1.990978095722409e-06,
68
+ "loss": 1.5992,
69
+ "step": 100
70
+ },
71
+ {
72
+ "epoch": 0.14,
73
+ "learning_rate": 1.9890869486910627e-06,
74
+ "loss": 1.5288,
75
+ "step": 110
76
+ },
77
+ {
78
+ "epoch": 0.15,
79
+ "learning_rate": 1.9870170663934104e-06,
80
+ "loss": 1.5143,
81
+ "step": 120
82
+ },
83
+ {
84
+ "epoch": 0.17,
85
+ "learning_rate": 1.9847688228723645e-06,
86
+ "loss": 1.4542,
87
+ "step": 130
88
+ },
89
+ {
90
+ "epoch": 0.18,
91
+ "learning_rate": 1.9823426244020197e-06,
92
+ "loss": 1.4081,
93
+ "step": 140
94
+ },
95
+ {
96
+ "epoch": 0.19,
97
+ "learning_rate": 1.979738909414235e-06,
98
+ "loss": 1.377,
99
+ "step": 150
100
+ },
101
+ {
102
+ "epoch": 0.21,
103
+ "learning_rate": 1.9769581484194063e-06,
104
+ "loss": 1.3373,
105
+ "step": 160
106
+ },
107
+ {
108
+ "epoch": 0.22,
109
+ "learning_rate": 1.9740008439214417e-06,
110
+ "loss": 1.3031,
111
+ "step": 170
112
+ },
113
+ {
114
+ "epoch": 0.23,
115
+ "learning_rate": 1.9708675303269545e-06,
116
+ "loss": 1.3004,
117
+ "step": 180
118
+ },
119
+ {
120
+ "epoch": 0.24,
121
+ "learning_rate": 1.9675587738486934e-06,
122
+ "loss": 1.2555,
123
+ "step": 190
124
+ },
125
+ {
126
+ "epoch": 0.26,
127
+ "learning_rate": 1.9640751724032234e-06,
128
+ "loss": 1.277,
129
+ "step": 200
130
+ },
131
+ {
132
+ "epoch": 0.27,
133
+ "learning_rate": 1.960417355502876e-06,
134
+ "loss": 1.2142,
135
+ "step": 210
136
+ },
137
+ {
138
+ "epoch": 0.28,
139
+ "learning_rate": 1.9565859841419945e-06,
140
+ "loss": 1.2023,
141
+ "step": 220
142
+ },
143
+ {
144
+ "epoch": 0.3,
145
+ "learning_rate": 1.9525817506774864e-06,
146
+ "loss": 1.2301,
147
+ "step": 230
148
+ },
149
+ {
150
+ "epoch": 0.31,
151
+ "learning_rate": 1.948405378703708e-06,
152
+ "loss": 1.217,
153
+ "step": 240
154
+ },
155
+ {
156
+ "epoch": 0.32,
157
+ "learning_rate": 1.9440576229217078e-06,
158
+ "loss": 1.1637,
159
+ "step": 250
160
+ },
161
+ {
162
+ "epoch": 0.33,
163
+ "learning_rate": 1.939539269002845e-06,
164
+ "loss": 1.214,
165
+ "step": 260
166
+ },
167
+ {
168
+ "epoch": 0.35,
169
+ "learning_rate": 1.9348511334468137e-06,
170
+ "loss": 1.1571,
171
+ "step": 270
172
+ },
173
+ {
174
+ "epoch": 0.36,
175
+ "learning_rate": 1.9299940634340954e-06,
176
+ "loss": 1.1329,
177
+ "step": 280
178
+ },
179
+ {
180
+ "epoch": 0.37,
181
+ "learning_rate": 1.9249689366728658e-06,
182
+ "loss": 1.1524,
183
+ "step": 290
184
+ },
185
+ {
186
+ "epoch": 0.39,
187
+ "learning_rate": 1.91977666124039e-06,
188
+ "loss": 1.1853,
189
+ "step": 300
190
+ },
191
+ {
192
+ "epoch": 0.4,
193
+ "learning_rate": 1.9144181754189207e-06,
194
+ "loss": 1.1614,
195
+ "step": 310
196
+ },
197
+ {
198
+ "epoch": 0.41,
199
+ "learning_rate": 1.90889444752615e-06,
200
+ "loss": 1.1166,
201
+ "step": 320
202
+ },
203
+ {
204
+ "epoch": 0.42,
205
+ "learning_rate": 1.903206475740223e-06,
206
+ "loss": 1.1298,
207
+ "step": 330
208
+ },
209
+ {
210
+ "epoch": 0.44,
211
+ "learning_rate": 1.8973552879193612e-06,
212
+ "loss": 1.1577,
213
+ "step": 340
214
+ },
215
+ {
216
+ "epoch": 0.45,
217
+ "learning_rate": 1.8913419414161202e-06,
218
+ "loss": 1.1421,
219
+ "step": 350
220
+ },
221
+ {
222
+ "epoch": 0.46,
223
+ "learning_rate": 1.88516752288632e-06,
224
+ "loss": 1.1569,
225
+ "step": 360
226
+ },
227
+ {
228
+ "epoch": 0.47,
229
+ "learning_rate": 1.8788331480926763e-06,
230
+ "loss": 1.0961,
231
+ "step": 370
232
+ },
233
+ {
234
+ "epoch": 0.49,
235
+ "learning_rate": 1.872339961703175e-06,
236
+ "loss": 1.1173,
237
+ "step": 380
238
+ },
239
+ {
240
+ "epoch": 0.5,
241
+ "learning_rate": 1.8656891370842223e-06,
242
+ "loss": 1.0712,
243
+ "step": 390
244
+ },
245
+ {
246
+ "epoch": 0.51,
247
+ "learning_rate": 1.8588818760886094e-06,
248
+ "loss": 1.1055,
249
+ "step": 400
250
+ },
251
+ {
252
+ "epoch": 0.53,
253
+ "learning_rate": 1.851919408838327e-06,
254
+ "loss": 1.1006,
255
+ "step": 410
256
+ },
257
+ {
258
+ "epoch": 0.54,
259
+ "learning_rate": 1.8448029935022754e-06,
260
+ "loss": 1.083,
261
+ "step": 420
262
+ },
263
+ {
264
+ "epoch": 0.55,
265
+ "learning_rate": 1.8375339160689023e-06,
266
+ "loss": 1.0588,
267
+ "step": 430
268
+ },
269
+ {
270
+ "epoch": 0.56,
271
+ "learning_rate": 1.830113490113817e-06,
272
+ "loss": 1.0791,
273
+ "step": 440
274
+ },
275
+ {
276
+ "epoch": 0.58,
277
+ "learning_rate": 1.822543056562416e-06,
278
+ "loss": 1.0623,
279
+ "step": 450
280
+ },
281
+ {
282
+ "epoch": 0.59,
283
+ "learning_rate": 1.8148239834475695e-06,
284
+ "loss": 1.1126,
285
+ "step": 460
286
+ },
287
+ {
288
+ "epoch": 0.6,
289
+ "learning_rate": 1.806957665662406e-06,
290
+ "loss": 1.0696,
291
+ "step": 470
292
+ },
293
+ {
294
+ "epoch": 0.62,
295
+ "learning_rate": 1.7989455247082472e-06,
296
+ "loss": 1.0573,
297
+ "step": 480
298
+ },
299
+ {
300
+ "epoch": 0.63,
301
+ "learning_rate": 1.7907890084377301e-06,
302
+ "loss": 1.0789,
303
+ "step": 490
304
+ },
305
+ {
306
+ "epoch": 0.64,
307
+ "learning_rate": 1.7824895907931706e-06,
308
+ "loss": 1.0964,
309
+ "step": 500
310
+ },
311
+ {
312
+ "epoch": 0.65,
313
+ "learning_rate": 1.7740487715402106e-06,
314
+ "loss": 1.0551,
315
+ "step": 510
316
+ },
317
+ {
318
+ "epoch": 0.67,
319
+ "learning_rate": 1.7654680759968007e-06,
320
+ "loss": 1.0563,
321
+ "step": 520
322
+ },
323
+ {
324
+ "epoch": 0.68,
325
+ "learning_rate": 1.756749054757562e-06,
326
+ "loss": 1.0522,
327
+ "step": 530
328
+ },
329
+ {
330
+ "epoch": 0.69,
331
+ "learning_rate": 1.747893283413585e-06,
332
+ "loss": 1.0446,
333
+ "step": 540
334
+ },
335
+ {
336
+ "epoch": 0.71,
337
+ "learning_rate": 1.738902362267705e-06,
338
+ "loss": 1.0667,
339
+ "step": 550
340
+ },
341
+ {
342
+ "epoch": 0.72,
343
+ "learning_rate": 1.72977791604532e-06,
344
+ "loss": 1.0395,
345
+ "step": 560
346
+ },
347
+ {
348
+ "epoch": 0.73,
349
+ "learning_rate": 1.7205215936007869e-06,
350
+ "loss": 1.039,
351
+ "step": 570
352
+ },
353
+ {
354
+ "epoch": 0.74,
355
+ "learning_rate": 1.7111350676194647e-06,
356
+ "loss": 1.0987,
357
+ "step": 580
358
+ },
359
+ {
360
+ "epoch": 0.76,
361
+ "learning_rate": 1.701620034315445e-06,
362
+ "loss": 1.0748,
363
+ "step": 590
364
+ },
365
+ {
366
+ "epoch": 0.77,
367
+ "learning_rate": 1.6919782131250366e-06,
368
+ "loss": 1.0446,
369
+ "step": 600
370
+ },
371
+ {
372
+ "epoch": 0.78,
373
+ "learning_rate": 1.6822113463960483e-06,
374
+ "loss": 1.0279,
375
+ "step": 610
376
+ },
377
+ {
378
+ "epoch": 0.8,
379
+ "learning_rate": 1.6723211990729355e-06,
380
+ "loss": 1.054,
381
+ "step": 620
382
+ },
383
+ {
384
+ "epoch": 0.81,
385
+ "learning_rate": 1.6623095583778613e-06,
386
+ "loss": 1.0391,
387
+ "step": 630
388
+ },
389
+ {
390
+ "epoch": 0.82,
391
+ "learning_rate": 1.652178233487731e-06,
392
+ "loss": 1.0812,
393
+ "step": 640
394
+ },
395
+ {
396
+ "epoch": 0.83,
397
+ "learning_rate": 1.6419290552072634e-06,
398
+ "loss": 1.0826,
399
+ "step": 650
400
+ },
401
+ {
402
+ "epoch": 0.85,
403
+ "learning_rate": 1.6315638756381484e-06,
404
+ "loss": 1.0497,
405
+ "step": 660
406
+ },
407
+ {
408
+ "epoch": 0.86,
409
+ "learning_rate": 1.6210845678443602e-06,
410
+ "loss": 1.0329,
411
+ "step": 670
412
+ },
413
+ {
414
+ "epoch": 0.87,
415
+ "learning_rate": 1.6104930255136794e-06,
416
+ "loss": 1.0663,
417
+ "step": 680
418
+ },
419
+ {
420
+ "epoch": 0.89,
421
+ "learning_rate": 1.5997911626154914e-06,
422
+ "loss": 1.059,
423
+ "step": 690
424
+ },
425
+ {
426
+ "epoch": 0.9,
427
+ "learning_rate": 1.5889809130549174e-06,
428
+ "loss": 1.0123,
429
+ "step": 700
430
+ },
431
+ {
432
+ "epoch": 0.91,
433
+ "learning_rate": 1.578064230323343e-06,
434
+ "loss": 1.0264,
435
+ "step": 710
436
+ },
437
+ {
438
+ "epoch": 0.92,
439
+ "learning_rate": 1.5670430871454081e-06,
440
+ "loss": 1.0411,
441
+ "step": 720
442
+ },
443
+ {
444
+ "epoch": 0.94,
445
+ "learning_rate": 1.555919475122523e-06,
446
+ "loss": 1.0428,
447
+ "step": 730
448
+ },
449
+ {
450
+ "epoch": 0.95,
451
+ "learning_rate": 1.544695404372968e-06,
452
+ "loss": 0.9994,
453
+ "step": 740
454
+ },
455
+ {
456
+ "epoch": 0.96,
457
+ "learning_rate": 1.533372903168653e-06,
458
+ "loss": 1.0452,
459
+ "step": 750
460
+ },
461
+ {
462
+ "epoch": 0.98,
463
+ "learning_rate": 1.5219540175685937e-06,
464
+ "loss": 1.02,
465
+ "step": 760
466
+ },
467
+ {
468
+ "epoch": 0.99,
469
+ "learning_rate": 1.5104408110491716e-06,
470
+ "loss": 1.0564,
471
+ "step": 770
472
+ },
473
+ {
474
+ "epoch": 1.0,
475
+ "learning_rate": 1.4988353641312515e-06,
476
+ "loss": 0.9889,
477
+ "step": 780
478
+ },
479
+ {
480
+ "epoch": 1.01,
481
+ "learning_rate": 1.487139774004214e-06,
482
+ "loss": 1.0452,
483
+ "step": 790
484
+ },
485
+ {
486
+ "epoch": 1.03,
487
+ "learning_rate": 1.4753561541469787e-06,
488
+ "loss": 1.0399,
489
+ "step": 800
490
+ },
491
+ {
492
+ "epoch": 1.04,
493
+ "learning_rate": 1.463486633946084e-06,
494
+ "loss": 1.0438,
495
+ "step": 810
496
+ },
497
+ {
498
+ "epoch": 1.05,
499
+ "learning_rate": 1.4515333583108893e-06,
500
+ "loss": 1.0134,
501
+ "step": 820
502
+ },
503
+ {
504
+ "epoch": 1.07,
505
+ "learning_rate": 1.439498487285976e-06,
506
+ "loss": 0.9949,
507
+ "step": 830
508
+ },
509
+ {
510
+ "epoch": 1.08,
511
+ "learning_rate": 1.427384195660811e-06,
512
+ "loss": 1.0184,
513
+ "step": 840
514
+ },
515
+ {
516
+ "epoch": 1.09,
517
+ "learning_rate": 1.4151926725767455e-06,
518
+ "loss": 1.0246,
519
+ "step": 850
520
+ },
521
+ {
522
+ "epoch": 1.1,
523
+ "learning_rate": 1.4029261211314222e-06,
524
+ "loss": 1.0428,
525
+ "step": 860
526
+ },
527
+ {
528
+ "epoch": 1.12,
529
+ "learning_rate": 1.3905867579806596e-06,
530
+ "loss": 1.0078,
531
+ "step": 870
532
+ },
533
+ {
534
+ "epoch": 1.13,
535
+ "learning_rate": 1.3781768129378844e-06,
536
+ "loss": 1.0223,
537
+ "step": 880
538
+ },
539
+ {
540
+ "epoch": 1.14,
541
+ "learning_rate": 1.3656985285711895e-06,
542
+ "loss": 0.9788,
543
+ "step": 890
544
+ },
545
+ {
546
+ "epoch": 1.16,
547
+ "learning_rate": 1.3531541597980845e-06,
548
+ "loss": 1.0417,
549
+ "step": 900
550
+ },
551
+ {
552
+ "epoch": 1.17,
553
+ "learning_rate": 1.340545973478016e-06,
554
+ "loss": 1.0315,
555
+ "step": 910
556
+ },
557
+ {
558
+ "epoch": 1.18,
559
+ "learning_rate": 1.32787624800273e-06,
560
+ "loss": 0.9775,
561
+ "step": 920
562
+ },
563
+ {
564
+ "epoch": 1.19,
565
+ "learning_rate": 1.3151472728845492e-06,
566
+ "loss": 0.9992,
567
+ "step": 930
568
+ },
569
+ {
570
+ "epoch": 1.21,
571
+ "learning_rate": 1.3023613483426399e-06,
572
+ "loss": 0.957,
573
+ "step": 940
574
+ },
575
+ {
576
+ "epoch": 1.22,
577
+ "learning_rate": 1.2895207848873487e-06,
578
+ "loss": 1.0227,
579
+ "step": 950
580
+ },
581
+ {
582
+ "epoch": 1.23,
583
+ "learning_rate": 1.2766279029026735e-06,
584
+ "loss": 1.0311,
585
+ "step": 960
586
+ },
587
+ {
588
+ "epoch": 1.25,
589
+ "learning_rate": 1.2636850322269553e-06,
590
+ "loss": 1.0237,
591
+ "step": 970
592
+ },
593
+ {
594
+ "epoch": 1.26,
595
+ "learning_rate": 1.250694511731857e-06,
596
+ "loss": 1.0338,
597
+ "step": 980
598
+ },
599
+ {
600
+ "epoch": 1.27,
601
+ "learning_rate": 1.2376586888997145e-06,
602
+ "loss": 1.0259,
603
+ "step": 990
604
+ },
605
+ {
606
+ "epoch": 1.28,
607
+ "learning_rate": 1.224579919399327e-06,
608
+ "loss": 1.0058,
609
+ "step": 1000
610
+ },
611
+ {
612
+ "epoch": 1.3,
613
+ "learning_rate": 1.2114605666602728e-06,
614
+ "loss": 0.9968,
615
+ "step": 1010
616
+ },
617
+ {
618
+ "epoch": 1.31,
619
+ "learning_rate": 1.1983030014458184e-06,
620
+ "loss": 0.9984,
621
+ "step": 1020
622
+ },
623
+ {
624
+ "epoch": 1.32,
625
+ "learning_rate": 1.1851096014245055e-06,
626
+ "loss": 1.0126,
627
+ "step": 1030
628
+ },
629
+ {
630
+ "epoch": 1.34,
631
+ "learning_rate": 1.1718827507404873e-06,
632
+ "loss": 1.0064,
633
+ "step": 1040
634
+ },
635
+ {
636
+ "epoch": 1.35,
637
+ "learning_rate": 1.1586248395826983e-06,
638
+ "loss": 0.9816,
639
+ "step": 1050
640
+ },
641
+ {
642
+ "epoch": 1.36,
643
+ "learning_rate": 1.1453382637529276e-06,
644
+ "loss": 1.0116,
645
+ "step": 1060
646
+ },
647
+ {
648
+ "epoch": 1.37,
649
+ "learning_rate": 1.1320254242328805e-06,
650
+ "loss": 0.9933,
651
+ "step": 1070
652
+ },
653
+ {
654
+ "epoch": 1.39,
655
+ "learning_rate": 1.1186887267503053e-06,
656
+ "loss": 1.0558,
657
+ "step": 1080
658
+ },
659
+ {
660
+ "epoch": 1.4,
661
+ "learning_rate": 1.1053305813442574e-06,
662
+ "loss": 0.9552,
663
+ "step": 1090
664
+ },
665
+ {
666
+ "epoch": 1.41,
667
+ "learning_rate": 1.0919534019295898e-06,
668
+ "loss": 0.9877,
669
+ "step": 1100
670
+ },
671
+ {
672
+ "epoch": 1.42,
673
+ "learning_rate": 1.078559605860741e-06,
674
+ "loss": 1.0232,
675
+ "step": 1110
676
+ },
677
+ {
678
+ "epoch": 1.44,
679
+ "learning_rate": 1.0651516134949003e-06,
680
+ "loss": 1.0201,
681
+ "step": 1120
682
+ },
683
+ {
684
+ "epoch": 1.45,
685
+ "learning_rate": 1.0517318477546319e-06,
686
+ "loss": 1.0047,
687
+ "step": 1130
688
+ },
689
+ {
690
+ "epoch": 1.46,
691
+ "learning_rate": 1.0383027336900353e-06,
692
+ "loss": 0.9861,
693
+ "step": 1140
694
+ },
695
+ {
696
+ "epoch": 1.48,
697
+ "learning_rate": 1.0248666980405212e-06,
698
+ "loss": 1.0059,
699
+ "step": 1150
700
+ },
701
+ {
702
+ "epoch": 1.49,
703
+ "learning_rate": 1.011426168796281e-06,
704
+ "loss": 0.9714,
705
+ "step": 1160
706
+ },
707
+ {
708
+ "epoch": 1.5,
709
+ "learning_rate": 9.979835747595316e-07,
710
+ "loss": 1.0207,
711
+ "step": 1170
712
+ },
713
+ {
714
+ "epoch": 1.51,
715
+ "learning_rate": 9.845413451056125e-07,
716
+ "loss": 1.0167,
717
+ "step": 1180
718
+ },
719
+ {
720
+ "epoch": 1.53,
721
+ "learning_rate": 9.71101908944018e-07,
722
+ "loss": 0.9979,
723
+ "step": 1190
724
+ },
725
+ {
726
+ "epoch": 1.54,
727
+ "learning_rate": 9.576676948794375e-07,
728
+ "loss": 1.0047,
729
+ "step": 1200
730
+ },
731
+ {
732
+ "epoch": 1.55,
733
+ "learning_rate": 9.442411305728873e-07,
734
+ "loss": 1.0121,
735
+ "step": 1210
736
+ },
737
+ {
738
+ "epoch": 1.57,
739
+ "learning_rate": 9.308246423030185e-07,
740
+ "loss": 1.0015,
741
+ "step": 1220
742
+ },
743
+ {
744
+ "epoch": 1.58,
745
+ "learning_rate": 9.174206545276677e-07,
746
+ "loss": 1.0325,
747
+ "step": 1230
748
+ },
749
+ {
750
+ "epoch": 1.59,
751
+ "learning_rate": 9.040315894457404e-07,
752
+ "loss": 1.019,
753
+ "step": 1240
754
+ },
755
+ {
756
+ "epoch": 1.6,
757
+ "learning_rate": 8.906598665595016e-07,
758
+ "loss": 0.993,
759
+ "step": 1250
760
+ },
761
+ {
762
+ "epoch": 1.62,
763
+ "learning_rate": 8.773079022373553e-07,
764
+ "loss": 0.9917,
765
+ "step": 1260
766
+ },
767
+ {
768
+ "epoch": 1.63,
769
+ "learning_rate": 8.63978109277187e-07,
770
+ "loss": 0.9697,
771
+ "step": 1270
772
+ },
773
+ {
774
+ "epoch": 1.64,
775
+ "learning_rate": 8.506728964703549e-07,
776
+ "loss": 0.973,
777
+ "step": 1280
778
+ },
779
+ {
780
+ "epoch": 1.66,
781
+ "learning_rate": 8.37394668166404e-07,
782
+ "loss": 1.0183,
783
+ "step": 1290
784
+ },
785
+ {
786
+ "epoch": 1.67,
787
+ "learning_rate": 8.241458238385798e-07,
788
+ "loss": 1.0098,
789
+ "step": 1300
790
+ },
791
+ {
792
+ "epoch": 1.68,
793
+ "learning_rate": 8.109287576502299e-07,
794
+ "loss": 0.9765,
795
+ "step": 1310
796
+ },
797
+ {
798
+ "epoch": 1.69,
799
+ "learning_rate": 7.977458580221578e-07,
800
+ "loss": 1.01,
801
+ "step": 1320
802
+ },
803
+ {
804
+ "epoch": 1.71,
805
+ "learning_rate": 7.845995072010187e-07,
806
+ "loss": 1.0378,
807
+ "step": 1330
808
+ },
809
+ {
810
+ "epoch": 1.72,
811
+ "learning_rate": 7.714920808288313e-07,
812
+ "loss": 0.9665,
813
+ "step": 1340
814
+ },
815
+ {
816
+ "epoch": 1.73,
817
+ "learning_rate": 7.584259475136804e-07,
818
+ "loss": 0.9927,
819
+ "step": 1350
820
+ },
821
+ {
822
+ "epoch": 1.75,
823
+ "learning_rate": 7.454034684016923e-07,
824
+ "loss": 0.9571,
825
+ "step": 1360
826
+ },
827
+ {
828
+ "epoch": 1.76,
829
+ "learning_rate": 7.324269967503587e-07,
830
+ "loss": 0.9813,
831
+ "step": 1370
832
+ },
833
+ {
834
+ "epoch": 1.77,
835
+ "learning_rate": 7.19498877503286e-07,
836
+ "loss": 0.9641,
837
+ "step": 1380
838
+ },
839
+ {
840
+ "epoch": 1.78,
841
+ "learning_rate": 7.066214468664467e-07,
842
+ "loss": 0.9696,
843
+ "step": 1390
844
+ },
845
+ {
846
+ "epoch": 1.8,
847
+ "learning_rate": 6.937970318860085e-07,
848
+ "loss": 0.9824,
849
+ "step": 1400
850
+ },
851
+ {
852
+ "epoch": 1.81,
853
+ "learning_rate": 6.810279500278223e-07,
854
+ "loss": 0.9829,
855
+ "step": 1410
856
+ },
857
+ {
858
+ "epoch": 1.82,
859
+ "learning_rate": 6.683165087586377e-07,
860
+ "loss": 1.0099,
861
+ "step": 1420
862
+ },
863
+ {
864
+ "epoch": 1.84,
865
+ "learning_rate": 6.556650051291264e-07,
866
+ "loss": 0.9737,
867
+ "step": 1430
868
+ },
869
+ {
870
+ "epoch": 1.85,
871
+ "learning_rate": 6.430757253587901e-07,
872
+ "loss": 0.9962,
873
+ "step": 1440
874
+ },
875
+ {
876
+ "epoch": 1.86,
877
+ "learning_rate": 6.305509444228219e-07,
878
+ "loss": 0.9736,
879
+ "step": 1450
880
+ },
881
+ {
882
+ "epoch": 1.87,
883
+ "learning_rate": 6.180929256410027e-07,
884
+ "loss": 0.9813,
885
+ "step": 1460
886
+ },
887
+ {
888
+ "epoch": 1.89,
889
+ "learning_rate": 6.057039202687022e-07,
890
+ "loss": 1.0075,
891
+ "step": 1470
892
+ },
893
+ {
894
+ "epoch": 1.9,
895
+ "learning_rate": 5.93386167090062e-07,
896
+ "loss": 0.9858,
897
+ "step": 1480
898
+ },
899
+ {
900
+ "epoch": 1.91,
901
+ "learning_rate": 5.811418920134277e-07,
902
+ "loss": 0.992,
903
+ "step": 1490
904
+ },
905
+ {
906
+ "epoch": 1.93,
907
+ "learning_rate": 5.689733076691148e-07,
908
+ "loss": 0.9919,
909
+ "step": 1500
910
+ },
911
+ {
912
+ "epoch": 1.94,
913
+ "learning_rate": 5.56882613009567e-07,
914
+ "loss": 0.9786,
915
+ "step": 1510
916
+ },
917
+ {
918
+ "epoch": 1.95,
919
+ "learning_rate": 5.448719929119915e-07,
920
+ "loss": 0.9814,
921
+ "step": 1520
922
+ },
923
+ {
924
+ "epoch": 1.96,
925
+ "learning_rate": 5.329436177835339e-07,
926
+ "loss": 0.9847,
927
+ "step": 1530
928
+ },
929
+ {
930
+ "epoch": 1.98,
931
+ "learning_rate": 5.210996431690722e-07,
932
+ "loss": 0.9752,
933
+ "step": 1540
934
+ },
935
+ {
936
+ "epoch": 1.99,
937
+ "learning_rate": 5.093422093616909e-07,
938
+ "loss": 1.0272,
939
+ "step": 1550
940
+ },
941
+ {
942
+ "epoch": 2.0,
943
+ "learning_rate": 4.976734410159165e-07,
944
+ "loss": 1.0187,
945
+ "step": 1560
946
+ },
947
+ {
948
+ "epoch": 2.02,
949
+ "learning_rate": 4.860954467637762e-07,
950
+ "loss": 0.9922,
951
+ "step": 1570
952
+ },
953
+ {
954
+ "epoch": 2.03,
955
+ "learning_rate": 4.7461031883375335e-07,
956
+ "loss": 0.9881,
957
+ "step": 1580
958
+ },
959
+ {
960
+ "epoch": 2.04,
961
+ "learning_rate": 4.632201326727041e-07,
962
+ "loss": 0.9972,
963
+ "step": 1590
964
+ },
965
+ {
966
+ "epoch": 2.05,
967
+ "learning_rate": 4.519269465708125e-07,
968
+ "loss": 0.9841,
969
+ "step": 1600
970
+ },
971
+ {
972
+ "epoch": 2.07,
973
+ "learning_rate": 4.407328012896393e-07,
974
+ "loss": 1.0147,
975
+ "step": 1610
976
+ },
977
+ {
978
+ "epoch": 2.08,
979
+ "learning_rate": 4.2963971969334254e-07,
980
+ "loss": 0.9832,
981
+ "step": 1620
982
+ },
983
+ {
984
+ "epoch": 2.09,
985
+ "learning_rate": 4.186497063831316e-07,
986
+ "loss": 1.0031,
987
+ "step": 1630
988
+ },
989
+ {
990
+ "epoch": 2.11,
991
+ "learning_rate": 4.0776474733502007e-07,
992
+ "loss": 0.9694,
993
+ "step": 1640
994
+ },
995
+ {
996
+ "epoch": 2.12,
997
+ "learning_rate": 3.9698680954094645e-07,
998
+ "loss": 0.9836,
999
+ "step": 1650
1000
+ },
1001
+ {
1002
+ "epoch": 2.13,
1003
+ "learning_rate": 3.8631784065332253e-07,
1004
+ "loss": 0.9895,
1005
+ "step": 1660
1006
+ },
1007
+ {
1008
+ "epoch": 2.14,
1009
+ "learning_rate": 3.7575976863308156e-07,
1010
+ "loss": 0.9894,
1011
+ "step": 1670
1012
+ },
1013
+ {
1014
+ "epoch": 2.16,
1015
+ "learning_rate": 3.653145014012766e-07,
1016
+ "loss": 0.9978,
1017
+ "step": 1680
1018
+ },
1019
+ {
1020
+ "epoch": 2.17,
1021
+ "learning_rate": 3.5498392649431087e-07,
1022
+ "loss": 0.9492,
1023
+ "step": 1690
1024
+ },
1025
+ {
1026
+ "epoch": 2.18,
1027
+ "learning_rate": 3.447699107228412e-07,
1028
+ "loss": 0.9666,
1029
+ "step": 1700
1030
+ },
1031
+ {
1032
+ "epoch": 2.2,
1033
+ "learning_rate": 3.3467429983443476e-07,
1034
+ "loss": 0.9963,
1035
+ "step": 1710
1036
+ },
1037
+ {
1038
+ "epoch": 2.21,
1039
+ "learning_rate": 3.2469891818002715e-07,
1040
+ "loss": 0.9629,
1041
+ "step": 1720
1042
+ },
1043
+ {
1044
+ "epoch": 2.22,
1045
+ "learning_rate": 3.148455683842507e-07,
1046
+ "loss": 1.0002,
1047
+ "step": 1730
1048
+ },
1049
+ {
1050
+ "epoch": 2.23,
1051
+ "learning_rate": 3.0511603101968475e-07,
1052
+ "loss": 1.0029,
1053
+ "step": 1740
1054
+ },
1055
+ {
1056
+ "epoch": 2.25,
1057
+ "learning_rate": 2.9551206428509446e-07,
1058
+ "loss": 0.9456,
1059
+ "step": 1750
1060
+ },
1061
+ {
1062
+ "epoch": 2.26,
1063
+ "learning_rate": 2.860354036877113e-07,
1064
+ "loss": 0.9792,
1065
+ "step": 1760
1066
+ },
1067
+ {
1068
+ "epoch": 2.27,
1069
+ "learning_rate": 2.7668776172961375e-07,
1070
+ "loss": 0.9614,
1071
+ "step": 1770
1072
+ },
1073
+ {
1074
+ "epoch": 2.28,
1075
+ "learning_rate": 2.6747082759826613e-07,
1076
+ "loss": 1.0142,
1077
+ "step": 1780
1078
+ },
1079
+ {
1080
+ "epoch": 2.3,
1081
+ "learning_rate": 2.583862668612693e-07,
1082
+ "loss": 0.9993,
1083
+ "step": 1790
1084
+ },
1085
+ {
1086
+ "epoch": 2.31,
1087
+ "learning_rate": 2.4943572116538205e-07,
1088
+ "loss": 1.0057,
1089
+ "step": 1800
1090
+ },
1091
+ {
1092
+ "epoch": 2.32,
1093
+ "learning_rate": 2.4062080793986004e-07,
1094
+ "loss": 0.9717,
1095
+ "step": 1810
1096
+ },
1097
+ {
1098
+ "epoch": 2.34,
1099
+ "learning_rate": 2.3194312010417927e-07,
1100
+ "loss": 1.0034,
1101
+ "step": 1820
1102
+ },
1103
+ {
1104
+ "epoch": 2.35,
1105
+ "learning_rate": 2.2340422578017958e-07,
1106
+ "loss": 0.9612,
1107
+ "step": 1830
1108
+ },
1109
+ {
1110
+ "epoch": 2.36,
1111
+ "learning_rate": 2.150056680086958e-07,
1112
+ "loss": 0.9932,
1113
+ "step": 1840
1114
+ },
1115
+ {
1116
+ "epoch": 2.37,
1117
+ "learning_rate": 2.0674896447071833e-07,
1118
+ "loss": 1.0122,
1119
+ "step": 1850
1120
+ },
1121
+ {
1122
+ "epoch": 2.39,
1123
+ "learning_rate": 1.9863560721313698e-07,
1124
+ "loss": 1.0008,
1125
+ "step": 1860
1126
+ },
1127
+ {
1128
+ "epoch": 2.4,
1129
+ "learning_rate": 1.9066706237911756e-07,
1130
+ "loss": 1.0085,
1131
+ "step": 1870
1132
+ },
1133
+ {
1134
+ "epoch": 2.41,
1135
+ "learning_rate": 1.8284476994315835e-07,
1136
+ "loss": 0.9867,
1137
+ "step": 1880
1138
+ },
1139
+ {
1140
+ "epoch": 2.43,
1141
+ "learning_rate": 1.7517014345087766e-07,
1142
+ "loss": 0.987,
1143
+ "step": 1890
1144
+ },
1145
+ {
1146
+ "epoch": 2.44,
1147
+ "learning_rate": 1.6764456976357277e-07,
1148
+ "loss": 0.9703,
1149
+ "step": 1900
1150
+ },
1151
+ {
1152
+ "epoch": 2.45,
1153
+ "learning_rate": 1.6026940880760797e-07,
1154
+ "loss": 0.9949,
1155
+ "step": 1910
1156
+ },
1157
+ {
1158
+ "epoch": 2.46,
1159
+ "learning_rate": 1.5304599332866197e-07,
1160
+ "loss": 1.0132,
1161
+ "step": 1920
1162
+ },
1163
+ {
1164
+ "epoch": 2.48,
1165
+ "learning_rate": 1.459756286508945e-07,
1166
+ "loss": 1.0076,
1167
+ "step": 1930
1168
+ },
1169
+ {
1170
+ "epoch": 2.49,
1171
+ "learning_rate": 1.390595924410609e-07,
1172
+ "loss": 0.9801,
1173
+ "step": 1940
1174
+ },
1175
+ {
1176
+ "epoch": 2.5,
1177
+ "learning_rate": 1.322991344776323e-07,
1178
+ "loss": 0.9947,
1179
+ "step": 1950
1180
+ },
1181
+ {
1182
+ "epoch": 2.52,
1183
+ "learning_rate": 1.256954764249486e-07,
1184
+ "loss": 0.9898,
1185
+ "step": 1960
1186
+ },
1187
+ {
1188
+ "epoch": 2.53,
1189
+ "learning_rate": 1.1924981161245574e-07,
1190
+ "loss": 1.0,
1191
+ "step": 1970
1192
+ },
1193
+ {
1194
+ "epoch": 2.54,
1195
+ "learning_rate": 1.1296330481906247e-07,
1196
+ "loss": 0.9637,
1197
+ "step": 1980
1198
+ },
1199
+ {
1200
+ "epoch": 2.55,
1201
+ "learning_rate": 1.0683709206265635e-07,
1202
+ "loss": 1.0058,
1203
+ "step": 1990
1204
+ },
1205
+ {
1206
+ "epoch": 2.57,
1207
+ "learning_rate": 1.0087228039481643e-07,
1208
+ "loss": 1.0164,
1209
+ "step": 2000
1210
+ },
1211
+ {
1212
+ "epoch": 2.58,
1213
+ "learning_rate": 9.506994770076115e-08,
1214
+ "loss": 0.9956,
1215
+ "step": 2010
1216
+ },
1217
+ {
1218
+ "epoch": 2.59,
1219
+ "learning_rate": 8.94311425045674e-08,
1220
+ "loss": 0.9945,
1221
+ "step": 2020
1222
+ },
1223
+ {
1224
+ "epoch": 2.61,
1225
+ "learning_rate": 8.395688377969235e-08,
1226
+ "loss": 0.9916,
1227
+ "step": 2030
1228
+ },
1229
+ {
1230
+ "epoch": 2.62,
1231
+ "learning_rate": 7.864816076484049e-08,
1232
+ "loss": 0.998,
1233
+ "step": 2040
1234
+ },
1235
+ {
1236
+ "epoch": 2.63,
1237
+ "learning_rate": 7.350593278519823e-08,
1238
+ "loss": 0.9892,
1239
+ "step": 2050
1240
+ },
1241
+ {
1242
+ "epoch": 2.64,
1243
+ "learning_rate": 6.853112907907854e-08,
1244
+ "loss": 0.9772,
1245
+ "step": 2060
1246
+ },
1247
+ {
1248
+ "epoch": 2.66,
1249
+ "learning_rate": 6.372464862999949e-08,
1250
+ "loss": 0.9784,
1251
+ "step": 2070
1252
+ },
1253
+ {
1254
+ "epoch": 2.67,
1255
+ "learning_rate": 5.908736000423309e-08,
1256
+ "loss": 0.9986,
1257
+ "step": 2080
1258
+ },
1259
+ {
1260
+ "epoch": 2.68,
1261
+ "learning_rate": 5.462010119384664e-08,
1262
+ "loss": 0.978,
1263
+ "step": 2090
1264
+ },
1265
+ {
1266
+ "epoch": 2.7,
1267
+ "learning_rate": 5.0323679465273605e-08,
1268
+ "loss": 0.9783,
1269
+ "step": 2100
1270
+ },
1271
+ {
1272
+ "epoch": 2.71,
1273
+ "learning_rate": 4.619887121343324e-08,
1274
+ "loss": 0.9555,
1275
+ "step": 2110
1276
+ },
1277
+ {
1278
+ "epoch": 2.72,
1279
+ "learning_rate": 4.2246421821431123e-08,
1280
+ "loss": 0.9857,
1281
+ "step": 2120
1282
+ },
1283
+ {
1284
+ "epoch": 2.73,
1285
+ "learning_rate": 3.846704552586244e-08,
1286
+ "loss": 0.961,
1287
+ "step": 2130
1288
+ },
1289
+ {
1290
+ "epoch": 2.75,
1291
+ "learning_rate": 3.4861425287744276e-08,
1292
+ "loss": 0.9973,
1293
+ "step": 2140
1294
+ },
1295
+ {
1296
+ "epoch": 2.76,
1297
+ "learning_rate": 3.143021266910029e-08,
1298
+ "loss": 0.9497,
1299
+ "step": 2150
1300
+ },
1301
+ {
1302
+ "epoch": 2.77,
1303
+ "learning_rate": 2.8174027715217263e-08,
1304
+ "loss": 0.9707,
1305
+ "step": 2160
1306
+ },
1307
+ {
1308
+ "epoch": 2.79,
1309
+ "learning_rate": 2.5093458842599946e-08,
1310
+ "loss": 0.9564,
1311
+ "step": 2170
1312
+ },
1313
+ {
1314
+ "epoch": 2.8,
1315
+ "learning_rate": 2.218906273263843e-08,
1316
+ "loss": 0.9395,
1317
+ "step": 2180
1318
+ },
1319
+ {
1320
+ "epoch": 2.81,
1321
+ "learning_rate": 1.9461364231012856e-08,
1322
+ "loss": 0.9569,
1323
+ "step": 2190
1324
+ },
1325
+ {
1326
+ "epoch": 2.82,
1327
+ "learning_rate": 1.6910856252849382e-08,
1328
+ "loss": 1.0239,
1329
+ "step": 2200
1330
+ },
1331
+ {
1332
+ "epoch": 2.84,
1333
+ "learning_rate": 1.4537999693646885e-08,
1334
+ "loss": 0.9657,
1335
+ "step": 2210
1336
+ },
1337
+ {
1338
+ "epoch": 2.85,
1339
+ "learning_rate": 1.2343223345989917e-08,
1340
+ "loss": 0.9456,
1341
+ "step": 2220
1342
+ },
1343
+ {
1344
+ "epoch": 2.86,
1345
+ "learning_rate": 1.0326923822062461e-08,
1346
+ "loss": 1.0241,
1347
+ "step": 2230
1348
+ },
1349
+ {
1350
+ "epoch": 2.88,
1351
+ "learning_rate": 8.489465481977708e-09,
1352
+ "loss": 1.0016,
1353
+ "step": 2240
1354
+ },
1355
+ {
1356
+ "epoch": 2.89,
1357
+ "learning_rate": 6.83118036793473e-09,
1358
+ "loss": 0.9713,
1359
+ "step": 2250
1360
+ },
1361
+ {
1362
+ "epoch": 2.9,
1363
+ "learning_rate": 5.352368144216801e-09,
1364
+ "loss": 0.9896,
1365
+ "step": 2260
1366
+ },
1367
+ {
1368
+ "epoch": 2.91,
1369
+ "learning_rate": 4.053296043039389e-09,
1370
+ "loss": 0.976,
1371
+ "step": 2270
1372
+ },
1373
+ {
1374
+ "epoch": 2.93,
1375
+ "learning_rate": 2.934198816259559e-09,
1376
+ "loss": 1.0155,
1377
+ "step": 2280
1378
+ },
1379
+ {
1380
+ "epoch": 2.94,
1381
+ "learning_rate": 1.9952786929543495e-09,
1382
+ "loss": 0.9782,
1383
+ "step": 2290
1384
+ },
1385
+ {
1386
+ "epoch": 2.95,
1387
+ "learning_rate": 1.236705342876898e-09,
1388
+ "loss": 1.015,
1389
+ "step": 2300
1390
+ },
1391
+ {
1392
+ "epoch": 2.97,
1393
+ "learning_rate": 6.586158457954072e-10,
1394
+ "loss": 0.9742,
1395
+ "step": 2310
1396
+ },
1397
+ {
1398
+ "epoch": 2.98,
1399
+ "learning_rate": 2.611146667221842e-10,
1400
+ "loss": 1.0113,
1401
+ "step": 2320
1402
+ },
1403
+ {
1404
+ "epoch": 2.99,
1405
+ "learning_rate": 4.4273637035852074e-11,
1406
+ "loss": 0.9955,
1407
+ "step": 2330
1408
+ },
1409
+ {
1410
+ "epoch": 3.0,
1411
+ "step": 2337,
1412
+ "total_flos": 9336786417352704.0,
1413
+ "train_loss": 1.0705227502962438,
1414
+ "train_runtime": 220133.6189,
1415
+ "train_samples_per_second": 1.019,
1416
+ "train_steps_per_second": 0.011
1417
+ }
1418
+ ],
1419
+ "logging_steps": 10,
1420
+ "max_steps": 2337,
1421
+ "num_train_epochs": 3,
1422
+ "save_steps": 1000,
1423
+ "total_flos": 9336786417352704.0,
1424
+ "trial_name": null,
1425
+ "trial_params": null
1426
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bf991cf7a99979c2fa7da9c2df58697a3026b1a713dbb83381f91befb8d99814
3
+ size 5435
training_loss.png ADDED
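
The committed training_loss.png presumably visualizes the same loss trajectory recorded in the `log_history` array of trainer_state.json above. The actual plotting script is not part of this upload; the snippet below is only a minimal sketch of how such a curve could be rebuilt from the committed file, assuming matplotlib is installed and the file paths match the ones added in this commit.

```python
# Minimal sketch (not part of this upload): rebuild a loss curve similar to
# training_loss.png from the log_history entries in trainer_state.json.
import json

import matplotlib.pyplot as plt

with open("trainer_state.json") as f:  # path as committed in this repo
    state = json.load(f)

# Keep only the periodic logging entries that carry a "loss" value;
# the final summary entry uses "train_loss" instead and is skipped.
steps = [e["step"] for e in state["log_history"] if "loss" in e]
losses = [e["loss"] for e in state["log_history"] if "loss" in e]

plt.plot(steps, losses)
plt.xlabel("step")
plt.ylabel("training loss")
plt.savefig("training_loss.png")
```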