qnguyen3 committed
Commit 40787b0 · verified · 1 Parent(s): c24ba3b

End of training

Files changed (4)
  1. README.md +3 -1
  2. all_results.json +7 -0
  3. train_results.json +7 -0
  4. trainer_state.json +0 -0
README.md CHANGED
@@ -1,6 +1,8 @@
 ---
+license: other
 base_model: qnguyen3/Mixtral-4x400M
 tags:
+- llama-factory
 - generated_from_trainer
 model-index:
 - name: mixtral-4x-400M-pt
@@ -12,7 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # mixtral-4x-400M-pt
 
-This model is a fine-tuned version of [qnguyen3/Mixtral-4x400M](https://huggingface.co/qnguyen3/Mixtral-4x400M) on the None dataset.
+This model is a fine-tuned version of [qnguyen3/Mixtral-4x400M](https://huggingface.co/qnguyen3/Mixtral-4x400M) on the thevault_function_xsmall, the redpajama_v2_small, the tiny_strange_textbooks, the tiny_textbooks, the code_textbook, the the_stack_smol_xl_cleaned, the refinedweb_1m_medium, the minipile, the goodwiki, the wikipedia_vi, the mathpile_arxiv_medium, the mathpile_stackexchange, the mathpile_proofpile, the mathpile_wikipedia, the thevault_class_xsmall, the tiny_stories_envi, the pretrain_instruct_1, the pretrain_instruct_2 and the pretrain_instruct_code datasets.
 
 ## Model description
 
all_results.json ADDED
@@ -0,0 +1,7 @@
+{
+    "epoch": 3.0,
+    "train_loss": 1.6263817286927247,
+    "train_runtime": 649257.4466,
+    "train_samples_per_second": 60.135,
+    "train_steps_per_second": 0.059
+}
train_results.json ADDED
@@ -0,0 +1,7 @@
+{
+    "epoch": 3.0,
+    "train_loss": 1.6263817286927247,
+    "train_runtime": 649257.4466,
+    "train_samples_per_second": 60.135,
+    "train_steps_per_second": 0.059
+}
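The run summary in train_results.json lends itself to a quick sanity check. The sketch below (plain Python, no extra dependencies) derives approximate totals from the reported rates; the derived figures are estimates, since the logged throughput numbers are rounded.

```python
import json

# Final run summary as reported in train_results.json (copied from the diff above).
results = json.loads("""
{
    "epoch": 3.0,
    "train_loss": 1.6263817286927247,
    "train_runtime": 649257.4466,
    "train_samples_per_second": 60.135,
    "train_steps_per_second": 0.059
}
""")

# Derived figures; approximate, because the logged rates are rounded.
total_steps = results["train_runtime"] * results["train_steps_per_second"]
total_samples = results["train_runtime"] * results["train_samples_per_second"]
effective_batch = results["train_samples_per_second"] / results["train_steps_per_second"]

print(f"runtime: {results['train_runtime'] / 3600:.1f} h "
      f"({results['train_runtime'] / 86400:.1f} days)")
print(f"optimizer steps: ~{total_steps:,.0f}")
print(f"samples seen: ~{total_samples:,.0f} over {results['epoch']:.0f} epochs")
print(f"effective batch size: ~{effective_batch:.0f} samples/step")
```

The samples-per-step ratio works out to roughly 1019, consistent with an effective batch size of 1024 once rounding of the logged rates is taken into account; that last inference is an assumption, not something stated in the commit.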
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff