xinchen9 committed
Commit 785be60
1 Parent(s): c280cee

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -8,7 +8,7 @@ Introducing xinchen9/Mistral-7B-CoT, an advanced language model comprising 8 bil
 
  The llama3-b8 model was fine-tuning on dataset [CoT_ollection](https://huggingface.co/datasets/kaist-ai/CoT-Collection).
 
- The training step is 10,000. The batch of each device is 16 and toal GPU is 5.
+ The training step is 12,000. The batch of each device is 16 and toal GPU is 5.
 
  ### 2. How to Use
  Here give some examples of how to use our model.
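
The README's actual "How to Use" examples fall outside this hunk, so they are not shown above. As a hedged illustration only, the sketch below assumes the model id xinchen9/Mistral-7B-CoT from the page header and the standard Hugging Face transformers API; it is not the README's own example code.

```python
# Minimal usage sketch (assumption): load xinchen9/Mistral-7B-CoT with the
# standard transformers API and generate a chain-of-thought style answer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xinchen9/Mistral-7B-CoT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical prompt; the CoT-Collection fine-tuning targets step-by-step reasoning.
prompt = (
    "Question: A train travels 60 miles in 1.5 hours. What is its average speed?\n"
    "Let's think step by step."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```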