Update README.md
@@ -8,7 +8,7 @@ Introducing xinchen9/Mistral-7B-CoT, an advanced language model comprising 8 bil

The llama3-b8 model was fine-tuned on the [CoT-Collection](https://huggingface.co/datasets/kaist-ai/CoT-Collection) dataset.

- The training step is
+ The training ran for 12,000 steps, with a batch size of 16 on each device and 5 GPUs in total.
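
As a quick sanity check (this arithmetic is not in the original README, only the three figures above are), the stated configuration implies an effective global batch size and a total number of training examples seen:

```python
# Figures taken from the diff above; the derived quantities are illustrative.
per_device_batch = 16   # batch size on each device
num_gpus = 5            # total GPUs
steps = 12_000          # training steps

# Effective examples per optimizer step across all devices
global_batch = per_device_batch * num_gpus

# Total examples processed over the whole run
examples_seen = global_batch * steps

print(global_batch)   # 80
print(examples_seen)  # 960000
```

(This assumes no gradient accumulation, which the README does not mention.)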
### 2. How to Use
### 2. How to Use

Here are some examples of how to use our model.
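
The README does not yet show a usage example; the sketch below is one plausible way to load and query the model with the standard Hugging Face `transformers` API. The repo id `xinchen9/Mistral-7B-CoT` is taken from the diff header above, and the prompt wording (`build_prompt`) is purely illustrative, not a documented template for this model.

```python
# Minimal sketch: load the model and generate a chain-of-thought style answer.
# Assumptions: the repo id comes from the diff header; the prompt format is illustrative.

def build_prompt(question: str) -> str:
    """Wrap a question in a simple step-by-step instruction (illustrative, not official)."""
    return f"Question: {question}\nLet's think step by step.\nAnswer:"

if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "xinchen9/Mistral-7B-CoT"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # half precision to fit a 7B model on one GPU
        device_map="auto",
    )

    inputs = tokenizer(build_prompt("What is 17 * 3?"), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```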