End of training
- README.md +15 -15
- adapter_model.bin +1 -1
README.md
CHANGED
@@ -17,7 +17,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b) on the squad dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.
+- Loss: 2.7859
 
 ## Model description
 
@@ -38,7 +38,7 @@ More information needed
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
 - train_batch_size: 48
-- eval_batch_size:
+- eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -47,18 +47,18 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step
-
-|
-| 2.
-| 2.
-| 2.
-| 2.
-| 2.
-| 2.
-| 2.
-| 2.
-| 2.
+| Training Loss | Epoch | Step  | Validation Loss |
+|:-------------:|:-----:|:-----:|:---------------:|
+| 3.0058        | 1.0   | 1643  | 2.7510          |
+| 2.7801        | 2.0   | 3286  | 2.7497          |
+| 2.7284        | 3.0   | 4929  | 2.7536          |
+| 2.7001        | 4.0   | 6572  | 2.7601          |
+| 2.6811        | 5.0   | 8215  | 2.7669          |
+| 2.6811        | 6.0   | 9858  | 2.7722          |
+| 2.6639        | 7.0   | 11501 | 2.7780          |
+| 2.6492        | 8.0   | 13144 | 2.7817          |
+| 2.6414        | 9.0   | 14787 | 2.7841          |
+| 2.6354        | 10.0  | 16430 | 2.7859          |
 
 
 ### Framework versions
@@ -66,4 +66,4 @@ The following hyperparameters were used during training:
 - Transformers 4.34.0.dev0
 - Pytorch 2.0.1+cu118
 - Datasets 2.14.5
-- Tokenizers 0.
+- Tokenizers 0.14.0
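For reference, the hyperparameters listed in the card map onto a standard `transformers` Trainer configuration. A minimal sketch of that setup follows; only the values visible in the diff come from the card, while the output directory, the evaluation cadence, and the 10-epoch count (read off the results table) are assumptions.

```python
# Hypothetical reconstruction of the training configuration described in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bloom-3b-squad-adapter",  # assumed name, not taken from the card
    learning_rate=2e-5,                   # from the card
    per_device_train_batch_size=48,       # train_batch_size: 48
    per_device_eval_batch_size=16,        # eval_batch_size: 16
    seed=42,                              # from the card
    lr_scheduler_type="linear",           # from the card
    adam_beta1=0.9,                       # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                    # epsilon=1e-08
    num_train_epochs=10,                  # inferred from the 10 epochs in the results table
    evaluation_strategy="epoch",          # assumed; the table reports one validation loss per epoch
)
```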
adapter_model.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:9e9fc51d94ec3f1234324ab816746020a3e5fe81b3c9a60be474d98163590e26
 size 19683045
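The commit also updates `adapter_model.bin`, an LFS-tracked file of about 19.7 MB, which is consistent with a parameter-efficient adapter rather than full 3B-parameter weights. A minimal loading sketch, assuming the repository hosts a PEFT-style adapter for `bigscience/bloom-3b` (the repository id below is a placeholder):

```python
# Minimal sketch, assuming this repo contains a PEFT adapter trained on top of bigscience/bloom-3b.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-3b")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-3b")

# Apply the adapter weights (adapter_model.bin) on top of the frozen base model.
model = PeftModel.from_pretrained(base, "your-username/bloom-3b-squad-adapter")  # placeholder repo id

prompt = "Question: Who wrote Hamlet?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```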