README change about batch size
README.md
CHANGED
@@ -65,7 +65,7 @@ The DETR model was trained on [SKU110K Dataset](https://github.com/eg4000/SKU110
 ## Training procedure
 ### Training

-The model was trained for
+The model was trained for 60 epochs on a single RTX 4060 Ti GPU (fine-tuning the decoder only) with a batch size of 1 and gradient accumulation set to 8, followed by another 60 epochs fine-tuning the whole network, again with a batch size of 1 and gradients accumulated over 8 steps.

 ## Evaluation results

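For reference, the configuration described in the added line (batch size 1 with gradients accumulated over 8 steps, i.e. an effective batch size of 8) could look roughly like the sketch below using the Hugging Face `Trainer` API. This is a minimal, hedged illustration rather than the repository's actual training script: the checkpoint name, the output directory, and the way the decoder is unfrozen for the first stage are assumptions.

```python
# Minimal sketch of the decoder-only fine-tuning stage, assuming the
# Hugging Face Trainer API. The checkpoint name and output_dir are
# illustrative, not taken from this repository.
from transformers import DetrForObjectDetection, TrainingArguments

model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

# Stage 1: freeze all parameters, then unfreeze only the decoder.
for param in model.parameters():
    param.requires_grad = False
for param in model.model.decoder.parameters():
    param.requires_grad = True

# Batch size 1 with 8 accumulation steps gives an effective batch size of 8.
training_args = TrainingArguments(
    output_dir="detr-sku110k-decoder-only",  # hypothetical
    num_train_epochs=60,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
)
```

For the second stage described in the README, the same arguments would be reused with every parameter unfrozen (`requires_grad = True` on all parameters) for another 60 epochs.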