---
language:
- eng
license: wtfpl
tags:
- multilabel-image-classification
- multilabel
- generated_from_trainer
base_model: facebook/dinov2-large
model-index:
- name: drone-DinoVdeau-large-2024_07_31-batch-size8_epochs100_freeze
results: []
---
DinoVd'eau is a fine-tuned version of [facebook/dinov2-large](https://huggingface.co/facebook/dinov2-large). It achieves the following results on the test set (a sketch of how such metrics are computed follows the list):
- Explained variance: 0.3552
- Loss: 0.3286
- MAE: 0.1261
- MSE: 0.0374
- R2: 0.3545
- RMSE: 0.1933
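Since these are regression-style scores computed over multilabel outputs, the following scikit-learn sketch shows how such metrics are typically obtained. It is illustrative only: `y_true` and `y_pred` are placeholders for a ground-truth label matrix and the model's predicted probabilities, both of shape `(n_samples, n_classes)`.

```python
import numpy as np
from sklearn.metrics import (
    explained_variance_score,
    mean_absolute_error,
    mean_squared_error,
    r2_score,
)

def regression_style_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Compute the aggregate scores reported above for (n_samples, n_classes) arrays."""
    mse = mean_squared_error(y_true, y_pred)
    return {
        "Explained variance": explained_variance_score(y_true, y_pred),
        "MAE": mean_absolute_error(y_true, y_pred),
        "MSE": mse,
        "R2": r2_score(y_true, y_pred),
        "RMSE": float(np.sqrt(mse)),  # root of the uniform-average MSE
    }
```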
---
# Model description
DinoVd'eau is a model built on top of the DINOv2 backbone for underwater multilabel image classification. The classification head is a combination of linear, ReLU, batch normalization, and dropout layers; an illustrative sketch is given below.
The source code for training the model can be found in this [Git repository](https://github.com/SeatizenDOI/DinoVdeau).
- **Developed by:** [lombardata](https://huggingface.co/lombardata), credits to [César Leblanc](https://huggingface.co/CesarLeblanc) and [Victor Illien](https://huggingface.co/groderg)
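For illustration only, here is a minimal PyTorch sketch of a head of that shape. The hidden width (1024, the dinov2-large feature size), the dropout rate, and the exact layer ordering are assumptions, not values read from the training code; see the Git repository above for the real implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a linear/ReLU/batch-norm/dropout classification head.
# Sizes and ordering are assumptions; the real head lives in the linked repo.
class MultilabelHead(nn.Module):
    def __init__(self, hidden_size: int = 1024, num_labels: int = 13, dropout: float = 0.1):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.BatchNorm1d(hidden_size),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_size, num_labels),  # one logit per class
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # `features` are pooled DINOv2 embeddings; returns raw logits
        # (apply a sigmoid for multilabel probabilities).
        return self.head(features)
```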
---
# Intended uses & limitations
You can use the raw model to classify diverse marine imagery, covering coral morphotype classes taken from the Global Coral Reef Monitoring Network (GCRMN), habitat classes, and seagrass species.
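A minimal inference sketch with the 🤗 Transformers library follows. The repository id is inferred from the model name above, the 0.5 decision threshold is an assumption, and the image path is a placeholder; if the custom head is not covered by the standard image-classification class, loading may additionally require `trust_remote_code=True`.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Repository id inferred from the model name above; adjust if it differs.
model_id = "lombardata/drone-DinoVdeau-large-2024_07_31-batch-size8_epochs100_freeze"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

image = Image.open("reef_quadrat.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel: one independent sigmoid per class; 0.5 is an assumed threshold.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```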
---
# Training and evaluation data
Details on the number of images for each class are given in the following table:
| Class | Train | Val | Test | Total |
|:------------------------|--------:|------:|-------:|--------:|
| Acropore_branched | 2371 | 782 | 785 | 3938 |
| Acropore_digitised | 1693 | 580 | 579 | 2852 |
| Acropore_sub_massive | 353 | 99 | 97 | 549 |
| Acropore_tabular | 1112 | 420 | 410 | 1942 |
| Algae | 13150 | 4386 | 4405 | 21941 |
| Dead_coral | 6824 | 2242 | 2250 | 11316 |
| Millepore | 1543 | 611 | 631 | 2785 |
| No_acropore_encrusting | 2799 | 1044 | 1041 | 4884 |
| No_acropore_massive | 6578 | 2216 | 2170 | 10964 |
| No_acropore_sub_massive | 5252 | 1802 | 1793 | 8847 |
| Rock | 13532 | 4529 | 4529 | 22590 |
| Rubble | 12641 | 4222 | 4231 | 21094 |
| Sand | 13315 | 4438 | 4438 | 22191 |
---
# Training procedure
## Training hyperparameters
The following hyperparameters were used during training (a PyTorch sketch of this configuration follows the list):
- **Number of Epochs**: 100
- **Learning Rate**: 0.001
- **Train Batch Size**: 8
- **Eval Batch Size**: 8
- **Optimizer**: Adam
- **LR Scheduler Type**: ReduceLROnPlateau with a patience of 5 epochs and a factor of 0.1
- **Freeze Encoder**: Yes
- **Data Augmentation**: Yes
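A minimal PyTorch sketch of this optimizer and scheduler configuration is shown below; the encoder and head modules are simple stand-ins, not the actual network.

```python
import torch
import torch.nn as nn

# Stand-ins for the real network: a frozen encoder and a trainable head.
encoder = nn.Linear(1024, 1024)  # placeholder for the DINOv2 backbone
head = nn.Linear(1024, 13)       # placeholder multilabel head (13 classes)
for param in encoder.parameters():
    param.requires_grad = False  # "Freeze Encoder: Yes"

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
# Cut the LR by 10x after 5 epochs without validation-loss improvement,
# matching the 0.001 -> 0.0001 -> 1e-05 steps visible in the results table.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5
)

for epoch in range(100):
    val_loss = torch.rand(1).item()  # placeholder for the real validation loss
    scheduler.step(val_loss)
```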
## Data Augmentation
Data were augmented using the following transformations (a hedged Kornia reconstruction follows the lists):
**Train Transforms**
- **PreProcess**: No additional parameters
- **Resize**: probability=1.00
- **RandomHorizontalFlip**: probability=0.25
- **RandomVerticalFlip**: probability=0.25
- **ColorJiggle**: probability=0.25
- **RandomPerspective**: probability=0.25
- **Normalize**: probability=1.00
**Val Transforms**
- **PreProcess**: No additional parameters
- **Resize**: probability=1.00
- **Normalize**: probability=1.00
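The transform names (notably `ColorJiggle`) suggest the Kornia augmentation library. Below is a hedged reconstruction of the train pipeline; the target image size, jitter magnitudes, perspective distortion scale, and ImageNet normalization statistics are all assumptions, not values from the training code.

```python
import torch
import torch.nn as nn
import kornia.augmentation as K

# Hypothetical reconstruction of the train-time pipeline; sizes, magnitudes,
# and normalization statistics are assumptions.
train_transforms = nn.Sequential(
    K.Resize((518, 518)),                       # assumed DINOv2-large input size
    K.RandomHorizontalFlip(p=0.25),
    K.RandomVerticalFlip(p=0.25),
    K.ColorJiggle(0.1, 0.1, 0.1, 0.1, p=0.25),  # brightness/contrast/saturation/hue
    K.RandomPerspective(distortion_scale=0.5, p=0.25),
    K.Normalize(mean=torch.tensor([0.485, 0.456, 0.406]),
                std=torch.tensor([0.229, 0.224, 0.225])),
)

# Kornia transforms operate on batched float tensors in [0, 1]:
images = torch.rand(8, 3, 640, 640)  # dummy batch matching the train batch size
augmented = train_transforms(images)
```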
## Training results
Epoch | Explained Variance | Validation Loss | MAE | MSE | R2 | RMSE | Learning Rate
--- | --- | --- | --- | --- | --- | --- | ---
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001
1 | 0.233 | 0.353 | 0.135 | 0.045 | 0.228 | 0.212 | 0.001
2 | 0.242 | 0.354 | 0.145 | 0.046 | 0.232 | 0.213 | 0.001
3 | 0.264 | 0.35 | 0.141 | 0.044 | 0.26 | 0.209 | 0.001
4 | 0.273 | 0.347 | 0.14 | 0.043 | 0.271 | 0.208 | 0.001
5 | 0.257 | 0.365 | 0.139 | 0.043 | 0.256 | 0.208 | 0.001
6 | 0.228 | 0.37 | 0.14 | 0.044 | 0.226 | 0.209 | 0.001
7 | 0.28 | 0.357 | 0.137 | 0.042 | 0.278 | 0.206 | 0.001
8 | 0.287 | 0.345 | 0.134 | 0.042 | 0.282 | 0.205 | 0.001
9 | 0.28 | 0.345 | 0.139 | 0.043 | 0.278 | 0.206 | 0.001
10 | 0.282 | 0.386 | 0.133 | 0.043 | 0.279 | 0.206 | 0.001
11 | 0.28 | 0.347 | 0.137 | 0.043 | 0.276 | 0.207 | 0.001
12 | 0.285 | 0.351 | 0.137 | 0.042 | 0.28 | 0.206 | 0.001
13 | 0.277 | 0.397 | 0.136 | 0.043 | 0.275 | 0.207 | 0.001
14 | 0.276 | 0.359 | 0.137 | 0.043 | 0.271 | 0.208 | 0.001
15 | 0.276 | 0.349 | 0.136 | 0.042 | 0.274 | 0.206 | 0.001
16 | 0.303 | 0.339 | 0.133 | 0.041 | 0.303 | 0.202 | 0.0001
17 | 0.306 | 0.338 | 0.134 | 0.04 | 0.306 | 0.201 | 0.0001
18 | 0.31 | 0.337 | 0.132 | 0.04 | 0.308 | 0.2 | 0.0001
19 | 0.307 | 0.338 | 0.131 | 0.04 | 0.305 | 0.201 | 0.0001
20 | 0.312 | 0.336 | 0.131 | 0.04 | 0.311 | 0.2 | 0.0001
21 | 0.312 | 0.337 | 0.129 | 0.04 | 0.308 | 0.2 | 0.0001
22 | 0.316 | 0.336 | 0.132 | 0.04 | 0.315 | 0.199 | 0.0001
23 | 0.32 | 0.335 | 0.131 | 0.039 | 0.319 | 0.199 | 0.0001
24 | 0.319 | 0.335 | 0.13 | 0.04 | 0.318 | 0.199 | 0.0001
25 | 0.324 | 0.334 | 0.13 | 0.039 | 0.323 | 0.198 | 0.0001
26 | 0.32 | 0.335 | 0.13 | 0.04 | 0.318 | 0.199 | 0.0001
27 | 0.321 | 0.335 | 0.13 | 0.039 | 0.32 | 0.198 | 0.0001
28 | 0.326 | 0.334 | 0.127 | 0.039 | 0.321 | 0.198 | 0.0001
29 | 0.33 | 0.333 | 0.129 | 0.039 | 0.328 | 0.197 | 0.0001
30 | 0.33 | 0.333 | 0.13 | 0.039 | 0.33 | 0.197 | 0.0001
31 | 0.328 | 0.333 | 0.13 | 0.039 | 0.325 | 0.198 | 0.0001
32 | 0.331 | 0.332 | 0.128 | 0.039 | 0.33 | 0.197 | 0.0001
33 | 0.334 | 0.333 | 0.13 | 0.039 | 0.331 | 0.196 | 0.0001
34 | 0.33 | 0.333 | 0.129 | 0.039 | 0.328 | 0.197 | 0.0001
35 | 0.325 | 0.334 | 0.131 | 0.039 | 0.324 | 0.198 | 0.0001
36 | 0.337 | 0.332 | 0.13 | 0.038 | 0.337 | 0.196 | 0.0001
37 | 0.328 | 0.334 | 0.13 | 0.039 | 0.327 | 0.197 | 0.0001
38 | 0.338 | 0.332 | 0.129 | 0.038 | 0.336 | 0.196 | 0.0001
39 | 0.338 | 0.332 | 0.128 | 0.038 | 0.338 | 0.196 | 0.0001
40 | 0.337 | 0.332 | 0.129 | 0.038 | 0.336 | 0.196 | 0.0001
41 | 0.335 | 0.333 | 0.131 | 0.039 | 0.333 | 0.196 | 0.0001
42 | 0.338 | 0.332 | 0.129 | 0.038 | 0.337 | 0.196 | 0.0001
43 | 0.338 | 0.331 | 0.129 | 0.038 | 0.338 | 0.196 | 0.0001
44 | 0.336 | 0.333 | 0.128 | 0.039 | 0.335 | 0.196 | 0.0001
45 | 0.339 | 0.331 | 0.128 | 0.038 | 0.338 | 0.196 | 0.0001
46 | 0.341 | 0.332 | 0.129 | 0.038 | 0.339 | 0.195 | 0.0001
47 | 0.34 | 0.331 | 0.127 | 0.038 | 0.339 | 0.196 | 0.0001
48 | 0.299 | 0.339 | 0.131 | 0.039 | 0.295 | 0.199 | 0.0001
49 | 0.338 | 0.331 | 0.128 | 0.038 | 0.337 | 0.196 | 0.0001
50 | 0.342 | 0.332 | 0.127 | 0.038 | 0.339 | 0.196 | 0.0001
51 | 0.341 | 0.331 | 0.127 | 0.038 | 0.341 | 0.195 | 0.0001
52 | 0.345 | 0.33 | 0.127 | 0.038 | 0.344 | 0.195 | 0.0001
53 | 0.34 | 0.331 | 0.128 | 0.038 | 0.339 | 0.196 | 0.0001
54 | 0.341 | 0.331 | 0.129 | 0.038 | 0.34 | 0.196 | 0.0001
55 | 0.349 | 0.329 | 0.127 | 0.038 | 0.349 | 0.194 | 0.0001
56 | 0.344 | 0.33 | 0.126 | 0.038 | 0.343 | 0.195 | 0.0001
57 | 0.341 | 0.331 | 0.126 | 0.038 | 0.339 | 0.196 | 0.0001
58 | 0.348 | 0.33 | 0.126 | 0.038 | 0.347 | 0.194 | 0.0001
59 | 0.343 | 0.332 | 0.128 | 0.038 | 0.341 | 0.195 | 0.0001
60 | 0.346 | 0.331 | 0.128 | 0.038 | 0.345 | 0.195 | 0.0001
61 | 0.346 | 0.33 | 0.125 | 0.038 | 0.344 | 0.195 | 0.0001
62 | 0.347 | 0.329 | 0.126 | 0.038 | 0.346 | 0.194 | 1e-05
63 | 0.35 | 0.33 | 0.128 | 0.038 | 0.348 | 0.194 | 1e-05
64 | 0.345 | 0.33 | 0.126 | 0.038 | 0.344 | 0.195 | 1e-05
65 | 0.349 | 0.33 | 0.128 | 0.038 | 0.347 | 0.195 | 1e-05
---
# CO2 Emissions
The estimated CO2 emissions for training this model are documented below (a CodeCarbon usage sketch follows the list):
- **Emissions**: 0.19095786836275294 grams of CO2
- **Source**: Code Carbon
- **Training Type**: fine-tuning
- **Geographical Location**: Brest, France
- **Hardware Used**: NVIDIA Tesla V100 PCIe 32 GB
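For reference, a minimal [CodeCarbon](https://github.com/mlco2/codecarbon) usage sketch is shown below; the tracker is used with default arguments here, which is not necessarily how it was configured for this run. Note that `tracker.stop()` returns emissions in kilograms of CO2-equivalent.

```python
from codecarbon import EmissionsTracker

def train() -> None:
    # Placeholder for the actual training loop.
    pass

tracker = EmissionsTracker()
tracker.start()
try:
    train()
finally:
    emissions_kg = tracker.stop()  # emissions in kg of CO2-equivalent
    print(f"Estimated emissions: {emissions_kg} kg CO2-eq")
```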
---
# Framework Versions
- **Transformers**: 4.41.1
- **PyTorch**: 2.3.0+cu121
- **Datasets**: 2.19.1
- **Tokenizers**: 0.19.1