---
license: cc-by-4.0
---

# LoRA-Ensemble: Uncertainty Modelling for Self-attention Networks
Michelle Halbheer, Dominik J. Mühlematter, Alexander Becker, Dominik Narnhofer, Helge Aasen, Konrad Schindler and Mehmet Ozgur Turkoglu - 2024
## Pretrained models
This repository contains the pretrained models corresponding to the code we released on [GitHub](https://github.com/prs-eth/LoRA-Ensemble/).
How to use the models with our pipeline is described on GitHub; a minimal loading sketch is shown below.
This repository contains only the models from our final experiments on each dataset, not those from intermediate results.
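The snippet below is only a rough sketch of opening one of the released checkpoint files. It assumes the `.pt` files are ordinary PyTorch checkpoints and uses one example filename from the CIFAR-100 table; the actual model classes and the full loading pipeline are documented in the GitHub repository.

```python
# Minimal sketch (assumption: the .pt files are standard torch.save() checkpoints).
# The real loading pipeline, including model classes and configs, is on GitHub.
import torch

# Example filename taken from the CIFAR-100 table below (seed 1).
checkpoint_path = "LoRA_Former_ViT_base_32_16_members_CIFAR100_settings_LoRA1.pt"

state = torch.load(checkpoint_path, map_location="cpu")

# A checkpoint is typically a (nested) dict; list its top-level keys before
# wiring the weights into the LoRA-Ensemble code from GitHub.
if isinstance(state, dict):
    print(list(state.keys())[:10])
else:
    print(type(state))
```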
## Citation
If you find our work useful or interesting, or if you use our code, please cite our paper as follows:
```latex
@misc{halbheer2024loraensemble,
    title  = {LoRA-Ensemble: Uncertainty Modelling for Self-attention Networks},
    author = {Halbheer, Michelle and M\"uhlematter, Dominik Jan and Becker, Alexander and Narnhofer, Dominik and Aasen, Helge and Schindler, Konrad and Turkoglu, Mehmet Ozgur},
    year   = {2024},
    note   = {arXiv: <arxiv code>}
}
```
## CIFAR-100
The table below shows the evaluation results obtained using different methods. Each method was trained five times with varying random seeds. In both tables, the best value per column is shown in bold and the second best is underlined.

| Method (ViT) | Accuracy (%) | ECE | Settings name* | Model weights* |
|----------------------|------------------------|-----------------------|-------------------|------------------------------|
| Single Network | \\(76.6\pm0.2\\) | \\(0.144\pm0.001\\) |CIFAR100_settings_explicit|Deep_Ensemble_ViT_base_32_1_members_CIFAR100_settings_explicit\<seed\>.pt|
| Single Network with LoRA | \\(79.6\pm0.2\\) | \\(\textbf{0.014}\pm0.003\\) |CIFAR100_settings_LoRA|LoRA_Former_ViT_base_32_1_members_CIFAR100_settings_LoRA\<seed\>.pt|
| MC Dropout | \\(77.1\pm0.5\\) | \\(0.055\pm0.002\\) |CIFAR100_settings_MCDropout|MCDropout_ViT_base_32_16_members_CIFAR100_settings_MCDropout\<seed\>.pt|
| Explicit Ensemble | \\(\underline{79.8}\pm0.2\\) | \\(0.098\pm0.001\\) |CIFAR100_settings_explicit|Deep_Ensemble_ViT_base_32_16_members_CIFAR100_settings_explicit\<seed\>.pt|
| LoRA-Ensemble | \\(\textbf{82.5}\pm0.1\\) | \\(\underline{0.035}\pm0.001\\) |CIFAR100_settings_LoRA|LoRA_Former_ViT_base_32_16_members_CIFAR100_settings_LoRA\<seed\>.pt|
\* Settings and model names are followed by a number in the range 1-5 indicating the random seed used, as illustrated in the snippet below.
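Purely as an illustration of this naming convention, the following sketch expands the `<seed>` placeholder into the five concrete filenames of one configuration; the base name is just one example taken from the table above.

```python
# Illustration of the naming convention: <seed> in the tables is replaced by
# the seed number (1-5) appended directly before the .pt extension.
base_name = "LoRA_Former_ViT_base_32_16_members_CIFAR100_settings_LoRA"

filenames = [f"{base_name}{seed}.pt" for seed in range(1, 6)]
for name in filenames:
    print(name)
```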
## HAM10000
The table below shows the evaluation results obtained using different methods. Each method was trained five times with varying random seeds.

| Method (ViT) | Accuracy (%) | ECE | Settings name* | Model weights* |
|----------------------|------------------------|-----------------------|-------------------|------------------------------|
| Single Network | \\(84.3\pm0.5\\) | \\(0.136\pm0.006\\) |HAM10000_settings_explicit|Deep_Ensemble_ViT_base_32_1_members_HAM10000_settings_explicit\<seed\>.pt|
| Single Network with LoRA | \\(83.2\pm0.7\\) | \\(0.085\pm0.004\\) |HAM10000_settings_LoRA|LoRA_Former_ViT_base_32_1_members_HAM10000_settings_LoRA\<seed\>.pt|
| MC Dropout | \\(83.7\pm0.4\\) | \\(\underline{0.099}\pm0.007\\) |HAM10000_settings_MCDropout|MCDropout_ViT_base_32_16_members_HAM10000_settings_MCDropout\<seed\>.pt|
| Explicit Ensemble | \\(\underline{85.7}\pm0.3\\) | \\(0.106\pm0.002\\) |HAM10000_settings_explicit|Deep_Ensemble_ViT_base_32_16_members_HAM10000_settings_explicit\<seed\>.pt|
| LoRA-Ensemble | \\(\textbf{88.0}\pm0.2\\) | \\(\textbf{0.037}\pm0.002\\) |HAM10000_settings_LoRA|LoRA_Former_ViT_base_32_16_members_HAM10000_settings_LoRA\<seed\>.pt|
\* Settings and model names are followed by a number in the range 1-5 indicating the random seed used.