rmaphoh committed on
Commit 8943654 · 1 Parent(s): 2ac90ad
Files changed (2)
  1. README.md +10 -12
  2. requirement.txt +4 -1
README.md CHANGED
@@ -19,21 +19,17 @@ Keras version implemented by Yuka Kihara can be found [here](https://github.com/

  - A [visualisation demo](https://github.com/rmaphoh/RETFound_MAE/blob/main/RETFound_visualize.ipynb) is added

- ### Install enviroment

- Create enviroment with conda:

  ```
  conda create -n retfound python=3.7.5 -y
  conda activate retfound
  ```

- Install Pytorch 1.81 (cuda 11.1)
- ```
- pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
- ```

- Install others
  ```
  git clone https://github.com/rmaphoh/RETFound_MAE/
  cd RETFound_MAE
@@ -43,7 +39,9 @@ pip install -r requirement.txt

  ### Fine-tuning with RETFound weights

- - RETFound pre-trained weights

  <table><tbody>
  <!-- START TABLE -->
  <!-- TABLE HEADER -->
@@ -59,14 +57,14 @@ pip install -r requirement.txt

  </tr>
  </tbody></table>

- - Organise data (using IDRiD as an [example](Example.ipynb))

  <p align="left">
  <img src="./pic/file_index.jpg" width="160">
  </p>

- - Start fine-tuning (use IDRiD as example). A fine-tuned checkpoint will be saved during training. Evaluation will be run after training.

  ```
@@ -85,7 +83,7 @@ python -m torch.distributed.launch --nproc_per_node=1 --master_port=48798 main_f
  ```

- - For evaluation only

  ```
@@ -106,7 +104,7 @@ python -m torch.distributed.launch --nproc_per_node=1 --master_port=48798 main_f

  ### Load the model and weights (if you want to call the model in your code)

- ```
  import torch
  import models_vit
  from util.pos_embed import interpolate_pos_embed

  - A [visualisation demo](https://github.com/rmaphoh/RETFound_MAE/blob/main/RETFound_visualize.ipynb) is added

+ ### Install environment

+ 1. Create environment with conda:

  ```
  conda create -n retfound python=3.7.5 -y
  conda activate retfound
  ```

+ 2. Install dependencies

  ```
  git clone https://github.com/rmaphoh/RETFound_MAE/
  cd RETFound_MAE
 
  ### Fine-tuning with RETFound weights

+ To fine-tune RETFound on your own data, follow these steps:
+
+ 1. Download the RETFound pre-trained weights
  <table><tbody>
  <!-- START TABLE -->
  <!-- TABLE HEADER -->
 
  </tr>
  </tbody></table>

+ 2. Organise your data into this directory structure (using IDRiD as an [example](Example.ipynb))

  <p align="left">
  <img src="./pic/file_index.jpg" width="160">
  </p>

+ 3. Start fine-tuning (use IDRiD as example). A fine-tuned checkpoint will be saved during training. Evaluation will be run after training.

  ```
 
  ```

+ 4. For evaluation only

  ```
  ### Load the model and weights (if you want to call the model in your code)

+ ```python
  import torch
  import models_vit
  from util.pos_embed import interpolate_pos_embed
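The snippet above (truncated by the diff context) imports `interpolate_pos_embed`, which adapts pre-trained ViT position embeddings when the fine-tuning input resolution differs from pre-training: the class token is kept as-is and the grid of patch embeddings is resampled to the new grid size. As a rough, self-contained illustration of that idea (not RETFound's actual implementation, which resamples with bicubic interpolation in torch), here is a numpy sketch using nearest-neighbour resampling:

```python
import numpy as np

def resize_pos_embed(pos_embed, new_size):
    """Resize ViT position embeddings from one patch grid to another.

    pos_embed: array of shape (1, 1 + old_size**2, dim) -- a class token
    followed by a flattened old_size x old_size grid of embeddings.
    Returns an array of shape (1, 1 + new_size**2, dim).

    Nearest-neighbour resampling stands in here for the bicubic
    interpolation used by the real interpolate_pos_embed.
    """
    _, n_tokens, dim = pos_embed.shape
    old_size = int((n_tokens - 1) ** 0.5)
    cls_tok = pos_embed[:, :1, :]  # class token is kept untouched
    grid = pos_embed[:, 1:, :].reshape(old_size, old_size, dim)
    # Map each new grid coordinate to the nearest old grid coordinate
    idx = (np.arange(new_size) * old_size / new_size).astype(int)
    resized = grid[np.ix_(idx, idx)].reshape(1, new_size * new_size, dim)
    return np.concatenate([cls_tok, resized], axis=1)

# e.g. a checkpoint pre-trained on a 14x14 patch grid, loaded into a
# model that uses a 16x16 grid
old = np.random.rand(1, 1 + 14 * 14, 768)
new = resize_pos_embed(old, 16)
print(new.shape)  # (1, 257, 768)
```

The real helper also handles loading the resized embedding back into the checkpoint dict before `model.load_state_dict(..., strict=False)`.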
requirement.txt CHANGED
@@ -1,3 +1,7 @@

  opencv-python==4.5.3.56
  pandas==0.25.3
  Pillow==8.3.1
@@ -12,4 +16,3 @@ tensorboard-data-server==0.6.1
  tensorboard-plugin-wit==1.8.0
  timm==0.3.2
  tqdm==4.62.1
-

+ --find-links https://download.pytorch.org/whl/torch_stable.html
+ torch==1.8.1+cu111
+ torchvision==0.9.1+cu111
+ torchaudio==0.8.1
  opencv-python==4.5.3.56
  pandas==0.25.3
  Pillow==8.3.1

  tensorboard-plugin-wit==1.8.0
  timm==0.3.2
  tqdm==4.62.1
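The commit folds the README's separate PyTorch pip command into requirement.txt: pip treats the `--find-links` line as a command-line option embedded in the file, and resolves the `+cu111` local-version pins against that extra wheel index, so a single `pip install -r requirement.txt` now installs everything. A small illustrative parser (a hypothetical helper, not part of the repo) showing how option lines and version pins coexist in one requirements file:

```python
def parse_requirements(text):
    """Split requirements-file text into pip options and (name, version) pins.

    Lines starting with '--' are pip options (e.g. an extra wheel index);
    everything else is treated as a 'name==version' pin. Blank lines and
    comments are skipped.
    """
    options, pins = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("--"):
            options.append(line)
        else:
            name, _, version = line.partition("==")
            pins.append((name, version))
    return options, pins

sample = """--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.8.1+cu111
torchvision==0.9.1+cu111
"""
opts, pins = parse_requirements(sample)
print(opts)  # ['--find-links https://download.pytorch.org/whl/torch_stable.html']
print(pins)  # [('torch', '1.8.1+cu111'), ('torchvision', '0.9.1+cu111')]
```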