monai · medical
katielink committed on
Commit 715e62e · 1 Parent(s): d38f56f

restructure readme to match updated template

Files changed (3):
  1. README.md +36 -30
  2. configs/metadata.json +2 -1
  3. docs/README.md +36 -30
README.md CHANGED
@@ -5,17 +5,21 @@ tags:
  library_name: monai
  license: apache-2.0
  ---
- # Description
- A pre-trained model for the endoscopic inbody classification task.
-
  # Model Overview
- This model is trained using the SEResNet50 structure, whose details can be found in [1]. All datasets are from private samples of [Activ Surgical](https://www.activsurgical.com/). Samples in training and validation dataset are from the same 4 videos, while test samples are from different two videos.
- The [pytorch model](https://drive.google.com/file/d/14CS-s1uv2q6WedYQGeFbZeEWIkoyNa-x/view?usp=sharing) and [torchscript model](https://drive.google.com/file/d/1fOoJ4n5DWKHrt9QXTZ2sXwr9C-YvVGCM/view?usp=sharing) are shared in google drive. Modify the `bundle_root` parameter specified in `configs/train.json` and `configs/inference.json` to reflect where models are downloaded. Expected directory path to place downloaded models is `models/` under `bundle_root`.

  ![image](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_inbody_classification_workflow.png)

  ## Data
- Datasets used in this work were provided by [Activ Surgical](https://www.activsurgical.com/). Here is a [link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/inbody_outbody_samples.zip) of 20 samples (10 in-body and 10 out-body) to show what this dataset looks like. After downloading this dataset, python script in `scripts` folder naming `data_process` can be used to get label json files by running the command below and replacing datapath and outpath parameters.
  ```
  python scripts/data_process.py --datapath /path/to/data/root --outpath /path/to/label/folder
  ```
@@ -47,32 +51,35 @@ The input label json should be a list made up by dicts which includes `image` an
  ```

  ## Training configuration
- The training was performed with an at least 12GB-memory GPU.
-
- Actual Model Input: 256 x 256 x 3
-
- ## Input and output formats
- Input: 3 channel video frames
- Output: probability vector whose length equals to 2: Label 0: in body; Label 1: out body
- ## Scores
- This model achieves the following accuracy score on the test dataset:
- Accuracy = 0.98
- ## Training Performance
- A graph showing the training loss over 25 epochs.
- ![](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_inbody_classification_train_loss.png) <br>
- ## Validation Performance
- A graph showing the validation accuracy over 25 epochs.
- ![](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_inbody_classification_val_accuracy.png) <br>
- ## commands example
- Execute training:

  ```
  python -m monai.bundle run training \
@@ -81,7 +88,7 @@ python -m monai.bundle run training \
  --logging_file configs/logging.conf
  ```

- Override the `train` config to execute multi-GPU training:

  ```
  torchrun --standalone --nnodes=1 --nproc_per_node=2 -m monai.bundle run training \
@@ -90,10 +97,9 @@ torchrun --standalone --nnodes=1 --nproc_per_node=2 -m monai.bundle run training
  --logging_file configs/logging.conf
  ```

- Please note that the distributed training related options depend on the actual running environment, thus you may need to remove `--standalone`, modify `--nnodes` or do some other necessary changes according to the machine you used.
- Please refer to [pytorch's official tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) for more details.

- Override the `train` config to execute evaluation with the trained model:

  ```
  python -m monai.bundle run evaluating \
@@ -102,7 +108,7 @@ python -m monai.bundle run evaluating \
  --logging_file configs/logging.conf
  ```

- Execute inference:

  ```
  python -m monai.bundle run evaluating \
@@ -111,7 +117,7 @@ python -m monai.bundle run evaluating \
  --logging_file configs/logging.conf
  ```

- Export checkpoint to TorchScript file:

  ```
  python -m monai.bundle ckpt_export network_def \
 
  library_name: monai
  license: apache-2.0
  ---
  # Model Overview
+ A pre-trained model for the endoscopic inbody classification task, trained using the SEResNet50 structure, whose details can be found in [1]. All datasets are from private samples of [Activ Surgical](https://www.activsurgical.com/). Samples in the training and validation datasets are from the same 4 videos, while test samples are from two different videos.
+
+ The [PyTorch model](https://drive.google.com/file/d/14CS-s1uv2q6WedYQGeFbZeEWIkoyNa-x/view?usp=sharing) and [TorchScript model](https://drive.google.com/file/d/1fOoJ4n5DWKHrt9QXTZ2sXwr9C-YvVGCM/view?usp=sharing) are shared on Google Drive. Modify the `bundle_root` parameter specified in `configs/train.json` and `configs/inference.json` to reflect where the models are downloaded. The expected directory for the downloaded models is `models/` under `bundle_root`.

  ![image](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_inbody_classification_workflow.png)

  ## Data
+ The datasets used in this work were provided by [Activ Surgical](https://www.activsurgical.com/).
+
+ We've provided a [link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/inbody_outbody_samples.zip) to 20 samples (10 in-body and 10 out-body) to show what this dataset looks like.
+
+ ### Preprocessing
+ After downloading this dataset, the Python script `data_process.py` in the `scripts` folder can be used to generate the label JSON files by running the command below, replacing the `datapath` and `outpath` parameters.
+
  ```
  python scripts/data_process.py --datapath /path/to/data/root --outpath /path/to/label/folder
  ```
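The label files produced by this step are described elsewhere in this README as a list of dicts with `image` and `label` keys. A minimal sketch of that format in plain Python (the file name and frame paths below are illustrative assumptions, not from the bundle):

```python
import json
from pathlib import Path

# Illustrative label file matching the format this README describes:
# a list of dicts with "image" (frame path) and "label"
# (0 = in body, 1 = out body). Names here are made-up examples.
samples = [
    {"image": "frames/video1_0001.jpg", "label": 0},
    {"image": "frames/video1_0002.jpg", "label": 1},
]
path = Path("train_labels.json")
path.write_text(json.dumps(samples, indent=2))

# Read the file back and count samples per class.
loaded = json.loads(path.read_text())
counts = {0: 0, 1: 0}
for item in loaded:
    counts[item["label"]] += 1
print(counts)  # {0: 1, 1: 1}
```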
 
  ```

  ## Training configuration
+ The training was performed with the following:
+ - GPU: at least 12GB of GPU memory
+ - Actual Model Input: 256 x 256 x 3
+ - Optimizer: Adam
+ - Learning Rate: 1e-3
+
+ ### Input
+ A three-channel video frame
+
+ ### Output
+ Two channels:
+ - Label 0: in body
+ - Label 1: out body
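The earlier version of this README describes the two-channel output as a probability vector over these labels. As a rough illustration only (plain Python, not the bundle's actual post-processing, and the score values are made up), a softmax maps raw two-channel scores to in-body/out-body probabilities:

```python
import math

def softmax(logits):
    """Map raw scores to probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Label meanings as given in the README.
LABELS = {0: "in body", 1: "out body"}

logits = [2.0, -1.0]  # hypothetical raw two-channel model output
probs = softmax(logits)
pred = max(range(len(probs)), key=lambda i: probs[i])
print(LABELS[pred])  # in body
```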

+ ## Performance
+ Accuracy was used to evaluate the performance of the model. This model achieves an accuracy score of 0.98.

+ #### Training Loss
+ ![A graph showing the training loss over 25 epochs.](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_inbody_classification_train_loss.png)

+ #### Validation Accuracy
+ ![A graph showing the validation accuracy over 25 epochs.](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_inbody_classification_val_accuracy.png)

+ ## MONAI Bundle Commands
+ In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file.

+ For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html).

+ #### Execute training:

  ```
  python -m monai.bundle run training \
  --logging_file configs/logging.conf
  ```

+ #### Override the `train` config to execute multi-GPU training:

  ```
  torchrun --standalone --nnodes=1 --nproc_per_node=2 -m monai.bundle run training \
  --logging_file configs/logging.conf
  ```

+ Please note that the distributed training-related options depend on the actual running environment; thus, users may need to remove `--standalone`, modify `--nnodes`, or make other necessary changes according to the machine used. For more details, please refer to [PyTorch's official tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).
 

+ #### Override the `train` config to execute evaluation with the trained model:

  ```
  python -m monai.bundle run evaluating \
  --logging_file configs/logging.conf
  ```

+ #### Execute inference:

  ```
  python -m monai.bundle run evaluating \
  --logging_file configs/logging.conf
  ```

+ #### Export checkpoint to TorchScript file:

  ```
  python -m monai.bundle ckpt_export network_def \
configs/metadata.json CHANGED
@@ -1,7 +1,8 @@
  {
  "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
- "version": "0.3.1",
+ "version": "0.3.2",
  "changelog": {
+ "0.3.2": "restructure readme to match updated template",
  "0.3.1": "add workflow, train loss and validation accuracy figures",
  "0.3.0": "update dataset processing",
  "0.2.2": "update to use monai 1.0.1",
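The metadata change above bumps the bundle version and prepends a matching changelog entry. A small sketch of automating that edit in Python (the in-memory dict below only mirrors the fields shown in this diff):

```python
# In-memory stand-in mirroring the fields of configs/metadata.json shown in this diff.
metadata = {
    "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
    "version": "0.3.1",
    "changelog": {
        "0.3.1": "add workflow, train loss and validation accuracy figures",
        "0.3.0": "update dataset processing",
        "0.2.2": "update to use monai 1.0.1",
    },
}

def bump_version(meta, new_version, note):
    """Set the new version and prepend its changelog entry (dicts keep insertion order)."""
    meta["version"] = new_version
    meta["changelog"] = {new_version: note, **meta["changelog"]}
    return meta

bump_version(metadata, "0.3.2", "restructure readme to match updated template")
print(metadata["version"])  # 0.3.2
```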
docs/README.md CHANGED
@@ -1,14 +1,18 @@
- # Description
- A pre-trained model for the endoscopic inbody classification task.
-
  # Model Overview
- This model is trained using the SEResNet50 structure, whose details can be found in [1]. All datasets are from private samples of [Activ Surgical](https://www.activsurgical.com/). Samples in training and validation dataset are from the same 4 videos, while test samples are from different two videos.
- The [pytorch model](https://drive.google.com/file/d/14CS-s1uv2q6WedYQGeFbZeEWIkoyNa-x/view?usp=sharing) and [torchscript model](https://drive.google.com/file/d/1fOoJ4n5DWKHrt9QXTZ2sXwr9C-YvVGCM/view?usp=sharing) are shared in google drive. Modify the `bundle_root` parameter specified in `configs/train.json` and `configs/inference.json` to reflect where models are downloaded. Expected directory path to place downloaded models is `models/` under `bundle_root`.

  ![image](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_inbody_classification_workflow.png)

  ## Data
- Datasets used in this work were provided by [Activ Surgical](https://www.activsurgical.com/). Here is a [link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/inbody_outbody_samples.zip) of 20 samples (10 in-body and 10 out-body) to show what this dataset looks like. After downloading this dataset, python script in `scripts` folder naming `data_process` can be used to get label json files by running the command below and replacing datapath and outpath parameters.
  ```
  python scripts/data_process.py --datapath /path/to/data/root --outpath /path/to/label/folder
  ```
@@ -40,32 +44,35 @@ The input label json should be a list made up by dicts which includes `image` an
  ```

  ## Training configuration
- The training was performed with an at least 12GB-memory GPU.
-
- Actual Model Input: 256 x 256 x 3
-
- ## Input and output formats
- Input: 3 channel video frames
- Output: probability vector whose length equals to 2: Label 0: in body; Label 1: out body
- ## Scores
- This model achieves the following accuracy score on the test dataset:
- Accuracy = 0.98
- ## Training Performance
- A graph showing the training loss over 25 epochs.
- ![](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_inbody_classification_train_loss.png) <br>
- ## Validation Performance
- A graph showing the validation accuracy over 25 epochs.
- ![](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_inbody_classification_val_accuracy.png) <br>
- ## commands example
- Execute training:

  ```
  python -m monai.bundle run training \
@@ -74,7 +81,7 @@ python -m monai.bundle run training \
  --logging_file configs/logging.conf
  ```

- Override the `train` config to execute multi-GPU training:

  ```
  torchrun --standalone --nnodes=1 --nproc_per_node=2 -m monai.bundle run training \
@@ -83,10 +90,9 @@ torchrun --standalone --nnodes=1 --nproc_per_node=2 -m monai.bundle run training
  --logging_file configs/logging.conf
  ```

- Please note that the distributed training related options depend on the actual running environment, thus you may need to remove `--standalone`, modify `--nnodes` or do some other necessary changes according to the machine you used.
- Please refer to [pytorch's official tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) for more details.

- Override the `train` config to execute evaluation with the trained model:

  ```
  python -m monai.bundle run evaluating \
@@ -95,7 +101,7 @@ python -m monai.bundle run evaluating \
  --logging_file configs/logging.conf
  ```

- Execute inference:

  ```
  python -m monai.bundle run evaluating \
@@ -104,7 +110,7 @@ python -m monai.bundle run evaluating \
  --logging_file configs/logging.conf
  ```

- Export checkpoint to TorchScript file:

  ```
  python -m monai.bundle ckpt_export network_def \
 
 
 
 
  # Model Overview
+ A pre-trained model for the endoscopic inbody classification task, trained using the SEResNet50 structure, whose details can be found in [1]. All datasets are from private samples of [Activ Surgical](https://www.activsurgical.com/). Samples in the training and validation datasets are from the same 4 videos, while test samples are from two different videos.
+
+ The [PyTorch model](https://drive.google.com/file/d/14CS-s1uv2q6WedYQGeFbZeEWIkoyNa-x/view?usp=sharing) and [TorchScript model](https://drive.google.com/file/d/1fOoJ4n5DWKHrt9QXTZ2sXwr9C-YvVGCM/view?usp=sharing) are shared on Google Drive. Modify the `bundle_root` parameter specified in `configs/train.json` and `configs/inference.json` to reflect where the models are downloaded. The expected directory for the downloaded models is `models/` under `bundle_root`.

  ![image](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_inbody_classification_workflow.png)

  ## Data
+ The datasets used in this work were provided by [Activ Surgical](https://www.activsurgical.com/).
+
+ We've provided a [link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/inbody_outbody_samples.zip) to 20 samples (10 in-body and 10 out-body) to show what this dataset looks like.
+
+ ### Preprocessing
+ After downloading this dataset, the Python script `data_process.py` in the `scripts` folder can be used to generate the label JSON files by running the command below, replacing the `datapath` and `outpath` parameters.
+
  ```
  python scripts/data_process.py --datapath /path/to/data/root --outpath /path/to/label/folder
  ```

  ```

  ## Training configuration
+ The training was performed with the following:
+ - GPU: at least 12GB of GPU memory
+ - Actual Model Input: 256 x 256 x 3
+ - Optimizer: Adam
+ - Learning Rate: 1e-3
+
+ ### Input
+ A three-channel video frame
+
+ ### Output
+ Two channels:
+ - Label 0: in body
+ - Label 1: out body

+ ## Performance
+ Accuracy was used to evaluate the performance of the model. This model achieves an accuracy score of 0.98.

+ #### Training Loss
+ ![A graph showing the training loss over 25 epochs.](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_inbody_classification_train_loss.png)

+ #### Validation Accuracy
+ ![A graph showing the validation accuracy over 25 epochs.](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_inbody_classification_val_accuracy.png)

+ ## MONAI Bundle Commands
+ In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file.

+ For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html).

+ #### Execute training:

  ```
  python -m monai.bundle run training \
  --logging_file configs/logging.conf
  ```

+ #### Override the `train` config to execute multi-GPU training:

  ```
  torchrun --standalone --nnodes=1 --nproc_per_node=2 -m monai.bundle run training \
  --logging_file configs/logging.conf
  ```

+ Please note that the distributed training-related options depend on the actual running environment; thus, users may need to remove `--standalone`, modify `--nnodes`, or make other necessary changes according to the machine used. For more details, please refer to [PyTorch's official tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).

+ #### Override the `train` config to execute evaluation with the trained model:

  ```
  python -m monai.bundle run evaluating \
  --logging_file configs/logging.conf
  ```

+ #### Execute inference:

  ```
  python -m monai.bundle run evaluating \
  --logging_file configs/logging.conf
  ```

+ #### Export checkpoint to TorchScript file:

  ```
  python -m monai.bundle ckpt_export network_def \