Update to scripts

- README.md +6 -6
- configs/inference.json +2 -1
- configs/metadata.json +4 -3
- configs/train.json +14 -12
- docs/README.md +6 -6
- scripts/__init__.py +1 -0
README.md
CHANGED

@@ -46,11 +46,11 @@ The dataset used for training unfortunately cannot be made public, however the t
 * 200: Tricuspid septal
 * 250: Tricuspid free wall
 
-The following command will train with the default NPZ filename `./valvelandmarks.npz
+The following command will train with the default NPZ filename `./valvelandmarks.npz`, assuming the current directory is the bundle directory:
 
 ```sh
-PYTHONPATH=./scripts python -m monai.bundle run training --meta_file configs/met
---
+python -m monai.bundle run training --meta_file configs/metadata.json --config_file "['configs/train.json', 'configs/common.json']" \
+    --bundle_root . --dataset_file ./valvelandmarks.npz --output_dir /path/to/outputs
 ```
 
 ## Inference

@@ -58,11 +58,11 @@ PYTHONPATH=./scripts python -m monai.bundle run training --meta_file configs/met
 The included `inference.json` script will run inference on a directory containing Nifti files whose images have shape `(256, 256, 1, N)` for `N` timesteps. For each image the output in the `output_dir` directory will be an npy file containing a result array of shape `(N, 2, 10)` storing the 10 coordinates for each of the `N` timesteps. Invoking this script can be done as follows, assuming the current directory is the bundle directory:
 
 ```sh
-
---
+python -m monai.bundle run evaluating --meta_file configs/metadata.json --config_file "['configs/inference.json', 'configs/common.json']" \
+    --bundle_root . --dataset_dir /path/to/data --output_dir /path/to/outputs
 ```
 
-
+The provided test Nifti file can be placed in a directory which is then used as the `dataset_dir` value. This image was derived from [the AMRG Cardiac Atlas dataset](http://www.cardiacatlas.org/studies/amrg-cardiac-atlas) (AMRG Cardiac Atlas, Auckland MRI Research Group, Auckland, New Zealand). The results from this inference can be visualised by changing path values in [view_results.ipynb](./view_results.ipynb).
 
 
 ### Reference
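As a quick check of the output format described above, a result file can be loaded directly with NumPy. The following is a minimal sketch, assuming a hypothetical result file `output/image.npy` and that the middle axis holds the two spatial coordinates of each of the 10 landmarks:

```python
# Minimal sketch: inspect one inference result of shape (N, 2, 10).
# "output/image.npy" is a hypothetical name; actual files are named after
# the input Nifti files found in dataset_dir.
import numpy as np

result = np.load("output/image.npy")
n_timesteps, n_coords, n_landmarks = result.shape  # expected (N, 2, 10)

for t in range(n_timesteps):
    # result[t, :, i] is assumed to be the coordinate pair of landmark i
    # at timestep t
    y, x = result[t, :, 0]
    print(f"timestep {t}: landmark 0 at ({y:.1f}, {x:.1f})")
```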
configs/inference.json
CHANGED

@@ -2,11 +2,12 @@
     "imports": [
         "$import os",
         "$import glob",
+        "$import torch",
         "$import scripts"
     ],
     "bundle_root": ".",
-    "device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
     "ckpt_path": "$@bundle_root + '/models/model.pt'",
+    "device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
     "dataset_dir": "/workspace/data",
     "datalist": "$list(sorted(glob.glob(@dataset_dir + '/*.nii*')))",
     "output_dir": "./output",
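A note on the config syntax involved in this change: in MONAI bundle configs, strings beginning with `$` are evaluated as Python expressions, which is why `$import torch` must now be listed in `imports` for the relocated `device` expression to resolve, and `@name` substitutes the value of another config item. The `datalist` and `device` entries above therefore behave roughly like this sketch, with plain variables standing in for `@` references:

```python
# Rough Python equivalent of the inference.json entries above; the bundle
# runtime performs this evaluation itself, this is only an illustration.
import glob
import torch

dataset_dir = "/workspace/data"  # stands in for @dataset_dir
datalist = list(sorted(glob.glob(dataset_dir + "/*.nii*")))  # all Nifti files
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```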
configs/metadata.json
CHANGED

@@ -1,11 +1,12 @@
 {
     "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220729.json",
-    "version": "0.
+    "version": "0.3.0",
     "changelog": {
-        "0.
+        "0.3.0": "Update to scripts",
+        "0.2.0": "Unify naming",
         "0.1.0": "Initial version"
     },
-    "monai_version": "1.0.
+    "monai_version": "1.0.0",
     "pytorch_version": "1.10.2",
     "numpy_version": "1.21.2",
     "optional_packages_version": {},
configs/train.json
CHANGED

@@ -1,15 +1,21 @@
 {
     "imports": [
         "$import datetime",
-        "$import numpy
+        "$import numpy",
         "$import torch",
         "$import ignite",
         "$import scripts"
     ],
     "bundle_root": ".",
-    "val_interval": 1,
-    "device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
     "ckpt_path": "$@bundle_root + '/models/model.pt'",
+    "device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
+    "val_interval": 1,
+    "num_iters": 400,
+    "batch_size": 600,
+    "num_epochs": 100,
+    "num_substeps": 3,
+    "learning_rate": 0.0001,
+    "num_workers": 8,
     "dataset_file": "./valvelandmarks.npz",
     "output_dir": "$datetime.datetime.now().strftime('./results/output_%y%m%d_%H%M%S')",
     "network_def": {

@@ -56,7 +62,7 @@
         "_target_": "EnsureTyped",
         "keys": "@both_keys",
         "data_type": "numpy",
-        "dtype": "$(
+        "dtype": "$(numpy.float32, numpy.int32)"
     },
     {
         "_target_": "EnsureTyped",

@@ -147,7 +153,7 @@
         "_target_": "EnsureTyped",
         "keys": "@both_keys",
         "data_type": "numpy",
-        "dtype": "$(
+        "dtype": "$(numpy.float32, numpy.int32)"
     },
     {
         "_target_": "EnsureTyped",

@@ -186,10 +192,6 @@
         },
         "transform": "@eval_transforms"
     },
-    "num_iters": 400,
-    "batch_size": 600,
-    "num_epochs": 100,
-    "num_substeps": 3,
     "sampler": {
         "_target_": "torch.utils.data.WeightedRandomSampler",
         "weights": "$torch.ones(len(@train_dataset))",

@@ -201,14 +203,14 @@
         "dataset": "@train_dataset",
         "batch_size": "@batch_size",
         "repeats": "@num_substeps",
-        "num_workers":
+        "num_workers": "@num_workers",
         "sampler": "@sampler"
     },
     "eval_dataloader": {
         "_target_": "DataLoader",
         "dataset": "@eval_dataset",
         "batch_size": "@batch_size",
-        "num_workers":
+        "num_workers": "@num_workers"
     },
     "lossfn": {
         "_target_": "torch.nn.L1Loss"

@@ -216,7 +218,7 @@
     "optimizer": {
         "_target_": "torch.optim.Adam",
         "params": "[email protected]()",
-        "lr":
+        "lr": "@learning_rate"
     },
     "evaluator": {
         "_target_": "SupervisedEvaluator",
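Hoisting `num_iters`, `batch_size`, `num_epochs`, `num_substeps`, `learning_rate`, and `num_workers` to top-level items means they can be overridden on the command line rather than by editing the file. A sketch, reusing the training invocation from the README (the override values are illustrative, not recommendations):

```sh
python -m monai.bundle run training --meta_file configs/metadata.json \
    --config_file "['configs/train.json', 'configs/common.json']" \
    --bundle_root . --dataset_file ./valvelandmarks.npz --output_dir /path/to/outputs \
    --batch_size 300 --num_epochs 50 --learning_rate 0.001 --num_workers 4
```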
docs/README.md
CHANGED

@@ -39,11 +39,11 @@ The dataset used for training unfortunately cannot be made public, however the t
 * 200: Tricuspid septal
 * 250: Tricuspid free wall
 
-The following command will train with the default NPZ filename `./valvelandmarks.npz
+The following command will train with the default NPZ filename `./valvelandmarks.npz`, assuming the current directory is the bundle directory:
 
 ```sh
-PYTHONPATH=./scripts python -m monai.bundle run training --meta_file configs/met
---
+python -m monai.bundle run training --meta_file configs/metadata.json --config_file "['configs/train.json', 'configs/common.json']" \
+    --bundle_root . --dataset_file ./valvelandmarks.npz --output_dir /path/to/outputs
 ```
 
 ## Inference

@@ -51,11 +51,11 @@ PYTHONPATH=./scripts python -m monai.bundle run training --meta_file configs/met
 The included `inference.json` script will run inference on a directory containing Nifti files whose images have shape `(256, 256, 1, N)` for `N` timesteps. For each image the output in the `output_dir` directory will be an npy file containing a result array of shape `(N, 2, 10)` storing the 10 coordinates for each of the `N` timesteps. Invoking this script can be done as follows, assuming the current directory is the bundle directory:
 
 ```sh
-
---
+python -m monai.bundle run evaluating --meta_file configs/metadata.json --config_file "['configs/inference.json', 'configs/common.json']" \
+    --bundle_root . --dataset_dir /path/to/data --output_dir /path/to/outputs
 ```
 
-
+The provided test Nifti file can be placed in a directory which is then used as the `dataset_dir` value. This image was derived from [the AMRG Cardiac Atlas dataset](http://www.cardiacatlas.org/studies/amrg-cardiac-atlas) (AMRG Cardiac Atlas, Auckland MRI Research Group, Auckland, New Zealand). The results from this inference can be visualised by changing path values in [view_results.ipynb](./view_results.ipynb).
 
 
 ### Reference
scripts/__init__.py
CHANGED

@@ -0,0 +1 @@
+from . import valve_landmarks
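The new `__init__.py` re-exports the submodule, so the single `$import scripts` line in the configs is enough to reach the landmark code; a small illustration of the Python semantics involved:

```python
# With "from . import valve_landmarks" in scripts/__init__.py, importing
# the package also binds the submodule as an attribute:
import scripts

# resolves without a separate "import scripts.valve_landmarks"
print(scripts.valve_landmarks)
```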