Update README.md
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("ianpan/bone-age", trust_remote_code=True)
```
The model is a 3-fold ensemble utilizing the `convnextv2_tiny` backbone.
The individual single-fold models can be accessed through `model.net0`, `model.net1`, and `model.net2`. Each of these models was trained over 20,000 iterations using a batch size of 64 across 2 NVIDIA RTX 3090 GPUs.

Originally, each model was trained with both a regression head and a classification head. However, this model only loads the classification head, as its stand-alone performance was slightly better; the classification head also generates better GradCAMs. To produce a continuous bone-age estimate, the softmax function is applied to the output logits, and the resulting probabilities are multiplied by the corresponding class indices and summed.
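The softmax-weighted prediction described above can be sketched as follows. The assumption that class index `i` corresponds to a bone age of `i` months is illustrative only; the exact class-to-age mapping is defined inside the model.

```python
import numpy as np

def logits_to_bone_age(logits: np.ndarray) -> float:
    """Expected bone age from classification logits.

    Assumes (for illustration) that class index i corresponds to i months.
    """
    z = logits - logits.max()              # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()    # softmax
    # weight each class index by its probability and sum (expected value)
    return float((probs * np.arange(len(probs))).sum())
```

A logit vector sharply peaked at index 120 yields a prediction close to 120 months, while flatter logits pull the estimate toward the probability-weighted mean.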
In addition to standard data augmentation, two further augmentations were applied during training:

- Using a cropped radiograph (from the model <https://huggingface.co/ianpan/bone-age-crop>) with probability 0.5
- Histogram matching with a reference image (available in this repo under Files, `ref_img.png`) with probability 0.5
Note that both of the above augmentations could be applied simultaneously and in conjunction with standard data augmentations. Thus, the model accommodates a wide range of variability in the appearance of hand radiographs.
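A minimal sketch of this sampling scheme, with hypothetical `crop_fn` and `hist_match_fn` as stand-ins for the cropping model and histogram matching:

```python
import random

def augment(img, crop_fn, hist_match_fn, p_crop=0.5, p_hist=0.5, rng=random):
    # Each augmentation fires independently with its own probability,
    # so an image may receive neither, either, or both.
    if rng.random() < p_crop:
        img = crop_fn(img)
    if rng.random() < p_hist:
        img = hist_match_fn(img)
    return img
```

With `p_crop = p_hist = 0.5`, roughly a quarter of training images receive both augmentations, matching the combined-augmentation behavior noted above.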
On the original challenge test set comprising 200 multi-annotated pediatric hand radiographs, this model achieves a **mean absolute error of 4.16 months** (when applying both cropping and histogram matching to the input radiograph), which surpasses the [top solutions](https://pubs.rsna.org/doi/10.1148/radiol.2018180736) from the original challenge.
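The reported metric is the mean absolute error between predicted and reference bone ages in months; with made-up numbers for illustration:

```python
import numpy as np

# Hypothetical predictions and reference (annotator-consensus) bone ages, in months
preds = np.array([100.2, 55.1, 160.0])
labels = np.array([96.0, 57.5, 158.0])
mae = np.abs(preds - labels).mean()  # mean absolute error in months
```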
Applying the cropping and histogram matching at inference time:

```python
import cv2
from skimage.exposure import match_histograms

coords = coords[0].cpu().numpy()
x, y, w, h = coords
# coords already rescaled with img_shape
cropped_img = img[y: y + h, x: x + w]

# histogram matching
ref = cv2.imread("ref_img.png", 0)  # download ref_img.png from this repo
cropped_img = match_histograms(cropped_img, ref)
```