---
tags:
- feature-extraction
- image-classification
- timm
- biology
- cancer
- histology
library_name: timm
model-index:
- name: tcga_brca
  results:
  - task:
      type: image-classification
      name: Image Classification
    dataset:
      name: TCGA-BRCA
      type: image-classification
    metrics:
    - type: auc
      value: 0.886 ± 0.059
      name: AUC
      verified: false
license: gpl-3.0
pipeline_tag: feature-extraction
inference: false
---
# Model card for vit_small_patch16_256.tcga_brca_dino
A Vision Transformer (ViT) image feature model. \
Trained on 2M histology patches from TCGA-BRCA using DINO self-supervised learning.
![](https://github.com/Richarizardd/Self-Supervised-ViT-Path/raw/master/.github/Pathology_DINO.jpg)
## Model Details
- **Model Type:** Feature backbone
- **Model Stats:**
- Params (M): 21.7
- Image size: 256 x 256 x 3
- **Papers:**
- Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology: https://arxiv.org/abs/2203.00585
- **Dataset:** TCGA-BRCA: https://portal.gdc.cancer.gov/
- **Original:** https://github.com/Richarizardd/Self-Supervised-ViT-Path/
- **License:** [GPLv3](https://github.com/Richarizardd/Self-Supervised-ViT-Path/blob/master/LICENSE)
## Model Usage
### Image Embeddings
```python
from urllib.request import urlopen

from PIL import Image
import timm

# get an example histology image
img = Image.open(
    urlopen("https://github.com/owkin/HistoSSLscaling/raw/main/assets/example.tif")
)

# load the model from the hub
model = timm.create_model(
    model_name="hf-hub:1aurent/vit_small_patch16_256.tcga_brca_dino",
    pretrained=True,
).eval()

# get model-specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # (batch_size, num_features) shaped tensor
```
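A common use of such a feature backbone is similarity search over a patch gallery. The sketch below is illustrative only: it uses random 384-dimensional vectors (the ViT-Small embedding width) as stand-ins for real model outputs, and ranks gallery patches by cosine similarity to a query embedding.

```python
import numpy as np

# Hypothetical stand-ins for real embeddings: in practice these would come
# from the model's (batch_size, num_features) output above.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 384))  # embeddings of 100 gallery patches
query = rng.normal(size=(1, 384))      # embedding of one query patch

# L2-normalize so the dot product equals cosine similarity
gallery_n = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
query_n = query / np.linalg.norm(query, axis=1, keepdims=True)
sims = (query_n @ gallery_n.T).ravel()  # shape (100,)

# indices of the 5 most similar gallery patches, best first
top5 = np.argsort(sims)[::-1][:5]
print(top5, sims[top5])
```

The same normalized embeddings can also feed a linear probe or clustering, which is how self-supervised pathology features are typically evaluated.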
## Citation
```bibtex
@misc{chen2022selfsupervised,
title = {Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology},
author = {Richard J. Chen and Rahul G. Krishnan},
year = {2022},
eprint = {2203.00585},
archiveprefix = {arXiv},
primaryclass = {cs.CV}
}
```