pretrained model
- https://huggingface.co/nvidia/mit-b0
- SegFormer (b0-sized) encoder, pre-trained only (no decode head)
- The hierarchical Transformer encoder was pre-trained on ImageNet-1k. It was introduced in the paper "SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers" by Xie et al. and first released in the authors' original repository.
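A minimal loading sketch, assuming the Hugging Face transformers library; NUM_LABELS is a placeholder that must be set to the number of classes in the competition data:

```python
# Sketch: load the pre-trained mit-b0 encoder with a freshly initialized
# segmentation head for fine-tuning. NUM_LABELS is a placeholder.
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

NUM_LABELS = 2  # placeholder: set to the number of classes in the competition data

processor = SegformerImageProcessor.from_pretrained("nvidia/mit-b0")
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0",
    num_labels=NUM_LABELS,
)
```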
training-set
- you can find the training set here: https://codalab.lisn.upsaclay.fr/competitions/8769
training-arguments
- channels: RGB
- batch size: 8
- epochs: 8
- learning rate: 5e-6
- GPU: T4
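A minimal sketch of the fine-tuning loop under the arguments above, written in plain PyTorch; `train_dataset` is an assumption standing in for the competition data and is expected to yield dicts with "pixel_values" (RGB images) and "labels" (segmentation masks):

```python
# Sketch of fine-tuning with the hyperparameters above:
# batch size 8, 8 epochs, learning rate 5e-6.
import torch
from torch.utils.data import DataLoader

def train(model, train_dataset, device="cuda"):
    loader = DataLoader(train_dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-6)
    model.to(device).train()
    for epoch in range(8):
        for batch in loader:
            outputs = model(
                pixel_values=batch["pixel_values"].to(device),
                labels=batch["labels"].to(device),
            )
            outputs.loss.backward()
            optimizer.step()
            optimizer.zero_grad()
        print(f"epoch {epoch + 1}: loss {outputs.loss.item():.4f}")
```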
results on test-set
- Mean IoU: 59.9
- more information here: https://codalab.lisn.upsaclay.fr/competitions/8769
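A sketch of how the Mean IoU can be computed on a held-out split, assuming the `evaluate` library's `mean_iou` metric; `num_labels` and `ignore_index=255` are assumptions that depend on the competition's label set:

```python
# Sketch: compute Mean IoU with the `evaluate` library.
# num_labels and ignore_index=255 are assumptions about the dataset.
import evaluate
import torch

metric = evaluate.load("mean_iou")

def mean_iou(model, eval_loader, num_labels, device="cuda"):
    model.to(device).eval()
    for batch in eval_loader:
        with torch.no_grad():
            logits = model(pixel_values=batch["pixel_values"].to(device)).logits
        # SegFormer outputs logits at reduced resolution: upsample to the mask size
        upsampled = torch.nn.functional.interpolate(
            logits, size=batch["labels"].shape[-2:], mode="bilinear", align_corners=False
        )
        metric.add_batch(
            predictions=list(upsampled.argmax(dim=1).cpu().numpy()),
            references=list(batch["labels"].numpy()),
        )
    return metric.compute(num_labels=num_labels, ignore_index=255)["mean_iou"]
```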