Knowledge distillation from multi-modal to mono-modal segmentation networks

Minhao Hu1,2*, Matthis Maillard2*, Ya Zhang1, Tommaso Ciceri2,
Giammarco La Barbera2, Isabelle Bloch2, and Pietro Gori2

1 CMIC, Shanghai Jiao Tong University, Shanghai, China
2 LTCI, Télécom Paris, Institut Polytechnique de Paris, France
[email protected]
[email protected]

* The two first authors contributed equally to this paper.
Abstract. The joint use of multiple imaging modalities for medical image segmentation has been widely studied in recent years. The fusion of information from different modalities has been shown to improve segmentation accuracy, with respect to mono-modal segmentations, in several applications. However, acquiring multiple modalities is usually not possible in a clinical setting, due to a limited number of physicians and scanners, and to limit costs and scan time. Most of the time, only one modality is acquired. In this paper, we propose KD-Net, a framework to transfer knowledge from a trained multi-modal network (teacher) to a mono-modal one (student). The proposed method is an adaptation of the generalized distillation framework, where the student network is trained on a subset (1 modality) of the teacher's inputs (n modalities). We illustrate the effectiveness of the proposed framework in brain tumor segmentation with the BraTS 2018 dataset. Using different architectures, we show that the student network effectively learns from the teacher and always outperforms the baseline mono-modal network in terms of segmentation accuracy.
1 Introduction

Using multiple modalities to automatically segment medical images has become a common practice in several applications, such as brain tumor segmentation [11] or ischemic stroke lesion segmentation [10]. Since different image modalities can accentuate and better describe different tissues, their fusion can improve the segmentation accuracy. Although multi-modal models usually give the best results, it is often difficult to obtain multiple modalities in a clinical setting, due to a limited number of physicians and scanners, and to limit costs and scan time. In many cases, especially for patients with pathologies or in emergency situations, only one modality is acquired.
Two main strategies have been proposed in the literature to deal with problems where multiple modalities are available at training time but some or most of them are missing at inference time. The first one is to train a generative model to synthesize the missing modalities and then perform multi-modal segmentation. In [13], the authors have shown that using a synthesized modality helps improve the accuracy of brain tumor classification. Ben-Cohen et al. [1] generated PET images from CT scans to reduce the number of false positives in the detection of malignant lesions in livers. Generating a synthesized modality has also been shown to improve the quality of the segmentation of white matter hypointensities [12]. The main drawback of this strategy is that it is computationally cumbersome, especially when many modalities are missing: one needs to train one generative network per missing modality, in addition to a multi-modal segmentation network.
The second strategy consists in learning a modality-invariant feature space that encodes the multi-modal information during training, and that allows for all possible combinations of modalities during inference. Within this second strategy, Havaei et al. proposed HeMIS [4], a model that trains a different feature extractor for each modality. The first two moments of the feature maps are then computed and concatenated in the latent space, from which a decoder is trained to predict the segmentation map. Dorent et al. [3], inspired by HeMIS, proposed U-HVED, where they introduced skip-connections by considering the intermediate layers, before each down-sampling step, as feature maps. This network outperformed HeMIS on the BraTS 2018 dataset. In [2], instead of fusing the layers by computing mean and variance, the authors learned a mapping function from the multiple feature maps to the latent space. They argued that computing the moments to fuse the maps is not satisfactory, since it makes each modality contribute equally to the final result, which is inconsistent with the fact that each modality highlights different zones. They obtained better results than HeMIS on the BraTS 2015 dataset. This second strategy gives good results only when one or two modalities are missing; when only one modality is available, it performs worse than a model trained on that specific modality. This kind of method is therefore not suitable for a clinical setting where only one modality is usually acquired, such as pre-operative neurosurgery or radiotherapy.
In this paper, in contrast to the previously presented methods, we propose a framework to transfer knowledge from a multi-modal network to a mono-modal one. The proposed method is based on generalized knowledge distillation [9], which is a combination of distillation [5] and privileged information [14]. Distillation was originally designed for classification problems, to make a small network (Student) learn from an ensemble of networks or from a large network (Teacher). It has been applied to image segmentation in [8,15], where the same input modalities were used for the Teacher network and the Student network. In [15], the Student learns from the Teacher only through a loss term between their outputs. In [8], the authors also constrained the intermediate layers of the Student to be similar to the ones of the Teacher. With a different perspective, the framework of privileged information was designed to boost the performance of a Student model by learning from both the training data and a Teacher model with privileged, additional information. In generalized knowledge distillation, one uses distillation to extract useful knowledge from the privileged information of the Teacher [9]. In our case, Teacher and Student have the same architecture (i.e. same number of parameters), but the Teacher can learn from multiple input modalities (additional information) whereas the Student uses only one. The proposed framework is based on two encoder-decoder networks, which have been shown to work well in image segmentation [7], one for the Student and one for the Teacher. Importantly, the proposed framework is generic, since it works for different architectures of the encoder-decoder networks. Each encoder summarizes its input space into a latent representation that captures important information for the segmentation. Since the Teacher and the Student process different inputs but aim at extracting the same information, we make the assumption that their first layers should be different, whereas the last layers, and especially the latent representations (i.e. bottlenecks), should be similar. By forcing the latent space of the Student to resemble the one of the Teacher, we hypothesize that the Student should learn from the additional information of the Teacher. To the best of our knowledge, this is the first time that the generalized knowledge distillation strategy is adapted to guide the learning of a mono-modal student network using a multi-modal teacher network. We show the effectiveness of the proposed method using the BraTS 2018 dataset [11] for brain tumor segmentation.
The paper is organized as follows. First, we present the proposed framework, called KD-Net and illustrated in Figure 1, and how the Student learns from the Teacher and the reference segmentation. Then, we present the implementation details and the results on the BraTS 2018 dataset [11].
[Figure 1 diagram: Teacher (input 128×128×128×4) and Student (input 128×128×128×1) encoder-decoder networks built from Conv3d, InstanceNorm3d, LeakyReLU, MaxPool3d, trilinear interpolation and softmax blocks; the Student is trained against the reference segmentation (GT loss), the Teacher's output (KD loss) and the Teacher's bottleneck (KL loss).]
Fig. 1. Illustration of the proposed framework. Both Teacher and Student have the same architecture, adapted from nnUNet [7]. First, the Teacher is trained using only the reference segmentation (GT loss). Then, the student network is trained using all proposed losses: KL loss, KD loss and GT loss.
2 KD-Net

The goal of the proposed framework is to train a mono-modal segmentation network (Student) by leveraging the knowledge of a well-trained multi-modal segmentation network (Teacher). Except for the number of input channels, both networks have the same encoder-decoder architecture with skip connections. The multi-modal input $x^i = \{x^i_n,\ n = 1 \dots N\}$ is the concatenation of the $N$ modalities for the $i$-th sample of the dataset. Let $E_t$ and $D_t$ (resp. $E_s$ and $D_s$) denote the encoder and decoder parts of the Teacher (resp. Student). The Teacher network $f_t(x^i) = D_t \circ E_t(x^i)$ receives multiple modalities as input, whereas the Student network $f_s(x^i_k) = D_s \circ E_s(x^i_k)$ receives only one modality $x^i_k$, $k$ being a fixed integer between 1 and $N$.
We first train the Teacher, using only the reference segmentation as target. Then, we train the Student using three different losses: the knowledge distillation term, the dissimilarity between the latent spaces, and the reference segmentation loss. Note that the weights of the Teacher are frozen during the training of the Student, and the error of the Student is not back-propagated to the Teacher. The first two terms allow the Student to learn from the Teacher, by using the soft predictions of the latter as targets and by forcing the encoded information (i.e. bottleneck) of the Student to be similar to the one of the Teacher. The last term makes the predicted segmentation of the Student similar to the reference segmentation.
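In practice, freezing the Teacher amounts to disabling gradients on its parameters. Below is a minimal PyTorch sketch of this step, assuming a generic `teacher` module; it is an illustration under our assumptions, not the authors' released code:

```python
import torch

def freeze(teacher: torch.nn.Module) -> torch.nn.Module:
    """Freeze the Teacher so no gradient flows back to it while the Student trains."""
    for p in teacher.parameters():
        p.requires_grad = False  # exclude Teacher weights from optimization
    teacher.eval()               # fix dropout/normalization behavior during Student training
    return teacher
```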
2.1 Generalized knowledge distillation

Following the strategy of generalized knowledge distillation [9], we transfer useful knowledge from the additional information of the Teacher to the Student using the soft label targets of the Teacher. These are computed as follows:

$$s^i = \sigma\!\left(f_t(x^i)/T\right) \qquad (1)$$

where $\sigma$ is the softmax function and $T$, the temperature parameter, is a strictly positive value. The parameter $T$ controls the softness of the targets: the higher it is, the softer the targets. The idea of using soft targets is to uncover relations between classes that would be harder to detect with hard labels. The effectiveness of using a temperature parameter to soften the labels was demonstrated in [5].
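Eq. (1) corresponds to a single temperature-scaled softmax over the class channel. A minimal PyTorch sketch follows; the tensor layout (batch, classes, depth, height, width) is our assumption:

```python
import torch
import torch.nn.functional as F

def soft_targets(teacher_logits: torch.Tensor, T: float = 5.0) -> torch.Tensor:
    """Soft label targets of Eq. (1): softmax of the Teacher logits divided by T.
    A larger T spreads probability mass across classes, exposing inter-class relations."""
    return F.softmax(teacher_logits / T, dim=1)  # dim=1 is the class channel of (B, C, D, H, W)
```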
The knowledge distillation loss is defined as:

$$\mathcal{L}_{KD} = \sum_i \left[ \left(1 - Dice\!\left(s^i, \sigma(f_s(x^i_k))\right)\right) + BCE\!\left(\bar{s}^i, \sigma(f_s(x^i_k))\right) \right] \qquad (2)$$

where $Dice$ is the Dice score, $BCE$ the binary cross-entropy measure, and $\bar{s}^i$ the binarized prediction of the Teacher. We need to binarize $s^i$ since the soft labels cannot be used in the binary cross-entropy. The Dice score measures the similarity of the shapes of two sets; hence, it globally measures how close the Teacher and Student segmentation maps are to each other. By contrast, the binary cross-entropy is computed for each voxel independently and is therefore a local measure. We combine these two terms to measure the distance between the Student prediction and the Teacher soft labels both globally and locally.
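A possible PyTorch sketch of Eq. (2) is given below. We treat each tumor region as a binary channel with a temperature-scaled sigmoid, consistent with the use of binary cross-entropy (an assumption; the text writes $\sigma$ for the softmax), and the 0.5 binarization threshold is also our choice:

```python
import torch
import torch.nn.functional as F

def dice_score(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice score between two probability maps: a global measure of shape overlap."""
    inter = (a * b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def kd_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor,
            T: float = 5.0) -> torch.Tensor:
    """L_KD of Eq. (2): (1 - Dice) against the soft Teacher targets, plus a
    voxel-wise BCE against the binarized Teacher prediction."""
    s = torch.sigmoid(student_logits)           # Student probabilities, one channel per region
    t_soft = torch.sigmoid(teacher_logits / T)  # softened Teacher targets (s^i)
    t_hard = (t_soft > 0.5).float()             # binarized Teacher prediction (s-bar^i)
    return (1.0 - dice_score(t_soft, s)) + F.binary_cross_entropy(s, t_hard)
```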
2.2 Latent space

We speculate that the Teacher and the Student, having different inputs, should also encode the information differently in their first layers, the ones related to low-level image properties such as color, texture and edges. By contrast, the deepest layers, closer to the bottleneck and related to higher-level properties, should be more similar. Furthermore, we make the assumption that an encoder-decoder network encodes the information needed to correctly segment the input images in its latent space. Based on this, we propose to force the Student to learn from the additional information of the Teacher, encoded in its bottleneck (and partially in the deepest layers), by making their latent representations as close as possible. To this end, we apply the Kullback-Leibler (KL) divergence as a loss function between the Teacher's and Student's bottlenecks:
$$\mathcal{L}_{KL}(p, q) = \sum_i \sum_j q^i(j) \log \frac{q^i(j)}{p^i(j)} \qquad (3)$$

where $p^i$ (resp. $q^i$) is the flattened and normalized vector of the bottleneck $E_s(x^i_k)$ (resp. $E_t(x^i)$). Note that this function is not symmetric; we put the vectors in this order because we want the distribution of the Student's bottleneck to be similar to the one of the Teacher.
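Eq. (3) can be sketched in PyTorch as follows; normalizing the flattened bottlenecks with a softmax is our assumption for turning them into distributions:

```python
import torch.nn.functional as F

def kl_bottleneck_loss(student_feat, teacher_feat):
    """L_KL of Eq. (3): KL(q || p), with q from the Teacher bottleneck E_t(x)
    and p from the Student bottleneck E_s(x_k), both flattened and normalized."""
    p_log = F.log_softmax(student_feat.flatten(1), dim=1)  # Student, log-probabilities
    q = F.softmax(teacher_feat.flatten(1), dim=1)          # Teacher, probabilities
    # F.kl_div(input, target) sums target * (log target - input), i.e. KL(q || p)
    return F.kl_div(p_log, q, reduction='batchmean')
```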
2.3 Objective function

We add a third term to the objective function, to make the predicted segmentation as close as possible to the reference segmentation. It is the sum of the Dice loss and the binary cross-entropy, for the same reasons as in Section 2.1. We call it $\mathcal{L}_{GT}$:

$$\mathcal{L}_{GT} = \sum_i \left[ \left(1 - Dice\!\left(y^i, \sigma(f_s(x^i_k))\right)\right) + BCE\!\left(y^i, \sigma(f_s(x^i_k))\right) \right] \qquad (4)$$

where $y^i$ denotes the reference segmentation of the $i$-th sample of the dataset. The complete objective function is then:

$$\mathcal{L} = \lambda\, \mathcal{L}_{KD} + (1 - \lambda)\, \mathcal{L}_{GT} + \gamma\, \mathcal{L}_{KL} \qquad (5)$$

with $\lambda \in [0, 1]$ and $\gamma \in \mathbb{R}^+$. The imitation parameter $\lambda$ balances the influence of the reference segmentation with that of the Teacher's soft labels: the greater its value, the greater the influence of the Teacher's soft labels. The parameter $\gamma$ is instead needed to balance the magnitude of the KL loss with respect to the other two losses.
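Putting the three terms together, Eq. (5) could be assembled as below, reusing the hypothetical helpers `dice_score`, `kd_loss` and `kl_bottleneck_loss` from the sketches above (with `torch` and `torch.nn.functional as F` imported there); `y` is the reference segmentation as a binary tensor:

```python
import torch

def total_loss(student_logits, teacher_logits, student_feat, teacher_feat, y,
               lam: float = 0.75, gamma: float = 10.0, T: float = 5.0):
    """Complete objective of Eq. (5): lam * L_KD + (1 - lam) * L_GT + gamma * L_KL."""
    s = torch.sigmoid(student_logits)
    l_gt = (1.0 - dice_score(y, s)) + F.binary_cross_entropy(s, y)  # Eq. (4)
    return (lam * kd_loss(student_logits, teacher_logits, T)
            + (1.0 - lam) * l_gt
            + gamma * kl_bottleneck_loss(student_feat, teacher_feat))
```

The default values of `lam`, `gamma` and `T` match the parameters reported in Section 3.2.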
3 Results and Discussion

3.1 Dataset

We evaluate the performance of the proposed framework on the publicly available dataset of the BraTS 2018 Challenge [11]. It contains MR scans from 285 patients with four modalities: T1, T2, T1 contrast-enhanced (T1ce) and FLAIR. The goal of the challenge is to segment three sub-regions of brain tumors: whole tumor (WT), tumor core (TC) and enhancing tumor (ET). We apply a central crop of size 128×128×128 and a random flip along each axis for data augmentation. For each modality, only non-zero voxels have been normalized, by subtracting the mean and dividing by the standard deviation. Due to memory and time constraints, we subsample the images to the size 64×64×64.
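The per-modality normalization described above (z-scoring restricted to non-zero voxels) could look like the following NumPy sketch; the epsilon guard against a zero standard deviation is our addition:

```python
import numpy as np

def normalize_nonzero(volume: np.ndarray) -> np.ndarray:
    """Z-score a modality over its non-zero (brain) voxels; the background stays at zero."""
    out = volume.astype(np.float32)  # astype returns a new array, so edits are safe
    mask = out != 0
    out[mask] = (out[mask] - out[mask].mean()) / (out[mask].std() + 1e-8)
    return out
```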
3.2 Implementation details

We adopt the encoder-decoder architecture described in Figure 1. Empirically, we found that the best parameters for the objective function are $\lambda = 0.75$, $T = 5$ and $\gamma = 10$. We used the Adam optimizer for 500 epochs, with a learning rate equal to 0.0001 that is multiplied by 0.2 when the validation loss has not decreased for 50 epochs. We run a three-fold cross-validation on the 285 training cases of BraTS 2018. The training of the baseline, the Teacher or the Student takes approximately 12 hours on an NVIDIA P100 GPU.
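This optimization setup maps directly onto standard PyTorch components, as in the sketch below; `student`, `train_one_epoch` and `validation_loss` are hypothetical placeholders, not the authors' code:

```python
import torch

# `student` is assumed to be the mono-modal segmentation network (an nn.Module).
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.2, patience=50)  # lr <- 0.2 * lr after a 50-epoch plateau

for epoch in range(500):
    train_one_epoch(student, optimizer)       # hypothetical: one pass over the training fold
    scheduler.step(validation_loss(student))  # hypothetical: drive the schedule with val loss
```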
3.3 Results

In our experiments, the Teacher uses all four modalities (T1, T2, T1ce and FLAIR concatenated) and the Student uses only T1ce. We choose T1ce for the Student since it is the standard modality used in pre-operative neurosurgery or radiotherapy.
Model comparison: To demonstrate the effectiveness of the proposed framework, we first compare it to a baseline model. Its architecture is the same as the encoder-decoder network in Figure 1, and it is trained using only the T1ce modality as input. We also compare it to two other models, U-HVED and HeMIS, using only T1ce as input; their results are directly taken from [3]. The results are reported in Table 1. Our method outperforms U-HVED and HeMIS in the segmentation of all three tumor components. KD-Net also seems to obtain better results than the method proposed in [2] (again when using only T1ce as input). The authors report results on the BraTS 2015 dataset, so they are not directly comparable to KD-Net; furthermore, we could not find their code online. Nevertheless, the results of HeMIS [4] on BraTS 2015 (in [2]) and on BraTS 2018 (in [3]) suggest that the BraTS 2018 cases are more difficult to segment. Since the method proposed in [2] has worse results than ours on a dataset that seems easier to segment, this should also be the case on the BraTS 2018 dataset. However, this remains to be confirmed.
Table 1. Comparison of the models using the Dice score on the tumor regions. The results of U-HVED and HeMIS are taken from [3], where the standard deviations were not provided.

Model                   ET            TC            WT
Baseline (nnUNet [7])   68.10 ± 1.27  80.28 ± 2.44  77.06 ± 1.47
Teacher (4 modalities)  69.47 ± 1.86  80.77 ± 1.18  88.48 ± 0.79
U-HVED                  65.5          66.7          62.4
HeMIS                   60.8          58.5          58.5
Ours                    71.67 ± 1.22  81.45 ± 1.25  76.98 ± 1.54
Ablation study: To evaluate the contribution of each loss term, we performed an ablation study by removing each term from the objective function defined in Eq. 5. Table 2 shows the results using either 0 or 4 skip-connections in both the Student and Teacher networks. We observe that both the KL and the KD losses improve the results with respect to the baseline model, especially for the enhancing tumor and the tumor core. This also demonstrates that the proposed framework is generic and works with different encoder-decoder architectures. More results can be found in the supplementary material.
Table 2. Ablation study of the loss terms. We compare the results of the model trained with 3 different objective functions: the baseline using only the GT loss, KD-Net trained with only the KL term, and KD-Net with the complete objective function. We also tested it with 0 or 4 skip-connections for both the Student and the Teacher.

Skip-connections  Model     Loss      ET            TC            WT
4                 Baseline  GT        68.10 ± 1.27  80.28 ± 2.44  77.06 ± 1.47
4                 Teacher   GT        69.47 ± 1.86  80.77 ± 1.18  88.48 ± 0.79
4                 KD-Net    GT+KL     70.00 ± 1.51  80.85 ± 1.82  77.08 ± 1.29
4                 KD-Net    GT+KD     69.22 ± 1.19  80.54 ± 1.66  76.83 ± 1.36
4                 KD-Net    GT+KL+KD  71.67 ± 1.22  81.45 ± 1.25  76.98 ± 1.54
0                 Baseline  GT        42.95 ± 3.42  69.44 ± 1.37  69.41 ± 1.52
0                 Teacher   GT        42.59 ± 2.54  69.79 ± 1.63  75.93 ± 0.33
0                 KD-Net    GT+KL     47.59 ± 0.98  70.96 ± 1.73  71.41 ± 1.20
0                 KD-Net    GT+KD     44.80 ± 1.10  70.12 ± 2.42  70.19 ± 1.40
0                 KD-Net    GT+KL+KD  46.23 ± 2.91  70.73 ± 2.47  71.93 ± 1.26
Qualitative results: In Figure 2, we show some qualitative results of the proposed framework and compare them with the ones obtained using the baseline method. We can see that the proposed framework allows the Student to discard some outliers and predict segmentation labels of higher quality. In the experiments, the Student uses as input only T1ce, which clearly highlights the enhancing tumor. Remarkably, it seems that the Student learns more in this region (see Figure 2 and Table 1). The knowledge distilled from the Teacher seems to help the Student learn more where it is supposed to be "stronger". More qualitative results can be found in the supplementary material.
Fig. 2. Qualitative results obtained using the baseline and the proposed framework (Student). We show one slice of a subject with the corresponding 3 segmentation labels.
Observations: It is important to remark that we also tried to expand the Student network by first synthesizing another modality, such as FLAIR, from the T1ce, and then using it, together with the T1ce, for segmenting the tumor labels. The results were actually worse than the baseline, and the computational time was quite prohibitive. We also tried sharing the weights between the Teacher and the Student in the deepest layers of the networks, to help transferring the knowledge. The intuition behind this was that, since the bottlenecks should be the same, the information in the deepest layers should be handled identically. The results were almost identical to, but slightly worse than, the ones obtained with the proposed framework presented in Figure 1. In this paper, we used nnUNet [7] as the network for the Student and the Teacher, but in principle any other encoder-decoder architecture, such as the one in [6], could be used.
4 Conclusions

We have presented a novel framework to transfer knowledge from a multi-modal segmentation network to a mono-modal one. To this end, we propose a twofold approach: we employ the strategy of generalized knowledge distillation and, in addition, we constrain the latent representation of the Student to be similar to the one of the Teacher. We validate our method on brain tumor segmentation, achieving better results than state-of-the-art methods on BraTS 2018 when using only T1ce. The proposed framework is generic and can be applied to any encoder-decoder segmentation network. The gain in segmentation accuracy and the robustness to errors provided by the proposed framework make it highly valuable for real-world clinical scenarios where only one modality is available at test time.
5 Acknowledgment

M. Hu is grateful for financial support from the China Scholarship Council. This work is supported by SHEITC (No. 2018-RGZN-02046), 111 plan (No. BP0719010), and STCSM (No. 18DZ2270700). M. Maillard was supported by a grant of IMT, Fondation Mines-Télécom and Institut Carnot TSN, through the "Futur & Ruptures" program.
References

1. Ben-Cohen, A., Klang, E., Raskin, S., Soffer, S., Ben-Haim, S., Konen, E., Amitai, M., Greenspan, H.: Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection. Engineering Applications of Artificial Intelligence 78, 186–194 (2018)
2. Chen, C., Dou, Q., Jin, Y., Chen, H., Qin, J., Heng, P.A.: Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion. In: MICCAI. vol. LNCS 11766, pp. 447–456. Springer, Cham (2019)
3. Dorent, R., Joutard, S., Modat, M., Ourselin, S., Vercauteren, T.: Hetero-Modal Variational Encoder-Decoder for Joint Modality Completion and Segmentation. In: MICCAI. vol. LNCS 11765, pp. 74–82. Springer (2019)
4. Havaei, M., Guizard, N., Chapados, N., Bengio, Y.: HeMIS: Hetero-Modal Image Segmentation. In: MICCAI. vol. LNCS 9901, pp. 469–477. Springer (2016)
5. Hinton, G., Vinyals, O., Dean, J.: Distilling the Knowledge in a Neural Network. Deep Learning and Representation Learning Workshop: NIPS 2015 (2015)
6. Ibtehaz, N., Rahman, M.S.: MultiResUNet: Rethinking the U-Net Architecture for Multimodal Biomedical Image Segmentation. Neural Networks 121, 74–87 (2020)
7. Isensee, F., Kickingereder, P., Wick, W., Bendszus, M., Maier-Hein, K.H.: No New-Net. In: BrainLes - MICCAI Workshop. vol. LNCS 11384, pp. 234–244. Springer (2019)
8. Liu, Y., Chen, K., Liu, C., Qin, Z., Luo, Z., Wang, J.: Structured Knowledge Distillation for Semantic Segmentation. In: CVPR. pp. 2604–2613 (2019)
9. Lopez-Paz, D., Bottou, L., Schölkopf, B., Vapnik, V.: Unifying distillation and privileged information. In: ICLR (2016)
10. Maier, O., et al.: ISLES 2015 - A public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI. Medical Image Analysis 35, 250–269 (2017)
11. Menze, B.H., et al.: The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging 34(10), 1993–2024 (2015)
12. Orbes-Arteaga, M., Cardoso, M.J., Sørensen, L., Modat, M., Ourselin, S., Nielsen, M., Pai, A.: Simultaneous synthesis of FLAIR and segmentation of white matter hypointensities from T1 MRIs. In: MIDL (2018)
13. van Tulder, G., de Bruijne, M.: Why Does Synthesized Data Improve Multi-sequence Classification? In: MICCAI. vol. LNCS 9349, pp. 531–538. Springer, Cham (2015)
14. Vapnik, V., Izmailov, R.: Learning using privileged information: Similarity control and knowledge transfer. Journal of Machine Learning Research 16(61), 2023–2049 (2015)
15. Xie, J., Shuai, B., Hu, J.F., Lin, J., Zheng, W.S.: Improving Fast Segmentation With Teacher-student Learning. In: British Machine Vision Conference (BMVC) (2018)